title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Leveraging automatic strategy discovery to teach people how to select better projects | Reject | Summary: This work addresses two problems: first, the problem of solving a specific class of information-gathering decision processes, and second, applying the resulting algorithm in the real world by creating an intelligent tutoring system. The algorithm is a straightforward greedy optimizer that gathers information based on maximizing one-step value of information. The intelligent tutor blends the algorithm with a UI that gives humans feedback on their decisions by comparing them to the algorithm's; the paper shows that the humans implicitly learn from this feedback and improve their decision making abilities. The paper concludes with a few small comparisons of the algorithm to other algorithms.
Strengths: + The paper is well-written
+ The problem seems reasonably well formulated
+ The algorithm is natural and reasonable
+ The algorithm seems to perform well compared to alternatives
+ The "intelligent tutor" is a bit of a misnomer, but does seem to improve human decision making ability
Weaknesses: - This paper blends two distinct ideas, and does not explore either in much depth. The MGPS algorithm is straightforward, and the authors spend very little time (for example) analyzing its properties or connecting it seriously with the vast literature on decision processes. (For example, I'm surprised not to see any connection to bandits). I think you could have written a whole paper about MGPS -- establishing properties, contrasting with other algorithms, etc. For example, a regret bound or similar form of theoretical analysis would have been nice.
On the other hand, the "intelligent tutor" seems like a very simple UI that gives very basic feedback. While the authors do demonstrate some effectiveness, it too is not investigated deeply. There is no comparison, for example, to other forms of UI, to other forms of feedback, etc.
So, it's hard to say: is this paper about MGPS, or intelligent tutor UI/UX design? And while I appreciate that the authors probably have a vision of "solving a real-world problem" with this combination of ideas, I have to say that the problem formulation seems pretty far away from something that could actually be used in a business decision support system.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1) I am surprised by the findings of the paper. While the authors repeatedly refer to MGPS as generating "optimal strategies" and humans as using "simple heuristics", this terminology is hard to justify, since MGPS is itself a greedy heuristic. I would have intuitively thought that humans would also try to maximize one-step information gain. My question is this: do the authors have any sense of what strategy humans were using before exposure to the tutor? Why is their intuitive heuristic so bad? Can they (or you) somehow articulate what the humans are learning?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors do not explicitly discuss the limitations of their algorithm/UI in a dedicated section.
I do not see any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We agree that the strategies discovered by MGPS are not “optimal strategies” as MGPS makes use of a greedy heuristic and will reword the manuscript accordingly.
MGPS identifies a near-optimal greedy planning heuristic by approximating the value of computation for each available planning action. It is plausible that humans also try to maximize one-step information gain, but their ability to do so is very limited because they don’t know the probabilistic structure of the environment. Consequently, to the extent that people attempt to maximize the one-step information gain, they do so in a highly suboptimal way:
1) Looking at the click agreement measure, humans don’t seem to learn the strategy taught by our tutor on their own (as click agreement for the “no feedback” condition is lower than for the “choice tutor” condition).
2) We analyzed in what percentage of test trials participants investigated the most important criterion (criterion 5, “organizational readiness”) first, an important component of the near-optimal strategy discovered by MGPS. In the “MGPS tutor” condition, participants investigate this criterion as their first planning action in 67% of test trials, while in the “no tutor” control condition, participants only do so in 44% of the test trials.
3) Similarly, we also investigated whether participants query the most reliable experts (experts 2 and 6) first, another component of MGPS’ strategy. In the “MGPS tutor” condition, participants do this in 76% of test trials, while participants in the “no tutor” condition only query the most reliable experts first in 55% of test trials.
4) These results indicate that participants not taught by the MGPS tutor fail to prioritize the most important criteria and the most reliable experts very early, only selecting the “correct” first planning action (i.e. the correct criteria and experts) in 33% of test cases.
5) Moreover, participants assigned to the “no feedback” condition performed slightly fewer planning operations on average (3.6 planning operations compared to 3.9 for the “dummy tutor” and 4 for the “choice tutor” condition). This indicates they learned a worse strategy for deciding when to stop planning.
Prior work on strategy discovery produced similar results, where humans performed worse than one-step planning (i.e. [14]). | Summary: The problem this paper tries to tackle is how to improve human decision making in the specific problem of selecting a project from a candidate set of existing projects. The strategy to improve the human's decision making is to build an agent that can solve the project selection itself by framing it as a POMDP and then having that agent serve as a tutor during a teaching phase. The agent is learned by first framing the problem as a POMDP with Gaussian states and the paper proposes a myopic algorithm to select actions (actions allow you to acquire information about each project on a specific dimension of the Gaussian). The main evaluation of the paper is with a user study where human participants are trying to select between multiple projects in terms of estimated performance. The authors show that participants who received training with the tutor performed better than participants without a tutor or with a dummy tutor.
Strengths: - very well-designed and rigorous user study that shows that the agent and the tutoring was effective.
Weaknesses: - (main reason for low rating) out of scope of NeurIPS: this paper seems like a bad fit for NeurIPS; the paper does not contribute new algorithms that are broadly applicable and does not evaluate methods on broadly recognized datasets or benchmarks. This paper models the problem of human project selection as a POMDP, proposes a relatively straightforward procedure, and evaluates on a synthetic task of project selection. I don't see any insights that the community might benefit from. I think this would be a strong paper for a conference that suits it more.
A bad (but still useful) heuristic is to look at the references where I count a single NeurIPS paper (from 2010) and one UAI paper (among a lot of management and behavioral science citations that don't include other human-ai conferences like CHI).
- why not display the recommendation of the AI agent during test time as a baseline? since the AI agent performs well at the task, participants should be able to see its recommendations at test time.
- the proposed algorithm while adequate for the problem is not a generalizable solution for the broad family of problems that are of interest in the human-AI space. In particular, the myopic approximation is very limiting. While section 6 does evaluate it against (a relatively old) baseline in PO-UCT, it limits the baseline to 5000 steps because of runtime constraints, however, it might be possible to optimize the performance of PO-UCT to be faster.
- novelty: the method is a modification over MGPO [14] which introduces the myopic approximation, the proposed method MGPS modifies it to make it suitable for project selection. The authors do a good job of comparing to MGPO, but novelty overall is limited.
- value of tutoring for project selection: why not just follow the recommendations of MGPS? the tutoring is specific to the problem domain, thus I don't understand anything that the human can take away from the tutoring to future tasks except the one they will encounter. Moreover, for the real world problem presented, it is a one shot decision problem, so tutoring is not as well motivated.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: (from weaknesses) why not display the recommendation of the AI agent during test time as a baseline? since the AI agent performs well at the task, participants should be able to see its recommendations at test time.
for section 6, how are the different environments of the project selection task generated?
Comments:
- in the title and section headings, the first letter of each word should be capitalized
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: limitations are well discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Question 1: Our intention behind the human training experiment was to teach people how to use the decision strategy discovered by MGPS themselves, as opposed to replacing the human decision maker or providing a tool that is to be used in an online fashion. Showing the recommended action during test time would therefore fail to evaluate to what extent the participant learned to apply the decision strategy, as we would expect participants to simply follow the recommendations instead of planning for themselves.
Question 2: Apologies for not including this in the main manuscript, we will add it in the revision process. An environment instance is generated by (1) sampling ground truth rewards for each project’s criteria from the criteria’s reward distribution (which is the same distribution as the initial belief state), and (2) randomly sampling the expert’s guesses based on the expert’s reliability (precision parameter) and the criteria’s ground truth value.
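The two-step generative process described in this answer could be sketched roughly as follows (a minimal illustration only; the dimensions, the Gaussian prior, and the precision range are hypothetical assumptions, not values taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper).
n_projects, n_criteria, n_experts = 5, 6, 6
prior_mean = np.zeros(n_criteria)   # initial belief = criteria reward distribution
prior_sd = np.ones(n_criteria)
expert_precision = rng.uniform(0.5, 4.0, size=n_experts)  # reliability per expert

# (1) Sample ground-truth rewards for each project's criteria from the
#     criteria's reward distribution (same distribution as the initial belief).
true_rewards = rng.normal(prior_mean, prior_sd, size=(n_projects, n_criteria))

# (2) Sample each expert's guess around the true criterion value, with noise
#     determined by that expert's reliability (higher precision -> less noise).
guess_sd = 1.0 / np.sqrt(expert_precision)     # shape (n_experts,)
expert_guesses = rng.normal(
    true_rewards[:, :, None],                  # (projects, criteria, 1)
    guess_sd[None, None, :],                   # broadcast over experts
)
print(expert_guesses.shape)  # (5, 6, 6)
```

One environment instance is then the pair (true_rewards, expert_guesses), with the agent only ever observing entries of expert_guesses that it pays to query.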
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for your rebuttal. I have raised my score to 4; the reason for the low score is that I don't think the approach proposed has enough novelty over prior work and that the user study evaluation is not of general interest. | Summary: This paper focuses on the problem of project selection (how does a human choose which, among a set of possible projects, is the best one to pursue). To address this problem, they develop an algorithm called MGPS that discovers a rational greedy strategy for solving this problem, and then they attempt to teach this strategy to a human via an intelligent tutoring system. The approach is evaluated in a real-world project selection scenario.
- page 2, line 64: "selection.To" -> "selection. To"
- page 3, line 127: "we introduce explain our general" -> "we introduce our general"
Strengths: It is interesting to see a paper that goes all the way from finding an algorithm to solve a problem, to teaching humans based on that strategy and then evaluating human performance after being taught.
Weaknesses: The same strength of the paper seems to also be its main weakness: because the authors try to tackle a whole large problem end to end, I did not find each of the individual pieces that groundbreaking. Perhaps if the paper focused on just one of the problems (finding optimal strategies, or just better techniques for teaching the learned strategies), maybe a stronger contribution would arise, by more systematically addressing one problem. But as it stands, the paper seems two-headed, with limited contributions, as each of the two problems is only dealt with shallowly.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Why limit yourself to a greedy approach? Given Algorithm 1, it is trivial to define a version that does a fixed amount of look-ahead (say, k steps) to find the optimal action considering the next k actions, rather than just the next (k=)1, as the current one does.
- How come PO-UCT took so long to run for just 5000 steps? That is ~500 steps per second, which is extremely slow for what I would expect. Was a standard implementation used?
- It also surprises me that PO-UCT underperforms MGPS, so there may be something I do not understand. If PO-UCT is using the UCB bandit strategy, and there are, say, 500 possible actions at the root node (which are the 500 actions that MGPS will consider, and then select the best), during the first 500 iterations, PO-UCT will systematically select each of those 500, until each of them is expanded at least once, before moving on to the exploration/exploitation phase of UCB. Hence, at this point, shouldn't it be equivalent to MGPS? (they both have explored all actions), and shouldn't PO-UCT then become better and better after this point? What am I missing?
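The expansion behavior described in this question is easy to verify with a generic UCB1 sketch (a standalone illustration of vanilla UCB at a single root node, not the paper's PO-UCT implementation; the action count and rollout returns are placeholders):

```python
import math
import random

random.seed(0)

# Generic UCB1 selection at a single root node.
n_actions = 500
counts = [0] * n_actions
values = [0.0] * n_actions

def select(t):
    # Untried actions are expanded first (UCB treats them as infinite value).
    untried = [a for a in range(n_actions) if counts[a] == 0]
    if untried:
        return untried[0]
    return max(range(n_actions),
               key=lambda a: values[a] / counts[a]
               + math.sqrt(2 * math.log(t) / counts[a]))

first_picks = []
for t in range(1, n_actions + 1):
    a = select(t)
    first_picks.append(a)
    counts[a] += 1
    values[a] += random.random()   # placeholder rollout return

# The first 500 iterations expand each root action exactly once.
print(sorted(first_picks) == list(range(n_actions)))  # True
```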
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Question 1:
- We briefly experimented with a multistep version of MGPS when designing the algorithm. For numbers of steps larger than 1, the computational complexity increases rapidly, as it requires discretizing the belief state updates and searching through an exponentially increasing state space. Additionally, we did not observe a large increase in performance from doing this, indicating that greedy strategies are sufficiently resource-rational in our environment distribution.
- Our main motivation behind limiting MGPS to a single step and evaluating runtimes in general is that our intelligent tutor requires MGPS to run while the learner interacts with it. This requires the computation to be carried out online in an efficient and scalable manner. Precomputing optimal actions is impossible due to the exponential nature of possible belief states.
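A generic myopic value-of-computation rule of the kind described in this response might be sketched as follows (a simplified stand-in, not the authors' MGPS: it assumes independent Gaussian beliefs over project values, a single known observation precision, made-up numbers, and Monte Carlo estimation of the VOC):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical belief state: independent Gaussian beliefs over each
# project's value (all numbers are made up for illustration).
mu = np.array([0.2, 0.0, -0.1])
tau = np.array([1.0, 1.0, 1.0])        # belief precisions
obs_tau = 2.0                          # precision of a single expert query
cost = 0.01                            # cost of one planning operation

def posterior(mu_i, tau_i, obs):
    """Conjugate Gaussian update for one observation of known precision."""
    tau_new = tau_i + obs_tau
    mu_new = (tau_i * mu_i + obs_tau * obs) / tau_new
    return mu_new, tau_new

def voc(i, n_samples=20_000):
    """Myopic value of computation of querying project i, by Monte Carlo:
    expected best posterior mean minus current best mean, minus cost."""
    # Predictive distribution of the observation under the current belief.
    obs_sd = np.sqrt(1.0 / tau[i] + 1.0 / obs_tau)
    obs = rng.normal(mu[i], obs_sd, size=n_samples)
    mu_post, _ = posterior(mu[i], tau[i], obs)
    best_after = np.maximum(mu_post, np.max(np.delete(mu, i)))
    return best_after.mean() - np.max(mu) - cost

# Greedy rule: query the action with highest VOC; stop when max VOC <= 0.
vocs = [voc(i) for i in range(len(mu))]
best = int(np.argmax(vocs))
print(vocs[best] > 0)  # True
```

A multistep variant would have to enumerate sequences of such updates, which is where the exponential blow-up mentioned above comes from.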
Question 2 and 3:
- The main limitation of PO-UCT is that it is exploring the space of belief states, where sampling an action only reveals a noisy observation based on the current belief state. MGPS, on the other hand, does not rely on sampling as it calculates the expected value of computation directly.
- Our project selection environment does not directly fit into existing POMDP frameworks, which led us to implement PO-UCT ourselves. There is likely to be room for optimization in both our MGPS and PO-UCT implementations. The 500 steps per second are a result of the fact that each step requires computing the posterior belief state using Bayesian inference. | Summary: The authors pose the problem of teaching decision-makers how to take a single action (picking a project from among a set of projects) based upon costly advice from experts across different, weighted criteria. The authors develop a reinforcement learning approach to creating a tutor that approximately solves the MDP (using a myopic approximation). The proposed tutor outperforms a baseline on a banking dataset. The tutor is also shown to improve decision-making of subjects in an online study, who get to learn from the tutor by watching the tutor make decisions.
Strengths: +Section 3 is very clearly written. Well done (though Line 162 could have used O-notation).
+The paper clearly presents the algorithm.
+The paper is addressing an important problem of developing a tutoring system for solving MDPs
+The paper has strong empirical results vs. PO-UCT.
+The paper shows statistically significant results in a user-study. It is nice that a user-study was done.
Weaknesses: -The terms h, e, \lambda, N, and R_{total} are not sufficiently defined in Lines 41-50. Perhaps it would be better to give a more complete description later in the paper and abstract the presentation here to make it more intuitive just with words.
-The comment on brainstorming in Line 86 ignores relevant literature on the wisdom of the crowd, the science behind brainstorming and focus groups, etc.
-The statistical analysis does not report testing for the meeting of a Gaussian assumption for the confidence intervals. Details of the Box approximation should be provided. Further, it would have been better to also report p-values in that table.
-For the user study, Table 1 should report how optimal (the RR) the MGPS tutor would be if run automatically (no human intervention) and how poorly a random, automatic system would be for the RR-score.
-I am uncertain that the authors are reporting the degrees of freedom of the F-test. The F-test has two degrees of freedom, but only 1 seems to be indicated in Lines 295-299.
-I can appreciate that the authors might have thought that the results in lines 329-345 indicate that MGPS > PO-UCT and therefore not included PO-UCT as a baseline in the user study; however, that is a debatable decision. It could be that the behavior of PO-UCT is more intelligible to users, and users trained with PO-UCT could have outperformed those trained by MGPS. As such, I recommend the user study be re-run (to account for cohort effects) with the PO-UCT baseline, randomizing the allocation of participants to the conditions.
-The paper isn't exactly "tutoring" participants. Rather, the system is providing recommendations (or making decisions) and the users have to reverse engineer the actual strategy. Considering that the strategy itself is not constrained to be a set of if-then rules, it is unclear what exactly is being learned or how. It would make the paper better to have actually analyzed (by collecting data from users) what users are learning and thinking. I would recommend looking at methods in explainable Artificial Intelligence.
Note:
-I recommend the "Dummy" tutor be called a random tutor -- a "random tutor" is a more clear description of that condition.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: -Arguably, the object-level decision is not "really" an MDP as there is no sequential decision-making. Could the authors comment on why the MDP formulation was adopted for this formulation rather than a multi-arm bandit formulation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: Limitations are reasonably discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Question 1: In this case, a multi-armed bandit formulation would be sufficient to model the object-level decision. As the multi-armed bandit problem is a special case of the more general MDP, we chose the MDP formulation to keep our environment model as general as possible. MGPS does not require a one-step object-level task, and this formulation allows the approach to be used on sequential project selection problems in the future.
Thank you for pointing out a number of smaller errors in our manuscript; we will update the article accordingly. Regarding using PO-UCT as a baseline for the user study: one of the advantages of MGPS is that it can be run online in our tutor to compute feedback depending on the learner’s current belief state. It is not possible to precompute this feedback, as the space of possible belief states grows exponentially. PO-UCT runs significantly slower than MGPS, which makes it much less practical for use in online tutoring sessions.
---
Rebuttal Comment 1.1:
Title: Response by Reviewer
Comment: Thanks to the reviewers for answering one of my questions and acknowledging the writing issues. I also appreciate the discussion regarding PO-UCT. However, it is unclear why it is "not possible" to approximately compute feedback. It would seem a variety of online multi-armed bandit approaches could work as well.
I look forward to the reviewers discussing additional points of feedback from my original review.
---
Reply to Comment 1.1.1:
Comment: Thank you for your additional feedback.
The statistical test we are using is nonparametric and does not require the data to follow a Gaussian distribution. The confidence intervals provided in the table are not part of the statistical analysis and are only meant to give the reader an idea of the range of scores. The reported statistic has only one degree of freedom because the denominator degree of freedom is set to infinity. Further details about the used statistical test can be found in the documentation of the nparLD package (https://search.r-project.org/CRAN/refmans/nparLD/html/f1.ld.f1.html).
The tutor is tutoring participants in the sense that it is letting users make choices between different actions and then provides feedback based on the chosen action’s resource-rationality. The recommendations are simplifying the problem by isolating specific aspects of the project selection task (e.g. learning which expert to query) but the main learning mechanism is the provided feedback.
Thank you for pointing out the missing link to literature on the wisdom of the crowds and the unclear description of some of the parameters. We will add these and the requested measures regarding the human experiment to Table 1 in the revision process (both p-values and the RR-score of MGPS are already given in the manuscript text). We don’t report the RR-score of a random baseline, as we already normalized the reported scores against it (i.e. its score would be 0). | Rebuttal 1:
Rebuttal: Response to concerns about fit and novelty
Since multiple reviewers expressed similar concerns regarding the novelty of MGPS and the intelligent tutor, we will address these in a single response. The novelty of our work lies in (1) the development of a new strategy discovery algorithm (MGPS), (2) formalizing the project selection problem, (3) the development of a new intelligent tutor that can improve human decision-making on the project selection task. Below, we briefly elaborate on the significance and substance of these three innovations:
1. Novelty of MGPS: MGPS contains several technical advancements that extend the scope of strategy discovery methods to the project selection setting. The main advances compared to the previous state-of-the-art strategy discovery method [14] are: MGPS estimates the expected value of computation for discrete expert guesses, selects computations between multiple experts with varying reliabilities, plans within a fixed budget, and evaluates a project based on multiple criteria of different importance.
2. Novelty of the problem/application: We provided the first formal model of a practically important problem, namely discovering optimal strategies for project selection under time and information constraints, and developed a benchmark for it. Unlike previous formulations of the project selection problem, our formulation captures that the decision has to be made within a limited time and that this makes it impossible to gather all relevant information. To achieve this, we modelled the task of project selection as a metalevel-MDP. This constitutes the first metalevel-MDP model of a real-world task and estimates important parameters (e.g. expert reliability, project outcomes, criteria importance) from real data. This significantly increases the real-world relevance of this line of research by advancing it from highly artificial toy problems toward realistic decision problems that organizations face in the real world. We believe our work to be a stepping stone to improving human decision-making in relevant real-world tasks.
3. Novelty of the MGPS Tutor: Our intelligent tutor used in the human experiment differed from the one introduced in [14] in multiple ways: the tutor teaches participants planning strategies in the project selection task, a more realistic application than the scenarios used in prior work, and adds multiple new features (e.g. planning limitations, choosing between multiple experts with different reliabilities). To teach strategies in the project selection task, the tutor relies on MGPS to compute the approximate value of computation of meta-level actions and features a new shaping schedule varying the type of selection (between projects, experts, and criteria).
We believe these advancements make our work suitable for NeurIPS.
Moreover, we believe that our submission falls well within the broad scope of NeurIPS. This is evident from the fact that NeurIPS has previously published several articles on improving human decision-making, including “Reliable Decision Support using Counterfactual Models” and “Closing the loop in medical decision support by understanding clinical decision-making: A case study on organ transplantation”, numerous applications of AI to teaching people skills and knowledge, including “Assistive Teaching of Motor Control Tasks to Humans”, “Understanding the Role of Adaptivity in Machine Teaching: The Case of Version Space Learners”, “Machine Teaching of Active Sequential Learners”, “Optimal Teaching for Limited-Capacity Human Learners”, “Curriculum Design for Teaching via Demonstrations: Theory and Application”, “Learner-aware Teaching: Inverse Reinforcement Learning with Preferences and Constraints”, “Learning to Teach with Dynamic Loss Functions”, and “Automatic Discovery of Cognitive Skills to Improve the Prediction of Student Learning”, as well as cognitive science research on human teaching, including “Showing versus doing: Teaching by demonstration” and “How Do Humans Teach: On Curriculum Learning and Teaching Dimension”. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper addresses the problems of (1) sequential decision-making of information gathering through asking experts for information about the rewards for different projects (as meta-reasoning towards project selection), and (2) teaching people how to make near optimal decisions in the same problem through training in an intelligent tutoring system. Specifically, novel problem aspects are considered compared to recent work on similar problems, including the availability of different experts of different reliabilities (instead of one information source) and multiple criteria for evaluating the quality/reward of a project. 
An algorithm is developed for generating solutions guiding information gathering. That solution is used to guide the training of humans through an Intelligent Tutoring System (ITS). Experimental results highlight the benefits of the approach through both better training of humans in an ITS (measured through better rewards earned) and better agent performance in simulation than PO-UCT applied to the problem modeled as a belief MDP.
Strengths: The primary strengths of this paper include:
S1) The problem addressed is one that is relevant in human-AI applications and it considers novelties not previously addressed that are important real-world complexities. The problem of sequential decision-making will be of interest to the planning community at NeurIPS.
S2) The paper is well written and easy to follow.
S3) Modeling the problem as a POMDP is appropriate, and the MCDM component seems to be appropriately modeled.
S4) I appreciated the use of two very different sets of experiments -- both training humans within an ITS and simulating the approach directly. The ITS experiment was well designed and the evaluation was very carefully conducted and convincing.
Weaknesses: The primary weaknesses of this paper include:
W1) The main contributions identified in the abstract are incremental, seemingly adding some environment complexity and solution adaption to [14], rather than being entirely novel.
W2) While the choice of a POMDP was appropriate, I wasn't entirely sure why the authors relied more on the belief MDP formulation rather than a true POMDP with observations separate from belief state transitions.
The problem being solved is gathering information to ultimately choose the best task to accomplish. This is very closely related to prior POMDP usage for problems like preference elicitation where the agent gathers information about which is the user's main preference or task they want the agent to perform.
Boutilier,C. 2002.A POMDP Formulation of Preference Elicitation Problems.In Proceedings of AAAI'02, pp. 239–246.
Doshi, F., & Roy, N. 2008. The Permutable POMDP: Fast Solutions to POMDPs for Preference Elicitation. In Proceedings of AAMAS'08, pp. 493–500.
In those models, the state space is the set of possible tasks/preferences, and actions either (1) query an information source (e.g., the user) for observations used to update the agent's Bayesian beliefs about the top preference/task, or (2) end information gathering to perform the preference/task the agent thinks is the top. That is fundamentally the same as the problem being addressed here, but the details are slightly different. Instead, your state space is belief states over the details of the tasks, from which a top one is selected. It's not clear to me why the former wouldn't work in this situation and what the advantage is in your formulation, which would help strengthen the novelty of the work. (Note: what makes your paper different from [14] also makes it different from those works).
Information gathering in POMDPs in general also has special formulations, such as the \rho-POMDP and the equivalent POMDP-IR, and I would think your problem model would also fit nicely in the POMDP-IR, but neither is considered in your related work.
\rho-POMDP: Araya-Lopez, M., Buffet, O., Thomas, V., & Charpillet, F. 2010. A POMDP Extension with Belief-Dependent Rewards. In Proceedings of NIPS'10.
POMDP-IR: Spaan, M.T.J., Veiga, T.S., & Lima, P.U. 2015. Decision-theoretic planning under uncertainty with information rewards for active cooperative perception. Journal of Autonomous Agents and Multiagent Systems, 29(6):1157-1185.
W3) I also wasn't sure why you chose PO-UCT as your baseline. It's the simpler version of POMCP (whereas POMCP would have been more appropriate if you had explicit observations in your model). Belief MDPs can also be considered continuous-state MDPs, and there have been many advancements in Monte Carlo Tree Search planning in that area since PO-UCT, such as:
Sunberg, Z. & Kochenderfer, M. 2017. Online algorithms for POMDPs with continuous state, action, and observation spaces. arXiv:1709.06196. doi: 10.48550/ARXIV.1709.06196. URL https://arxiv.org/abs/1709.06196.
Finally, the \rho-POMDP has a MCTS solution that would also be relevant that is more recent:
Thomas, V., Hutin, G., & Buffet, O. 2020. Monte Carlo Information-Oriented Planning. In Proceedings of ECAI'20.
W4) I also didn't quite understand the novelty of the ITS experiment compared with [14]. Was the only difference the change in the meta-reasoning algorithm, or were there other differences?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Q1) Why did you use your specific formulation of the POMDP problem instead of the state-of-the-art \rho-POMDP or POMDP-IR for guiding information gathering?
Q2) Why did you choose PO-UCT as your baseline?
### Post-Rebuttal ###
I thank the authors for their rebuttal and the ensuing conversation. They helped me better understand the research and its place in the literature.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: These were appropriately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Question 1:
- Our work is grounded in prior work on metareasoning and strategy discovery which has been modeled with metalevel-MDPs in the past, making it the natural candidate for our adapted problem setting (e.g. [4], [6], [10], [14]).
- We chose the POMDP framework because it was sufficient to model all relevant aspects of the project selection problem, and we didn’t see the need to adopt a more complex framework. Both of the suggested POMDP variants are aimed at settings where object-level rewards are inaccessible for evaluating agents, and the agent is instead rewarded for information gathering directly. In our setting, the object-level reward signal is given through the selection of a project, which does not depend on the agent’s belief state. The advantage of our POMDP formulation is that it doesn’t require the design of additional information rewards or information communication (“commit”) actions. It is plausible that our approach could be reframed in an adapted \rho-MDP by creating a reward function that rewards the value of information gain similarly to how it is computed in MGPS. However, this would require externalizing a large component of our strategy discovery algorithm into the environment, making it unclear how to learn efficiently in this domain and making comparisons between algorithms more difficult.
Question 2:
POMCP combines PO-UCT with Monte-Carlo belief state updates, in which the belief state is approximated with particle filters. As our environment allows us to compute the belief state updates directly, we were able to apply PO-UCT to the problem directly without requiring an approximation to the belief state.
---
Rebuttal Comment 1.1:
Title: RE: Rebuttal by Authors
Comment: I thank the authors for their rebuttal and for the clarifications to all reviewers that answered some of my questions buried elsewhere in the review. I especially better understand how the user-study is novel over [14].
I think your answer to Q2 makes perfect sense for why you'd use PO-UCT as the baseline over POMCP. I was also wondering if there was a reason you didn't consider more recent MCTS algorithms for POMDPs (e.g., DESPOT, HyP-DESPOT, \alpha-DESPOT, PUCT, etc.). While there are still plenty of advances coming from extending POMCP, so it's not a bad place to start, POMCP is also not the state-of-the-art, and I'm unsure whether a more recent algorithm might have done better with a limited computational budget, especially since PO-UCT seems to be converging to values close to your algorithm.
With regards to Q1, I would think that the POMDP-IR wouldn't have a problem modeling object-level rewards since its reward function is a combination of the standard R(s, a) reward model plus information rewards (which is one of the reasons people choose to use this model over the equivalent \rho-POMDP). Modeling the rewards for selecting a project would be done through R(s, a), where the action space would include an action for choosing each project (and that action would have 0 information reward, whereas interacting with the experts would have their own actions that have information rewards but no object-level rewards). Such a decision process might not outperform your approach, but it seems like a natural baseline that would help the reader better evaluate your model, especially since even the simpler POMDP with older POMCP seems to be converging to the same performance.
While I see how your work is novel in your problem setting compared to [4, 5, 10, 14], I'm still not sure how it is novel compared to other uses of information gathering with POMDPs. The Doshi and Roy (2008) paper I mentioned in my review also uses a POMDP to guide metalevel information gathering (to ascertain a user's preference) before receiving an object-level reward for acting on it, it considers information sources of different quality (two ways of gathering information from the user, equivalent to asking different noisy experts), and has a limited budget for information gathering before acting, which are several of the novelties you've highlighted in your rebuttal over past problems in your particular problem setting. Can you further clarify?
---
Reply to Comment 1.1.1:
Comment: Thank you for your further feedback and pointing out how we can incorporate past work as additional baselines. We agree that the problem could be rephrased in such a way, although we still believe that manually designing useful information rewards would externalize an important component of the strategy that MGPS discovers automatically in the full metalevel-MDP formulation.
The main difference we see from Doshi and Roy (2008) is that the metalevel-MDP formulation does not assume symmetry in the model (e.g., some properties of specific projects could be harder to evaluate than others) and therefore solves a more general problem than the permutable POMDP. For example, the metalevel-MDP structure used in [10] makes it necessary to evaluate specific goals first, as they are drawn from a wider distribution of rewards. Due to the wide range of possible belief states in the project selection task, we are also skeptical that the proposed solution algorithm based on value iteration would be feasible to run with our tutoring system, as we require the value of information gathering actions to be efficiently computed online. | null | null | null | null | null | null |
The Target-Charging Technique for Privacy Analysis across Interactive Computations | Accept (poster) | Summary: Drawing inspiration from the Sparse Vector Technique (SVT), the authors introduce a privacy analysis framework called Target Charging Technique (TCT). TCT operates by primarily accounting for the privacy loss incurred through queries that hit pre-specified target sets. This framework relies on the concept of q-targets, which model a structural property of the distribution of privacy mechanism outputs. The authors show that q-targets appear in a variety of use cases, leading to natural applications of TCT. Of particular interest is the application of TCT in the privacy analysis of one-shot k-top selection and conditional releases. Also, the authors theoretically demonstrate that the parameter degradation of TCT improves over known techniques.
Strengths: 1. The problem is rather natural, as well as the proposed algorithm
2. The framework is flexible enough to accommodate interesting applications, e.g., one-shot k-top selection and conditional releases. Furthermore, it can be generalized to multiple targets.
3. The technical part of the paper seems to be solid.
Weaknesses: The reviewer acknowledges that the primary focus of the present paper is to establish the foundations of TCT and prove some of its theoretical properties. However, the absence of numerical experiments poses a significant weakness. Considering the authors' expressed intention for TCT to be adopted in practice, it is important to supplement the theoretical analysis with concrete evidence through the inclusion of some simulations. Such simulations would provide valuable insights into the practical implications and potential benefits of employing TCT in real-world scenarios.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: As highlighted in the Weaknesses Section, the primary limitation of the present paper lies in the absence of numerical experiments that demonstrate the benefits of the proposed framework. It would be advantageous for the reader if the authors could include some illustrative simulations, even if they are relatively simple in nature. This addition would provide valuable insights into the practical advantages of the proposed framework.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors' treatment of the limitations of the proposed method could be improved. Specifically, there is a lack of empirical experimentation to showcase the benefits of TCT.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review!
Response to “weaknesses”: Please see our general response with numerical comparisons and a demonstration of the practicality of TCT and concrete improvements over prior work. We hope this will at least partially address the concern! We plan to incorporate this in the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for adding a numerical comparison against prior work. I slightly raised my score.
---
Reply to Comment 1.1.1:
Comment: Thank you! | Summary: This paper introduces the Target Charging Technique (TCT) privacy analysis framework that focuses on providing better privacy utility trade-off on sensitive dataset with multiple differential private algorithm access. The essential idea is to only pay budget cost to the positive target and negative targets become free. TCT framework can improve several classical DP algorithms such as private Top-K, conditional release, private learning with non-private models etc.
Strengths: This paper studies a very important problem and proposes a very generalized framework that works for several classical algorithms and has many applications. The framework provides a tighter privacy-utility trade-off by considering the amortized overheads over user-desired targets. When the number of hits is large, the $\log(1/\delta)$ privacy charge is amortized to $O(1)$ per target hit.
The intuition behind this paper is well presented in which if the computations do not touch user targets then the overhead should be small.
Weaknesses: The assumption in q-Target definition seems to be stringent. How to ensure the randomized algorithm M satisfy such condition?
While it is a formal paper, it would still benefit from some empirical evidence.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: See weakness.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review!
Response to “weaknesses”:
1. [q-target] We agree that the definition is formal. We do not ensure that this holds (there is always a target with $q=1$, but this is not very interesting); instead we show how to use its existence in privacy analysis. We then demonstrate that there are natural q-targets, in particular “not prior” (all outcomes but a selected one in any private algorithm). Not-prior is our workhorse in multiple applications, but we also demonstrate that there is value in this generality (rather than limiting ourselves to “not prior” targets). For example, in the “BetweenThresholds” application we demonstrate that the $q$-value of the “between” outcome (the target) depends on the width.
2. [empirical evidence] Please see our general response with numerical comparisons and a demonstration of the practicality of TCT and concrete improvements over prior work. We plan to incorporate it in the paper. | Summary: Introduced Target-Charging Technique (TCT) for privacy analysis over interactive private computations
Strengths: 1. Introduced Target-Charging Technique (TCT) for privacy analysis over interactive private computations
Weaknesses: 1. They could add experiment part
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Thank you to the authors for the best presentation I have seen so far. Beyond this, I will also use the comment feature for questions about algorithms or code snippets of the paper.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: 1. I am missing experimental result.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review!
Response to “weaknesses”: Please see our general response with numerical comparisons and a demonstration of the practicality of TCT and concrete improvements over prior work. | Summary: This paper proposes Target Charging Techniques: a method that generalizes sparse vector technique and top-k selection from private candidates. Specifically their goal is to obtain better privacy guarantees in the settings where only a small number of computations are *successful*. One of their main ideas is to consider a prior on the output that has high probability (some sort of a default output), and only pay in privacy if the output of the algorithm is not equal to that prior.
They suggest a definition "$q$-target" for a subset of events $T$ and a private algorithm $M$. Roughly speaking, this definition helps us measure the actual privacy exposure when the algorithm $M$ hits an event in T. For every $1/q$ private accesses we're going to have $1$ hit (we'll have $ M(D)$ in $T$).
Suppose we have a dataset $D$, a set of $\epsilon$-DP algorithms $A_i$, events $T_i$, and a threshold $\tau$. They show that an algorithm that outputs the $A_i(D)$'s until $\tau$ many of them are in the corresponding $T_i$, and then stops, is $(\tau\epsilon/q, \exp(O(\tau)))$-private.
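For intuition, the stopping rule described above might look like the following sketch (purely illustrative, not the authors' code; the function and variable names are invented, and each algorithm is assumed to be $\epsilon$-DP on its own):

```python
def target_charging(dataset, algorithms, targets, tau):
    """Illustrative sketch of the stopping rule: release the output of each
    private algorithm A_i, counting a "target hit" whenever the output lands
    in the corresponding target set T_i, and halt once tau hits have occurred.
    Only the hits are charged against the privacy budget."""
    hits = 0
    outputs = []
    for A, T in zip(algorithms, targets):
        y = A(dataset)        # each A_i is assumed to be epsilon-DP
        outputs.append(y)
        if y in T:            # target hit -- this access is charged
            hits += 1
            if hits >= tau:   # budget of tau hits exhausted: stop
                break
    return outputs
```

With deterministic stand-ins for the algorithms, the wrapper releases every output up to and including the $\tau$-th target hit, then stops.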
In the rest of the paper they apply their technique to multiple use cases.
Some examples of the use cases of this technique include: private learning when we want to release only if the answer has high quality, greedy coverage, and above threshold tests.
Strengths: This paper provides a simple framework for privacy accounting that is applicable to a number of problems in DP, such as top-k selection and the sparse vector technique with individual privacy charging. These problems mostly had tailor-made algorithms and analyses prior to this work. The framework in this paper recovers some of the previously known bounds on these problems and improves on some of them, often through a simpler analysis.
Weaknesses: I expected the authors to include at least one setting where this framework significantly improves upon prior work on a specific application. This paper provides a pipeline that is simple, nice, and will probably be useful. Still, I am not convinced that the applications of this framework provide a significant improvement in terms of results compared to prior work.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Suggestion: add some text around Definition 2.1 to explain what's going on in the definition and to help interpret Lemma 2.2.
Lemma 2.2 talks about $A_i$'s and $\tau_i$'s, but Algorithm 1 only has $A$ and $\tau$.
In line 374: can you give a more complete explanation of what their result is and what guarantees you obtain?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review!
Response to “weaknesses”: Please see our general response regarding numerical comparisons and a demonstration of the practicality of TCT and concrete improvements over prior work.
Response to Questions:
-- Definition 2.1 formalizes the notion of a “target” that was discussed in the introduction. We will add a backward reference to the informal use in the intro.
-- We modified the statements of Lemma 2.2 to remove the explicit index notation. It is the case that new algorithm-target pairs $(A_i,\tau_i)$ are input in each iteration but we made the notation consistent.
-- Line 374: please see the additional details we provide on “SVT with individual privacy charging” in the general response. The idea in [KMS] [20] is to do a more fine-grained privacy analysis of threshold tests on counting queries that keeps a budget for each item (charged only when the item “participates”) instead of one overall budget. This allows us to answer more queries with the same privacy budget. Compared to the algorithm (and analysis) of [20], our Algorithm 5 with TCT analysis is simpler (it adds noise once), more informative within the privacy budget (it can also release actual estimates with above-threshold responses), and vastly more accurate: it can add much smaller noise for the same privacy budget (see general response).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. After having read the review of other reviewers and the authors' rebuttal I have updated my review. I think the improvement over Kaplan-Mansour-Stemmer is interesting. I suggest that the authors present these improvements more clearly in the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your review and comments! We will incorporate your suggestions in our revision. | Rebuttal 1:
Rebuttal: We are grateful to the reviewers for their comments. All reviewers expressed a desire to see some numerical comparisons of the gains of TCT with respect to prior work. In this comment, we place our results in this context and plan to revise our presentation accordingly. Responses to additional comments are provided individually.
– “BetweenThresholds” queries (see Section 2.6):
For a dataset $D = \{x_1,\dots, x_n\}$ and thresholds $0\leq t_l<t_r\leq 1$, we process a sequence of queries specified by functions $f$ from the domain to $[0,1]$. The aim is to compare $\frac{1}{n} \sum_{i=1}^{n} f(x_i)$ to the thresholds and return "below", "between", or "above". We include a plot that demonstrates significant improvement over prior work. The plot reports the number of "target hit" queries ($y$-axis) that we can perform with an overall privacy budget of $\varepsilon=1$, subject to an accuracy requirement of relative error $0.01$ with confidence $1-1/n$, as a function of the dataset size $n$ ($x$-axis). "Target hits" are “between” responses. Note a factor of 95 improvement of TCT over the method of Bun-Steinke-Ullman [3]. We also performed an “optimistic” analysis of the constants in BSU [3], which can be interpreted as an “upper bound” on what is achievable with their approach. TCT gained a factor of 6 improvement even compared with that optimistic bound.
TCT offers additional advantages over [3]: Our algorithm is simpler and natural (add Laplace noise and compare with thresholds). Importantly, it allows for asymptotically narrower gaps between the thresholds compared with [3]: [3] requires a gap of
$|t_r - t_l| \geq \frac{12(\log(1/\varepsilon) + \log(1/\delta))}{\varepsilon}$, whereas TCT allows for gaps as small as $2/\varepsilon$ with almost no decrease in the $q$-value. This is particularly significant in applications where we are interested in a single threshold and incur privacy loss only on queries that are close to that threshold. The narrower gap means far fewer target hits.
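As a reader's aid, the "add Laplace noise once and compare with thresholds" test described above can be sketched as follows (our own illustrative code, not the paper's algorithm verbatim; in particular, the noise scale `2/(eps*n)` is a placeholder rather than the paper's exact calibration):

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling of a centered Laplace random variable.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def between_thresholds(data, f, t_l, t_r, eps):
    """Sketch of a BetweenThresholds query: compute the empirical mean of f,
    add Laplace noise once, and report which side of the gap [t_l, t_r] the
    noisy value falls on. "between" is the target outcome that gets charged."""
    n = len(data)
    noisy = sum(f(x) for x in data) / n + laplace_noise(2.0 / (eps * n))
    if noisy < t_l:
        return "below"
    if noisy > t_r:
        return "above"
    return "between"   # target hit
```

For example, with a mean well inside the gap and negligible noise, the query reports "between"; means below `t_l` or above `t_r` report "below" and "above" respectively.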
– SVT with Individual Privacy Charging (Section 2.8 for details):
We compare TCT with the prior work of Kaplan, Mansour, and Stemmer [20]. We can achieve the same privacy guarantees with much lower levels of noise: [20] uses standard deviation $O(\sqrt{\log(1/\delta)} \log(1/\varepsilon)/\varepsilon)$ whereas TCT uses $O(1/\varepsilon)$. We cannot do an empirical comparison because [20] only provided an asymptotic analysis with “constants” that appear much larger (we have the same constants as without individual charging). We can do a forgiving (to [20]) comparison by ignoring constants: if we take $\varepsilon=0.01$ and $\delta = 10^{-8}$, the noise magnitude with TCT is smaller by a factor of 20.
Additionally, our algorithm is simpler and the privacy analysis is few lines compared with several pages. Moreover, TCT analysis allows for releasing estimated counts with no additional privacy charge.
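The factor-of-20 figure above can be reproduced by direct arithmetic (ignoring constants on both sides, as stated; natural logarithms assumed):

```python
import math

eps, delta = 0.01, 1e-8

# Noise scale in [20] (constants ignored): sqrt(log(1/delta)) * log(1/eps) / eps
kms_scale = math.sqrt(math.log(1 / delta)) * math.log(1 / eps) / eps
# Noise scale with TCT: O(1/eps), again ignoring constants
tct_scale = 1 / eps

ratio = kms_scale / tct_scale   # roughly 20
```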
– General gains of TCT
The two examples (BetweenThresholds and individual charging) are in the “noisy Lipschitz” setting, where there is prior work to compare with. Zooming out a bit, TCT is a broad and flexible framework. Its primary advantage is that it allows for incurring privacy loss only on “target hits” in natural settings outside the “noisy Lipschitz” regime. These include one-shot top-$k$ selections and private tests with general private algorithms (i.e., not necessarily noisy Lipschitz). The relevant prior work here either changes the algorithm or applies DP composition over $n$. TCT's gains are significant, as $k$ can be much smaller than $n$ (the total number of tests, or the total number we select from). Moreover, our analysis does not hide large constants. Lemma 2.2 states simplified privacy bounds for TCT with some asymptotic notation, but in the supplementary material we include exact expressions in the statement of Theorem B.4. We include numerical examples in Remark B.5: TCT analysis generally gains when $n/k > 2.5$ and there are hundreds of target hits. The needed ratio is larger with fewer target hits; for 10 target hits we need $n/k > 12$ to gain over composition. These regimes are in the practical realm. In applications such as PATE (see Section 2.7.1) one aims to incur privacy loss on a small fraction of examples.
Pdf: /pdf/6266b1014a4993bfcaa16d466e1f858636908c8a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper presents a new privacy analysis framework called Target Charging Technique (TCT). Designed for interactive scenarios where sensitive datasets are frequently accessed using differentially private algorithms, TCT offers a unique perspective. Unlike traditional composition schemes where privacy assurances degrade with multiple data accesses, TCT provides an alternative approach. It allows computations that avoid hitting a specified target to occur without significant cost but imposes a minor overhead on computations that do meet their targets. Furthermore, TCT generalizes tools such as the sparse vector technique and top-k selection from private candidates and extends their remarkable privacy enhancement benefits from noisy Lipschitz functions to general private algorithms.
Strengths: 1. This paper is well written. The details of movtivation, methodology, proofs are described clearly.
2. The insight of this paper is quite reasonable and the proposed approach looks novel.
Weaknesses: 1. The related work section is missing.
2. The effectiveness of this approach is not clear to me. What are the main improvements offered by this method?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: It seems that this work is very promising in theory. Have you tested it on some real-world scenarios? Will it become open-source?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review!
Response to Question: Please see our general response regarding numerical comparisons and a demonstration of the practicality of TCT and improvement over prior work. As for open-source: TCT privacy analysis is simple to add and we hope it will be incorporated in DP libraries.
Response to “Weaknesses”:
1. Our work is related to vast literature on SVT and selection and we included many citations and discussions of related work throughout the introduction. The prior work on SVT that can be viewed as a predecessor of Target Charging is the paper by Hardt and Rothblum (FOCS 2010) [18]. All that said, we are very happy to include a related work section if this is the preferred format.
2. Please see our general response where we numerically demonstrate the improvements over prior work.
---
Rebuttal Comment 1.1:
Title: Reply to Authors
Comment: Thanks for your rebuttal. My concerns have been addressed and I raise my rating to 6.
---
Reply to Comment 1.1.1:
Comment: Thank you! | Summary: The submission studies a privacy analysis framework for interactive computations. The main contribution is extending the conventional differential privacy analysis framework, namely the sparse vector technique, which counts the privacy cost only for positive responses. In particular, the proposed framework, namely the target charging technique, generalizes from the specific threshold-based constraints on a positive response to general conditions that are not necessarily thresholds. The overall analyses are with respect to $(\epsilon,\delta)$-DP.
Strengths: The paper is well-organized. I think the problem of privacy analysis for interactive computing is an essential topic and the extension/generalization made in this work is significant.
I think that the privacy analysis shown in Lemma 2.2 for a general system is important in DP literature.
The list of application scenarios of the proposed framework is well-categorized and shows the applicability of the proposed technique in practice.
Weaknesses: I think the results and definitions are hard to read. In particular, it is hard to see how one can determine the $q$ value of a $q$-target. It is also not straightforward to understand the notPrior target.
I know this paper is a theory paper, but it would be good to show some experimental results applying the main results, which would make the paper more sound.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1) As written in the weaknesses part, I cannot see how the q value is attained from the target setting in practice. Could you provide a (toy) example for the q-target?
2) What happens if the $\epsilon$ is not too small (e.g. $\epsilon=10$ in Lemma 2.3)? It seems the resulting $\epsilon$ would be very large, which is not acceptable in general. Is there any way to compensate for it?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review!
Answers:
As for experimental results, see our general response!
Question 1: $q$-targets: We list examples of useful $q$-targets. For private testing, the $q$-target can be a selected one of the true/false responses. For top-$k$ selection, the $q$-targets map to being in the top-$k$. For “between thresholds” tests, the $q$-target is simply a “between” outcome. The only place where the $q$-target is “manufactured” is in our “boundary wrapper” method.
$q$-values are derived for each of the cases. Nearly all of our use cases directly map to the “not prior” q-value that (for small $\varepsilon$) is close to $1/2$. The “between thresholds” application includes an exact calculation of $q$ as a function of the “gap” between the thresholds.
Question 2: Note that the value of $\varepsilon$ in the context of Lemma 2.3 is that of a single call to a DP algorithm on our dataset. This value is typically very small ($\varepsilon \ll 1$) and therefore $q \approx 1/2$. In the TCT setting, we assume that there are multiple applications of private algorithms to our dataset, and tens (or hundreds) of them are “target hits.” The overall privacy cost is a composition over target hits. We can set our overall privacy budget to have $\varepsilon=1$ or, if one wishes, $\varepsilon=10$. But regardless, the privacy parameter values of each call are small.
---
Rebuttal Comment 1.1:
Comment: Thank you for the reply and the rebuttal. Those help me have a better understanding of the contribution.
I raise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you! | null | null | null | null |
A Theory of Multimodal Learning | Accept (poster) | Summary: This paper aims to provide a theoretical framework for multimodal learning that explains why multimodal learning is sometimes better than unimodal learning, even when the model is only applied to unimodal tasks. To do this, the authors derive generalization bounds on the excess population risk of multimodal learning via Gaussian averages, and show that multimodal learning enjoys a superior generalization bound. They also formulate the problem in terms of connection and heterogeneity, and show when multimodal is better than unimodal under their evaluation metrics.
Strengths: 1. Motivation is strong. Theoretical understanding of multimodal learning is an important yet less investigated problem.
2. The theoretical investigation is interesting and relatively novel to me. It is particularly interesting to characterize/formulate the multimodal problem into connection and heterogeneity.
Weaknesses: 1. The authors nicely summarized the weaknesses in their limitations section; I think the points below, although pointed out in the final section, are the major non-negligible ones: the work needs more natural assumptions (it currently requires too many hypotheses) and more realistic examples.
2. Related to the above point -- I think the results would be much stronger if they could be backed up by some toy experiments with synthetic datasets to demonstrate how the proofs are useful in real-world cases. The current theoretical results are nice, but that content is not sufficient to be published on its own at NeurIPS.
3. The presentation/writing of the paper could be further improved (by a lot). e.g. Line 17 ’Diamond Sutra’; line 224 "The story is not yet complete"; line 245 "Let’s".
Overall, I think this is an interesting work that needs further iterations/improvements to be able to be published at NeurIPS.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: NA
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the constructive feedback and the thoughtful questions posed. We'll address each of your concerns below.
**More natural assumptions:** our work is devoted to providing a general theory of multi-modal learning, which inevitably comes at the cost of some loss in practicality. We feel the Lipschitz assumption is the major one, a point other reviewers also agree on. In general, we believe it is necessary to move beyond the current hypothesis-composition framework to relax the Lipschitz assumption, since this assumption is also known to be a fundamental limitation of representation-learning generalization bounds.
Within our existing framework, assuming zero training loss of $\mathcal{G}$ on $X'$ ($\mathcal{R}(\mathcal{G},X')=0$, which is natural in practice) can partially mitigate this problem. The $L$-dependent term is then $O(\frac{L}{\sqrt{m}})$, where $m$, the size of the unlabeled dataset, is large enough to lighten the effect of $L$ (see line 444). Moreover, if we consider the supervised setting, assuming zero training loss immediately reduces the $L$-dependent term to zero. Future explorations could focus on modifying the analysis for this setting.
**Toy experiments:** in our opinion, the benefit of toy experiments with synthetic data is somewhat marginal. Our theory explains an existing phenomenon rather than proposing a better algorithm, so re-validating the theory seems less significant to us: toy experiments can only serve as a sanity check on what our theory has rigorously proven. On the other hand, large-scale experiments are beyond the scope of the current work, and the phenomenon is already widely observed in large-scale practice of multi-modal learning. For these reasons, we believe experiments are more of a complement than a necessity to our theory.
Is there a specific experimental setup that the reviewer would like to see?
**Theory is not sufficient:** we believe the theoretical results are significant enough on their own. Here are a few points also raised by you and other reviewers: the problem studied is important yet under-investigated, and has become even more urgent due to recent empirical successes; the proposed theory is intuitive and original; the technical contributions are solid.
**Writing:** thank you for catching some typos we missed. We will correct them and make a pass of the paper to improve writing.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for the detailed response. I read the other reviewers' comments as well and agreed that the theory, as it is rigorously proven, might be sufficient by itself. The experimental setup that I would like to see as a reader and as a reviewer is also only toy experiments, which might be compensated for by the simple examples at the beginning, as pointed out by another reviewer.
However, as I personally am not familiar enough with the assumptions of other works, I will leave it for the AC to decide (1) whether the assumptions used in the paper are natural/general enough that the theoretical results are of sufficient value; (2) whether, given that the paper considers multimodal generalization properties instead of multimodal optimization, the corresponding results are of sufficient value.
Thus I will increase my score from 4 to 5 and decrease my confidence score from 3 to 2.
---
Reply to Comment 1.1.1:
Comment: Thanks for your feedback! We will add an experiment on the example used in Section 2.1 as you suggested (it's very simple and will be easy to implement). | Summary: This paper introduces a theoretical framework aimed at elucidating the phenomenon wherein a multimodal neural network, trained on multimodal data, can exhibit strong performance on unimodal data. The framework incorporates Gaussian averaging and operates within a semi-supervised context. The findings demonstrate that under the conditions of heterogeneity and connectivity, it is possible to identify a bridging function $\hat{g}$ and a projection function $\hat{f}$ that enable comparable generalization error on unimodal data, as if the network were provided with multimodal data.
Strengths: Clear presentation, nice illustrative sinusoidal function, solid technical contribution.
Weaknesses: Some clarifications are needed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Overall, I believe the paper is of good quality and deserves acceptance. The questions I'm about to raise are intended for discussion purposes and should be considered by the authors. Lastly, I have some suggestions for improving the writing, particularly in areas that could benefit from greater clarity, especially for readers encountering the content for the first time.
1) I recommend that the authors rephrase the explanations regarding Equation (1). Based on Section 3 with T=1, my understanding is that we solve for $\hat{f}_1$ and $\hat{g}$ using ERM, substituting them back into $L(\hat{g},\hat{f}_1)$ as shown between lines 192 and 193, which is identical to Equation (1). In other words, we learn $\hat{f}_1$ and $\hat{g}$ on multimodal data. However, the current wording in line 137 states that we are attempting to minimize Equation (1) with respect to $\hat{f}$ and $\hat{g}$, which may confuse readers. When I first encountered Equation (1), I assumed that the second term had no impact because it does not depend on $\hat{f}$ and $\hat{g}$, leading me to believe that Equation (1) simplifies to solving for $\hat{f}$ and $\hat{g}$ based solely on $x$.
2) If my understanding is correct, I suggest explicitly writing out $s_{t1}=(x_{t1},y_{t1},z_{t1})$ in line 186.
3) The learning framework presented in this section assumes learning the bridging function $\hat{g}$ on a set of unlabeled data and the projection functions $\hat{f}_t$ on the labeled dataset. While this is certainly a valid learning framework, it seems to me that the most common one might be a fully supervised setting where paired data $(x_i, y_i, z_i)$ are given.
4) In line 8, the authors state that a multimodal network trained on a multimodal dataset can outperform a unimodal network even on unimodal tasks, and this paper aims to demonstrate that. However, in my understanding, this paper actually shows that a multimodal network can perform comparably on a unimodal dataset **as if** multimodal data is provided, as indicated by Equation (1). These two statements differ.
5) The model considered by the authors takes the form of $f(x,y\approx g(x))$. It seems to deviate from what is typically used in the literature. A common form found in the literature is $f(g(x),h(y))$, where $g$ and $h$ are two encoders, and $f$ could represent early/middle/late fusion.
6) I would like to bring an important topic to the attention of the authors. There have been several explorations on multimodal knowledge distillation. Under this topic, it would be highly interesting to theoretically analyze the performance of multimodal (or unimodal) teachers (or students). For further reference, please see the following paper: https://arxiv.org/abs/2206.06487. It would be fascinating to extend this work to knowledge distillation settings as a future area of research.
7) I would suggest the authors include literature on multi-view learning, as Ref [16] did, because it is closely related to multimodal learning; some might even argue their theories are essentially the same.
8) It would be good to include numerical experiments to support the theorems (or at least the claim in lines 258-260).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors clearly stated the limitation of the work in the last section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your thorough feedback and constructive suggestions! We will address each of your questions individually and please feel free to let us know if you have further concerns.
**Q1:** thank you for highlighting this. Your understanding is accurate, and we will revise our language to make this clearer and prevent potential confusion.
**Q2:** we agree with your suggestion and will incorporate this change in our revised manuscript.
**Q3:** we agree that a fully supervised setting is more common; however, we decided to explore the more general semi-supervised setting for a few reasons: first, unlabeled data, when available and utilized, can provide sharper bounds; second, in practice, labeled multi-modal data is rarer than unlabeled data, and the latter is crucial to the empirical success of recent large-scale multi-modal models; lastly, the supervised setting can be subsumed by the semi-supervised setting by partitioning the data.
**Q4:** you have precisely interpreted what Theorem 4 states. However, in line 8, we were also implicitly taking the separation between multi-modal and uni-modal (Theorem 7) into account. We will make the statement more precise.
**Q5:** we focus on a general theory, thus we only consider the simplest case $f(x,y)$. Notice that the encoder form can be subsumed by the general form, by setting the hypothesis class to include $f(g,h)$. This opens up an interesting research question: under what assumptions (for example, when the data is low-rank) can the encoder form lead to better generalization bounds?
**Q6:** we greatly appreciate your suggestion. We will delve into the referenced materials and investigate the potential application of our theory in knowledge distillation.
**Q7:** we will expand our discussion on multi-view learning, and make clear distinctions where necessary as you suggested.
**Q8:** we feel re-validating the theorems with toy experiments is somewhat marginal, as this work provides a theoretical explanation rather than a better algorithm. On the other hand, large-scale experiments are beyond the scope of the current work. We agree experiments might be more useful for the principle part (lines 258-260), but for practical purposes a modification of the algorithm is needed, and we leave it to future study.
---
Rebuttal Comment 1.1:
Title: Thanks for the clarification
Comment: I value the clarifications provided by the authors and find no more questions. To recap, this paper adds valuable contributions to the theoretical realm of multimodal learning. Considering potential subjective preferences, I am inclined to believe that the mathematical rigor alone warrants its publication.
---
Reply to Comment 1.1.1:
Comment: Thanks for your evaluation and support of our work! All the clarifications will be reflected in this paper accordingly. | Summary: This study proposes a new theoretical foundation for multimodal learning. In particular, regarding the phenomenon that models trained in multiple modalities perform better on single-modality tasks than fine-tuned single-modality models, this paper proposes that multimodal learning is a composition of connection and label predictor, and shows that generalization performance is better than single-modality learning when both connection and heterogeneity are present.
Strengths: - This paper is well-organized and very easy to understand.
- Although I am not an expert in learning theory, I have checked throughout and found the theory to be valid. In particular, it is interesting and novel to compare the generalization bound with the usual unimodal learning case by setting multimodal learning as a composition of connection and label predictor.
Weaknesses: - In this study, multimodal learning is framed as prediction from unimodal data, consisting of predicting one modality from another via a connection and then predicting the label from them. However, this setting may appear somewhat unnatural from the perspective of standard multimodal learning. It would be valuable to have a more in-depth discussion on this aspect.
- In multimodal learning, it is common to consider more than two modalities. It would be even more insightful to explore the potential implications and extensions of this theory when applied to settings involving multiple modalities.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I would appreciate it if the authors could respond to the above-mentioned weaknesses.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors explain the limitations of this theory very clearly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your constructive feedback and inquiries!
**Unnatural setting:** we agree that the mapping + predictor learning process we described may not perfectly align with practical multi-modal learning scenarios. However, the goal of our work is to propose a general theoretical framework, so clarity comes with a degree of departure from real-world applicability. The perspective we offer on multi-modal learning is one of many, and we believe that it provides an intuitive way to understand the mechanisms underlying the success of multi-modal learning.
In addition, beyond the theorem statement, our theory may have implications for more general scenarios: the process of learning connections between modalities could be occurring implicitly in empirical multi-modal learning. This poses an intriguing research question: can we establish similar generalization bounds for a more practical "one-pass" algorithm, which does not require learning connections and predictors separately?
**Multiple modalities:** this work is focused on the case of two modalities for the sake of clarity, but our theoretical framework can be extended to accommodate multiple modalities in a similar way. In this case, the ERM algorithm would learn a mapping from a subset of modalities to all modalities, which involves only one hierarchy as in the two-modality case. Therefore our analysis naturally carries over to this new setting. In particular, the hypothesis class $\mathcal{G}$ will include mappings from the subset to all modalities.
If a more complicated relationship graph between multiple modalities with more hierarchies is considered and the data don't contain full modalities, we believe new tools beyond the current representation learning framework are required, potentially from the field of graph theory. | Summary: The paper establishes theoretical bounds for generalization in multimodal learning, where functions mapping between two modalities and to the label are learned. The authors demonstrate that multimodal learning achieves better generalization by decoupling the learning of hypotheses and provide insights regarding the connection and heterogeneity between modalities.
Strengths: 1. The paper aims at addressing an important topic and offers theoretical insights into the superiority of multimodal learning over unimodal learning, particularly regarding the decoupling of hypothesis learning.
2. The use of simple examples at the beginning enhances the readability of the paper and helps illustrate certain aspects of the main ideas.
Weaknesses: 1. The motivating example appears to be restrictive, as it relies on F containing only the ground-truth mapping from modality Y to label Z, and by construction the map from Y to Z is exactly the only `hard part' of the learning. It might be slightly better to use the harder example in Remark 2. How would the comparison between multimodal and unimodal learning look in this case?
2. Theorem 7 only shows the existence of cases where multimodal learning surpasses unimodal learning, without delving into a deeper discussion of the conditions required for such cases. As a result, it remains unclear whether these instances rely solely on trivial constructions, such as letting F be a class containing a single function that is exactly the true map between Y and Z, with the learning class G (the map between X and Y) constructed as a trivial task. More importantly, from the proof of Theorem 7, this is exactly how the existence is proved, i.e., by taking the motivating example in Section 2.1. Such constructions imply that (1) learning the mapping between modalities is exceedingly simple, and (2) one can directly determine the relationship between a modality and the labels without the need for learning, both of which are impossible in practice. This greatly limits the paper's ability to capture the underlying mechanisms behind the success of multimodal learning. One direction for improvement could be to seek more realistic instances.
3. The authors claim that heterogeneity (e.g., lines 167, 255) is one of the two factors leading to the superiority of multimodal learning. However, heterogeneity appears to be more of a consequence than the underlying reason. The statement in line 167 explains heterogeneity as 'multimodal data being easier to learn than unimodal data'. Furthermore, in Definition 6, heterogeneity is directly defined as the difference between the population risks of learning with the single-modal and multimodal approaches. This raises my concerns about circular reasoning within the argument. As a result, Section 4 and the arguments regarding heterogeneity currently lack clarity and informativeness.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: In addition to the questions raised in Weaknesses, I am also curious if the framework presented in the paper can offer insights or potentially be extended to provide analysis for other methods, such as CLIP, where the objective is to learn shared representations for both modalities instead of learning a map from one modality to another.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have adequately discussed limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the constructive feedback and the thoughtful questions posed. We'll address each of your concerns below.
**The choice of the example:** we discussed the simpler example purely for the sake of clarity, and the harder example in Remark 2 is strictly stronger because the separations between Gaussian averages hold for **both** modalities simultaneously. A slight modification to the construction of Remark 2 can fully recover such separations for both modalities as in the simpler example. Details are given below.
Any potential data point $(x,y,z)$ is now generated by three parameters $c\in (0,1), \theta_1 \in (1,2),\theta_2 \in (-2,-1)$, under the constraint that $\theta_1+\theta_2\ne 0$, and $(x,y,z)$ is of the form $(c\theta_1,c\theta_2,c(\theta_1+\theta_2))$. For the learning problem, the parameters $\theta_1,\theta_2$ are pre-fixed and unknown, and the data is then generated from some distribution over $c$. The hypothesis classes are now $\mathcal{G}=\{g(x)=\theta x, \theta \in (-1,0)\cup(0,1)\}$ and $\mathcal{F}=\{f(x)=\sin (1/x)\}$.
For any uni-modal data $x=c\theta_1$, the range of ratio $(x+y)/x$ is $(1-2/\theta_1,0)\cup(0,1-1/\theta_1)$. This range is a subset of $(-1,0)\cup(0,1)$ and we have that $\max(|1-2/\theta_1|,|1-1/\theta_1|)\ge (2/\theta_1-1+1-1/\theta_1)/2\ge 1/4$. As a result, $G(\mathcal{F} \circ \mathcal{G}(X))$ in this case is at least $1/4$ of that in the simpler example because the flip of sign doesn't affect Gaussian averages, thus the term remains $\Omega(n)$. On the other hand, we have that $\max(|1-2/\theta_1|,|1-1/\theta_1|)\le 1$, so $G(\mathcal{G}(X))=O(\sqrt{n})$ holds still. The same argument holds for $\mathcal{Y}$ as well since it mirrors $\mathcal{X}$.
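To sanity-check the ranges claimed above, here is a minimal numeric sketch (our own illustration, not part of the rebuttal's formal argument): it samples the parameters of the construction and verifies that the ratio $(x+y)/x$ stays in $(1-2/\theta_1,\,1-1/\theta_1)\subset(-1,0)\cup(0,1)$ and that $\max(|1-2/\theta_1|,|1-1/\theta_1|)\ge 1/4$.

```python
# Monte-Carlo check of the ratio-range claims in the Remark 2 construction.
# theta1 in (1, 2), theta2 in (-2, -1); the ratio (x + y)/x = 1 + theta2/theta1
# is independent of the scale parameter c.
import random

random.seed(0)
violations = 0
for _ in range(100_000):
    theta1 = random.uniform(1.0, 2.0)
    theta2 = random.uniform(-2.0, -1.0)
    c = random.uniform(0.01, 1.0)
    x, y = c * theta1, c * theta2
    ratio = (x + y) / x
    lo, hi = 1 - 2 / theta1, 1 - 1 / theta1
    in_claimed_range = lo <= ratio <= hi and -1 < ratio < 1
    big_enough = max(abs(lo), abs(hi)) >= 0.25
    if not (in_claimed_range and big_enough):
        violations += 1
print(violations)  # 0
```

Over these samples the check reports no violations; in fact $\max(|1-2/\theta_1|,|1-1/\theta_1|)$ is minimized at $\theta_1=3/2$, where both terms equal $1/3 > 1/4$, so the $1/4$ bound holds with room to spare.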
**Lower bound conditions:** in our understanding, your question is whether we can go beyond worst-case constructions and derive a general condition for such a separation. While our current construction seems simple, it effectively serves its purpose in illustrating a lower bound. Deriving general instance-dependent separation bounds seems challenging, since this task is not much easier than deriving closed-form estimations of the generalization error. Nevertheless, we agree that more realistic examples could be more insightful. A potential method is to trade some flexibility from $\mathcal{G}$ to $\mathcal{F}$. We could also modify the example in Appendix D, which is used for a similar purpose, but this example is less intuitive than the one in the main text and has other restrictions.
**The role of heterogeneity:** we acknowledge your concern about the role of heterogeneity and whether the argument is circular. Let's briefly clarify the logic behind it. The superiority of the multi-modal model over the uni-modal model on uni-modal tasks consists of two parts: the multi-modal model is comparable to the uni-modal model as if multi-modal data were provided (connection), and multi-modal data is easier to learn than uni-modal data by **any** uni-modal ERM algorithm (heterogeneity). Therefore, the superiority is the consequence of the two factors, connection and heterogeneity.
Why isn't heterogeneity a consequence of such superiority? Because heterogeneity has stronger requirements: from Definition 6, heterogeneity is defined as the gap for **any** uni-modal ERM algorithm, while for a particular algorithm exhibiting such superiority, the hypothesis class in use doesn't exclude the possibility that there exist better hypothesis classes with a smaller gap. To sum up, connection + heterogeneity is a sufficient condition for superiority, while superiority alone doesn't necessarily imply heterogeneity, and Theorem 7 shows the existence of a scenario with connection + heterogeneity. We will clarify and emphasize this point to avoid potential confusion.
**Extension to CLIP:** we appreciate your suggestion to explore our theory's applicability to CLIP. There are similarities between our work and the approach CLIP employs, although a more nuanced analysis is needed due to CLIP's use of cosine similarity as a specific measure. We'll delve deeper into this in our future work.
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: Thanks for your response! I'd like to better grasp the logic behind the role of heterogeneity. From my understanding so far, the notion of the "heterogeneity gap" assumes that the "population risk of learning a single modality is notably higher than that of learning both modalities, for any algorithm". In your argument, it seems you wanted to establish that this would lead to the conclusion that "population risk of learning a single modality is notably higher than that of learning both modalities, for a particular algorithm". I'm curious why the transition from the heterogeneity gap to this superiority isn't trivial, as the conclusion appears to be inherently contained within the definition of the heterogeneity gap.
I'm also interested in understanding how the joint impact of heterogeneity and connection contributes to this superiority. In Theorem 4, it appears that heterogeneity alone is sufficient.
---
Reply to Comment 1.1.1:
Comment: Thanks for your questions. Let's start with a brief recap on the problem setting and definitions.
**The learning task:** given uni-modal population data from $\mathcal{X}$, predict the true label $z$.
**The goal of this work:** to show for the above task, a model trained on multi-modal data $(X,Y)$ outperforms any model trained on single-modal data $X$, on the same uni-modal population data from $\mathcal{X}$.
**Heterogeneity:** there is a model with multi-modal data from $(\mathcal{X,Y})$, which outperforms any model with uni-modal data from $\mathcal{X}$.
**Connection:** a mapping between modalities $\mathcal{X,Y}$ is learnable.
For your first question, in the definition of heterogeneity, the multi-modal algorithm is given multi-modal data $(X,Y)$ during both training and testing, while the task only provides the algorithm with uni-modal data in the testing phase. Therefore, heterogeneity alone can't fulfill the goal because it has a stronger requirement.
For your second question, a good connection implies that an algorithm trained on multi-modal data $(X,Y)$, when faced with uni-modal population data from $\mathcal{X}$, has performance comparable to the case where multi-modal population data from $(\mathcal{X,Y})$ is given. Combined with heterogeneity, this fulfills the goal. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors present an interesting theoretical framework that allows them to estimate a generalisation bound for multimodal learning.
The main result consists in proving that the bound for the multimodal case is superior to the unimodal one up to a factor $O(\sqrt{n})$ that depends on the sample size of the dataset considered.
Strengths: The paper is pleasant to read and the work is refreshingly original. Moreover, even if at an abstract level, it builds a theoretical framework for multimodal learning that was long missing in the community, especially given the importance and the relevance that multimodal models are acquiring in recent times.
Weaknesses: As the authors also point out, the results presented are rather abstract, and a more fine-grained analysis of specific multi-modal learning algorithms would be of broad interest.
Another note concerns notation: some key terms and notations should be briefly explained (the appendix is sufficient) to allow less experienced readers to follow the paper more easily.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Some questions that I would like to have feedback from the authors on are:
- How would you proceed to relax the condition that the predictor class contains only Lipschitz functions?
- How would you generalise the results presented to the case of considering more than two modalities? Namely having $m$ modalities with multiple mapping functions linking single and combination of modalities. Do you see any specific limitation in the current framework?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: First of all, the authors did an amazing job in highlighting the limitations of their work in their manuscript.
Here I simply highlight the major ones:
- The Lipschitz assumption on $\mathcal{F}$ (the predictor class) is restrictive, as it prevents a direct extension to DNNs
- Lack of a concrete analysis for specific multimodal learning algorithms
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thoughtful feedback and constructive suggestions!
**Fine-grained analysis:** the scope of this current study is primarily to establish a general theoretical framework for multi-modal learning. We will certainly delve into specific algorithms based on our framework in future works.
**Notation usage:** we will clarify the key terms and notations to ensure they are clearly explained.
**Lipschitz assumption:** in general, we believe it is necessary to move beyond the current hypothesis-composition framework to relax the Lipschitz assumption, since this assumption is also known to be a fundamental limitation of representation-learning generalization bounds.
Within our existing framework, assuming zero training loss of $\mathcal{G}$ on $X'$ ($\mathcal{R}(\mathcal{G},X')=0$, which is natural in practice) can partially mitigate this problem. The $L$-dependent term is then $O(\frac{L}{\sqrt{m}})$, where $m$, the size of the unlabeled dataset, is large enough to lighten the effect of $L$ (see line 444). Moreover, if we consider the supervised setting, assuming zero training loss immediately reduces the $L$-dependent term to zero. Future explorations could focus on modifying the analysis for this setting.
**Multiple modalities:** this work is focused on the case of two modalities for the sake of clarity, but our theoretical framework can be extended to accommodate multiple modalities in a similar way. In this case, the ERM algorithm would learn a mapping from a subset of modalities to all modalities, which involves only one hierarchy as in the two-modality case. Therefore our analysis naturally carries over to this new setting. In particular, the hypothesis class $\mathcal{G}$ will include mappings from the subset to all modalities.
If a more complicated relationship graph between multiple modalities with more hierarchies is considered and the data don't contain full modalities, we believe new tools beyond the current representation learning framework are required, potentially from the field of graph theory.
---
Rebuttal Comment 1.1:
Title: Thanks for the clarifications and the response
Comment: The authors properly addressed my questions and I confirm my evaluation of a strong accept.
---
Reply to Comment 1.1.1:
Comment: Thanks for your evaluation and support of our work! | null | null | null | null | null | null |
Unbiased Watermark for Large Language Models | Reject | Summary: This study explores watermarking large language models without reducing output quality. It introduces "unbiased watermarking" which avoids trade-offs in prior work. Two novel techniques - $\delta$-reweight and $\gamma$-reweight - are proposed along with an improved likelihood ratio test for detection. Risks of large language models and how unbiased watermarking enables responsible AI are discussed.
Strengths: 1. Introduces unbiased watermarking that maintains output quality. Prior work showed a trade-off, but an unbiased watermark avoids this.
2. Proposes novel $\delta$-reweight and $\gamma$-reweight techniques that preserve output quality in experiments.
3. Develops an improved detection method with a proven upper error bound, improving detection reliability.
4. Concrete demonstration of the efficacy of watermarking techniques in maintaining the utility of LLMs for downstream tasks.
Weaknesses: 1. The paper lacks a thorough examination of the efficacy of the watermark detection method. It would strengthen the findings to provide more details on factors like detection accuracy, robustness to interference, and computational efficiency of the detection approach.
2. The experiments only test the watermarking techniques on BART language models, without evaluating other popular LLMs like GPT models.
3. Conducting additional experiments using LLMs for other natural language tasks, beyond text summarization and machine translation studied in the paper, would provide a wider test case set and bolster the claims regarding the output quality preservation of the watermarking techniques.
4. The paper does not discuss the resilience of the proposed watermarking methods against potential adversarial attacks or interference attempts.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: What kinds of decoding methods are suitable for this watermarking method?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: See the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comprehensive review and valuable feedback on our paper.
We appreciate your recognition of our novel introduction of "unbiased watermarking" and the importance of ensuring output quality. We're glad you've noted the improvements we've offered over previous techniques, as well as our development of an enhanced detection mechanism.
> The paper lacks a thorough examination of the efficacy of the watermark detection method. It would strengthen the findings to provide more details on factors like detection accuracy
We understand your concerns about the lack of explicit discussion on detection accuracy. While we indirectly provided information on detection accuracy through the reported score per token, we acknowledge that the data can be represented more intuitively.
To address your feedback, we plan to include additional visualizations such as ROC curves and provide computations for the AUC, which will more directly illustrate the detection accuracy. If you have any specific suggestions on other ways we can more effectively demonstrate this information, we would be glad to consider incorporating them into the paper.
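To make concrete what such an addition would look like, AUC can be computed directly from per-sample detection scores via the rank-based (Mann-Whitney) formulation; the scores below are made-up placeholders rather than values from our experiments:

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC via the Mann-Whitney statistic: the probability that a
    randomly chosen watermarked sample scores higher than a
    non-watermarked one (ties count as 0.5)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Made-up detection scores (higher = more watermark evidence).
watermarked = [3.1, 2.4, 4.0, 1.9]
unwatermarked = [0.2, 1.1, -0.5, 2.0]
auc = auc_from_scores(watermarked, unwatermarked)  # 15/16 = 0.9375
```

An ROC curve is then obtained by sweeping the detection threshold over the pooled scores.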
> robustness to interference
> The paper does not discuss the resilience of the proposed watermarking methods against potential adversarial attacks or interference attempts.
While our main focus in this paper was on introducing and establishing the concept of unbiasedness, we recognize your point about the absence of robustness discussions.
We have now added new experiments to test the robustness of the two unbiased watermarking methods in this paper and the “Soft Red List” method. We truncate the generated text length to 16, and approximately an $\epsilon$ fraction of the output is changed to random tokens. For 512 independent samples, we compute the AUC of different watermarking detection methods.
All methods tested here are prone to some level of degradation in performance with increased perturbations. As mentioned in Section G. Limitations, we acknowledge the significance of robustness of watermarking, but we believe that unbiasedness and robustness are two separate research directions. There is existing literature, such as the work by Kirchenbauer et al. 2023 [2], Krishna et al. 2023 [3], Sadasivan et al. [4], which is dedicated to the topic of robustness. A complete response to the watermarking robustness issue requires establishing a threat model, gauging the intensity of attacks, and experimentally verifying watermark hyperparameters. We hope the reviewer could allow us to keep the focus of this paper on introducing and validating the concept of unbiasedness, and leave robustness for other specialized research works.
> computational efficiency
We can provide further details on the efficiency of our watermarking methods. The computational complexity of $\delta$-reweight and $\gamma$-reweight is $O(BS|\Sigma|)$, where $S$ is the sequence length, $B$ is the batch size, and $|\Sigma|$ is the size of the vocabulary. For the maximin variant of the LLR score, the complexity is $\widetilde{O}(BS|\Sigma|)$. Compared to the internal computation of large language models, the computational overhead of the watermark layer is insignificant.
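As an illustrative sketch of where the $O(BS|\Sigma|)$ count comes from (the function below merely counts operations; it is not our actual implementation):

```python
def watermark_layer_cost(batch_size, seq_len, vocab_size):
    """Operation count for a per-token reweighting layer: for each of
    the B*S generated tokens, the layer transforms one probability
    vector of length |Sigma|, giving O(B * S * |Sigma|) total work."""
    ops = 0
    for _ in range(batch_size):      # B sequences in the batch
        for _ in range(seq_len):     # S decoding steps per sequence
            ops += vocab_size        # one pass over the vocabulary
    return ops

cost = watermark_layer_cost(2, 16, 50000)  # 2 * 16 * 50000 = 1600000
```

This is linear in the same quantities as a single softmax over the logits, which is why the overhead is negligible next to the model's forward passes.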
> 2.The experiments only test the watermarking techniques on BART language models, without evaluating other popular LLMs like GPT models.
We've broadened our experiments to include other popular models such as T5 for translation and LLaMA 2 for summarization and poem generation. Although GPT-3.5 and GPT-4 are popular, they are proprietary and we don't have access to their full output distributions, so we are unable to watermark them.
We want to emphasize that the methods proposed in this paper are general and applicable to any auto-regressive language model, not just specific ones such as BART, as our unbiased property is established on the general properties of auto-regressive language models.
> 3.Conducting additional experiments using LLMs for other natural language tasks, beyond text summarization and machine translation studied in the paper, would provide a wider test case set and bolster the claims regarding the output quality preservation of the watermarking techniques.
We've added a new task of generating poetry using LLaMA 2 and evaluated the output using perplexity.
If you have specific downstream tasks in mind that you would like to see evaluated, we would welcome your suggestions and are open to incorporating these into our experiments.
> What kinds of decoding methods are suitable for this watermarking method?
Deciding on a suitable decoding method is still an open question, largely due to the trade-offs between likelihood, diversity, and informativeness [5].
However, focusing on whether the quality of the output is maintained when combining a decoding method with unbiased watermarking method, we find the following:
- For decoding methods that sequentially modify the probability distribution of each token, e.g. **Top-k Sampling** and **Nucleus Sampling**, we can apply the decoding method first to adjust the probabilities, followed by the watermarking method. This ensures the output quality is equivalent to the use of the decoding method alone.
- If the decoding method respects the original distribution through sampling (e.g. **Multinomial sampling**, **Arithmetic Sampling**[6]), we can apply the watermarking method first, then use the decoding method. This also maintains the same output quality.
- However, for deterministic decoding methods like **Beam Search**, we cannot guarantee that combining them with our watermarking method will preserve the output quality. For example, if $\delta$-reweight and Beam Search are combined, Beam Search would make no difference, as $\delta$-reweight has already collapsed the output distribution to a $\delta$ distribution.
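As a sketch of the first two orderings above (in illustrative Python, where `top_k_filter`, `watermark_reweight`, and `sample` are hypothetical stand-ins; in particular `watermark_reweight` is an identity placeholder, not our actual $\delta$/$\gamma$-reweight):

```python
import random

def top_k_filter(probs, k):
    """Keep the k most likely tokens and renormalize (stand-in for top-k decoding)."""
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    filtered = [p if i in top else 0.0 for i, p in enumerate(probs)]
    z = sum(filtered)
    return [p / z for p in filtered]

def watermark_reweight(probs, seed):
    """Placeholder for an unbiased reweighting keyed by the context.
    Identity here; the real delta/gamma-reweight is out of scope."""
    return probs

def sample(probs, rng):
    """Draw one token index from the (possibly reweighted) distribution."""
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

rng = random.Random(0)
probs = [0.5, 0.3, 0.15, 0.05]

# Distribution-modifying decoders (top-k, nucleus): decode first, then watermark.
token = sample(watermark_reweight(top_k_filter(probs, k=2), seed=42), rng)

# Distribution-respecting samplers (multinomial): watermark first, then sample.
token2 = sample(watermark_reweight(probs, seed=42), rng)
```

The point of the ordering is that the watermark always acts on the distribution that would otherwise have been sampled from, so the marginal output distribution is unchanged.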
Once again, we appreciate your thoughtful review and feedback on our paper. Please let us know if you have any additional questions or suggestions.
---
Rebuttal Comment 1.1:
Title: Additional Response from Authors
Comment: Robustness experiment:
| $\epsilon$ | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 |
|-|-|-|-|-|-|-|
| $\delta$-reweight | 0.9997±0.0005 | 0.9569±0.0021 | 0.8881±0.0043 | 0.8152±0.0059 | 0.7487±0.0056 | 0.6851±0.0067 |
| $\gamma$-reweight | 0.9936±0.0016 | 0.9297±0.0030 | 0.8391±0.0018 | 0.7574±0.0054 | 0.6942±0.0107 | 0.6502±0.0068 |
| Soft($\delta$=1.0) | 0.8446±0.0069 | 0.7871±0.0081 | 0.7339±0.0110 | 0.6741±0.0119 | 0.6334±0.0084 | 0.5859±0.0079 |
| Soft($\delta$=2.0) | 0.9705±0.0030 | 0.9239±0.0070 | 0.8680±0.0088 | 0.7956±0.0110 | 0.7312±0.0121 | 0.6561±0.0124 |
Downstream quality experiment:
| |Text summarization (Llama 2), ROUGE-1| Machine translation (T5), BERTScore | Poetry generation (Llama 2), PPL |
|--|--|--|--|
|$\delta$-reweight| 0.3704±0.0009| 0.577±0.003| 2.71±0.06|
|$\gamma$-reweight| 0.3704±0.0009| 0.576±0.003| 2.71±0.08|
|Soft($\delta$=0.0)| 0.3705±0.0009| 0.575±0.003| 2.73±0.08|
|Soft($\delta$=1.0)| 0.3678±0.0009| 0.571±0.003| 3.04±0.13|
|Soft($\delta$=2.0)| 0.3610±0.0009| 0.560±0.003| 3.92±0.16|
[1] Kirchenbauer, John, et al. "A watermark for large language models." arXiv preprint arXiv:2301.10226 (2023).
[2] Kirchenbauer, John, et al. "On the Reliability of Watermarks for Large Language Models." _arXiv preprint arXiv:2306.04634_ (2023).
[3] Krishna, Kalpesh, et al. "Paraphrasing evades detectors of ai-generated text, but retrieval is an effective defense." _arXiv preprint arXiv:2303.13408_ (2023).
[4] Sadasivan, Vinu Sankar, et al. "Can ai-generated text be reliably detected?." _arXiv preprint arXiv:2303.11156_ (2023).
[5] Zarrieß, Sina, Henrik Voigt, and Simeon Schüz. "Decoding methods in neural language generation: a survey." Information 12.9 (2021): 355.
[6] Vilnis, Luke, et al. "Arithmetic Sampling: Parallel Diverse Decoding for Large Language Models." _International Conference on Machine Learning_. PMLR, 2023.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: We noticed that your score for our paper decreased following our rebuttal. We are keen to understand the reasons behind this change, especially since we believed we had addressed the concerns you previously raised.
Could you kindly provide clarification or specific feedback regarding this change?
Thank you for your time and expertise. | Summary: The paper proposes a modification of the watermark of Kirchenbauer et al. that ensures each next token prediction is marginally indistinguishable from a regular sample from the language model (whereas Kirchenbauer et al. bias some tokens over others). The main idea is to use inverse transform sampling to sample the text token (the paper also proposes a "soft" version of inverse transform sampling, i.e., \gamma-reweight), where the inputs to the sampler are a function of the context.
Strengths: The paper proposes a neat, simple modification of the watermark of Kirchenbauer et al. that addresses a salient problem (i.e., the watermark of Kirchenbauer et al. does not preserve the original text distribution). The paper validates the proposed watermark with both theory and experiments.
Weaknesses: The empirical validation of the watermark proposed in the paper is somewhat lacking. For example, how robust is the watermark to paraphrasing compared to the watermark of Kirchenbauer et al.? Given Kirchenbauer et al. evaluate robustness to various kinds of paraphrasing attacks, it seems reasonable to expect the paper to do the same. It's also not clear what the purpose of Table 1 and Figure 3 is if the main claim is that the watermark *provably* does not bias the text distribution (i.e., shouldn't all the metrics stay the same?). Also, Section 4.1 seems needlessly abstract, since ultimately both watermarks are variations of inverse transform sampling (it feels unintuitive to think of $\delta$-reweighting as a "reweighting" of a distribution, since it is essentially deterministic).
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Line 88: "Note that the one-shot-undetectable property implies the downstream invariant property." It is not immediately clear why this is the case. In general, both Definitions 1 and 2 would benefit from some more context and/or clearer presentation.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper does not discuss limitations (if any) of the proposed methods in meaningful detail. The Conclusion section would benefit from more detailed/concrete discussion and takeaways.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for recognizing the saliency of the problem that our study addresses.
> The empirical validation of the watermark proposed in the paper is somewhat lacking.
We've expanded the empirical validation with a new robustness experiment, and, in addition to the original BART and OPT models, we've tested new models including LLaMA 2 and T5. We've also incorporated a new task, poetry generation, and a new metric, GPTScore. We hope these additional experiments address your concern.
> The paper should also evaluate various paraphrasing attacks, similar to Kirchenbauer et al.
Indeed, our submission doesn't contain a discussion of robustness, as the main focus of our paper is on unbiasedness.
However, to address your point, we have now added new experiments to test the robustness of the two unbiased watermarking methods in this paper and the “Soft Red List” method of Kirchenbauer et al. 2023 [1]. We truncate the generated text length to 16, and approximately an $\epsilon$ fraction of the output is changed to random tokens. For 512 independent samples, we report the AUC of different watermarking detection methods:
| $\epsilon$ | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 |
|-|-|-|-|-|-|-|
| $\delta$-reweight | 0.9997±0.0005 | 0.9569±0.0021 | 0.8881±0.0043 | 0.8152±0.0059 | 0.7487±0.0056 | 0.6851±0.0067 |
| $\gamma$-reweight | 0.9936±0.0016 | 0.9297±0.0030 | 0.8391±0.0018 | 0.7574±0.0054 | 0.6942±0.0107 | 0.6502±0.0068 |
| Soft($\delta$=1.0) | 0.8446±0.0069 | 0.7871±0.0081 | 0.7339±0.0110 | 0.6741±0.0119 | 0.6334±0.0084 | 0.5859±0.0079 |
| Soft($\delta$=2.0) | 0.9705±0.0030 | 0.9239±0.0070 | 0.8680±0.0088 | 0.7956±0.0110 | 0.7312±0.0121 | 0.6561±0.0124 |
All methods tested here are prone to some level of degradation in performance with increased perturbations. As mentioned in Section G. Limitations, we acknowledge the significance of robustness of watermarking, but we believe that unbiasedness and robustness are two separate research directions. There is existing literature, such as the work by Kirchenbauer et al. 2023 [2], Krishna et al. 2023 [3], Sadasivan et al. [4], which is dedicated to the topic of robustness. A complete response to the watermarking robustness issue requires establishing a threat model, gauging the intensity of attacks, and experimentally searching for the best watermark hyperparameters. We hope the reviewer could allow us to keep the focus of this paper on introducing and validating the concept of unbiasedness, and leave robustness for other specialized research works.
> It's also not clear what the purpose of Table 1 and Figure 3 is if the main claim is that the watermark _provably_ does not bias the text distribution (i.e., shouldn't all the metrics stay the same?).
You're right in pointing out that for unbiased watermarks, all metrics should remain the same. However, please note that Soft($\delta$=1.0) and Soft($\delta$=2.0) in Table 1 and Figure 3 are instances of the "Soft Red List" method from Kirchenbauer et al. 2023 [1], which is biased; hence there are statistically significant differences. On the other hand, there is no statistically significant difference among the unbiased watermark methods.
> Also, Section 4.1 seems needlessly abstract, since ultimately both watermarks are variations of inverse transform sampling (it feels unintuitive to think \delta-reweighting as a "reweighting" of a distribution, since it is essentially deterministic).
We appreciate your feedback on this section. We believe that the general framework of the unbiased reweighting function is important to showcase the infinite possibilities of unbiased watermark methods. However, your feedback suggests that our presentation may not have been clear enough. We are open to revising this section, emphasizing the deterministic nature of the $\delta$-reweight, and further clarifying the usage of inverse transform sampling.
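As a minimal sketch of the inverse-transform-sampling view of $\delta$-reweight (the keyed-hash seeding below is an illustrative assumption, not our exact construction):

```python
import hashlib

def delta_reweight_sample(probs, context, key):
    """Inverse transform sampling with a pseudorandom u derived from
    (key, context): choose the token whose CDF interval contains u.
    Marginally over u ~ Uniform[0, 1), token i is chosen with
    probability probs[i], which is where unbiasedness comes from;
    given a fixed (key, context), the choice is deterministic."""
    digest = hashlib.sha256((key + "|" + context).encode()).digest()
    u = int.from_bytes(digest[:8], "big") / 2**64  # pseudorandom u in [0, 1)
    cdf = 0.0
    for i, p in enumerate(probs):
        cdf += p
        if u < cdf:
            return i
    return len(probs) - 1  # guard against floating-point rounding

token = delta_reweight_sample([0.1, 0.6, 0.3], context="the cat", key="secret-key")
```

This also makes the reviewer's observation explicit: the "reweighting" of $\delta$-reweight concentrates all mass on one token once the pseudorandom input is fixed.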
> Line 88: "Note that the one-shot-undetectable property implies the downstream invariant property." It is not immediately clear why this is the case. In general, both Definitions 1 and 2 would benefit from some more context and/or clearer presentation.
Thanks for your suggestion. The one-shot-undetectable property ensures that the output distribution is unchanged; therefore the expectation of any downstream metric is also unchanged, which is exactly the downstream-invariant property. We'll include an explicit proof in the appendix and, if accepted, use the extra space to provide more context for these definitions, ensuring clarity for the reader.
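As a sketch of the argument (in simplified notation, with $P_w$ the watermarked output distribution, $P$ the original one, and $f$ an arbitrary downstream metric):

```latex
% One-shot-undetectable: P_w = P. Then for any downstream metric f,
\mathbb{E}_{x \sim P_w}[f(x)]
  \;=\; \sum_{x} P_w(x)\, f(x)
  \;=\; \sum_{x} P(x)\, f(x)
  \;=\; \mathbb{E}_{x \sim P}[f(x)],
% so every expected downstream metric is invariant under the watermark.
```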
Once again, we're grateful for your valuable feedback. We hope our responses address your concerns, and we're willing to make the necessary modifications to improve the paper.
[1] Kirchenbauer, John, et al. "A watermark for large language models." arXiv preprint arXiv:2301.10226 (2023).
[2] Kirchenbauer, John, et al. "On the Reliability of Watermarks for Large Language Models." _arXiv preprint arXiv:2306.04634_ (2023).
[3] Krishna, Kalpesh, et al. "Paraphrasing evades detectors of ai-generated text, but retrieval is an effective defense." _arXiv preprint arXiv:2303.13408_ (2023).
[4] Sadasivan, Vinu Sankar, et al. "Can ai-generated text be reliably detected?." _arXiv preprint arXiv:2303.11156_ (2023).
Additional Experiment Results:
| |Text summarization (Llama 2), ROUGE-1| Machine translation (T5), BERTScore | Poetry generation (Llama 2), PPL | Machine translation (mbart), GPTScore |
|--|--|--|--|--|
|$\delta$-reweight| 0.3704±0.0009| 0.577±0.003| 2.71±0.06|1.25 ± 0.01|
|$\gamma$-reweight| 0.3704±0.0009| 0.576±0.003| 2.71±0.08|1.26 ± 0.01|
|Soft($\delta$=0.0)| 0.3705±0.0009| 0.575±0.003| 2.73±0.08|1.26 ± 0.01|
|Soft($\delta$=1.0)| 0.3678±0.0009| 0.571±0.003| 3.04±0.13|1.31 ± 0.01|
|Soft($\delta$=2.0)| 0.3610±0.0009| 0.560±0.003| 3.92±0.16|1.41 ± 0.01|
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' efforts in crafting their rebuttal. They promise to make various parts of the paper clearer, add more experiments, and also add more theoretical results. While the proposed additions/changes seem reasonable, it is difficult to evaluate whether they will be effective without actually seeing the revised paper. I will not be changing my score (i.e., borderline accept), and would encourage the authors to perhaps consider resubmitting a revised version of the paper to another conference.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: Firstly, we sincerely understand your position in wanting to see the new information in the rebuttal incorporated into a revised paper. We recognize the challenges this poses.
To emphasize, no new theoretical results are being introduced. Our original assertion of preserving the distribution remains sound and unchanged.
The supplementary experiments and clarifications were introduced based on your feedback, to answer your question about robustness and to clear up potential misunderstandings. The new experiment has already been conducted, and results are provided for your consideration. These are minor modifications that do not alter the main conclusion of our study: that we've addressed the salient problem of preserving the output distribution and quality with an unbiased watermark algorithm.
We sincerely thank you for your insights and diligence throughout the review process. We hope that the detailed explanations provided in our previous comments alleviate your concerns, even though these details cannot be incorporated into the manuscript during this discussion phase.
However, should you have any lingering doubts or questions about our methods' effectiveness in addressing the salient problem of preserving the output quality/distribution, please do let us know. We hope our endeavors and contributions resonate with the importance of the problem we solve. Thank you once again for your time and insight. | Summary: This paper discusses the important problem of how to watermark the outputs of language models without the watermarking affecting the model. A perfect watermark scheme should be undetectable without prior information and should not harm the utility of LLMs. This paper proposes desired properties of watermark schemes such as **n-shot-undetectable** and **downstream-invariant**. The paper also gives a proof that perfect watermark schemes exist. Guided by the proposed concepts, two reweighting watermark schemes are proposed, as well as corresponding verification methods. The experimental results show the proposed methods have minor impact on the generated text quality.
Strengths: 1. It's super important to have a theoretical framework to guide researchers to design better watermark schemes, and this paper is one of the pioneers in this direction.
2. Considering the automatic metrics and the results shown in the paper, the two proposed methods look good.
Weaknesses: Major concerns:
1: The quality of the watermarked texts is only evaluated by automatic metrics.
2: The results only come from BART (along with examples from OPT).
Minor concerns:
1: The word 'unbiased' is a little misleading.
2: It's a little hard to understand some paragraphs in this paper.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1: As I stated before, if some evaluation results from LLM-based evaluators such as GPTScore were included, the results would be more convincing.
2: It's about the 2nd weakness. The generation power of OPT is poor compared to many other LLMs. Actually, all the provided examples in the paper are not good to me, either those from a watermarked model or those from a model without a watermark. I'm wondering about the results from other LLMs such as LLaMA and T5. Will the quality be severely degraded?
3: The so-called strength of a watermark doesn't necessarily imply degradation of the generation quality of language models. For example, texts from a watermarked model may have a different style from those of the model without a watermark. Both styles of text can be good to us.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Please read last section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We're deeply grateful for your acknowledgment of our work's importance and pioneering status. Your recognition is invaluable to us. The following addresses each point in your feedback.
> 1: The quality of the watermarked texts is only evaluated by automatic metrics.
> 1: As I stated before, if some evaluation results from LLM-based evaluators such as GPTScore were included, the results would be more convincing.
Indeed, we rely on automatic metrics for evaluating the quality of the watermarked texts. Our theoretical guarantee on the unbiasedness of the text distribution implies that the quality should remain consistent across all metrics, including non-automatic ones. The challenges associated with manual evaluation, such as high costs, subjectivity, and reproducibility, made us prioritize automatic metrics.
On the other hand, we have supplemented our evaluations with the GPTScore metric using text-curie-001 as the backend. The results for machine translation are as follows:
- $\delta$-reweight: 1.25 ± 0.01
- $\gamma$-reweight: 1.26 ± 0.01
- No watermark: 1.26 ± 0.01
- Soft($\delta$=1.0): 1.31 ± 0.01
- Soft($\delta$=2.0): 1.41 ± 0.01
The GPTScore results are consistent with our initial findings, confirming the **downstream-invariant** property.
> 2: The results only come from BART (along with examples from OPT).
> I'm wondering the results from other LLMs such as LLaMA and T5. Will the quality be severely degraded?
Our method is designed to work for any auto-regressive language model, not limited to BART and OPT. We opted for BART and OPT in our experiments because they are representative models for encoder-decoder and decoder-only structures.
However, responding to your suggestion, we have now also evaluated T5 for translation tasks and LLaMA 2 for summarization and poem generation. The downstream-invariant property is also observed in these new models: the difference in output quality between the no-watermark and biased-watermark versions is statistically significant, but there is no statistically significant difference between the unbiased-watermark and no-watermark versions (here Soft($\delta$=0.0) contains no watermark).
| |Text summarization (Llama 2), ROUGE-1| Machine translation (T5), BERTScore |
|--|--|--|
|$\delta$-reweight| 0.3704±0.0009| 0.577±0.003|
|$\gamma$-reweight| 0.3704±0.0009| 0.576±0.003|
|Soft($\delta$=0.0)| 0.3705±0.0009| 0.575±0.003|
|Soft($\delta$=1.0)| 0.3678±0.0009| 0.571±0.003|
|Soft($\delta$=2.0)| 0.3610±0.0009| 0.560±0.003|
It's worth emphasizing that our theoretical results are built on the general properties of auto-regressive language models, and not on the specifics of individual models.
> 1: The word 'unbiased' is a little misleading. 2: It's a little hard to understand some paragraphs in this paper.
Thank you for your suggestion on presentation. We considered other terms, including "undetectable watermark" and "watermark without performance degradation." But since those two are implications of unbiasedness, we decided to highlight the property of unbiasedness itself.
Additionally, we value clarity and are actively working on refining our presentation to ensure better comprehension. If there are specific sections you found hard to understand, please kindly let us know. We would be happy to provide further explanations and clarification.
> 3: The so-called strength of watermark doesn't mean the degradation of generation quality of language models. For examples, texts from a watermarked model may have a different style from the model without watermark. Both the two styles of texts can be good to us.
We agree there could be different styles of generation between biased-watermark and no-watermark models. However, based on common definitions of what's considered "good," we found that biased watermarking methods led to statistically significant deterioration on metrics such as PPL, BERTScore, ROUGE, BLEU, and GPTScore. To our knowledge, we have yet to encounter a metric on which a biased watermark leads to a statistically significant improvement. Thus, while we acknowledge style variations, our findings point to quality degradation introduced by biased watermarks.
Again, thank you for your constructive feedback and your valuable suggestions. We hope we have addressed your concerns and made necessary improvements to the paper. Please don't hesitate to reach out if you have additional questions or suggestions. | Summary: This paper introduces a novel framework for embedding watermarks into Large Language Models (LLMs) without compromising their output quality. The proposed watermark is designed to be undetectable by LLM users.
A general framework is put forth for incorporating this unbiased watermark into LLMs. This is achieved using two innovative and practical watermarking techniques: $\delta$-reweight and $\gamma$-reweight.
Experiments conducted on summarization and machine translation tasks demonstrate that these watermarking techniques do not degrade the LLM's output quality, thereby substantiating the effectiveness and practicality of the proposed framework.
Strengths: This paper tackles a critical issue in the Large Language Models (LLMs) field, which is the misuse of LLMs. The authors propose an unbiased watermark and a novel framework for its implementation. The experimental results clearly prove that this watermarking technique works effectively.
The paper's layout is good and easy to follow. It uses clear formulas and theorems that help explain both the problem and the proposed solution. Overall, this is a solid piece of work that contributes significantly to the field.
Weaknesses: This paper does not provide a comparison with any existing watermark baselines. Such a comparison would be beneficial to demonstrate performance relative to non-unbiased watermark techniques.
Only two types of downstream tasks have been evaluated in the study, which limits the generalizability of the findings. It would enhance the robustness of the results if a broader range of tasks, perhaps using a comprehensive benchmark, were tested.
Additionally, the evaluation of model output quality relies solely on automatic metrics. However, these metrics alone may not be sufficient to provide a comprehensive assessment of output quality. Including more diverse and possibly human-centric evaluation measures or LLM auto evaluator could strengthen the evaluation process.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Can you provide information on the computational complexity of the proposed watermarking framework?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing the novelty and significance of our work. We appreciate the time you took to review our paper and the feedback you provided. Here's our response to address your concerns:
> This paper does not provide a comparison with any existing watermark baselines.
Actually, we did provide a comparison with the watermarking technique discussed in Kirchenbauer et al. 2023 [1]. To the best of our knowledge, this was the only published and comparable paper on watermarking at the time we submitted our paper. Specifically, Figure 3 illustrates the “Soft Red List” method from Kirchenbauer et al. 2023 [1], which we refer to as the “Soft(...)” method. If you think there are other relevant baselines worthy of comparison, please kindly let us know. We value your expertise and would be happy to consider them.
> The paper only evaluates two downstream tasks, which may limit the generalizability of the conclusions.
We chose text summarization and translation because they are representative tasks in the NLP community, and prior to our work we had not seen any studies assessing the impact of watermarking on these tasks.
Moreover, based on your feedback, we have now incorporated an additional task: poetry generation using LLaMA 2, which diverges from text summarization and translation in being an open-ended text generation task without any objective performance metric. Here we report the perplexity:
- $\delta$-reweight: 2.71 ± 0.06
- $\gamma$-reweight: 2.71 ± 0.08
- Soft($\delta$=0.0): 2.73 ± 0.08
- Soft($\delta$=1.0): 3.04 ± 0.13
- Soft($\delta$=2.0): 3.92 ± 0.16
We emphasize that our primary findings on the unbiased watermark method have theoretical proofs ensuring their applicability across tasks. The empirical validations on the two (now three) tasks were meant to verify these theoretical findings, not to claim generality solely from them; the nature of our proof ensures that its properties hold for any task, not just the ones we tested. Nonetheless, we acknowledge your feedback and welcome any specific downstream-task recommendations for additional evaluations.
> The quality of model outputs is evaluated only using automated metrics.
Indeed, we primarily relied on automated metrics for evaluation due to their efficiency, cost-effectiveness, consistency, and reproducibility. While human-centric evaluation measures can provide an additional verification of our theory, they are subjective, expensive, and time-consuming. Given the mathematical guarantees backing our framework, we felt automated metrics were sufficient.
However, we are receptive to your feedback and we have supplemented our evaluation with GPTScore [2], an LLM auto evaluator. Utilizing text-curie-001 for our evaluations, the final results for machine translation are as follows (smaller score is desirable):
- $\delta$-reweight: 1.25 ± 0.01
- $\gamma$-reweight: 1.26 ± 0.01
- No watermark: 1.26 ± 0.01
- Soft($\delta$=1.0): 1.31 ± 0.01
- Soft($\delta$=2.0): 1.41 ± 0.01
> Can you provide information on the computational complexity of the proposed watermarking framework?
Certainly! The $\delta$-reweight and $\gamma$-reweight both have an asymptotic complexity of $O(BS|\Sigma|)$, where $S$ is the sequence length, $B$ is the batch size, and $|\Sigma|$ is the vocabulary size. The maximin variant of the LLR score has an asymptotic complexity of $\widetilde{O}(BS|\Sigma|)$. The constants involved are quite small, making the time required to add/detect the watermark negligible compared to the internal operations of the LLM.
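As a rough illustration of where the $O(BS|\Sigma|)$ term comes from (a cost sketch only, not the paper's actual reweighting algorithm): generating $S$ tokens for each of $B$ sequences transforms one full vocabulary-sized distribution per token step.

```python
import numpy as np

def reweight_cost(B: int, S: int, V: int) -> int:
    """Cost sketch only (not the paper's reweighting): each of the B*S token
    steps transforms one full next-token distribution of length V = |Sigma|,
    so B*S*V probability entries are touched in total."""
    rng = np.random.default_rng(0)
    touched = 0
    for _ in range(B * S):
        p = rng.dirichlet(np.ones(V))   # stand-in next-token distribution
        p = p[rng.permutation(V)]       # stand-in for a reweighting transform
        touched += p.size
    return touched

assert reweight_cost(B=2, S=3, V=10) == 2 * 3 * 10
```

The small constant factor comes from this being a single pass over the distribution per step, dwarfed by the LLM's own per-step matrix multiplications.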
We hope this addresses all of your concerns. Thank you once again for your valuable feedback, and we look forward to any further suggestions or queries you may have.
[1] Kirchenbauer, John, et al. "A watermark for large language models." arXiv preprint arXiv:2301.10226 (2023).
[2] Fu, Jinlan, et al. "GPTScore: Evaluate as you desire." arXiv preprint arXiv:2302.04166 (2023).
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the rebuttal! The addition of the new task and the automatic scorer GPT addresses my concerns. I've adjusted my score to 6 (weak accept).
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: We sincerely appreciate your time and effort in evaluating our rebuttal and for your constructive feedback throughout the review process.
Most importantly, we are grateful for your recognition of our novel contribution. Your remarks about our contribution being "solid" and one that "contributes significantly to the field" truly mean a lot to us.
Thank you once again for your thoughtful review and for your adjusted score. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
HyTrel: Hypergraph-enhanced Tabular Data Representation Learning | Accept (spotlight) | Summary: This paper proposes to transform a source table into a hypergraph. Each cell is a node in the graph, where nodes in the same row, column, and table are connected using three types of hyperedges. The authors claimed that by modeling the hypergraph of the table, the proposed method, named *HyTrel*, is able to generate representations of tables robust to permutations. Two self-supervised objectives are used: (1) predicting whether a cell or header has been corrupted, and (2) in-batch negative sampling with an InfoNCE contrastive loss over paired nodes.
The experiments were performed on (1) pre-training on 27M tables and (2) fine-tuning for four tabular understanding tasks: column type annotation, column property annotation, table type detection, and table similarity prediction. The empirical findings include
- Pre-training on HyTrel leads to marginal improvement over the variant without pre-training.
- No clear evidence of which pre-training objectives are better, considering the four tasks evaluated.
Strengths: - It is reasonable to seek permutation invariance in tabular data representation as the existing table serialization approach fails to achieve this property.
- The proposed method surpasses previous methods even when trained directly with supervision on the target data, without pre-training.
Weaknesses: **Complexity**
The main concern lies in the efficiency of the proposed method. It needs to build a huge hypergraph on the tables, where the number of nodes equals the number of cells in the table, and the number of edges explodes when we combine nodes, columns, and rows. As we know, it is very common for a table to contain millions of rows and thousands of columns. However, the paper seems to lack a complexity analysis of the overall running time and memory cost.
Although HyTrel was observed converging faster than previous BERT-based methods in terms of the number of epochs, there is no direct comparison of the inference time needed for each sample/batch. Also, the time required for building the graph in the training and the prediction phase should be taken into account.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1**: Complexity. The main concern lies in the efficiency of the proposed method. It needs to build a huge hypergraph on the tables, where the number of nodes is equal to the number of cells in the table, and the number of edges explodes when we combine nodes, columns, and rows. As we know, it is very common that a table may contain millions of rows and thousands of columns. However, there seems to be lacking complexity analysis of the overall running time and memory cost.
**A1**: a. Given a table with $m$ rows and $n$ columns, the corresponding hypergraph has $mn$ nodes (one per cell) and $m+n+1$ hyperedges. So the number of hyperedges grows **linearly** with the table size.
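A minimal sketch of these counts (illustrative only; the helper name `table_to_hypergraph` is ours, not from the paper's code):

```python
import numpy as np

def table_to_hypergraph(m: int, n: int) -> np.ndarray:
    """Build the node-by-hyperedge incidence matrix for an m-row, n-column
    table: m*n nodes (one per cell) and m + n + 1 hyperedges (one per row,
    one per column, and one for the whole table)."""
    H = np.zeros((m * n, m + n + 1), dtype=np.int8)
    for r in range(m):
        for c in range(n):
            node = r * n + c
            H[node, r] = 1          # row hyperedge
            H[node, m + c] = 1      # column hyperedge
            H[node, m + n] = 1      # table hyperedge
    return H

H = table_to_hypergraph(3, 2)
assert H.shape == (3 * 2, 3 + 2 + 1)
assert (H.sum(axis=1) == 3).all()   # each cell lies in exactly 3 hyperedges
```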
b. As for large table inputs, HyTrel can theoretically deal with **tables of arbitrary size**, because the HyperTrans model structure does not have positional encodings that constrain the sequence length as in common transformer models. **We have included such analysis and the overall running time and memory cost for relatively large tables in our submitted appendix** (Appendix B1, Table 3), and we also include it here for your reference.
|Size Limitations (#rows, #columns) | Dev Acc (%) | Test Acc (%) | Training Time (min/epoch) | GPU Memory Consumption (memory used/batch size, GB/sample) |
|-|-|-|-|-|
|(30, 20) | 96.23 | 95.81 |14 |0.58|
|(60, 40) | 96.38 | 95.80 | 23 | 1.41 |
|(120, 80)| 96.02 | 95.71 | 51 | 5.13 |
|(240, 160) | 96.06 | 95.67 | 90 | 16.00 |
When dealing with large tables, downsampling rows and columns to represent the table content is a commonly used practice. As shown in the above table, we use the table type detection dataset for this experiment as it contains relatively large tables. We sample a subset of rows and columns of a large table by limiting the maximal row and column counts (we truncate tail rows and columns).
We observe that **appropriate downsampling does not hurt the performance** for table-level tasks (and we hypothesize the same for column-level prediction tasks). As we sample more rows, the performance soon saturates while incurring a large increase in time and memory consumption. So in our evaluation we limit tables to 30 rows and 20 columns, as HyTrel can already perform close to its peak with reasonable time and memory cost.
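The downsampling itself is simple truncation; a sketch with pandas (illustrative only, with our (30, 20) limit as the default):

```python
import pandas as pd

def truncate_table(df: pd.DataFrame, max_rows: int = 30, max_cols: int = 20) -> pd.DataFrame:
    # keep the leading rows/columns and drop the tail ones
    return df.iloc[:max_rows, :max_cols]

big = pd.DataFrame(0.0, index=range(1000), columns=[f"c{i}" for i in range(100)])
assert truncate_table(big).shape == (30, 20)
```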
**Q2**. Although HyTrel was observed converging faster than previous BERT-based methods in terms of the number of epochs, there is no direct comparison of the inference time needed for each sample/batch. Also, the time required for building the graph in the training and the prediction phase should be taken into account.
**A2**: a. Theoretical Analysis: given a table with $m$ rows and $n$ columns, let’s assume each cell only contains one token, and we use dot-product and single head in the attention operation. The inference complexity comparison of one layer attention is:
||BERT-based methods | HyTrel |
|-|-|-|
|w/o linear transformation | $O((mn)^2 \cdot d)$| $O((mn)^2)$|
|w/ linear transformation | $O((mn)^2 \cdot d + (mn)d^2)$| $O((mn)^2 + (mn)d^2)$|
For the BERT-based methods, the input sequence length is $mn$. $d$ is the hidden dimension.
We can see that our HyTrel has lower complexity; the difference comes from the dot product for the attention scores. The query in the set transformer of HyTrel is a learnable vector with $d$ dimensions, so the bottleneck comes from the softmax calculation with a complexity of $O((mn)^2)$. In self-attention, the query is a matrix from the whole input sequence with dimensions $mn \times d$, which has a complexity of $O((mn)^2 \cdot d)$ for the attention-score calculation.
b. Empirical Analysis
| Time (seconds) | CTA (4844 samples) | CPA (1560 samples) | TTD (4500 samples) | TS (1391 samples) |
| - | - | - | - | - |
|batch size | 8 | 8 | 8 | 8 |
|TaBERT (K=1) - data preparing | 13 | 5 | 51 | 13 |
|TaBERT (K=1) - inference | 27 | 5 | 25 | 28 |
| **TaBERT (K=1) - total inference** | **40** | **10** | **76** | **41** |
|TaBERT (K=3) - data preparing | 14 | 5 | 52 | 13 |
|TaBERT (K=3) - inference | 85 | 16 | 79 | 80 |
| **TaBERT (K=3) - total inference** | **99** | **21** | **131** | **93** |
|HyTrel - graph building | 5 | 1 | 16 | 5 |
|HyTrel - inference | 16 | 6 | 17 | 13 |
| **HyTrel - total inference** | **21** | **7** | **33** | **18** |
We include a comparison of our approach with the TaBERT baseline on inference time, including data processing (graph-building time for HyTrel). As observable from the table, **the total inference time of our model is lower than TaBERT's, and the time cost for graph building is not a bottleneck**.
Note:
1. We keep a maximum of 3 rows with the HyTrel model for a fair comparison with the TaBERT (K=3) models.
2. The data preprocessing of TaBERT is to format the input tables (.json) into tensors, and the graph building of HyTrel is to format the input tables (.json) into feature tensors and the incidence matrix in hypergraphs.
3. All experiments are conducted on a single A10 GPU, and the inference batch size is 8 for all models and all datasets.
4. We use the validation sets of CTA, CPA and TTD for these experiments. For TS, which has a small number of tables and is tested with five-fold cross-validation in the paper, we use the whole dataset.
---
Rebuttal 2:
Comment: Thanks for your response and I will update my score accordingly.
---
Rebuttal Comment 2.1:
Comment: Thank you for revisiting our work and considering our response; we appreciate your willingness to update the score. | Summary: The paper proposes a framework for tabular representation learning by modeling the structure of the tables as a hyper-graph working on different granularities, namely, rows, columns, and the entire table.
Strengths: - The paper addresses a substantial question of how to best incorporate the structure of a table while pre-training models for tabular data.
- The proposed framework outperforms the baselines on tasks purely dependent on the learned table representations.
- The paper is well-written and easy to follow.
- In addition, the authors also include quantitative analysis to demonstrate that their framework can assimilate the table structures to generate robust representations.
Weaknesses: There have been recent works proposing techniques to incorporate the table structure while pre-training or fine-tuning (by biasing the attention layer or tasks other than the ones considered in the paper). It will strengthen the paper if the authors discuss how their work differs from those.
- MATE: Multi-view Attention for Table Transformer Efficiency Julian Martin Eisenschlos, Maharshi Gor, Thomas Müller, William W. Cohen EMNLP 2021
- Learning Enhanced Representations for Tabular Data via Neighborhood Propagation Kounianhua Du, Weinan Zhang, Ruiwen Zhou, Yangkun Wang, Xilong Zhao, Jiarui Jin, Quan Gan, Zheng Zhang, David Wipf NeurIPS 2022
- Learning Representations without Compositional Assumptions Tennison Liu, Jeroen Berrevoets, Zhaozhi Qian, Mihaela van der Schaar ICML 2023
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Does the model also include a classification head like other TaLMs? If not, why did the authors not consider including it? This simple change would have opened up evaluation for some more benchmarks.
- What's the rationale behind the modified ELECTRA and contrastive objective for pre-training (as opposed to other objectives)?
- How would the approach perform if tables have a lot of numerical values? Will numeric tables lead to a degradation in performance?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors include a section in the paper that goes through the relevant limitations of the proposed approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1**: Missing related work
**A1**: Thanks for pointing out and we will include them in our paper. Here are the relevance and differences of these papers compared with ours.
[1] MATE belongs to the second group of studies we categorized in Section 5 (Related Work). MATE explicitly restricts the transformer's attention to be applied within the same row and column; this sparse attention mechanism is more efficient and also learns part of the table structure. **This paradigm is very similar to our baseline approach TURL**. However, our hypergraph-enhanced approach can maximally take advantage of the four table structures, as we have emphasized in the paper, and our HyperAtt block with set transformers is also efficient because it does not need to compute pairwise attention scores among all the cells.
[2] NeurIPS'22 is **already included in our related work (line 305)**. It belongs to the first group of studies that focus on predicting labels (essentially one row) for classification and regression problems, using row information and column schema as input. While the method used in this paper uses hypergraphs to model table structure, the problem solved in this paper is fundamentally different from ours. Our work belongs to the second group of study (Tabular language models) that focuses on using the hypergraphs to obtain task agnostic representations as part of pre-training.
[3] ICML'23 is a contemporary work (**public after our submission**) that belongs to the first group of studies that focus on predicting row labels, like [2]. It focuses on the scenario with multi-views (different sets of features for an object) for a given table. It uses graph auto-encoder models to enforce information propagation among different views. The target tasks and motivations to use graphs in the paper are significantly different from ours.
**Q2**: Does the model also include a classification head like other TaLMs? If not, why did the authors not consider including it? This simple change would have opened up evaluation for some more benchmarks.
**A2**: If we understand you correctly, the classification head you mention refers to special tokens that represent special elements of a table, like the '[CLS]' token in BERT and TaBERT that represents the whole sentence or the whole table. In our setting, we use the last-layer node representations (each corresponding to a cell) and the hyperedge representations (each corresponding to a row, a column, or the whole table) as input to the classification head. **We can use them for any relevant downstream tasks**.
**Q3**: What's the rationale behind the modified ELECTRA and contrastive objective for pre-training (as opposed to other objectives)?
**A3**: The rationale behind the modified ELECTRA is in line with the motivation of masked token prediction in Word2Vec and BERT. In our case, if the context of a cell (the representation of the target cell is aggregated from its surroundings) can predict the cell's value, it means the representations of the context 'know' the cell, and so they incorporate information from the predicted cells during pretraining. However, the prediction space for cell values is extremely large compared with the vocabulary size of a language model. Inspired by the original ELECTRA and [4], an alternative, more efficient approach is simply to predict whether the cell has been corrupted or not. Our empirical results are consistent with previous work, demonstrating the effectiveness of this strategy.
As for the rationale behind the contrastive objective, it is inspired by the contrastive learning over hypergraphs [5]. This work has proposed effective ways to augment hypergraphs for contrastive learning, and learn good representations for the nodes and hyperedges. Since we find it beneficial to model a table as a hypergraph for table structures, we also tried the contrastive over hypergraphs to learn the table representation. An intuitive rationale behind this is that similar elements of a table should be grouped together, and dissimilar ones should be separated. Our empirical results further demonstrate its effectiveness.
**Other objective**: We also tried to reconstruct the incidence matrix of the hypergraph as a part of the objective (inspired by graph auto-encoder) - however this does not yield any performance gain.
[4] Iida, Hiroshi, et al. "Tabbie: Pretrained representations of tabular data." arXiv preprint arXiv:2105.02584 (2021).
[5] Wei, Tianxin, et al. "Augmentations in hypergraph contrastive learning: Fabricated and generative." Advances in neural information processing systems 35 (2022): 1909-1922.
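As a toy sketch of the corrupted-cell detection objective described above (our illustrative code, not the paper's implementation): corrupt a random subset of cells and keep binary labels as the prediction target, turning an intractable cell-value prediction into a binary classification.

```python
import random

def corrupt_cells(cells, p=0.15, replacements=("<rand1>", "<rand2>"), seed=0):
    """Replace each cell with probability p; labels[i] = 1 iff cell i was
    corrupted. A model would be trained to predict these labels from context."""
    rng = random.Random(seed)
    out, labels = [], []
    for cell in cells:
        if rng.random() < p:
            out.append(rng.choice(replacements))
            labels.append(1)
        else:
            out.append(cell)
            labels.append(0)
    return out, labels

cells, labels = corrupt_cells(["Paris", "France", "2.1M", "Berlin"], p=0.5)
assert len(cells) == len(labels) == 4
assert all((l == 1) == c.startswith("<rand") for c, l in zip(cells, labels))
```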
**Q4**: How would the approach perform if tables have a lot of numerical values? Will numeric tables lead to a degradation in performance?
**A4**: We use numBERT [6] to handle numerical values, which turns every numerical value into scientific notation with a special token "scinotexp". To check whether numeric tables lead to degradation in performance, we experiment on the TTD benchmark, as it contains tables rich in numerical values. We divide the test set into two subsets: one with a large fraction of numerical cell values (>40%) and one with a smaller fraction (<40%). We fine-tune the HyTrel ELECTRA version both with and without numBERT preprocessing. The results show that **numerical tables tend to degrade the performance by 2.23%, and numBERT helps alleviate the gap (from 2.23 to 1.58)**, demonstrating that our approach can take advantage of numBERT.
|TTD (accuracy, %) | numerical values > 40 %| numerical values < 40 % | difference |
|-|-|-|-|
|HyTrel - ELECTRA | 94.65 | 96.23 |1.58 |
|HyTrel - ELECTRA w/o numBERT | 93.68 | 95.91 |2.23|
[6] Zhang, X., Ramachandran, D., Tenney, I., Elazar, Y., & Roth, D. (2020). Do language embeddings capture scales?. arXiv preprint arXiv:2010.05345.
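A rough sketch of this style of preprocessing (our own hypothetical helper, not numBERT's actual code): rewrite a number as mantissa, a "scinotexp" marker, and exponent, so the tokenizer sees the scale explicitly.

```python
def to_scinot(value: float) -> str:
    # hypothetical numBERT-style rewrite: "<mantissa> scinotexp <exponent>"
    mantissa, exponent = f"{value:e}".split("e")
    return f"{float(mantissa):.3g} scinotexp {int(exponent)}"

assert to_scinot(0.05) == "5 scinotexp -2"
assert to_scinot(1234.5) == "1.23 scinotexp 3"
```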
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. I appreciate it.
---
Reply to Comment 1.1.1:
Comment: Thank you for your consideration and the time you've taken to review our work. We appreciate your feedback. | Summary: This work presents a novel tabular language model that represents tables as hypergraphs called HyTrel. HyTrel is designed to capture the structural properties of tabular data, including 1) invariance to row/column permutations, 2) structural similarity within columns, 3) high-order multilateral relations, and 4) hierarchical organization. However, many of the Language models pre-trained on tabular data do not consider these structural properties that exist in tabular data. Experimental results are provided to demonstrate the superiority of the HyTrel on four downstream tasks, and the robustness of the representations from HyTrel is further validated through qualitative analysis.
Strengths: 1. Unlike previous methods, the HyTrel model can handle changes in the order of rows and columns without significantly affecting its performance.
2. Also, HyTrel can handle input tables of arbitrary size, making it versatile and adaptable to a wide range of datasets.
3. The model is highly efficient regarding the number of epochs for pretraining compared to prior works.
4. HyTrel can incorporate the hierarchical table structure into its representations.
5. By incorporating these structural properties, HyTrel consistently achieves superior performance over the competing baselines.
Weaknesses: No detailed comparison of computational efficiency or runtime on the number of epochs needed for pretraining compared to prior works.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Compared to the previous method, how would HyTrel representation of Tabular Data benefit Multilingual Tabular data alignment?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: HyTrel is designed for tables with simple column structures and struggles with tables that have complex hierarchical column structures.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1**: No detailed comparison of computational efficiency or runtime on the number of epochs needed for pretraining compared to prior works.
**A1**: Thanks for pointing out the pretraining comparison. We **have discussed the pretraining epochs** in comparison with previous work in Section 4.3 (lines 295-296). Here we include a detailed analysis of the pretraining cost compared with our baseline TaBERT (the TURL paper does not report further pretraining cost details such as pretraining time or machines used; it only reports the pretraining epochs, which we compare in the paper).
As we can observe, our HyTrel models are far more efficient (**480/320 vs. 17,280 GPU hours, 5 vs. 10 epochs**).
|| Backbone model | # Params (million) | Pretraining corpus | Time cost per epoch | # epochs | GPUs used | Total time cost | Total GPU hours |
|-|-|-|-|-|-|-|-|-|
| HyTrel - ELECTRA | Random initialization | 179.4 | CommonCrawl + Wikipedia Tables | 6h | 5 | 16*A100 | 30h | 480 |
| HyTrel - Contrast | Random initialization | 179.4 | CommonCrawl + Wikipedia Tables | 4h | 5 | 16*A100 | 20h | 320 |
| TaBERT (K = 3) - Large | BERT-large | 340 | CommonCrawl + Wikipedia Tables | None | 10 | 120*V100 | 6 days | 17,280 |
Please also note that:
a. All the listed models are pretrained with the same corpus (CommonCrawl WDC Web Table + Wikipedia Tables).
b. The compute-efficiency difference between the V100 and the A100 is trivial compared with the scale of the GPU hours (the A100 is about 2.6x faster than the V100 using mixed precision in language-model pretraining, per lambdalab.com).
c. The TaBERT paper only reports the pretraining cost for the large version of TaBERT, which has twice as many parameters as our HyTrel models; however, its pretraining overhead is far more than twice ours.
d. Our models are pretrained from scratch (randomly initialized), while TaBERT is pretrained from BERT checkpoints. This means that, to obtain a ready model, the baselines' total pretraining overhead is much higher than the GPU hours currently listed in the table.
**Q2**: Compared to the previous method, how would HyTrel representation of Tabular Data benefit Multilingual Tabular data alignment?
**A2**: Thanks for raising the idea of extending HyTrel to multilingual tabular data. From our understanding, multilingual representation alignment is a challenging problem, especially for contextual representations [1]. To the best of our knowledge, we are not aware of significant studies of TaLMs that work on multilingual data alignment. However, BERT-based TaLMs like TaBERT can always adopt textual representation alignment approaches, such as the multilingual BERT pretrained on 104 languages. Based on our HyTrel, which models a table as a hypergraph, one possible approach to multilingual alignment is to pretrain on parallel tables with a contrastive objective that aligns the parallel cells/rows/columns/tables across languages. **One benefit we can expect from this approach is extreme pretraining efficiency** compared with the multilingual pretraining approach of BERT models, as discussed in the previous question.
[1] Cao, S., Kitaev, N., & Klein, D. (2020). Multilingual alignment of contextual word representations. arXiv preprint arXiv:2002.03518.
**Q3**: Limitation: HyTrel is designed for tables with simple column structures and struggles with tables that have complex hierarchical column structures.
**A3**: Thanks for pointing out the limitation in dealing with complex hierarchical column structures. We have included such a limitation discussion in our paper (**Section 6, lines 343-345**). Even though our HyTrel is not explicitly designed for complex hierarchical column structures, it can still deal with such tables. As in the following example, we can **collapse the hierarchical headers into a simple structure and then model the table as a hypergraph**. However, we leave special designs for such complex hierarchical column structures to future work.
The original table with hierarchical headers:

|Variable | Type of job contract | |
|-|-|-|
||Informal (%) | Formal (%) |
|Retail | 27.3 | 72.7 |
|Services | 32.1 | 67.9 |
After collapsing the headers:

|Variable | Type of job contract - Informal (%) | Type of job contract - Formal (%) |
|-|-|-|
|Retail | 27.3 | 72.7 |
|Services | 32.1 | 67.9 |
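The collapsing step above can be sketched with pandas (illustrative only): join each multi-level column label into a single flat header before building the hypergraph.

```python
import pandas as pd

df = pd.DataFrame(
    [[27.3, 72.7], [32.1, 67.9]],
    index=["Retail", "Services"],
    columns=pd.MultiIndex.from_product(
        [["Type of job contract"], ["Informal (%)", "Formal (%)"]]
    ),
)
# collapse the two header levels into one flat header per column
df.columns = [" - ".join(levels) for levels in df.columns]
assert df.columns.tolist() == [
    "Type of job contract - Informal (%)",
    "Type of job contract - Formal (%)",
]
```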
---
Rebuttal Comment 1.1:
Title: Comments after Rebuttal
Comment: Thank you for your detailed response in addressing my concerns. For this reason, I am willing to revise my score up.
---
Reply to Comment 1.1.1:
Comment: Thank you for revisiting our work and considering our response; we appreciate your willingness to revise the score. | Summary: This paper proposes a tabular language model (HyTrel) that utilizes the hypergraphs of the data table. Specifically, the hypergraph is constructed with cell values representing each node and the row, column, table representing the hyeredges. In the proposed framwork, the nodes and edges are first fed into the embedding layer, followed by 12 layers of HyperTrans structure, which is consisted of two HyperAtt blocks connected by a hyperedge fusion block. The model is pretrained through corruption of certain parts of the table/hypergraph. To use this model, one can first compute the table encoder of a table, which can then be used to finetune downstream tasks. This paper reports competitive performance of HyTrel in 4 downstream tasks (CTA, CPA, TTD, TSP).
Strengths: The HyTrel architecture produces a table encoder that is permutation invariant with both theoretical supports and empirical experiments. It analyzes different functions in the HyperTrans layer and shows the output encoding will not be affected by row/column permutation of the table. Tasks finetuned with such encoder outperform other baseline models. There are also some qualitative analysis of the hierarchical structure of the table being learned through the proposed framework.
Weaknesses: The method only applies to a single table scenario. In real-world applications, this could be a rare case where tabular learning is involved. Another dimension that can be added to the hypergraph can be key/foreign key relationships, which may extend the model to work in multi-table settings.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What are some examples of realistic permutations of a table other than simple row/column permutation? If there are other non-trivial permutation types that can benefit from the invariance brought by HyTrel, I think it adds additional value and persuasion to adopt this method from a practical point of view. Or, in the case where row/column permutation is the main target, it would be helpful to show how it negatively affects downstream tasks, motivating the development of a permutation-invariant encoding representation.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See the weakness and questions section above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1**: The method only applies to a single table scenario. In real-world applications, this could be a rare case where tabular learning is involved. Another dimension that can be added to the hypergraph can be key/foreign key relationships, which may extend the model to work in multi-table settings.
**A1**: Thanks for pointing out modeling the cross-table relationships. We agree with this point and we have such a discussion in our Limitation Section (**section 6, line 345-347**). Our work currently focuses on learning representations for single tables, which is still a challenging and important problem for table understanding going beyond our table evaluation tasks. We believe that our hypergraph approach can generalize to model the cross-table interactions via key/foreign key relationship, and we leave these aspects for future research.
**Q2**: What are some examples of realistic permutations of the table other than simple row/column permutations? If there are other non-trivial permutation types that can benefit from the invariance brought by HyTrel, I think it adds additional value and persuasion to adopting this method from a practical point of view.
**A2**: To the best of our knowledge, we are not aware of any other non-trivial permutation types in general tables. The arbitrary (independent) row and column permutations that we model are in line with real-world situations. For example, insertions/deletions at arbitrary positions in tables are common database operations that result in the order of rows and columns being permuted. **This necessitates learning table representations that are invariant to permutations of the table content resulting from such operations**. Besides, HyTrel does not allow arbitrary invalid cell permutations (arbitrary swaps of two cells in a table), which would destroy the table structure and cause excessive invariance problems, as we have emphasized in Appendix B2.
**Q3**: Or, in the case where row/column permutation is the main target, it would be helpful to show how it negatively affects downstream tasks, motivating the development of a permutation-invariant encoding representation.
**A3**: To examine the relationship between row/column permutations and downstream tasks, we experiment with different permutations on the TTD benchmark with the TaBERT baseline (which does not model the permutation invariance property) and our approach, as shown in the following table.
**Table Type Detection (TTD, Accuracy, %)**

| | No Permutation | Row Permutation | Column Permutation | Row & Column Permutations |
|-|-|-|-|-|
| TaBERT (K=1) | 93.11 | 93.13 | 92.67 | 92.64 |
| TaBERT (K=3) | 95.15 | 95.03 | 94.61 | 94.31 |
| HyTrel - ELECTRA | 95.81 | 95.81 | 95.81 | 95.81 |
| HyTrel - Contrast | 94.52 | 94.52 | 94.52 | 94.52 |
We observe that the performance of our HyTrel models is not affected by the permutations, because the representations are identical under them. For the TaBERT model, however, each permutation type degrades the performance slightly, and the combination of row and column permutations hurts the performance the most.
Please note that:
a. For TaBERT with K=1, only one row is sampled, so a row permutation is equivalent to resampling a different row; hence the performance is very close to the one without permutations.
b. Each model is evaluated with the same parameter settings for different permutation types.
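As an illustrative sketch (not the actual HyTrel encoder, whose details are in the paper), the invariance above can be demonstrated with sum-pooled cell embeddings: any permutation-invariant pooling yields identical table representations under row/column permutations, while an order-sensitive serialization, loosely mimicking a BERT-style linearization, does not.

```python
import numpy as np

rng = np.random.default_rng(0)
table = rng.normal(size=(4, 3, 8))  # 4 rows x 3 columns of 8-d cell embeddings

def invariant_encode(t):
    # Sum-pooling over rows and columns is invariant to their order,
    # mimicking set-style aggregation over hyperedges.
    row_repr = t.sum(axis=1)   # one vector per row
    col_repr = t.sum(axis=0)   # one vector per column
    return row_repr.sum(axis=0) + col_repr.sum(axis=0)

def serialized_encode(t):
    # Order-sensitive baseline: position-weighted flattening,
    # loosely mimicking serialization into a token sequence.
    flat = t.reshape(-1, t.shape[-1])
    weights = np.arange(1, flat.shape[0] + 1)[:, None]
    return (weights * flat).sum(axis=0)

permuted = table[[2, 0, 3, 1]][:, [1, 2, 0]]  # permute rows, then columns

assert np.allclose(invariant_encode(table), invariant_encode(permuted))
assert not np.allclose(serialized_encode(table), serialized_encode(permuted))
```

The helper names (`invariant_encode`, `serialized_encode`) are hypothetical; the point is only that sum-pooled representations survive the permutations exactly, matching the identical accuracies in the table above.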
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for providing additional explanations and running additional experiments. I have raised my score.
As a side note, it seems like the gain from the permutation invariance property is rather limited; I am not sure if it is due to the saturation of the task or the simplicity of the databases/tables. Looking forward to the extension of this work to multi-table situations.
---
Reply to Comment 1.1.1:
Comment: Thank you for revisiting our work and considering our response; we appreciate your willingness to raise the score. We will investigate the permutation-invariance benefits and multi-table situations further. | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their thoughtful feedback. We address the reviewers' comments below individually and will incorporate all the feedback in our paper. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: In the paper, the authors aim to capture the structural properties of tabular data using hypergraphs with four different types of hyperedges based on the co-occurrences in the table. The experimental results on four downstream tasks show the advantages of the proposed method over other competitive baselines.
Strengths: The manuscript is well-structured and easy to follow. The overall idea of modeling high-order cell/row/column interactions using hypergraphs is novel.
1. The proposed hyper-graph encoder captures tabular structures via attention learning.
2. The experimental results, with extensive ablation studies, convincingly demonstrate the advantages of the proposed methods.
3. The visualization in Sec. 4.2 shows that hierarchical representations can be learned by the hypergraph.
Weaknesses: Minor:
1. Since hypergraphs excel mainly at handling hyperedges of varied orders, would it be sub-optimal for a hypergraph to handle the fixed-order hyperedges in tabular data?
2. It is expected that some baseline hypergraph models, such as [1,2], are included in the comparison results.
3. It is suggested to elaborate on the hierarchical structures of tabular data in the Introduction, which is otherwise a little confusing. Further, the relationship between hierarchical structures and hypergraphs is still unclear.
[1] Hypergraph Neural Networks, 2018
[2] Hypergraph convolution and hypergraph attention, 2020
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1**: Since hypergraphs excel mainly at handling hyperedges of varied orders, would it be sub-optimal for a hypergraph to handle the fixed-order hyperedges in tabular data?
**A1**: We do not fix the orders of hyperedges. The **orders of hyperedges depend on the size of the tables** and differ across tables, as different tables have different numbers of columns and rows. For the hypergraph built from a given table with $m$ rows and $n$ columns, the order of a row hyperedge is $n$, the order of a column hyperedge is $m$, and the order of the table hyperedge is $mn$. So hyperedges of the same type within a table have the same order, but the orders are not fixed across tables.
As for the sub-optimality of a hypergraph that has fixed-order hyperedges: unfortunately, we are unaware of any work (theoretical or practical) that raises this concern. We would appreciate it if you could kindly refer us to works that state this, and we will appropriately incorporate them into the manuscript.
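To make the order bookkeeping above concrete, here is a minimal sketch (hypothetical helper, not our training code) that builds the three hyperedge types for an $m \times n$ table and checks their orders:

```python
def build_hyperedges(m, n):
    """Build hyperedges for an m-row, n-column table.

    Cells are indexed (i, j); each hyperedge is the set of cells it connects.
    """
    rows = [{(i, j) for j in range(n)} for i in range(m)]     # order n each
    cols = [{(i, j) for i in range(m)} for j in range(n)]     # order m each
    table = {(i, j) for i in range(m) for j in range(n)}      # order m*n
    return rows, cols, table

rows, cols, table = build_hyperedges(m=5, n=3)

# Orders match the rebuttal: row hyperedge order n, column order m, table order m*n.
assert all(len(e) == 3 for e in rows)
assert all(len(e) == 5 for e in cols)
assert len(table) == 15
```

Changing `m` or `n` changes the hyperedge orders, which is exactly why they are not fixed across tables.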
**Q2**: It is expected that some baseline hypergraph models, such as [1,2], are included in the comparison results.
[1] Hypergraph Neural Networks, 2018
[2] Hypergraph convolution and hypergraph attention, 2020
**A2**: It is important to note that the primary goal of our work is to model and showcase the benefits of modeling tables as hypergraphs rather than designing new hypergraph neural networks. We chose AllSet, the set-transformer-based hypergraph neural network, because **[3] demonstrated that it is provably more expressive than HNN [1] or HCHA [2]**. Specifically, [3] points out that the set transformer they leverage can capture high-order relations amongst nodes within the same hyperedge. On the contrary, HNN and HCHA use a clique-expansion approach, thereby losing any such higher-order interaction information. We leave experiments with HNN or HCHA in our framework as future work.
[3] Chien, Eli, et al. "You are allset: A multiset function framework for hypergraph neural networks." arXiv preprint arXiv:2106.13264 (2021).
**Q3**: It’s suggested to elaborate on the hierarchical structures of tabular data in the Introduction otherwise a little bit confusing. Further, it's still unclear the relationship between hierarchical structures and hypergraphs.
**A3**: We apologize for any confusion in the introduction. By the hierarchical structure of a table, we mean that the representation or semantics of a whole table can be learned from the semantics of its rows and columns, and the row and column semantics can in turn be aggregated from their constituent cell values. So **we can learn coarse-grained representations of table elements such as rows, columns, and whole tables from finer-grained cell representations through aggregation functions**. Previous BERT-based TaLMs abstract this by serializing the content of these cells into tokens in some order and using the BERT encoder as the aggregation function. But these methods ignore the innate table structure, such as permutation invariance, which we capture by modeling the table as a hypergraph and using the HyperTrans model as the aggregation function to aggregate the fine-grained cell information into the coarse-grained rows/columns and the whole table.
---
Rebuttal Comment 1.1:
Comment: I greatly appreciate the clarifications from the authors. I would like to keep the score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your consideration and the time you've taken to review our work. We appreciate your feedback. | null | null | null | null | null | null |
Federated Learning with Manifold Regularization and Normalized Update Reaggregation | Accept (poster) | Summary: The authors propose a new algorithm for federated learning based on a Lorentzian regularization. The proposed algorithm achieves the desired sub-linear convergence with linear speed-up. Numerical experiments show the efficacy of the proposed algorithm over existing federated learning algorithms.
Strengths: The proposed method is theoretically well-rounded, with a convergence guarantee for the smooth non-convex setting, which is the most common setting in the literature.
Weaknesses: (Please respond to the Questions section directly) The presentation of the paper is confusing. Also some technical details are not well-illustrated. The theory part of the paper is not clearly presented and seems to be abusing terminologies a lot.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: Major:
1. The presentation of the paper is confusing: First, I cannot link the proposed Algorithm 1 with the arguments in Section 3.2. More specifically, how are the “mapped Lorentzian vectors” in (3) used in Algorithm 1? The authors claimed that the regularized problem (5) has a number of desired properties, but is Algorithm 1 solving this regularized problem? The argument in this section seems to end nowhere; Second, I really cannot understand the usages of “hyperbolic” and “Lorentzian”. The hyperbolic space is defined as space with constant -1 curvature in mathematics, but here we are not dealing with curvatures at all. As for the Lorentzian regularizer defined in (4) and (5) (which again I don’t understand how it is used in Algorithm 1), why do we need the Lorentz metric? What would happen if we just used the Euclidean metric, or some Riemannian metric defined by a positive definite matrix? The entire Section 3.2 is confusing. On the other hand, the operations described in Section 3.2 are not well-reflected in Algorithm 1. For example, what’s $g_{i}^{t,k}$ in Algorithm 1?
2. As the authors claimed, the algorithm is built upon MoFedSAM [1]. How does the rate of convergence (in theory) of Algorithm 1 compare with existing works, especially MoFedSAM? If the theoretical rate is not improved, I would argue that this refined engineering over MoFedSAM might not be very interesting, since (from my perspective) the only difference of Algorithm 1 from MoFedSAM is the $\Delta_i$, which serves as a local update corrector.
3. I cannot understand what point Figure 2 tries to make. To me, Figure 2(b) is a rotation operation, not a projection; also, I don’t understand how the operation is used in Algorithm 1.
Minor:
1. In the abstract, the authors include a formula. I think it’s better to either explain the meaning of each of the variable in the formula, or just use words instead;
2. What does the name “FedMRUR” exactly stand for?
3. Typos: Line 227, Assumption 1, missed one $\nabla$; Line 263 “Dirichlet… from {0.30.6}” missed a comma?
References:
[1] Qu, Zhe, et al. "Generalized federated learning via sharpness aware minimization." International Conference on Machine Learning. PMLR, 2022.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: The limitation is well stated in weakness and question sections.
I’m not aware of any potential negative social impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive comments, which help us improve the paper. We have prepared our responses to each of your questions.
**- Q1(major):The presentation of the paper is confusing:
- I cannot link the proposed Algorithm 1 with all the argument in Section 3.2. More specifically, how is the “mapped Lorentzian vectors” in (3) used in Algorithm 1? The authors claimed that the regularized problem (5) has a number of desired property but is Algorithm 1 solving this regularized problem? The argument in this section seems to be ending at nowhere;
- I really cannot understand the usages of “hyperbolic” and “Lorentzian”. The hyperbolic space is defined as space with constant -1 curvature in mathematics, but here we are not dealing with curvatures at all. As for the Lorentzian regularizer defined in (4) and (5) (which again I don’t understand how it is used in Algorithm 1), why do we need Lorentz metric? What would happen if we just use Euclidean metric, or some Riemann metric defined by a positive definite matrix? The entire section 3.2 is confusing.
- On the other hand, all the described operations in Section 3.2 are not well-reflected in Algorithm 1. For example, what’s $g_{i}^{t,k}$ in Algorithm 1?**
- A1(major): For the answer, please refer to Q1 in the Author Rebuttal.
**- Q2(major): As the authors claimed, the algorithm is built upon MoFedSAM. How is the rate of convergence (in theory) of Algorithm 1 compared with existing works, especially MoFedSAM? If the theoretical rate is not improved, I would argue that this refined engineering over MoFedSAM might not be very interesting since (in my perspective) the only difference of Algorithm 1 with MoFedSAM is the $\triangle_i$ which serve as a local update corrector.**
- A2(major): For the answer, please refer to Q2 in the Author Rebuttal.
**- Q3(major): I cannot understand what point Figure 2 try to make. To me Figure 2(b) is a rotation operation, not projection, also I don’t understand how is the operation used in Algorithm 1.**
- A3(major): Sorry for the unclear presentation of Figure 2. Figure 2 is a toy schema comparing the naive and normalized aggregation of the local updates. The main difference is the norm $\lVert \triangle \rVert$ of the global update, which is the aggregation of the local updates $\triangle_p$ from the participating clients. With naive aggregation, the server directly takes the average of the local updates as the global update, so each client's contribution to the global update is $\lVert \triangle_p \rVert \cos \theta_p$, where $\theta_p$ is the angle between the local update $\triangle_p$ and the global update $\triangle$. When the data heterogeneity is significant, the divergence between the directions of the local updates is large; correspondingly, the divergence between each local update and the global update is large, and the global update norm shrinks, which slows down the convergence of the global objective function. With normalized aggregation, the server computes the direction of the global update by normalizing the sum of the local updates (the same direction as in the naive method), while the global update norm is obtained by averaging the norms of the local updates. The normalized aggregation is depicted in line 14 of Algorithm 1. In this way, client $p$'s contribution to the global update $\triangle$ grows from $\lVert \triangle_p \rVert \cos \theta_p$ to $\lVert \triangle_p \rVert$. Accordingly, the norm of the global update grows, which speeds up the convergence. To compare the two aggregation methods, we replace Figure 2 with a new one (Figure 2 in the attached pdf).
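A minimal numerical sketch of the two aggregation schemes (illustrative only; `naive_aggregate` and `normalized_aggregate` are hypothetical helper names, not our implementation) shows that both share the same global update direction, while the normalized scheme preserves a larger norm under heterogeneous local updates:

```python
import numpy as np

def naive_aggregate(updates):
    # Naive scheme: the global update is the plain average of local updates.
    return np.mean(updates, axis=0)

def normalized_aggregate(updates):
    # Normalized scheme: direction from the summed updates,
    # magnitude from the average local update norm.
    total = np.sum(updates, axis=0)
    direction = total / np.linalg.norm(total)
    magnitude = np.mean([np.linalg.norm(u) for u in updates])
    return magnitude * direction

# Heterogeneous clients: unit-norm local updates pointing in diverging directions.
updates = [np.array([1.0, 0.0]), np.array([0.6, 0.8]), np.array([0.6, -0.8])]

naive = naive_aggregate(updates)
normed = normalized_aggregate(updates)

# Same direction, but the normalized global update has a strictly larger norm.
assert np.allclose(naive / np.linalg.norm(naive), normed / np.linalg.norm(normed))
assert np.linalg.norm(normed) > np.linalg.norm(naive)
```

Here each client contributes its full norm $\lVert \triangle_p \rVert = 1$ under normalization, so the global norm is $1$, versus roughly $0.73$ for the naive average.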
**- Q1(minor): In the abstract, the authors include a formula. I think it’s better to either explain the meaning of each of the variable in the formula, or just use words instead;**
A1(minor): Thank you for the advice about the abstract. We now explain the meaning of the variables $S$ (the number of participating clients in one communication round), $K$ (the local interval), and $T$ (the total number of communication rounds) in the formula.
**- Q2(minor): What does the name “FedMRUR” exactly stand for?**
A2(minor): Thanks for the reminder. In the revised paper, we include the full name where FedMRUR first appears. The name "FedMRUR" stands for **Fed**erated Learning with **M**anifold **R**egularization and Normalized **U**pdate **R**eaggregation. With this name, we want to emphasize the two main components of our algorithm: the manifold model-fusion scheme, which uses the hyperbolic model to constrain the divergence between the local and global models, and the normalized client-update aggregation applied at the server to mitigate the norm reduction of the global update.
**- Q3(minor): Typos: Line 227, Assumption 1, missed one $\nabla$; Line 263 “Dirichlet… from {0.30.6}” missed a comma?**
A3(minor): Thank you for the careful proofreading. We add the missing $\nabla$ in Assumption 1 at Line 215 (in the new draft). At Line 252, we add the missing comma, and the sentence reads "Dirichlet coefficient $\mu$ from {0.3, 0.6}".
---
Rebuttal Comment 1.1:
Comment: I thank the authors for responses to my questions. May I know where I can find the revised version of the paper? For example I cannot find Theorem 9 mentioned in the rebuttal.
I still have some concerns with this work:
The first issue is that adding the extra hyperbolic regularization may increase the testing accuracy, but the justification in the authors' rebuttal "In FL, clients and the server logically also have a tree-like hierarchical topological relationship, so adopting the Lorentz metric of hyperbolic space makes use of the hierarchical information in the datasets and the hierarchical relationship between the server and clients, which are helpful to group data samples further bring prediction gains" seems too strong and general to be supported by the current evidence. However, this is still not the biggest issue I'm concerned about, since the observation may be a good starting point for incorporating geometric information in FL. The authors could give a comprehensive study on the effectiveness of different regularization terms as future work to support their broad claim.
My biggest concern is that I'm still not sure if the proposed algorithm is theoretically better than MoFedSAM[1]. As in the rebuttal "The main dominated term of convergence rate is ..., the convergence rate of FedMRUR is faster than MoFedSAM". Can the authors illustrate why the rate $(F^0-F^*)/(c\alpha\eta_g \sum_{t=1}^{T}d_t) + \Phi$ is better than $(F^0-F^*)/(c\alpha\eta_g T)$ in [1], as the authors wrote in the rebuttal? I'm still very concerned and confused here.
[1] Qu, Zhe, et al. "Generalized federated learning via sharpness aware minimization." International Conference on Machine Learning. PMLR, 2022.
---
Reply to Comment 1.1.1:
Comment: **-Q1: May I know where I can find the revised version of the paper? Where is Theorem 9.**
-A1: We are sorry that you cannot find the revised paper. Due to the limitations of the rebuttal system, we cannot upload the revised paper. We re-analyzed the convergence rate of the algorithm to better illustrate the effect of **normalized aggregation**. Theorem 9 is the revised version of Theorem 8 in the supplementary file. Compared with Theorem 8, under the same conditions, Theorem 9 gives a tighter upper bound on the convergence rate, which is
$$\frac{1}{\sum_{t=1}^{T} d_t} \sum_{t=1}^{T} \mathbb{E} \left\lVert \nabla F(w^t) \right\rVert^2 d_t \le \frac{F^0 - F^*}{C \alpha \eta_g \sum_{t=1}^{T} d_t} + \Phi.$$
In A2(the reply to Q2), we present our revision of the proof of Theorem 9.
**-Q2: Can the authors illustrate why the rate $\frac{F^0 - F}{C \alpha \eta_g \sum_{t=1}^{T} d_t} + \Phi$ is better than $\frac{F^0 - F}{C \alpha \eta_g T}$ in MoFedSAM paper?** ($F=F^*$)
-A2: Sorry for the typo in the convergence rate of MoFedSAM. The convergence rate should be $\frac{F^0 - F^*}{C \alpha \eta_g T} + \Phi$ (Theorem D.4 in the MoFedSAM paper). Due to the length limit of the reply, we present only the revised part of the proof and illustrate why FedMRUR is better in terms of the convergence rate. At the beginning of the proof of Lemma 7, we define $d_t= {\sum_{i \in S_t}\Vert \triangle_i^t \Vert }/{\Vert \sum_{i \in S_t} \triangle_i^t \Vert} \ge 1$, which is obtained by **normalized aggregation**, and introduce $\varepsilon_{\delta}$ from Lemma B.1 in the MoFedSAM paper. For the 2nd and 3rd terms on the R.H.S. of (6), we multiply them by $d_t$. Consequently, we multiply the entire R.H.S. of (7) by $d_t$. Finally, we rewrite (8) and draw a new conclusion for Lemma 7:
$$\mathbb{E}_t [ F(\tilde{w}^{t+1})] \le F(\tilde{w}^t) - K \eta_g \eta_l d_t (\frac{1}{2} - 20K^2L^2\eta_l^2)\left\lVert \nabla F(\tilde{w}^t) \right\rVert^2 + K\eta_g\eta_l\left( 6K^2\eta_l^2\alpha^4\rho^2 + 5K^2\eta_l\alpha^4\rho^2\sigma^2 + 20K^3\eta_l^3\alpha^2\sigma_g^2 + 16K^3\eta_l^4\alpha^6\rho^2 + \frac{\eta_g\eta_l\alpha^3\rho^2}{N} \sigma_l^2 \right)$$
Summing this new inequality over $t$ and multiplying both sides by $\frac{1}{C \alpha \eta_g \sum_{t=1}^T {d_t}}$, we have
$$\frac{1}{\sum_{t=1}^{T} d_t} \sum_{t=1}^{T} \mathbb{E} \left\lVert \nabla F(w^t) \right\rVert^2 d_t \le \frac{F^0 - F^*}{C \alpha \eta_g \sum_{t=1}^{T} d_t} + \Phi \le \frac{F^0 - F^*}{C \alpha \eta_g T} + \Phi,$$
where $\Phi$ is the same as the $\Phi$ in MoFedSAM. The first term is tighter than the one in MoFedSAM since $d_t \ge 1$. Thus, we obtain the conclusion of Theorem 9 (the revised version of Theorem 8). With proper choices of $\eta_g$, $\eta_l$, and $\rho$, the convergence rate can be rewritten as $O (\frac{1}{\sqrt{SKT}}) + O \left(\frac{\sqrt{K}}{{ST}} \right) + O \left( \frac{1}{\sqrt{K}T} \right)$, where the dominant term has the same order as the convergence rate of MoFedSAM (Theorem 4.1 in the MoFedSAM paper). Our derived bound agrees with MoFedSAM in order ($O (\frac{1}{\sqrt{SKT}})$), and our convergence analysis illustrates the effect of **normalized aggregation** on the constants.
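The key factor $d_t \ge 1$ is just the triangle inequality $\lVert \sum_i \triangle_i^t \rVert \le \sum_i \lVert \triangle_i^t \rVert$, with equality only when all local updates are perfectly aligned; a quick numerical check (illustrative sketch only, with a hypothetical `d_t` helper):

```python
import numpy as np

rng = np.random.default_rng(1)

def d_t(updates):
    # d_t = (sum of local update norms) / (norm of the summed update).
    total = np.sum(updates, axis=0)
    return sum(np.linalg.norm(u) for u in updates) / np.linalg.norm(total)

# Triangle inequality: ||sum u_i|| <= sum ||u_i||, hence d_t >= 1.
for _ in range(1000):
    updates = rng.normal(size=(5, 10))
    assert d_t(updates) >= 1.0 - 1e-12

# Equality holds when every client's update points in the same direction.
aligned = np.stack([3.0 * np.ones(10)] * 5)
assert np.isclose(d_t(aligned), 1.0)
```

The more the clients' updates diverge (heavier heterogeneity), the larger $d_t$ becomes and the tighter the first term of the bound gets relative to MoFedSAM.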
**-Q3: The first issue is the justification in the rebuttal about hyperbolic space.**
-A3: In FL, the clients train local models with their own data and the server merges these models into the shared global model. The shared global model should contain as much common information from each client as possible. The global model can be seen as a tree's root, and the local models can be viewed as its leaves. In addition, from the perspective of model structure, deep neural networks can be modeled as graphs [1,2] due to their multi-layered structures containing hierarchical information. Since its volume increases exponentially with radius, hyperbolic space can represent hierarchies with minimal distortion [3] and reduce the number of embedding dimensions [4]. Therefore, embedding deep neural networks in hyperbolic space makes model fusion more effective and improves performance. We also conduct experiments embedding deep neural networks in different geometric spaces using multiple seeds. The results are as follows:
| Space | Euclidean | Hyperbolic |
| :-------: | :-------: | :--------: |
| Test Acc. |67.46(0.32)| 68.99(0.35)|
From these results, we can see that embedding the model in hyperbolic space improves the test accuracy of the aggregated global model.
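For concreteness, here is a sketch of the Lorentz-model machinery behind such a regularizer: a generic squared Lorentzian distance between two parameter vectors mapped onto the unit hyperboloid (curvature -1). This is an assumption-laden illustration, not necessarily the exact regularizer in our Eq. (4)-(5).

```python
import numpy as np

def to_hyperboloid(z):
    # Map a Euclidean vector onto the Lorentz model (curvature -1):
    # x = (sqrt(1 + ||z||^2), z) satisfies <x, x>_L = -1.
    x0 = np.sqrt(1.0 + np.dot(z, z))
    return np.concatenate(([x0], z))

def lorentz_inner(x, y):
    # Lorentzian inner product: -x_0 * y_0 + sum_i x_i * y_i.
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def sq_lorentz_dist(x, y):
    # Squared Lorentzian distance: -2 - 2 <x, y>_L, non-negative on the hyperboloid.
    return -2.0 - 2.0 * lorentz_inner(x, y)

local = to_hyperboloid(np.array([0.3, -0.5, 0.2]))   # e.g., a local model embedding
glob = to_hyperboloid(np.array([0.1, -0.4, 0.3]))    # e.g., the global model embedding

assert np.isclose(lorentz_inner(local, local), -1.0)   # points lie on the hyperboloid
assert sq_lorentz_dist(local, glob) >= 0.0             # distance to another point
assert np.isclose(sq_lorentz_dist(local, local), 0.0)  # zero self-distance
```

Penalizing such a distance between the local and global embeddings keeps the local model close to the global one in hyperbolic rather than Euclidean geometry.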
Reference:
1. Singh et.al, "Model Fusion via Optimal Transport", NeurIPS 2020
2. Liu et.al, "Deep Neural Network Fusion via Graph Matching with Applications to Model Ensemble and Federated Learning", ICML 2022
3. M. Gromov, “Hyperbolic groups”, Essays in Group Theory, 1987
4. Peng et.al, "Hyperbolic Deep Neural Networks: A Survey", IEEE Trans. PAMI 2022
Thank you very much for your valuable comments, which help us to improve our work. If you have any further questions about our submission and rebuttal, please let us know. | Summary: The authors propose a novel Federated Learning Algorithm, FedMRUR that uses hyperbolic graph diffusion to reduce the effect of data heterogeneity and thereby model inconsistencies. The authors also propose a normalized aggregation scheme to achieve faster convergence. The algorithm FedMRUR achieves state-of-the-art performance on standard datasets.
Strengths: In terms of comparisons to baselines, FedMRUR seems to outperform in terms of accuracy for various Dir(\mu) settings versus the baselines. The closest competitor seems to be MoFedSAM.
Weaknesses: Finally, while convergence speed is quicker for FedMRUR, Table 3 in the Supplemental Materials actually has FedCM to have the quickest total time to achieve 60% test accuracy on CIFAR-100. Do the authors have comments or suggestions on when to use FedMRUR vs FedCM given the disparity between convergence speed and clock times?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. The authors should clarify notation in Figure 1 and Algorithm 1. In Figure 1, the global parameter is subscripted, w_0, but superscripted in the algorithm.
2. The algorithm can be hard to follow as it:
a. Did not specify the global parameter \eta_g in the parameters
b. Did not specify the gradients, g_{i}^{t,k}, though that is easily induced
c. More importantly did not specify how \tilde{g}_{i}^{t,k} is derived. This is mentioned later in line 213, but would make the presentation clearer by linking it directly to the equations
d. They can also specify the normalized aggregation more clearly, and tie to the algorithm 1.
3. The authors should discuss normalized aggregation in more depth, as it seems to contribute more than hyperbolic graph fusion (Table 3). In fact, it would be interesting to understand the effect of this operator in other baselines.
4. In terms of comparisons to baselines, FedMRUR seems to outperform in terms of accuracy for various Dir(\mu) settings versus the baselines. The closest competitor seems to be MoFedSAM. First, it would be useful to have test accuracy and test accuracy std to enable better comparison between comparably performing models. Second, how was Table 1 derived, as the accuracy numbers are quite different and higher than the comparable Table 5 in the MoFedSAM paper for CIFAR-100 with Dir().
5. Similarly, the convergence table 2 seem different for MoFedSAM and FedAvg vs Table 5 in the MoFedSAM paper. Is this a setup difference?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have addressed some limitations of their method
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive comments, which help us improve the paper. We have prepared our responses to each of your questions.
**- Weaknesses: Finally, while convergence speed is quicker for FedMRUR, Table 3 in the Supplemental Materials actually has FedCM to have the quickest total time to achieve 60% test accuracy on CIFAR-100. Do the authors have comments or suggestions on when to use FedMRUR vs FedCM given the disparity between convergence speed and clock times?**
- A(Weaknesses): FedCM achieves the fastest total time because it consumes less time per round than FedMRUR. Because the SAM operation and the hyperbolic graph fusion scheme require substantial computation, FedMRUR takes more time per round. In FL, academia and industry are more interested in test accuracy vs. communication rounds: under the FL setting, the connections between devices are often through Wi-Fi or cellular networks with limited bandwidth, while local computation resources are comparatively abundant, so it is common to reduce the number of communication rounds by adding local updates. In this paper, to be fair, we compare test accuracy vs. communication rounds under the same setting of local epochs. From the perspective of communication rounds, FedMRUR achieves the fastest convergence and the highest test accuracy; from the perspective of wall-clock time, FedMRUR lags behind FedCM at low accuracy, but at higher accuracy FedCM gradually falls behind FedMRUR.
**- Q1: The authors should clarify notation in Figure 1 and Algorithm 1. In Figure 1, the global parameter is subscripted, w_0, but superscripted in the algorithm.**
- A1: Thank you for the careful reading. To avoid confusion, we replace '$w_0$' with $w_g$ to denote the global parameter. Please refer to Figure 1 in the attached pdf.
**- Q2: The algorithm can be hard to follow as it: a. Did not specify the global parameter \eta_g in the parameters b. Did not specify the gradients, g_{i}^{t,k}, though that is easily induced c. More importantly did not specify how \tilde{g}_{i}^{t,k} is derived. This is mentioned later in line 213, but would make the presentation clearer by linking it directly to the equations d. They can also specify the normalized aggregation more clearly, and tie to the algorithm 1.**
- A2: For the answer, please refer to Q1 in the Author Rebuttal.
**- Q3: The authors should discuss normalized aggregation in more depth, as it seems to contribute more than hyperbolic graph fusion (Table 3). In fact, it would be interesting to understand the effect of this operator in other baselines.**
- A3: For the answer, please refer to Q2 in the Author Rebuttal.
**- Q4: In terms of comparisons to baselines, FedMRUR seems to outperform in terms of accuracy for various Dir(\mu) settings versus the baselines. The closest competitor seems to be MoFedSAM. First, it would be useful to have test accuracy and test accuracy std to enable better comparison between comparably performing models. Second, how was Table 1 derived, as the accuracy numbers are quite different and higher than the comparable Table 5 in the MoFedSAM paper for CIFAR-100 with Dir().**
- A4: Thank you for the advice and for carefully reading the experiments.
First, we add experiments to compare our method FedMRUR with MoFedSAM. The results on the CIFAR-100 dataset are as follows.
| Algorithm | $\mu=0.6$ | $\mu=0.3$ | $n=20$ | $n=10$ |
| :-------: | :-------: | :-------: | :----: | :----: |
| MoFedSAM |67.51(0.37)|66.81(0.36)|64.23(0.34)| 61.25(0.40)|
| FedMRUR |69.23(0.39)|68.99(0.35)|66.72(0.31)| 63.50(0.42)|
In this table, we record the averaged test accuracy and the standard deviation at the 1600-th communication round over 8 seeds. From these results, we can clearly see that FedMRUR achieves better performance than MoFedSAM.
Second, since the authors of the MoFedSAM paper have not released their code, we obtained the baseline from our own implementation of the paper, reproducing it as faithfully as possible for a fair comparison. As for the experimental setup, our paper sets the client participation rate to 0.1 and the number of local epochs to 5, while the MoFedSAM paper sets the client participation rate to 0.2 and the number of local epochs to 10. In addition, when replicating the MoFedSAM algorithm, we employ the ASAM [1] optimizer to compute the gradient of the local loss function.
**- Q5: Similarly, the convergence numbers in Table 2 seem different for MoFedSAM and FedAvg vs. Table 5 in the MoFedSAM paper. Is this a setup difference?**
- A5: Thank you for carefully reading the experiments. The difference between Table 2 in our paper and Table 5 in the MoFedSAM paper comes from the different setups in the two papers. In the MoFedSAM paper, the client participation ratio is $0.2$ and the local interval $K$ is $10$; in our paper, the client participation ratio is $0.1$ and the local interval $K$ is $5$. When the data heterogeneity is not severe (Dirichlet 0.6), more participating clients $\lVert S \rVert$ and a larger local interval $K$ result in faster convergence; when the data heterogeneity is severe (Dirichlet 0.3), a larger local interval $K$ forces the local parameters close to the local optima, which causes severe client drift and degrades the performance.
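As context for the Dir($\mu$) settings discussed above: label-skewed client splits in FL benchmarks are commonly generated by drawing per-class client proportions from a Dirichlet distribution, with smaller $\mu$ giving more severe heterogeneity. A minimal illustrative sketch (function name and defaults are ours, not the paper's code):

```python
import numpy as np

def dirichlet_partition(labels, n_clients, mu, seed=0):
    """Split sample indices across clients with Dir(mu) label skew.
    Smaller mu -> more heterogeneous (non-i.i.d.) client datasets."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        props = rng.dirichlet(mu * np.ones(n_clients))  # class-c shares per client
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_idx[client].extend(part.tolist())
    return client_idx

# 10 classes x 100 samples; mu = 0.3 gives a strongly skewed split
parts = dirichlet_partition(np.repeat(np.arange(10), 100), n_clients=5, mu=0.3)
print([len(p) for p in parts])
```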
Reference:
1. Kwon et al., "ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning of Deep Neural Networks", International Conference on Machine Learning, PMLR, 2021.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response, which clarified questions 2-5.
However, I am not convinced by their argument about run times in their response to Q1. I understand that communication rounds present a bottleneck and are the standard metric in many FL studies, while other operations can be performed locally and asynchronously. However, does the high computation cost of SAM and hyperbolic graph fusion, as stated by the authors, limit the application of FedMRUR to a narrower range of local devices, thereby limiting FedMRUR?
---
Reply to Comment 1.1.1:
Title: FedMRUR is more efficient with respect to both the communication round and wall-clock time when high-performance models are required.
Comment: Dear reviewer, thanks for your quick reply and valuable comments, which help us a lot to improve our submission.
**-Q: However, does the high computation cost of SAM and Hyperbolic Graph Fusion, as stated by the authors, limit the application of FedMRUR to a more narrow range of local devices thereby limiting FedMRUR?**
-A: To answer this question, we report the communication rounds, sampled gradients, and wall-clock time needed to reach different test accuracies on CIFAR100 with FedMRUR, FedCM, and MoFedSAM.
| Rounds vs. Acc. | 40% | 45% | 50% | 55% | 58% | 60% | 61% | 62% | 63% |
| :--------------:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| FedCM | 160 | 200 | 249 | 307 | 390 | 464 | 543 | 684 | 994 |
| MoFedSAM | 177 | 217 | 258 | 315 | 364 | 403 | 447 | 492 | 573 |
| FedMRUR | 198 | 219 | 253 | 288 | 318 | 352 | 372 | 411 | 438 |
| Sampled Gradient (x1000) vs. Acc. | 40% | 45% | 50% | 55% | 58% | 60% | 61% | 62% | 63% |
| :-------------------------:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| FedCM | 84 | 100 |124.5|153.5| 195 | 232 |271.5| 342 | 497 |
| MoFedSAM | 177 | 217 | 258 | 315 | 364 | 403 | 447 | 492 | 573 |
| FedMRUR | 198 | 219 | 253 | 288 | 318 | 352 | 372 | 411 | 438 |
| Wall-clock time(s) vs. Acc.| 40% | 45% | 50% | 55% | 58% | 60% | 61% | 62% | 63% |
| :-------------------------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:--------:|
| FedCM | 2205.84 | 2626.00 | 3269.37 | 4030.91 | 5120.70 | 6092.32 | 7129.59 | 8980.92 | 13051.22 |
| MoFedSAM | 3571.86 | 4379.06 | 5206.44 | 6356.70 | 7345.52 | 8132.54 | 9020.46 | 9928.56 | 11563.14 |
| FedMRUR | 4369.86 | 4833.33 | 5583.71 | 6356.16 | 7018.26 | 7768.64 | 8210.04 | 9070.77 | 9666.66 |
Because SAM computes the gradients twice, MoFedSAM and FedMRUR take more time in a single round. However, they require far fewer communication rounds than FedCM. From these tables, we can see that FedMRUR requires the least wall-clock time and the fewest sampled gradients to reach high test accuracy, thanks to its fastest convergence under the same hardware conditions. Considering the total wall-clock time, FedMRUR achieves a $1.20 \times$ speedup over MoFedSAM ($1.35 \times$ over FedCM) at the final accuracy level. Therefore, we conclude that FedMRUR is more efficient with respect to communication rounds, gradient computations, and wall-clock time when high-performance models are required.
Finally, thank you very much for your valuable comments, which help us improve our work. If you have any further questions about our submission and rebuttal, please let us know. | Summary: This paper studies the problem of model inconsistency across clients in federated learning (FL). The authors propose a method called FedMRUR, which uses a hyperbolic graph manifold regularization term and a normalized update aggregation scheme to alleviate the issues introduced by model inconsistency. Compared with previous works, the proposed FedMRUR can better reflect the model bias in a low-dimensional subspace and mitigate the norm reduction of global updates caused by model inconsistency. The authors prove the convergence of FedMRUR for nonconvex objectives under partial client participation. They also run experiments to show that FedMRUR can achieve a new state-of-the-art (SOTA) accuracy with less communication. | Rebuttal 1:
Strengths: 1. The work is written clearly and intuitively.
2. The considered problem is quite meaningful as the model inconsistency severely impairs the performance of FL and building an algorithm to solve this issue is a promising direction to improve FL.
3. The theoretical result is rigorous for non-convex settings under partial client participation.
4. There are experiments to support the theory as well as to show that the algorithm will be useful in practice.
Weaknesses: See questions
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. It is not clear what is the “manifold structure of their representations”.
2. What does it mean by “norm reduction” and why it is important?
3. For the non-i.i.d pathological-n setting on TinyImageNet, line 264 conflicts with Figure 3(b).
4. Does the setting of the parameter $\beta$ for hyperbolic graph manifold regularization have an impact on the performance of the algorithm?
5. In the ablation study, the author should discuss the hyperparameters sensitivity of $\gamma$ in a wider range.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to the Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive comments, which help us improve the paper. We have prepared our responses to each of your questions.
- Q1: It is not clear what is the “manifold structure of their representations”.
- A1: In our paper, we design the hyperbolic graph fusion scheme to mitigate the model inconsistency among the clients. In this scheme, we map the data representations of the clients to vectors in the Lorentz model of hyperbolic space. The induced distance of the Lorentz model is used to measure model inconsistency in our method and is taken as the regularization term in the objective function to constrain the model inconsistency and improve the performance of FL. Since hyperbolic geometry is a Riemannian manifold with constant negative curvature, its characteristic geometric property is that volume increases exponentially with radius, whereas in Euclidean space it grows polynomially. Hyperbolic space has two benefits that enable it to handle complex real-world FL applications well.
1. Hyperbolic space exhibits minimal distortion and fits hierarchies particularly well, since it closely matches the growth rate of tree-like data while Euclidean space does not.
2. Even with a low embedding dimension, hyperbolic models are surprisingly able to produce high-quality representations, which makes them especially favorable in low-memory and low-storage scenarios.
In realistic scenarios, many tree-like data structures exist, such as the hypernym structure in natural languages, the subordinate structure of entities in knowledge graphs, the organizational structure in financial fraud, and the power-law distribution in recommender systems. In FL, clients and the server logically also have a tree-like hierarchical topological relationship.
Therefore, adopting the Lorentz metric of hyperbolic space makes use of the hierarchical information in the datasets and the hierarchical relationship between the server and clients, which helps to group data samples and further brings prediction gains.
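For illustration, the Lorentz-model machinery described above can be sketched as follows. This is a generic hyperbolic-geometry sketch (curvature $-1$, exponential map at the hyperboloid origin), not the authors' implementation; the exact mapping used in (3)-(5) of the paper may differ:

```python
import numpy as np

def exp_map0(v):
    """Map a Euclidean representation v (a tangent vector at the origin
    of the hyperboloid) onto the Lorentz model of hyperbolic space
    with curvature -1."""
    norm = max(np.linalg.norm(v), 1e-9)  # guard against v = 0
    time = np.cosh(norm)                 # time coordinate x_0
    space = np.sinh(norm) * v / norm     # spatial coordinates
    return np.concatenate(([time], space))

def lorentz_distance(x, y):
    """Geodesic distance induced by the Lorentzian inner product."""
    inner = -x[0] * y[0] + np.dot(x[1:], y[1:])
    return np.arccosh(np.clip(-inner, 1.0, None))

# toy client/server representations standing in for Z_p and Z_g
z_local = np.array([0.5, -0.2, 0.1])
z_global = np.array([0.4, -0.1, 0.3])
reg = lorentz_distance(exp_map0(z_local), exp_map0(z_global))
print(f"hyperbolic regularization term: {reg:.4f}")
```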
We also conducted experiments using different geometric spaces over several seeds; the results are as follows.
| Space | Euclidean | Hyperbolic |
| :-------: | :-------: | :--------: |
| Test Acc. | 67.46(0.32) | 68.99 (0.34) |
From these results, we can see that using the Lorentz metric of hyperbolic space helps the algorithm achieve the highest test accuracy. By exploiting the "manifold structure of their representations" in hyperbolic space, FedMRUR improves the performance of FL.
- Q2: What does it mean by “norm reduction” and why it is important?
- A2: In the FL framework, because of the data heterogeneity among the clients, there are great divergences among the gradients of the local loss functions, and the similarity between the local updates is low during FL training. Geometrically, low similarity means the vectors are almost orthogonal (i.e., $\cos \theta \approx 0$, where $\theta$ is the angle between the vectors). When adopting the naive aggregation method, the server takes the average of many such vectors as the global update, whose norm will be very small. The global update then becomes small, which leads to slow convergence and diminishing returns. We also conduct experiments in the Ablation Study to verify that compensating for the "norm reduction" can improve the performance of the algorithm. Therefore, it is critical to compensate for the "norm reduction" in FL.
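The norm-reduction effect is easy to reproduce numerically: averaging many near-orthogonal unit vectors yields a vector whose norm shrinks roughly as $1/\sqrt{m}$. A small illustrative sketch (not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
# 100 local updates in high dimension: random directions are
# near-orthogonal, mimicking heterogeneous client updates.
updates = rng.standard_normal((100, 10_000))
updates /= np.linalg.norm(updates, axis=1, keepdims=True)  # unit norm each

naive_norm = np.linalg.norm(updates.mean(axis=0))
print(f"norm of naively averaged update: {naive_norm:.3f}")  # roughly 1/sqrt(100) = 0.1
```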
- Q3: For the non-i.i.d pathological-n setting on TinyImageNet, line 264 conflicts with Figure 3(b).
- A3: Thank you for the careful reading. For the non-i.i.d. pathological-n setting on TinyImageNet, the coefficient $n$ should be selected from $\{40, 80\}$; we have corrected this typo in the paper.
- Q4: Does the setting of the parameter $\beta$ for hyperbolic graph manifold regularization have an impact on the performance of the algorithm?
- A4: Thank you for the suggestion to study the effect of the parameter $\beta$ on the performance. We conducted the experiment with various settings of $\beta$.
Here, we provide the test accuracy after the 1600th communication round.
| $\beta=0.1$ | $\beta=0.5$ | $\beta=1.0$ | $\beta=5.0$ | $\beta=10.0$ |
| :---------: | :---------: | :------: | :------: | :-------: |
| 68.67 | 69.03 | 68.96 | 68.26 | 69.10 |
From this table, we can find that $\beta$ has a limited impact on the performance of the algorithm.
- Q5: In the ablation study, the author should discuss the hyperparameters sensitivity of $\gamma$ in a wider range.
- A5: Thank you for the suggestion to investigate the impact of $\gamma$ on the performance. We conducted the experiment with various settings of $\gamma$.
Here, we provide the test accuracy after the 1600th communication round.
| $\gamma=0.001$ | $\gamma=0.0005$ | $\gamma=0.0002$ | $\gamma=0.0001$ | $\gamma=0.00005$ |
| :------------: | :-------------: | :-------------: | :-------------: | :--------------: |
| 63.35 | 65.95 | 67.75 | 68.54 | 68.96 |
From this table, we find that $\gamma$ has a great impact on the performance. From (3) and (5), a high $\gamma$ slows down the local training process, while a low $\gamma$ is too small for the regularization term to constrain the model inconsistency. When $\gamma=0.00005$, the algorithm achieves the highest test accuracy.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer aqu4:
We really appreciate your constructive opinions that helped us improve this paper. All the discussions will be incorporated into our revision. If there are any concerns unresolved, please let us know and we are ready to have further discussions with you.
Thanks again for your time.
Best,
Authors. | Summary: The authors observe that in existing vanilla FL, the model inconsistency caused by the local data heterogeneity across clients results in near-orthogonality of client updates, which leads to global update norm reduction and slows down convergence. Moreover, the authors argue that previous works may fail to reflect the model inconsistency due to the complex structure of machine learning models and the Euclidean space's limitations in meaningful geometric representation. To resolve the above issues, they propose the FedMRUR algorithm for FL. By adopting a hyperbolic graph manifold regularizer and aggregating the client update norms as the global update norm, FedMRUR achieves a new state-of-the-art (SOTA) accuracy with less communication.
Strengths: originality: Exploiting the manifold structures of the representations can reflect the model bias better than the parameters (or gradients) method, to significantly reduce the model inconsistency. Aggregating the client updates norms as the global update norm can mitigate the norm reduction caused by model inconsistency.
quality: The paper is well written, with detailed experiments and ablation studies with other methods and the proposed variants.
clarity: The paper consists of an illustration of the workflow and text explanations of the proposed FedMRUR algorithm.
significance: The paper solves the problem of the model inconsistency across the clients in FL, reduces the model bias more effectively than its baseline, mitigates the norm reduction caused by model inconsistency, and improves the test performance.
Weaknesses: - There are some unclear points and confusing notations.
- More experiments can be conducted.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) Some notations are confusing. For example, in line 14 of Algorithm 1, what does the notation “$\delta_i^{t}$” mean?
2) In line 263, what should the setting of the Dirichlet coefficient be?
3) The authors should discuss the effect of local interval $K$ on the performance of the algorithm.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please find the weaknesses and limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive comments, which help us improve the paper. We have prepared our responses to each of your questions.
- Weakness
**- W1: There are some unclear points and confusing notations.**
- A1: Thank you for the careful reading. We have checked the whole paper and corrected all the typos, eliminated the unclear points and confusing notations, and unified the notations in Figure 1, Algorithm 1, and Section 3.2. In Figure 1, we replace $w_0$, $F_0$ with $w_g$, $F_g$ to denote the global parameter and function; please refer to Figure 1 in the attached pdf. In Algorithm 1, the local gradient is denoted by $\nabla F_p(\cdot)$, where $F_p(\cdot)$ is the local loss function formulated by (5) in Section 3.2. We replace Figure 2 with a new figure (Figure 2 in the attached pdf) to present the "Normalized Aggregation of Local Updates" more clearly. We also fix some typos in the proof; the convergence rate in Theorem 1 is $O \left(\frac{1}{\sqrt{SKT}}\right) + O \left(\frac{\sqrt{K}}{{ST}}\right) + O \left(\frac{1}{\sqrt{K}T}\right)$. In this way, we clearly demonstrate the two components of FedMRUR, "hyperbolic graph fusion" and "normalized aggregation of local updates", and validate the effectiveness of the algorithm.
**- W2: More experiments can be conducted.**
- A2: Thank you for the advice about adding experiments.
We conducted an experiment on the effect of the local interval $K$ and validated the linear speedup property of FedMRUR. The results can be seen in Figure 3 of the attached pdf. From the figure, when $K$ increases to $8$, the algorithm converges $2.0\times$ faster than with $K=4$. When $K$ is increased toward $\mathcal{O}\left((ST)^{\frac{1}{2}}\right)$, the impact of the second dominant term of (7) in Theorem 1 becomes greater and that of the first term becomes smaller. When $K$ increases from 8 to 16, the speedup of convergence is not obvious.
We also extend the range of $\gamma$ to study its hyper-parameter sensitivity by experiments.
| $\gamma=0.001$ | $\gamma=0.0005$ | $\gamma=0.0002$ | $\gamma=0.0001$ | $\gamma=0.00005$ | $\gamma=0.00002$ |
| :------------: | :-------------: | :-------------: | :-------------: | :--------------: | :--------------: |
| 63.35 | 65.95 | 67.75 | 68.54 | 68.96 | 68.45 |
From this table, we find that $\gamma$ has a great impact on the performance. From (3) and (5), a high $\gamma$ slows down the local training process, while a low $\gamma$ is too small for the regularization term to constrain the model inconsistency. When $\gamma=0.00005$, the algorithm achieves the highest test accuracy.
To study the impact of $\beta$ for hyperbolic graph manifold regularization on the performance, we conduct the experiment on CIFAR100 task with different $\beta$ settings and present the final test accuracy as follows.
| $\beta=0.1$ | $\beta=0.5$ | $\beta=1.0$ | $\beta=5.0$ | $\beta=10.0$ |
| :---------: | :---------: | :------: | :------: | :-------: |
| 68.67 | 69.03 | 68.96 | 68.26 | 69.10 |
From this table, we can find that $\beta$ has a limited impact on the final performance of the algorithm.
- Questions
**- Q1: Some notations are confusing. For example, in line 14 of Algorithm 1, what does the notation “$\delta_i^t$” mean?**
- A1: Sorry about the unclear presentation. "$\delta_i^t$" denotes the accumulated update of the local parameter at client $i$ within the $t$-th round (i.e., $w_i^{t,K} - w_i^{t,0}$). We have fixed this issue in Algorithm 1.
**- Q2: In line 263, what should the setting of the Dirichlet coefficient be?**
- A2: Thank you for the careful reading of the experiments. The setting of the Dirichlet coefficient should be $\{0.3, 0.6\}$.
**- Q3: The authors should discuss the effect of local interval $K$ on the performance of the algorithm.**
- A3: Thank you for the advice about experiments. We conducted the experiment with different local intervals $K$ on the CIFAR-100 dataset and plot the test accuracy against communication rounds. Within a certain range, enlarging $K$ can accelerate the convergence and improve the performance. When $K$ is outside this range, a larger $K$ means more updates on the local dataset, which forces the local parameter $w_{p}^{t,k}$ close to the local optimum $x_{p}^*$. This causes severe client drift that degrades the performance. In addition, since the amount of local data changes as the number of clients grows, we also test the linear speedup property by changing $K$. From the figure, when $K$ increases to $8$, the algorithm converges $2.0\times$ faster than with $K=4$. When $K$ is increased toward $\mathcal{O}\left((ST)^{\frac{1}{2}}\right)$, the impact of the second dominant term of (7) in Theorem 1 becomes greater and that of the first term becomes smaller. When $K$ increases from 8 to 16, the speedup of convergence is not obvious.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer HiX9:
We really appreciate your constructive opinions that helped us improve this paper. All the discussions will be incorporated into our revision. If there are any concerns unresolved, we would be glad to have further discussions.
Thanks again for your time.
Best,
Authors. | Rebuttal 1:
Rebuttal: **Thank you for the constructive comments, which help us improve the paper. We have prepared our responses to the common questions.**
**-Q1: Problem on hyperbolic space and presentation of algorithm. (EAwW;jAXP;mqsa)**
- A1: Sorry for our unclear presentation; we describe our method more clearly as follows:
- The "mapped Lorentzian vectors" $L$ in (3) are the model representations $Z$ mapped into hyperbolic space. In our method, we take the distance $R(w_p,w_g)$ between the mapped Lorentzian vectors in hyperbolic space, defined by (4) and (5), to measure the bias between the models $w_p$ and $w_g$. This distance is added to the original loss function $f(\cdot)$ as a regularization term to constrain the model inconsistency between the client and server sides. Algorithm 1 solves the new regularized objective $F(\cdot)$. To illustrate the connection between Algorithm 1 and Section 3.2 clearly, we fixed the typos in Algorithm 1.
- Since hyperbolic geometry is a Riemannian manifold with constant negative curvature, its characteristic geometric property is that the volume grows exponentially with the radius, whereas in Euclidean space it grows polynomially.
Such a geometric trait has 2 advantages:
1. Hyperbolic space exhibits minimal distortion and fits hierarchies particularly well, since it closely matches the growth rate of tree-like data while Euclidean space does not.
2. Even with a low embedding dimension, hyperbolic models are surprisingly able to produce high-quality representations, which makes them particularly advantageous in low-memory and low-storage scenarios.
In realistic scenarios, many tree-like data structures exist, such as the hypernym structure in NLP, the subordinate structure of entities in knowledge graphs, and the power-law distribution in recommender systems. In FL, clients and the server logically also have a tree-like hierarchical topological relationship, so adopting the Lorentz metric of hyperbolic space makes use of the hierarchical information in the datasets and the hierarchical relationship between the server and clients, which helps to group data samples and further brings prediction gains. Using the Euclidean metric, or some Riemannian metric defined by a positive definite matrix, is an interesting idea. Here, we show the results of experiments using different geometric spaces, as follows.
| Space | Euclidean | Hyperbolic |
| :-------: | :-------: | :--------: |
| Test Acc. | 67.46(0.32) | 68.99(0.35) |
From these results, we can see that the Lorentz metric of hyperbolic space helps the algorithm achieve the highest test accuracy. The reason is that the traits of hyperbolic space make good use of the hierarchical information in the datasets, which helps to group data samples and further brings prediction gains.
The representations generated by the model have fewer dimensions than the data, so mapping the representations to hyperbolic space introduces less computation overhead than mapping the data. We also conducted experiments mapping the original data to hyperbolic space over 8 seeds. The results are as follows.
| Original data | Representations |
| :---------: | :---------: |
| 69.05(0.57) | 68.99(0.35) |
From the table, we can see that both methods achieve similar performance. Thus, considering the computation overhead and performance, we only map the representations to hyperbolic space and do not perform the entire learning process in hyperbolic space.
- Section 3.2 is the basis of Algorithm 1. The first component is **Hyperbolic Graph Fusion**. With this scheme, we obtain a new manifold regularization term $R(w_p, w_g)$ to constrain the model inconsistency and formulate the new objective function $F(\cdot)$ with the regularization term. All the gradient computations are with respect to $F(\cdot)$, such as '$g_{i}^{t,k}$'. To depict our method more clearly, we replace '$g_{i}^{t,k}$' with '$\nabla F(\cdot)$' to denote the gradient.
The second component is **Normalized Aggregation of Local Updates**, a new global optimizer to compensate for the global update norm reduction, as described in line 14 of Algorithm 1.
**- Q2: Problem on Normalized Aggregation (EAwW;jAXP)**
- A2: Thank you for the comment on "Normalized Aggregation of Local Updates" from the perspective of the theoretical rate.
We checked the mathematical part and corrected some typos in the proofs of Lemma 8 and Theorem 9. The dominant term of the convergence rate is ${(F^0 - F^*)}/{(C \alpha \eta_g \sum_{t=1}^{T} d_t)} + \Phi$ in Theorem 9 (new version), where $d_t = {\sum_{i \in S_t}\Vert \triangle_i^t \Vert }/{\Vert \sum_{i \in S_t} \triangle_i^t \Vert}$ is introduced by the "Normalized Aggregation of Local Updates". Compared with the dominant term in the convergence rate of MoFedSAM (${(F^0 - F^*)}/{(C \alpha \eta_g T)}$, Theorem D.7 of their paper), the convergence rate of FedMRUR is faster, since $d_t \ge 1$ for all $t$ by the triangle inequality. Therefore, from a theoretical view, we can conclude that the "Normalized Aggregation of Local Updates" accelerates convergence. In fact, using this operator in other baselines can also improve their performance. Here, we present the effect of the normalized aggregation method applied to FedCM.
| Algorithm | $\mu=0.6$ | $\mu=0.3$ | $n=20$ | $n=10$ |
|:---------:|:---------:|:---------:|:------:|:------:|
| FedCM+ | 66.23(0.45) | 65.23(0.48) | 61.97(0.43) | 59.27(0.42) |
| FedCM | 64.87(0.42) | 63.50(0.44) | 60.58(0.39) | 57.56(0.36) |
| FedAvg+ | 55.24(0.43) | 53.05(0.41) | 46.89(0.38) | 42.14(0.37) |
| FedAvg | 53.85(0.38) | 51.20(0.37) | 45.70(0.32) | 40.80(0.33) |
Pdf: /pdf/401f0859350e1489e10d5ce5f0717dd69e03d57a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a federated learning framework called FedMRUR to deal with the model inconsistency caused by the local data heterogeneity across clients and insufficient geometric representation ability. To do this, it adopts a hyperbolic graph manifold regularizer to ensure that the representations obtained by the local and global models are close in a low-dimensional subspace. Then it aggregates the client updates norms as the global update norm to mitigate the norm reduction introduced by the near-orthogonality of client updates. A linear speedup property is theoretically proved for the proposed algorithm.
Strengths: + The motivation to deal with the model inconsistency and insufficient representation ability is clear.
+ It is interesting to adopt the hyperbolic graph fusion scheme in federated learning.
+ The theoretical guarantee is provided for the convergence and especially the proposed method exhibits a linear speedup.
+ The experimental results seem promising.
Weaknesses: I'm not very familiar with federated learning. Here are some major concerns:
- There are many ways to exploit the manifold structure. What's the advantage of adopting the graph fusion scheme, especially in the hyperbolic space? If the hyperbolic space is a good choice, then why not let the whole learning process work in the hyperbolic space?
- Figure 2 is less informative since the readers cannot understand how the normalized aggregation works and what's the difference between these two operations in terms of their workflow.
- It seems that the widely used CIFAR-10 benchmark is not involved for evaluation.
- The experiments to validate the linear speedup property are missing.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see the above part.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive comments, which help us improve the paper. We have prepared our responses to each of your questions.
- Weakness:
**- W1: There are many ways to exploit the manifold structure. What's the advantage of adopting the graph fusion scheme, especially in the hyperbolic space? If the hyperbolic space is a good choice, then why not let the whole learning process work in the hyperbolic space?**
- A1: For the answer, please refer to Q1 in the Author Rebuttal.
**- W2: Figure 2 is less informative since the readers cannot understand how the normalized aggregation works and what's the difference between these two operations in terms of their workflow.**
- A2: Sorry for the unclear presentation of Figure 2. Figure 2 is a toy schematic comparing the naive and normalized aggregation of the local updates. The main difference lies in the norm $\lVert \triangle \rVert$ of the global update, which is the aggregation of the local updates $\triangle_p$ from the participating clients. With the naive aggregation method, the server directly takes the average of the local updates as the global update. Each client's contribution to the global update is $\lVert \triangle_p \rVert \cos \theta_p$, where $\theta_p$ is the angle between the local update $\triangle_p$ and the global update $\triangle$. When the data heterogeneity is significant, the divergence among the directions of the local updates is large. Correspondingly, the divergence between the direction of each local update and that of the global update is large, and the global update norm shrinks, which slows down the convergence of the global objective function. With the normalized aggregation method, the server computes the direction of the global update by normalizing the sum of the local updates, the same as in the naive method, but obtains the global update norm by averaging the norms of the local updates. In this way, client $p$'s contribution to the global update $\triangle$ grows from $\lVert \triangle_p \rVert \cos \theta_p$ to $\lVert \triangle_p \rVert$. Accordingly, the norm of the global update grows, which speeds up the convergence. To compare the two aggregation methods, we replace Figure 2 with a new one (Figure 2 in the attached pdf).
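A minimal sketch of the two aggregation rules as described above (direction from the normalized sum of local updates; magnitude from the average local-update norm). Function names are illustrative, not the authors' implementation:

```python
import numpy as np

def naive_aggregate(local_updates):
    """Plain average of the local updates (FedAvg-style)."""
    return np.mean(local_updates, axis=0)

def normalized_aggregate(local_updates):
    """Direction of the summed local updates, rescaled to the
    average of the individual local-update norms."""
    total = np.sum(local_updates, axis=0)
    direction = total / np.linalg.norm(total)
    magnitude = np.mean([np.linalg.norm(u) for u in local_updates])
    return magnitude * direction

rng = np.random.default_rng(1)
deltas = rng.standard_normal((50, 5_000))  # near-orthogonal local updates
print(np.linalg.norm(naive_aggregate(deltas)),       # shrunken norm
      np.linalg.norm(normalized_aggregate(deltas)))  # average local norm
```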
**- W3: It seems that the widely used CIFAR-10 benchmark is not involved for evaluation.**
- A3: We place results on the more challenging CIFAR100 and TinyImageNet benchmarks in the main part of the paper. The results on CIFAR-10 are presented in Figure 1 and Tables 1 and 2 of the Supplementary Material. From these results, FedMRUR achieves the fastest convergence and the highest test accuracy.
**- W4: The experiments to validate the linear speedup property are missing.**
- A4: Thank you for the reminder about validating the linear speedup property. Because the whole dataset is fixed, increasing the number of clients changes the amount of data in each local dataset, which changes the entire optimization problem; therefore, we conduct the experiment under various settings of the local interval $K$ with a fixed number of clients to verify the linear speedup property. From Figure 3 in the attached pdf, when $K$ increases to $8$, the algorithm converges $2.0\times$ faster than with $K=4$. When $K$ is increased toward $\mathcal{O}\left((ST)^{\frac{1}{2}}\right)$, the impact of the second term of (7) in Theorem 1 becomes greater and that of the first term becomes smaller. When $K$ increases from 8 to 16, the speedup of convergence is not obvious.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer mqsa:
We really appreciate your constructive opinions that helped us improve this paper. If there are any concerns unresolved, we would be glad to have further discussions.
Thanks again for your time.
Best,
Authors.
Discriminative Entropy Clustering and its Relation to K-means and SVM | Reject | Summary: In this paper, the authors first presented an analysis of the relationship between the regularized information maximization (RIM) clustering framework to K-means and SVM-based clustering methods, showing stronger similarities to the SVM-based clustering than K-means. Then they proposed a new loss function and associated EM optimization algorithm for clustering leveraging the reverse cross-entropy/KL divergence to obtain more robust and fair clustering, which has been demonstrated to improve the performance on several balanced image classification benchmarks.
Strengths: 1. The authors identified an error about the missing normalization term in the proof of the equivalence theorem presented in Jabi et al. (2021).
2. The replacement of the forward cross-entropy with the reverse counterpart in the RIM loss appears novel in the clustering scenarios and has the potential to effectively mitigate the impact of uncertain/noisy pseudo-labels.
3. The proposed method showed improved performance on different image classification benchmarks over several baselines.
Weaknesses: 1. The manuscript was poorly written. The authors dedicated more than two pages to discussing the general background of the information maximization clustering framework and related methods. However, these discussions were confusing and largely limited the space for presenting the actual contributions of this work. Furthermore, many terms, including concepts like H and KL divergence, are not explicitly defined or explained, which may make it difficult to understand the differences between the forward and reverse formulations. Additionally, the claimed conceptual and algorithmic contributions seem to be independent of each other. It is unclear whether any of these conceptual clarifications contributed to the discovery of the new loss function.
2. The disproof of the equivalence theorem in Jabi et al. (2021) is not convincing. While the authors pointed out an error in the original proof, this does not necessarily eliminate the possibility that the equivalence itself remains valid. Furthermore, this work focused on the standard K-means objective (including Figure 1), whereas Jabi et al. (2021) considered a soft and regularized K-means loss.
3. The authors' claim that the L2 regularization is linked to margin maximization seems questionable. [1] demonstrated that margin maximization is a property of the loss function itself rather than the regularization, which serves to control the model complexity. Indeed, certain combinations of loss function and regularization are not margin-maximizing.
4. The experimental validation is limited. A more comprehensive evaluation of the proposed modifications to the loss function would involve investigating the individual and combined effects of these changes on selected benchmarks and then comparing the results with multiple established baselines. It is still unclear how each modification contributes to the final improvement. Although the authors presented the impact of the reverse cross-entropy modification, they did so within the fully supervised setting rather than unsupervised scenarios. Furthermore, the authors only considered balanced datasets and tested the clustering with the ground-truth number of clusters. A more diverse set of experimental conditions, including unbalanced datasets and varying numbers of clusters, would provide a more robust evaluation of the proposed method. Both the NMI and ARI metrics used in Table 4 are capable of handling different numbers of clusters. Additionally, the architecture used in Section 4.2 is different from that used in the baseline methods. It would be preferable to standardize the experimental settings, including the model architecture, so that the results can be directly compared with those in the literature. Lastly, the inability of the proposed method to properly train a deep network-based clustering model is a concern as well. Most of the tricks should be independent of the loss function modifications, especially the reverse cross-entropy term, and thus can be naturally integrated together.
[1] Rosset et al., 2003. Margin maximizing loss functions.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: 1. Could the tolerance of "highly unbalanced solutions where $\bar{\sigma}_k = 0$ for some cluster k" be a good thing, especially when we over-cluster in practice?
2. How were the coefficients of the norm regularization and KL divergence determined in the proposed method and relevant baselines? Were they shared across all the methods? How could we select these coefficients in practice in an objective way since the performance is very sensitive to these values?
3. The K-means results reported in Table 2 for CIFAR10/100 do not match with those in Table 3 in the IMSAT paper, but both employed the pretrained ResNet-50 extracted features. This raises the question of the reliability of the reported quantitative results.
4. Why only used the 20 coarse categories for the CIFAR100 benchmark?
5. The authors mentioned that "if we simultaneously learn the representation/feature and cluster the data, we observed that the results are less sensitive to such coefficient." But I could not find these results in the main text or supplementary material.
6. Table 4 presents a comparison with SCAN but does not include any comparisons with the baseline methods featured in Table 3 following the same two-stage training.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: There is no discussion of limitations or potential societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **the first two pages (introduction) are confusing... and limit the space from the actual contributions...**
It would greatly help if the reviewer could point out specific confusing points and redundancies. Adding the definitions of cross-entropy and KL divergence sounds good, thanks.
**"conceptual and algorithmic contributions seem to be independent of each other"**
Indeed, the conceptual and algorithmic contributions in Sections 2 and 3 are fairly independent on a technical level. But they are related as follows: Section 2 establishes general theoretical properties of entropy clustering solutions, and Section 3 develops a good algorithm for finding such solutions. For example, it would be unfortunate if someone implemented the EM algorithm for GMMs but did not know or misunderstood the conceptual properties of the GMM problem and its MLE solution.
**"The disproof of the equivalence theorem in [1] is not convincing..."**
The reviewer seems to agree that the specific technical mistake we pointed out in the proof of the equivalence theorem in [1] is fundamental and cannot be easily rectified. Now that their "proof'' is invalidated, there is no reason for the claim to be valid. For example, we did not see in [1] any intuition that led them to believe that regularized entropy clustering and soft K-means might be equivalent, which could have motivated them to try to prove this. In general, it is rare for two arbitrary losses to be equivalent. Without a valid proof, their claim should now be considered only a speculation or conjecture that is not clearly motivated by anything we can see.
Having said all that, we do want to prove to the community that their claim itself cannot be true, and this is why we are making our best effort to address your just criticism of our counterexample; see the first point in the general section of the rebuttal. We really appreciate your help with this, and we do believe that the revised Figure 1 addresses your concern.
**"The authors' claim that the L2 regularization is linked to margin maximization seems questionable..."**
The reference [2] convinced us that it is misleading to refer to the regularizer $||v||^2$ as a "margin-maximization'' term in Section 2.2, see lines 170 and 173. Indeed, the properties of the penalty function (hinge loss or cross-entropy) are also important for the margin maximization effect, as proven in [2]. On the other hand, without the regularization term, entropy-based clustering could be degenerate, as known since [3, 4], even though specific optimization algorithms may also invoke margin maximization bias in cross-entropy [5].
Other parts of Section 2.2 use terms like the "margin maximization effect'' only in the context of the combined losses (7,8,9,10,11), which should be appropriate, but we will definitely correct lines 170 and 173. We will also cite [2], which provides a perfect technical argument on line 177 to motivate the switch from the hinge loss in (7) to the cross-entropy in (8).
**"Although the authors presented the impact of the reverse cross-entropy modification, they did so within the fully supervised setting rather than unsupervised scenarios."**
We compared our clustering results on standard benchmarks with the algorithm by [1], which uses standard cross-entropy and standard KL divergence.
**" Add more diverse set of experiments"**
Our paper includes experiments on all the benchmarks that any prior entropy clustering works ever used.
**concerns about training features from scratch**
Our work is not focused on representation learning; however, we do at least as well as prior entropy clustering methods. We are very interested in addressing this issue in our future work.
**“How were the coefficients of the norm regularization and KL divergence determined in the proposed ...”**
The coefficient of the norm regularization should theoretically be very small, inducing maximum margin, as we discussed in the general reply. However, mildly larger values may be beneficial to the optimization. Thus, we still tuned this parameter, as well as the coefficient of the KL term, by simple grid search. We tuned these hyperparameters separately for each method that uses them.
**"The K-means results reported in Table 2 for CIFAR10/100 do not match..."**
First, those numbers in IMSAT were obtained over 5 years ago, and the pre-trained weights in the ResNet50 may have changed over time. Second, we ran their official code but could not reproduce their own results.
**"Table 4 presents a comparison with SCAN but does not include any comparisons with the baseline methods..."**
We added the two most important baselines in the table below, and we can observe a clearer improvement over these baselines.
| | **CIFAR10** | **CIFAR10** | **CIFAR10** | **CIFAR20** | **CIFAR20** | **CIFAR20** | **STL10** | **STL10** | **STL10** |
|:---|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| | ACC | NMI | ARI | ACC | NMI | ARI | ACC | NMI | ARI |
|SCAN|81.3(0.3)|71.2(0.4)|66.5(0.4)|42.2(3.0)|**44.1(1.0)**|26.7(1.3)|75.5(2.0)|65.4(1.2)|59.0(1.6)|
|MIADM| 74.76(0.3) | 69.17(0.2) |62.51(0.2) | 43.47(0.5) |42.85(0.4)| 27.78(0.4)|67.84(0.2) |60.33(0.5)| 51.67(0.6) |
|IMSAT| 77.64(1.3) | 71.05(0.4) |64.85(0.3) |43.68(0.4) |42.92(0.2)|26.47(0.1)|70.23(2.0) | 62.22(1.2) | 53.54(1.1) |
|our|**83.09(0.2)**|**71.65(0.1)**|**68.05(0.1)**|**46.79(0.3)**|43.27(0.1)|**28.51(0.1)**|**77.67(0.1)**|**67.66(0.3)**|**61.26(0.4)**|
[1] Jabi et al. (2021)
[2] Rosset et al. (2003)
[3] Bridle et al. (1991)
[4] Krause et al. (2010)
[5] The implicit bias of gradient descent on separable data. Soudry et al. (2018)
---
Rebuttal Comment 1.1:
Comment: Thanks for the response, which has addressed some of the concerns. However, several remain. First, all clustering results are obtained using the "ground truth" number of clusters, but we rarely have this information in practice. Thus, it would be much better to test the robustness of the proposed method with different numbers of clusters. Both the NMI and ARI metrics are capable of handling different numbers of clusters. Second, the authors still did not provide any explanation of why only the 20 coarse categories were used for the CIFAR100 benchmark. It is unclear whether the proposed method only shows benefits with a limited number of clusters.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to clarify these remaining concerns. We simply did not have space in the original rebuttal to answer all of your detailed comments. We are happy for the opportunity to address these here.
We would like to divide your comment into two relatively independent parts: (A) how does our method work when the number of clusters $K$ is not known, and (B) evaluating our method for different but **fixed** $K$.
**(A) Estimating the unknown number of clusters $K$**
In general, the assumption of the known number of clusters is significant and limits the practical usefulness of any clustering method based on this assumption. We are acutely aware of this in the context of some practical applications in our lab (e.g. clustering of DNA sequences). We are highly interested in addressing this question in our future work.
As far as our submission is concerned, we do not make any general claims that we solved the clustering problem, which is fundamentally ill-posed [1]. Thus, some assumptions always have to be made (ideally, specific to the application).
More specifically, our paper is focused on the general entropy clustering methodology [2-8]. All these and other prior works on entropy clustering assume that the number of clusters is fixed (even though some of them do experiments for different **fixed** $K$ on either simpler or different problems, see (B)). This assumption is very similar to the fundamental assumption in the basic *K-means* methodology, which is even reflected in its name. This is not to say that K-means cannot be generalized in a way where $K$ becomes an estimated variable, but this requires additional terms added to the loss (e.g. AIC or BIC information theoretic criteria). In fact, similar ideas can be explored in the context of entropy clustering, but, this is a substantial topic on its own. We do have plans to continue in this direction in our future work.
We can more clearly state that fixed $K$ is an assumption made by us and all the previous works on entropy clustering (at least those we are aware of).
**(B) Evaluating performance for different fixed $K$**
Evaluating robustness to different (but fixed) $K$ would be interesting, but no prior entropy clustering papers compare NMI and ARI metrics using different $K$ on the data like STL10, CIFAR10, CIFAR100. All such prior works [2-8] typically do not vary $K$ on these datasets - they use them with NMI and ARI for one fixed $K$ (the same as in our experiments). We only see two exceptions: [3] shows some results with various $K$ but on much simpler datasets, while [7] vary $K$ on ImageNet - they are focused on representation learning and do not report clustering quality metrics like NMI or ARI. Also, we agree with your point about testing $K>20$ on CIFAR100, but we simply stuck to the experiments that we saw in prior work on entropy clustering.
Moreover, we do not see any technical reason why our approach would differ from other entropy clustering methods in its robustness to $K$. We agree that entropy clustering methods (in general) should be evaluated for robustness to $K$, but it may be best to combine such evaluation with the proper study of point (A) above and present these in a separate dedicated paper.
[1] An impossibility theorem for clustering, Kleinberg, NeurIPS 2002
[2] Bridle et al. 1991
[3] Krause et al. 2010
[4] Hu et al. 2017
[5] Ghasedi Dizaji et al. 2017
[6] Ji et al. 2019
[7] Asano et al. 2020
[8] Jabi et al. 2021 | Summary: This paper first discusses a number of general properties of entropy clustering methods, including their relation to K-means and unsupervised SVM-based techniques.
Then the authors find that cross-entropy is not robust to pseudo-label errors in clustering.
Finally, this paper proposes a new loss function based on reverse KL divergence for clustering to obtain more robust and fair clustering.
Strengths: (1) The proposed loss function is interesting and seems to be effective in obtaining more robust and fair clustering.
(2) The authors try to establish connections between entropy clustering methods and classical methods.
Weaknesses: (1) This paper is not well organized. There are too many details for the proposed method. Some of them can be moved to Appendix.
(2) There can be more descriptions and examples about the proposed method.
(3) The proposed method is merely tested on three image datasets.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: (1) Will the authors open-source the code?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: I mentioned them in the Weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **"This paper is not well organized. There are too many details for the proposed method. Some of them can be moved to Appendix."**
If possible, please indicate which specific parts of Section 3 you prefer to be in the supplementary materials.
**"There can be more descriptions and examples about the proposed method."**
We extended our previous Figure 1 illustrating the general properties of entropy clustering losses and added a new Figure 2 further illustrating the dependence on the regularization parameter $\gamma$ (both figures are in the rebuttal's PDF). The empirical example in Figure 3 (in the paper) motivates our reverse formulation of cross entropy as more robust to errors in the pseudo labels. Figure 2 (in the paper) conceptually motivates our stronger formulation of the fairness constraint. As far as our closed-form EM algorithm is concerned, its properties are similar to other known EM techniques. One good example is the famous EM algorithm used for estimating Gaussian Mixtures (GMMs).
**"The proposed method is merely tested on three image datasets."**
We have results for four datasets: MNIST, STL10, CIFAR10 and CIFAR100 (20). These are all the deep clustering benchmarks used in the previous papers on entropy clustering. Our method consistently achieves the best performance.
**"Will the authors open-source the code?"**
Yes, once the paper is accepted. | Summary: The paper presents a very interesting analysis of discriminative entropy clustering in the literature and their use for self-labeling highlighting clear interpretation of the conditional and marginal entropy terms as decisiveness (push to have confident predictions) and fairness (to encourage desired proportions in clusters). The paper analyzes the variants of this kind of losses and their connections. They also discusses the relationship of this loss to K-means and refines the previously mistaken connection in a previous paper to point out the distinct difference. They further point out the effectiveness of reverse cross-entropy in case of uncertainty error and forward KL term to make them more effective in the endpoints of softmax interval. They also propose to use the regularization of classifier weights for margin maximization similar to SVM. A closed form update is derived to compute the pseudolabels from the combined weighted loss and shown its efficacy in clustering experiments.
Strengths: - I think the paper discusses beautifully the intricacies and usage of Mutual information based entropy clustering loss, which is widely used in self labeling/self supervised/ weak supervised learning. The flow of the discussion is to the point and tries to give interpretation of each terms in the loss in a concise manner by drawing connection among the variants.
- The paper discusses the previous link of entropy clustering to K-means and identifies the distinct difference from K-means, which was misstated in the previous literature. On top of that, the authors show the usefulness of the classifier weight regularization used in the previous proof by linking it to the loss explicitly for margin maximization, similar to SVM-based clustering.
- The use of reverse cross-entropy and forward KL term and the motivation behind it is explained very well with the aid of Figure 2, so that they are more robust in case of uncertainty around the corner of the softmax simplex.
- The EM algorithm is formulated nicely to run faster with batch-wise operation, and the global optimality of its solution at convergence is guaranteed owing to the convexity of the formulated tight upper bound.
- The experimental results show clear improvement according to their claim.
Weaknesses:
1. I would say the results of joint clustering and feature learning in Table 3 are not encouraging: even for a very shallow network (VGG4), the improvement is not significant apart from MNIST. With ResNet-18, the inductive bias learned from pretraining is helping, so the improvement from the proposed loss might not be very significant. Also, in Table 4, is the regularization on the feature extractor $\textbf{w}$ done in the loss or by weight decay?
2. The experiments on semi supervised learning could also be shown to understand more when some supervision is available. How the idea of using reverse cross-entropy could be used in case of labeled one-hot y in equation 13? or it will be the regular cross-entropy for labeled set and reverse for pseudolabels with updated y_i?
3. What if the loss in (13) is directly optimized with gradient descent instead of using EM? Although it seems that if $y_i = \sigma_i$, the reverse cross-entropy does not change anything without the closed-form update of $y_i$, does it?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - In Algorithm 1, after updating the pseudolabels y_i in closed form, the loss in the gradient descent is without the KL term in Algorithm 1 in the appendix. Is it a typo, or is the KL term not required here?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The limitations are not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **"But in Resnet-18, the inductive bias learned from pretraining is helping, then the improvement from the proposed loss might not improve very significantly with the proposed loss. Also in Table 4, the regularization on the feature extractor $w$
done or not in the loss or by weight decay?"**
We extended Table 4 to include previous (competing) entropy clustering methods. The revised Table can be found in the answers to reviewer aVDq. Our approach works significantly better than them. Please also note that all methods (including ours) use weight decay for $w$ (representation layers).
**"The experiments on semi supervised learning could also be shown to understand more when some supervision is available. How the idea of using reverse cross-entropy could be used in case of labeled one-hot y in equation 13? or it will be the regular cross-entropy for labeled set and reverse for pseudolabels with updated $y_i$?"**
We have some semi-supervised experiments in the supplementary materials, though not very extensive as we keep exploring the applications. Indeed, the reverse cross-entropy is not well-defined for one-hot $y$, but simple $\epsilon$-softening works fine in practice. Our semi-supervised experiments use standard cross-entropy for points with ground truth labels, but we know that it works the same with the reverse cross-entropy and $\epsilon$-softening.
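The $\epsilon$-softening mentioned above can be sketched as follows. This is a minimal hypothetical illustration (the function names and the particular softening formula $y_\epsilon = (1-\epsilon)\,y + \epsilon/K$ are our assumptions, not the paper's exact implementation), showing why the reverse cross-entropy $H(\sigma, y) = -\sum_k \sigma_k \log y_k$ becomes finite after softening a one-hot label:

```python
import numpy as np

def soften(y_onehot, eps=0.05):
    # Hypothetical epsilon-softening: mix the one-hot label with the
    # uniform distribution so that no component of y is exactly zero.
    K = len(y_onehot)
    return (1.0 - eps) * y_onehot + eps / K

def reverse_cross_entropy(sigma, y):
    # H(sigma, y) = -sum_k sigma_k * log(y_k); undefined for one-hot y.
    return -float(np.sum(sigma * np.log(y)))

sigma = np.array([0.7, 0.2, 0.1])   # network prediction
y = np.array([1.0, 0.0, 0.0])       # ground-truth one-hot label
y_soft = soften(y)

assert np.isfinite(reverse_cross_entropy(sigma, y_soft))  # finite after softening
```

The softened label still sums to one and stays close to the original one-hot vector, so the standard cross-entropy behavior is essentially preserved for labeled points.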
**"What if the loss in 13 is directly optimized with gradient descent instead of using the EM? Although, it seems if $y_i==\sigma_i$
the reverse cross-entropy does not change anything if not updating $y_i$ with closed-form update, is it?"**
In fact, each $y$-update (the EM part of the overall algorithm) is always initialized from the current predictions, i.e. $y_i = \sigma_i$. Indeed, if gradient descent were used for updating $y$, the decisiveness term based on our reverse cross-entropy would give zero gradients. But the fairness part of the loss would not, unless the current predictions are fair. In fact, the EM algorithm would not change $y$ in this case either (if the predictions are fair). However, the update of the predictions will work for our reverse cross-entropy even if the current predictions agree with the pseudo-labels $y_i = \sigma_i$, see the solid lines in Figure 2b, unless the predictions are uniform. The update will encourage certainty in the predictions.
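The zero-gradient claim for the $y$-update can be checked numerically. This is a hypothetical sketch under our reading of the loss: at $y = \sigma$, the gradient of the reverse cross-entropy decisiveness term $-\sum_j \sigma_j \log y_j$ with respect to $y$ is constant across components, so its projection onto the simplex tangent space (gradient minus its mean) vanishes.

```python
import numpy as np

sigma = np.array([0.5, 0.3, 0.2])   # current prediction (on the simplex)
y = sigma.copy()                     # y-update initialized at y = sigma

# d/dy_k of -sum_j sigma_j log y_j equals -sigma_k / y_k, i.e. -1 everywhere
# when y = sigma.
grad = -sigma / y

# Project onto the tangent space of the simplex (subtract the mean component):
# a constant gradient has zero projection, so gradient descent leaves y fixed.
projected = grad - grad.mean()

assert np.allclose(projected, 0.0)
```

This matches the observation above that a plain gradient step on the decisiveness term cannot move $y$ away from $\sigma$; only the fairness term (or the closed-form update) does.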
**"In Algorithm 1, after updating the pseudolabels $y_i$ in closed form solutions, the loss in the gradient descent is without the KL term in Algorithm 1 in appendix. Is it a typo or the KL term is not required here?"**
This is not a typo. The fairness constraint is imposed on the pseudo-labels only, which is common for the self-labeling entropy clustering methods we review in our paper [1, 2]. It is also true for SVM clustering in [3]. This is actually intended: the difficult task of optimizing a challenging non-convex loss (combining decisiveness and fairness) is delegated to a dedicated specialized solver (e.g., the EM algorithm in our case) rather than to gradient descent (backpropagation).
[1] Asano et al. (2020)
[2] Jabi et al. (2021)
[3] Xu et al. (2004) | Summary: The authors consider discriminative entropy clustering and produce a discussions linking previous works. They have a version of the algorithm based on EM and a modified KL-divergence term. Experiments show the modified algorithm works better than competing methods with small networks.
Strengths: - The authors provide a good overview of discriminative entropy clustering and its development, from the work of Bridle et al to the regularized version by Krause et al, to the more recent work using deep learning and representation learning (Asano et al and Jabi et al).
Weaknesses: - The contribution of this paper is somewhat limited:
1. The pointing out of a proof error in Jabi et al is helpful but is not significant on its own
2. The discussion on SVM is based on previous works and simply replaces the logistic loss with margin loss, and is not particularly insightful
3. Section 3 is the authors' contributed new algorithm, but the main difference with previous works is changing the order of the KL term in the objective.
- The improvements in empirical evaluations, compared to other methods, are somewhat limited. Many of the improvements are within standard error of competing methods.
- The authors use a lot of space to discuss previous work (first 5.5 pages), and not enough space to explain what is new about their method and specifically what problems it addresses.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Is there any explanation on the joint clustering and feature learning working only on small networks like VGG4, but not on larger VGG or ResNet18?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations not mentioned; potential negative societal impact not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **"The pointing out of a proof error in [1] is helpful but is not significant on its own."**
While we agree, there are many other contributions in our paper. For example, besides finding a fundamental problem in
their proof, we also show a counterexample proving that their actual claim is wrong.
**"The discussion on SVM is based on previous works and simply replaces the logistic loss with margin loss, and is not particularly insightful."**
The technical contribution of Sec 2.2 is that it shows how to combine the arguments in [2, 3, 4] to demonstrate that regularized
entropy clustering losses like (10) or (11) have (soft) margin maximization properties.
The perceived simplicity of our arguments is compensated by the conceptual significance of the newly
discovered property that was previously unknown for the entropy-based clustering introduced in [5].
**"Section 3 is the authors' contributed new algorithm, but the main difference with previous works is changing the order of the KL term in the objective.''**
Our EM algorithm addresses a new entropy clustering loss reversing the order of both cross-entropy (decisiveness) and KL divergence (fairness). We believe that both are well-motivated conceptually by robustness to (pseudo) label errors (see Fig.2 \& 3) and stronger fairness (Fig.2). Our new loss is also motivated empirically as the algorithm improves over the previous entropy clustering formulations.
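A small numeric sketch of the stronger fairness effect of reversing the KL term. The direction convention here is our assumption for illustration: with a near-empty cluster in the average prediction $\bar\sigma$, the term $KL(u \,\|\, \bar\sigma)$ (uniform target $u$ in front) penalizes the imbalance much more heavily than $KL(\bar\sigma \,\|\, u)$, which stays bounded.

```python
import numpy as np

def kl(p, q):
    # KL(p || q) = sum_k p_k * log(p_k / q_k), for strictly positive p, q.
    return float(np.sum(p * np.log(p / q)))

u = np.full(3, 1.0 / 3.0)                  # uniform fairness target
sigma_bar = np.array([0.8, 0.199, 0.001])  # one nearly-empty cluster

strong_fairness = kl(u, sigma_bar)  # diverges as the empty cluster -> 0
weak_fairness = kl(sigma_bar, u)    # bounded even for degenerate clusters

assert strong_fairness > weak_fairness
```

As $\bar\sigma_k \to 0$ for some cluster $k$, the first quantity grows without bound while the second stays finite, which is one way to read the "stronger fairness" motivation of the reversed term.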
**"The improvements in empirical evaluations, compared to other methods, are somewhat limited. Many of the improvements are within the standard error of competing methods."**
We respectfully disagree. First, the improvements are very consistent across all experiments and data sets.
Sometimes they are small, but in some cases they are significant. We also added an extended version of Table 4 (included in our answer to reviewer aVDq). This table compares with the state-of-the-art deep clustering (not necessarily) entropy-based.
This might be the most relevant experiment for practically-minded readers interested in clustering as this experiment
removes the self-imposed constraints of other experiments (e.g. fixed-features or training features from scratch).
The revised Table 4 shows fairly significant improvements, particularly over previous entropy clustering methods.
**"The authors use a lot of space to discuss previous work (first 5.5 pages), and not enough space to explain what is new about their method and specifically what problems it addresses."**
Please note that we consider Section 2 (pages 4-6) as a conceptual contribution of our paper studying entropy clustering,
which we like a lot. Section 2 improves theoretical understanding of the properties of entropy clustering that are not well understood (e.g. margin maximization and relation to SVM) or even misunderstood in the ML community (false connection to K-means in a recent TPAMI paper). Section 3 also provides an algorithmic contribution, also based on an improved understanding of the limitation in prior formulations.
**"Is there any explanation on the joint clustering and feature learning working only on small networks like VGG4, but not on larger VGG or ResNet18?"**
Intuitively, larger networks are more difficult to train to obtain good features. In other words, there will be more local minima. From our experiments, good initialization helps to avoid bad local minima, see the revised Table 4.
[1] Jabi et al. (TPAMI 2021)
[2] Rosset et al. "Margin maximizing loss functions" (2003)
[3] Bishop (2006)
[4] Xu et al. (2004)
[5] Bridle et al. (1991)
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the rebuttal. I am still skeptical about the claimed contributions in Section 2.2 after the rebuttal. In classification it is common to switch between the hinge loss in SVM and cross entropy loss in logistic regression, and the linear class balance constraint used in Xu et al 2004 and the entropy regularization term are also well-known in semi-supervised learning literature. So what is the main new understanding in this section?
As for the point on Section 2 (pages 4-6), it still reads much more like a literature review than a description of new theoretical understanding. If there is new theoretical understanding, the authors could have stated it formally using a simple lemma or theorem. This would make it easier for reviewers to evaluate whether the claim is novel or significant.
Updated Table 4 shows improved results, but the results in Tables 2 and 3 are mostly still within standard errors of top 2/3 competing methods. The issue with the empirical evaluations is not just whether the method beats the state-of-art, but whether the improvements come from the algorithmic advances the authors propose. Since the empirical difference is relatively small, how can we be sure the improvements come from the self-labeling mechanism or changes to the KL-term?
---
Reply to Comment 1.1.1:
Comment: >**"In classification it is common to switch between the hinge loss in SVM and cross entropy loss in logistic regression, ... what is the main new understanding in Section 2?"**
Indeed, "switching" between hinge loss and cross-entropy is known in SVM **classification**, as discussed in textbooks [1] that we cite. But we would like to emphasize that our paper is on **clustering**, which is a significantly different problem, in particular, it is **unsupervised**.
To the best of our knowledge, we are the first to point out the relation between entropy clustering and margin maximization. Perhaps the arguments in our paper make this look obvious, but it was not known to many highly respected scientists working on entropy clustering [2-5]. These papers have no references to SVM clustering [6], perhaps because it was not obvious to them that any relation exists. Moreover, a result published in 2021 in a top ML journal [5] claims that entropy clustering is equivalent to K-means. This directly contradicts the relation to SVM clustering [6], since K-means has no margin maximization property.
In short (and directly answering the reviewer's question), the main new understanding that our paper provides for the ML community is that entropy clustering has the margin maximization property and closely relates to SVM-based clustering [6]. The community currently believes instead that entropy clustering is related to K-means, which is incorrect.
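To illustrate the last point with a toy example of our own (a hypothetical 1-D construction, not one from the paper or its figures): on four points, the variance-based K-means objective and a margin-width criterion can already prefer different 2-cluster splits.

```python
# Toy 1-D illustration (our own construction): K-means picks the
# lowest-variance split, not the largest-margin one.
x = [0.0, 2.0, 3.0, 5.0]

def kmeans_cost(split):
    """Sum of within-cluster squared deviations (the K-means objective)."""
    cost = 0.0
    for cluster in (x[:split], x[split:]):
        mu = sum(cluster) / len(cluster)
        cost += sum((p - mu) ** 2 for p in cluster)
    return cost

def margin(split):
    """Gap between the two clusters, i.e. the margin width (up to scale)."""
    return x[split] - x[split - 1]

splits = range(1, len(x))
best_variance = min(splits, key=kmeans_cost)  # K-means: {0,2} | {3,5}, gap 1
best_margin = max(splits, key=margin)         # max-margin: {0} | {2,3,5}, gap 2
print(best_variance, best_margin)  # 2 1 -- the two criteria disagree
```

Even in this tiny example, K-means trades away a margin of width 2 for more compact clusters, consistent with the point that K-means has no margin maximization property.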
>**"Section 2 still reads much more like a literature review than a description of new theoretical understanding. If there are new theoretical understanding, the authors could have stated it formally using a simple lemma or theorem."**
Concerning Section 2.1, we do not see how an error that we found in the proof of the main result in [5] (equivalence to K-means) can be stated as a theorem. Neither do we understand how our counterexample to their claim (see Fig 1) can be stated as a theorem. Does the reviewer have specific suggestions?
Concerning Section 2.2, it might be possible to state some formal property relating regularized entropy clustering to margin maximization (e.g. see the informal claim in the first part of the general rebuttal). However, such a formal claim may be formalistic and weak, since an assumption restricting the claim to fair solutions is unavoidable. A similar issue also exists in standard **soft** SVM formulations for classification: such losses seek a compromise between margin size and margin violations, but it is difficult to formally define what max margin even means for non-separable labeled data. In our unsupervised setting, the data does not even have ground truth, and our combined loss for unsupervised clustering also includes a soft fairness constraint (e.g. the KL divergence term). For now, we prefer not to make any formal claims in Sec 2.2 and leave this development for future work.
We believe that Section 2.2 works sufficiently well without a formal claim. For example, [6] introduced "max-margin clustering" (as stated in their title) but they do not have any theorem or lemma proving that their method produces max-margin clusters, which are not even formally defined. Yet, most readers of that paper will probably be convinced (just like us) that their formulation of clustering is related to margin maximization. Their self-labeling methodology integrates the standard soft margin SVM formulation with a soft fairness constraint.
**References**
[1] Bishop (2006)
[2] Bridle et al. (NeurIPS 1991)
[3] Krause et al. (NeurIPS 2010)
[4] Hu et al. (ICML 2017)
[5] Jabi et al. (TPAMI 2021)
[6] Xu et al. (NeurIPS 2004) | Rebuttal 1:
Rebuttal: # General points for all reviewers
## Entropy Clustering and Margin Maximization (Sec 2.2)
Reference [1] provided by aVDq not only strengthens the arguments in Sec. 2.2 but also inspires additional analysis that improves our understanding of how the regularization parameter $\gamma$ in the entropy clustering loss (10) affects the maximum margin. While we address the specific concerns of aVDq in that reviewer's section, here we explain some extra insights on $\gamma$ in (10). We can incorporate these insights and improved illustrations into the final paper or the supplementary materials.
First, we note where reference [1] fits best in Sec 2.2. On line 177 it works better than [2] (Sec. 7.1.2) to justify loss (8) as an alternative formulation of soft SVM classification (7). Note that the rest of Sec. 2.2 extends the margin maximization property to non-convex **entropy-based clustering** (10,11), while the theories in [1] are limited to convex **classification** problems assuming fully labeled separable data.
**Improved insights on $\gamma$**: Our submission treats $\gamma$ as a trade-off parameter in entropy clustering, e.g. (10,11). Such a *margin-width-vs-margin-violation* interpretation is common for the soft SVM classification losses (7,8) used for non-separable labeled data. However, we overlooked the importance of the relation between such soft SVM and the classical SVM, where the maximum margin is formally defined assuming linearly separable labeled data. As is well known (e.g. Sec. 7.1.1, [2]), assuming separability, the soft-margin solution in (7) or (8) converges to the classical max-margin solution as $\gamma\rightarrow 0$. That is, for separable data soft SVM produces the max-margin solution for all sufficiently small $\gamma>0$. We can extend this standard property to self-labeling clustering (9) or (10). Indeed, for a fixed pseudo-labeling, the optimal classifier $v$ should produce a max-margin solution for any given sufficiently small $\gamma>0$. The hinge or decisiveness terms in (9) or (10) should approach zero for any such solution where all data points respect the margin. Thus, among all balanced linearly separable pseudo-labelings, losses (9,10) should prefer the one with the largest margin width $\frac{1}{||v||}$, corresponding to the lowest value of the regularization term $\gamma||v||^2$. Therefore, **assuming fairness, losses (9) and (10) prefer the maximum margin clustering for any given sufficiently small $\gamma>0$**.
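The classical-SVM limit invoked above can be checked numerically on a toy problem of our own making (hypothetical, not tied to the paper's losses (9,10)): minimizing hinge loss plus $\gamma\|v\|^2$ on separable 1-D labeled data recovers the max-margin classifier once $\gamma$ is small.

```python
# Two separable labeled points; the max-margin classifier has boundary
# at their midpoint x = 0.5 and slope v = 2/3 (both functional margins = 1).
data = [(-1.0, -1), (2.0, +1)]
gamma = 1e-3  # "sufficiently small" regularization weight

def loss(v, b):
    hinge = sum(max(0.0, 1.0 - y * (v * x + b)) for x, y in data)
    return hinge + gamma * v * v

# Crude grid search over (v, b); adequate for a 2-parameter sketch.
grid = [i * 0.005 for i in range(-200, 401)]  # values in [-1, 2]
v_opt, b_opt = min(((v, b) for v in grid if v > 0 for b in grid),
                   key=lambda p: loss(*p))
boundary = -b_opt / v_opt
print(v_opt, boundary)  # v near 2/3 and boundary near 0.5, as claimed
```

Pushing $\gamma$ much higher eventually shrinks the optimal $\|v\|$, i.e. softens the boundary, consistent with the trade-off interpretation above.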
**Improved figures**: The property above is illustrated for entropy clustering in the revised Figure 1 and the new Figure 2 (in the rebuttal's PDF), which focus on smaller $\gamma$. Besides decision boundaries of the optimal linear classifier $\sigma(v^\top f_p)$, our figures use transparency to visualize the softness of decisions produced by the soft-max $\sigma(\cdot)$, which appears as a boundary "blur" proportional to $\frac{1}{||v||}$. Max-margin solutions are also known to have width $\frac{1}{||v||}$. Thus, the boundary "blur" of the optimal entropy clustering is proportional to the width of the corresponding max margin.
## Regularized Entropy Clustering vs. soft K-means (Sec 2.1)
Reviewer aVDq points out that our counterexample in Fig. 1 uses hard K-means, which is not sufficient to contradict the claim in [3] about the equivalence between soft K-means (sKM) and regularized entropy clustering (EC) as in (10). Please note that hard K-means was used in Fig. 1 mainly for brevity, but we agree that this led to ambiguity. Here we hope to convince all reviewers that in our counterexample sKM is not fundamentally different from hard K-means, in part because we focus on small $\gamma$, revealing (as discussed above) the max-margin property of EC, which sKM does not have for any $\gamma$. Our modified version of Fig. 1 and some extra examples in the new Fig. 2 (see both in the rebuttal's PDF) illustrate significant differences between EC and sKM, discussed below. They strengthen our counterexample to the equivalence claim in [3].
**Different global minima**: EC and sKM prefer fundamentally different solutions in our counterexample, compare (A) and (D) in the modified Fig.1. Each method produces consistent solutions for all $\gamma\in(0,0.00001]$ where the optimal loss varies below four significant digits. sKM is nearly identical to the hard K-means for this range of $\gamma$. In fact, the optimal solution for sKM is similar to (D) for all $\gamma\geq 0$, only the boundary softens for larger $\gamma$, see new Fig.2. Indeed, sKM is still a variance clustering criterion, even though the variance of each cluster is "weighted" based on each point's cluster membership that softens mainly near the boundary. Similarly to hard K-means, sKM prefers more compact clusters.
**Different local minima**: New Fig.1 shows the difference between local minima for EC and sKM. The local minimum in (C) is obtained by the standard sKM algorithm initialized by solution (A). Vice versa, if EC algorithm (e.g. [3]) is initialized by (C), it converges to (A). The same relationship applies to the two solutions in (B) and (D). Local minima for EC in (A) and (B) find balanced clustering with (locally) maximum margin, while local minima for sKM in (C) and (D) are always orthogonal bisectors for the cluster centers that ignore cluster margins.
**Different dependence on the parameter $\gamma$**: Note that $\gamma$ works as a temperature parameter in sKM, which quickly converges to hard clusters as $\gamma\rightarrow 0$, see the new Figs. 1 and 2. On the other hand, EC converges to the max-margin solution as $\gamma\rightarrow 0$ (see above). The boundary "blur" represents the corresponding max-margin width $\frac{1}{||v||}$ (see the end of the previous section), explaining why the "blur" of the optimal EC solutions in our figures becomes fixed as $\gamma\rightarrow 0$.
## References
[1] Rosset et al. "Margin maximizing loss functions" (2003)
[2] Bishop (2006)
[3] Jabi et al. (2021)
Pdf: /pdf/c42e83d1a4ca042f8b5b64cc87ed03286accccf3.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
DP-Mix: Mixup-based Data Augmentation for Differentially Private Learning | Accept (poster) | Summary: The paper proposes two data augmentation techniques, MIXUP-SELFAUG and MIXUP-DIFFUSION, for differentially private learning. The authors investigate why naive applications of multi-sample data augmentation techniques, such as mixup, fail to achieve good performance and propose these two techniques specifically designed for the constraints of differentially private learning. MIXUP-SELFAUG performs mixup on self-augmented data, which results in lower gradients norm average and variance leading to smoother training. MIXUP-DIFFUSION further improves performance by incorporating synthetic data from a pre-trained diffusion model into the mixup process. The paper also discusses the challenges of applying data augmentation techniques in the context of differential privacy and compares the proposed techniques with existing methods on various datasets. The experimental results show that the proposed techniques achieve state-of-the-art classification performance across a range of datasets and settings. Overall, the paper proposed effective data augmentation techniques that can improve the performance of machine learning models while preserving the privacy of sensitive data.
Strengths: This paper is easy to read. This paper considerately used DP-SGD for differentially private learning. These techniques improve the performance of mixup-based data augmentation. The paper also discusses the challenges of applying data augmentation techniques in the context of differential privacy.
Weaknesses: 1. The absence of a theoretical analysis for MIXUP-SELFAUG in the submitted work is notable. The authors have provided an intuition on why mixup could harm differential privacy and presented an alternative approach to mitigate this issue. It could further strengthen the paper if the authors could provide a theoretical guarantee for differential privacy.
2. In relation to MIXUP-SELFAUG which doesn't mix labels, questions arise about why MIXUP-SELFAUG improves performance. The original intent of mixup, which is label interpolation for better performance, seems not fully addressed in this context. The authors could consider explaining their approach's effectiveness via intuition or mathematical formulation.
[1] Zhang, Linjun, et al. "How does mixup help with robustness and generalization?." ICLR 2021
[2] Jeong, Jongheon, et al. "Smoothmix: Training confidence-calibrated smoothed classifiers for certified robustness." NeurIPS 2021
[3] Zhang, Linjun, et al. "When and how mixup improves calibration." ICML 2022
[4] Park, Chanwoo, Sangdoo Yun, and Sanghyuk Chun. "A unified analysis of mixed sample data augmentation: A loss function perspective." NeurIPS 2022
[5] Zou, Difan, et al. "The benefits of mixup for feature learning." arXiv preprint arXiv:2303.08433 (2023).
[6] Oh, Junsoo, and Chulhee Yun. "Provable Benefit of Mixup for Finding Optimal Decision Boundaries." ICML 2023
3. Concerning the implementation of the diffusion model, there are a few considerations that the authors might need to address. The necessity for incorporating a diffusion model is not sufficiently justified in the current draft. For instance, the diffusion model used in this study is pretrained, which could potentially introduce the element of knowledge distillation. This raises the question: Is the observed performance improvement mainly due to distillation effects? Additionally, the use of a diffusion model not trained with DP-SGD could potentially pose issues for differential privacy learning. The authors are encouraged to provide clarification on these points.
4. While the adoption of a diffusion model might benefit differential privacy, the paper could delve deeper into explaining why it's advantageous for ensuring differential privacy. A detailed discussion on this matter could provide readers with a better understanding of the authors' choice and its implications on the overall study. The authors' insights on this matter would be greatly appreciated.
[1] Chourasia, Rishav, Jiayuan Ye, and Reza Shokri. "Differential privacy dynamics of langevin diffusion and noisy gradient descent." NeurIPS 2021
[2] Dockhorn, Tim, et al. "Differentially private diffusion models." arXiv preprint arXiv:2210.09929 (2022).
[3] Ghalebikesabi, Sahra, et al. "Differentially Private Diffusion Models Generate Useful Synthetic Images." arXiv preprint arXiv:2302.13861 (2023).
[4] Lyu, Saiyue, et al. "Differentially Private Latent Diffusion Models." arXiv preprint arXiv:2305.15759 (2023).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: I want to clarify especially the second and third issues of the weakness. If this is well-addressed, I am down to re-evaluate this paper.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors adequately addressed the limitation and potential negative societal impact on their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive feedback and detailed comments. We will correct typographical errors and make suggested improvements to our paper. Here are our clarifications:
**1. The absence of a theoretical analysis for MIXUP-SELFAUG in the submitted work is notable.**\
Our work is largely empirical in nature. However, theoretical justification for our approach can be provided with respect to why naive application of mixup harms DP and our approach does not.
Consider Alg. 1 in the submission PDF. The inner loop is what satisfies DP, and it does so for a batch of examples $B_t$. (The overall privacy guarantee is obtained using amplification theorems and composition, as explained in related work [2,3] from the submission PDF.)
Let $B = \{ (x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m) \}$ denote a batch of $m=|B|$ examples. Consider the clipped-gradients sum, defined as: $$ \bar{g}(B) = \sum_i {g}(z_i)/\max(1, \frac{||{g}(z_i)||_2}{C}) $$
where $z_i = (x_i, y_i)$ is the $i^{\rm th}$ example of the batch and $g(\cdot)$ denotes the gradient of the loss on an example with respect to the model parameters. The $L_2$ sensitivity of a function $f$ is denoted by $\Delta f$ and is defined as:
$$
\Delta f = \max || f(B) - f(B') ||_2,
$$
where we take the maximum over pairs of neighboring batches $B, B'$.
DP-SGD relies on the clipping step ${g}(z_i)/\max(1, \frac{||{g}(z_i)||_2}{C})$ to bound the sensitivity of the sum by $C$. If we add or remove any example $(x, y)$ to/from $B$, the $L_2$-norm of only one term changes, and because each term is clipped to norm $C$, the clipped-gradients sum can change by at most $C$.
What if we now apply mixup to the batch $B$? To apply mixup to the batch $B$ we may randomly pair up examples in $B$ (e.g., ignoring any leftover example) and for each pair we create a single mixup example according to Eq. (4) in the submission PDF. In that case the new mixup-augmented batch contains $m$ original examples plus $\lfloor \frac{m}{2} \rfloor$ mixed-up examples, one for each pair of original examples. We then apply DP-SGD as in Alg. 1 to this augmented batch. We denote this construction applied to a batch $B$ as $\rm{mixup}(B)$.
We provide a proof sketch (due to space constraints) that the sensitivity of the clipped-gradients sum of $\rm{mixup}(B)$ is $2C$ (and not $C$).
(Proof sketch.) Suppose we apply $\rm{mixup}$ to the batch $B$ to obtain a mixup-augmented batch, and then compute the clipped-gradients sum on that augmented batch. An original example $z \in B$ impacts two terms of the sum: the one involving the gradient of $z$ itself and the one involving the gradient of the mixed-up pair containing $z$. In the worst case, every term of the sum not involving $z$ points in some direction $C \vec{e}$ for some unit vector $\vec{e}$, while the two terms involving $z$ point to $-C\vec{e}$; removing $z$ then changes the sum by $2C$. Therefore the sensitivity is $2C$.
The sensitivity being $2C$ (instead of $C$) means that the scale of Gaussian noise added must be doubled to obtain the same privacy guarantee.
Our methods do not suffer from this issue (sensitivity remains $C$) because clipping occurs on the averaged gradients of the "microbatch" formed by all of the augmentations of a single example: adding/removing that example adds/removes only one term of the clipped-gradients sum, of norm at most $C$.
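As a numerical sanity check (a toy sketch with made-up 2-D gradients, not the actual Alg. 1 implementation), the worst case in the proof sketch can be reproduced directly, contrasting the naive mixup batch with per-example averaging of augmentation gradients before clipping:

```python
import math

C = 1.0  # clipping norm

def clip(g):
    """DP-SGD clipping: rescale g to L2 norm at most C."""
    n = math.sqrt(sum(v * v for v in g))
    s = 1.0 / max(1.0, n / C)
    return [v * s for v in g]

def clipped_sum(grads):
    total = [0.0, 0.0]
    for g in grads:
        total = [a + b for a, b in zip(total, clip(g))]
    return total

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

# Worst case from the proof sketch: every term not involving example z
# points along +e with norm C; the two terms involving z (its own
# gradient and that of the mixed-up pair containing z) point along -e.
others = [[C, 0.0], [C, 0.0]]
z_terms = [[-C, 0.0], [-C, 0.0]]

# Naive mixup: removing z changes two terms of the sum -> sensitivity 2C.
naive = dist(clipped_sum(others + z_terms), clipped_sum(others))

# Averaging z's augmentation gradients into ONE term before clipping
# (the microbatch approach): removing z removes a single clipped term.
micro = [(u + v) / 2 for u, v in zip(*z_terms)]
ours = dist(clipped_sum(others + [micro]), clipped_sum(others))
print(naive, ours)  # 2.0 1.0
```

Doubling the sensitivity means the Gaussian noise scale must double to obtain the same privacy guarantee, which is exactly the performance cost discussed above.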
**2. In relation to MIXUP-SELFAUG which doesn't mix labels, questions arise about why MIXUP-SELFAUG improves performance.**\
Our intuition is that data augmentation and mixup help generalization. It is also worth pointing out that mixup provides a benefit even without knowledge/use of true labels, as evidenced by works that apply mixup outside of supervised settings. For example, see: [link1](https://arxiv.org/pdf/1905.04215.pdf), [link2](https://arxiv.org/pdf/2206.07692.pdf) and [link3](https://arxiv.org/pdf/2108.12296.pdf). The last paper is particularly relevant because they apply mixup on pairs of examples with the *same* label.
**3. The necessity for incorporating a diffusion model is not sufficiently justified in the current draft.**\
We discuss this in the general response. We made sure to select a diffusion model in a way that would not inadvertently bias results.
**4. Is the observed performance improvement mainly due to distillation effects?**\
We don't believe so. First, we aligned the training data for the diffusion models and pre-trained models to ensure that Mixup-Diffusion would not have access to data (through the synthetic examples) that other methods did not also have. Second, we explicitly tested whether the synthetic examples from the diffusion model by themselves explain the observed performance. For this, we fine-tuned the model with DP-SGD with and without pre-training on the synthetic examples. Results in the supplementary materials (Table 6, A.2) show that pre-training with the synthetic examples does not improve performance. This shows that the benefits come from mixup and not from distillation effects.
**5. The use of a diffusion model not trained with DP-SGD could potentially pose issues for differential privacy learning.**\
As we explain in the general response and above, the diffusion model and pre-trained models used the same training set. This means that there is no data leaked through the diffusion model that was not also used to pre-train the fine-tuned model.
**6. Why it's advantageous for ensuring differential privacy.**\
We believe DP-SGD benefits from seeing more data during training and that's what mixup with synthetic diffusion provides.
---
Rebuttal 2:
Title: Thank you.
Comment: Thank you for your detailed explanation.
Regarding the first point, I appreciate your clarification. However, I am not sure the sensitivity difference between 2C and C is significant, given that it's a constant factor. Also, the argument does not appear to be a worst-case guarantee, so further clarification would help.
For the second point, I might need further clarity. In the third link you shared, they utilized a "contrastive" loss to find a good representation. However, it appears this paper does not employ the same approach. I'm curious to understand how this work might benefit from the same label without using contrastive loss.
As for points 3, 4, and 5, my concerns remain. My understanding is that the diffusion model isn't a "differentially private" method since it hasn't been trained with DP-SGD. Could you explain this? If a model isn't trained using a differentially private method, wouldn't it naturally exhibit improved performance? I agree with your point on the alignment of training data and the testing with a synthetic dataset.
I'm not sure that the absence of data leaks is necessarily the same as ensuring differential privacy. (Using the same dataset might also not give good differential privacy; not really related, I believe.)
Thank you once again for your time and dedication in addressing my queries.
---
Rebuttal Comment 2.1:
Comment: Thank you for your quick response. We would like to clarify some points:
**1.** The difference between C and 2C is massive because it implies effectively doubling the noise level to achieve the same privacy guarantee with DP-SGD, which results in much worse performance. For example, see Table 2 of our submission PDF for EuroSAT, where moving from $\varepsilon=2$ to $\varepsilon=1$ corresponds to almost doubling the noise level (to $\sigma=25$ from $\sigma=13$) with all other parameters (batch size, number of epochs, etc.) remaining the same (except the learning rate, which is tuned to give best results in each case). As can be seen in Table 2, the absolute test accuracy decrease resulting from this near-doubling of the noise level is over $5$%.
Also, to clarify: our analysis for the doubling of sensitivity is a worst-case analysis. This is because differential privacy is a worst-case notion, as it applies to all pairs of neighboring datasets.
**2.** Here we want to clarify two separate points:
- **a.** Most data augmentation techniques, such as flipping and cropping, operate on a single example and thus do not mix labels, yet they still improve generalization. Mixup-SelfAug operates in a similar fashion.
- **b.** The related works mentioned in our previous response are not meant to endorse any particular approach or suggest that we would adopt it. Our point is simply that empirical results (from these related works) show that mixup without label mixing still enhances generalization.
**3.** Thank you for clarifying. We think we understand what you are asking now.
If we train a model with DP-SGD, the differential privacy protection is always with respect to the training set used. This means that if we take a model already pre-trained on dataset X and then fine-tune it with DP-SGD on dataset Y, the differential privacy guarantee only applies to dataset Y; there is no guarantee on dataset X unless pre-training was also done using DP-SGD.
This is why in our paper we consider both training from scratch and fine-tuning. In the training from scratch setting, we consider only Mixup-SelfAug and not Mixup-Diffusion. In the fine-tuning/pre-trained setting, we assume that the dataset used to pre-train the model (LAION-2B) is public so DP-SGD pre-training is not necessary. This is a standard assumption from prior works. For example, De et al. [10] pretrain their models on ImageNet, JFT-300M, and JFT-4B. In fact, the paper that introduced DP-SGD, Abadi et al. [2], pretrain their model on CIFAR-100.
In our case, we specifically pretrain ViT and ConvNext on the LAION-2B dataset so that both the model to be fine-tuned and the diffusion model used the exact same training data. Given the assumption that this dataset is public, we do not need to do the pretraining with DP-SGD. When we fine-tune the pretrained model with Mixup-SelfAug or Mixup-Diffusion differential privacy protects the fine-tuning data. | Summary: This paper considers the privacy-utility tradeoff for ML models trained with differential privacy guarantees, and develops a technique using data augmentation on image datasets to train models with high accuracies on standard benchmarks with DP guarantees. Mixup, a commonly used augmentation technique in computer vision, and its variants are not compatible with standard DP-SGD because a single datapoint could influence many training instances through augmentations. Recent work [10] pointed out that augmentations involving a single datapoint can be used if clipping if done after averaging all gradients stemming from augmentations of the datapoint. The present paper develops two methods inspired by mixup that involve a single datapoint and therefore can be used with DP-SGD. Experiments show performance comparable to state-of-the-art methods for training DP models.
Strengths: Thank you to the authors for sharing their interesting research! Overall, the paper tackles an important subject - improving the utility of models with rigorous privacy guarantees. This is an active area of research, and the paper engages with recent work appropriately. The proposed methods are simple to implement for image datasets and provide a boost in utility. An attempt is made to explain why mixup methods improve performance by checking the distributions of gradient norms.
Originality: The paper develops two variants of mixup that can be applied to DP-SGD training. I do not know of prior work using mixup for DP-SGD.
Quality: The analysis of differential privacy is careful and correct. The authors are diligent in highlighting parts of the analysis which are particularly tricky (L91, etc.) although these aspects are known in the literature. Relevant and recent methods are used for benchmarking, and error bars over 3 runs are given. Source code is provided.
Clarity: The paper is quite clear.
Significance: The poor utility of models trained with DP-SGD holds back the adoption of this important privacy enhancing technology, so new methods (like the ones in this paper) that improve the utility are necessary.
Weaknesses: Originality: The paper combines the augmentation techniques developed for DP in [10], with the idea of mixup which is very standard. The combination is only slightly non-trivial in this context.
Quality: The experiments lack detail - it would not be possible to reproduce the author's work with the current state of the paper and appendices. Prior SotA work [10] is frequently referenced, but performance levels from that paper are not reproduced or surpassed. The experiment using diffusion models is not benchmarked fairly. Broader impacts are relegated to the appendix.
Clarity: There are minor grammatical and typesetting issues, but they do not hurt the clarity of the paper. Figure 2 requires clarification.
Significance: The applicability of this technique is limited to image datasets, whereas that of [10] for example can be applied much more broadly.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. What augmentations were used in the experiments? I have not found this detail clearly stated in the paper or appendices, despite it being a central focus. L167 mentions a "randomized transformation function $T$" but this is not brought up again. L247 mentions flipping and cropping, but it seems to imply these are not the only augmentations used. The lack of important details such as this negatively impact the reproducibility of this research.
2. Can the authors state what computing resources were used and information on the runtime of their methods in comparison to prior work?
3. Can the authors give details on the diffusion model that was used (architecture, how it was trained, etc.)? All we know is that it was from Open Clip and trained on LAION-2B (L218).
4. The experiments attempt to follow [10] in many ways for close comparison, but the results reported in Section 5 as baselines do not reach the level of that paper. For instance, [10] reports 81.4% accuracy on CIFAR-10 training from scratch with a WRN-16-4 and epsilon=8. Table 1's Baseline is presumably meant to represent the method from [10], but only achieves 72.5% accuracy. The author's proposed method achieves 78.7% accuracy in the same setting. Given that the authors make claims of SotA performance, can they clarify why the results reported in [10] were not directly used or reproduced?
5. There are some results which are repeated across tables (e.g. first two rows of Table 4 are reproduced from Table 3). Could this be mentioned explicitly where applicable so that it is easier to track and compare results? A reader moving quickly might assume they are separate results.
6. In Table 4 the Mixup-Diffusion method uses additional data compared to the other approaches, namely the data generated by the diffusion model. Since the utility of DP models is often constrained by access to data, it is important to use baselines that have the same level of data access. Have the authors put thought into an appropriate baseline technique in the style of DP-SGD that also makes use of the synthetic data? (The most naive option being to pre-train or extend the training dataset with the same synthetic data made available to the mixup-diffusion method.)
7. In Figure 2 it is not clear what the difference is between the curves and the blue bars. I interpret each curve as the distribution of gradient norms over individual datapoints in the training dataset at a particular epoch (these details are not given, I have to guess). However, the blue bars are stated to be a histogram of the average norm of gradients, but what is the average taken over and what is the resulting distribution over? I note that the paper at various times considers epochs, minibatches, and microbatches, so the authors should clearly state which notion they are using in their analysis.
8. Also for Figure 2, the authors do not mention in the paper or appendices which dataset is used. Can the authors provide sufficient detail to reproduce their experiments (without having to dig through source code)?
9. The result in Figure 2 showing lower average gradient norms is indeed suggestive that mixup leads to smoother training and faster convergence. However, have the authors found an explanation for why mixup leads to lower average gradient norms? This is not at all self-evident, especially for Mixup-Diffusion where training and synthetic datapoints are mixed, and those images can come from different classes (as illustrated in Fig 1).
10. Have the authors checked the equivalent of Figure 2 for Mixup-Diffusion, or for other datasets? Can this analysis be included?
Minor:
In L67, couldn't $\delta=0$ be a valid choice, and for that matter $\delta=1$ corresponding to no privacy guarantee?
In L58, typically one chooses $\delta$ to be much less than $|D|^{-1}$, but the authors state $\delta=o(|D|)$.
Incomplete list of small errors:
L70 add -> adds
L72 gradient -> gradients
L89 vector the -> vector of the
Alg 2 line 2, missing space
L150 extra parens ((x_1, y_1)
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The proposed method is implicitly stated to only apply to image datasets, but this limitation is not openly addressed by the authors. Similar work used for benchmarking is not limited in the same way.
The paper focuses on the utility of private models, but does not mention that private training can have negative effects beyond reducing utility (except in a brief Appendix B without references to prior work). For instance, it is well-known that DP-SGD usually exacerbates the biases in models, making them more unfair [A][B][C]. Does data augmentation affect fairness? What are the biases introduced by the synthetic data? Robustness is another topic aligned with Trustworthy ML, and researchers often introduce augmented data during training to improve it. It is possible that the proposed method has positive effects on robustness, but the authors have not addressed this direction despite the significant literature on the intersection of privacy and robustness.
[A] Bagdasaryan et al. 'Differential privacy has disparate impact on model accuracy' NeurIPS 2019
[B] Xu et al. 'Removing disparate impact on model accuracy in differentially private stochastic gradient descent' ACM SIGKDD 2021
[C] Esipova et al. 'Disparate impact in differential privacy from gradient misalignment' ICLR 2023
----
Summary of Discussion: I have read all reviews and rebuttals for this submission.
The authors responded to all points of my review. I maintained my rating of 5, as some of my original concerns remain, including those regarding originality and quality (lack of reproduction details).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive feedback and detailed comments. We will correct typographical errors and make suggested improvements to our paper. Here are our clarifications:
\
\
**1. The paper combines the augmentation techniques developed for DP in [10].**\
We agree with the reviewer that we exploit the insights of [10] to maintain the privacy guarantee. However, the novelty of our paper is that we show how to exploit the augmentation technique of [10] for mixup. The benefit of the self-augmentations proposed by De et al. [10] quickly hits diminishing returns ($k=16$ -- see Table 6 in the attached 1-page rebuttal PDF) and adding different types of augmentations also does not help. By contrast, Mixup-Diffusion and Mixup-SelfAug provide consistently better performance by leveraging the benefits of mixup.
\
\
**2. The applicability is limited to image datasets, whereas that of [10] for example can be applied much more broadly.**\
We acknowledge that we only investigated our techniques for image data (similar to De et al. [10]). However, we disagree that our techniques are necessarily limited to image data. There are data augmentation techniques for other data domains and numerous related work showing successful application of mixup to other data domains (e.g., see (https://arxiv.org/pdf/2108.12296.pdf), (https://arxiv.org/pdf/1905.08941.pdf), (https://arxiv.org/pdf/2010.02394.pdf)). Some of these techniques apply mixup in the feature/embedding space instead of the input data. But, this is in principle compatible with our techniques and gradient clipping over all augmentations would also preserve the privacy guarantee in such cases. We leave for future work the investigation of whether the same benefits are obtained.
\
\
**3. What augmentations were used?**\
We apologize for any confusion. To clarify, our methods employ the exact same augmentations as De et al. [10] (flipping and cropping).
\
\
**4. Can the authors state what computing resources and information on the runtime of their methods in comparison to prior work?**\
We provide the running time for different methods in the table below. All experimental runs utilized a single A100 GPU and were based on the same task of fine-tuning the Vit-B-16 model on the Caltech256 dataset for 10 epochs. Due to the additional augmentation steps, the training time of our methods is longer than that of prior work.
**Table:** Running time for different methods of the same task (fine-tuning Vit-B-16 on Caltech256 for 10 epochs). We use one A100 GPU for each training method.
| Method | Self-Aug | Mixup-SelfAug | Mixup-Diffusion |
|--------------|----------|---------------|-----------------|
| Running time | 2h 12min | 7h 33min | 7h 40min |
\
**5. Details on the diffusion model that was used? All we know is that it was from Open Clip and trained on LAION-2B (L218).**\
We provide more information in the general response. But to be clear, the diffusion model is stable-diffusion-v1-4 (https://huggingface.co/CompVis/stable-diffusion-v1-4). The models from Open Clip are Vit-B-16 and ConvNext (which we pre-trained on LAION-2B, the same data used to train the diffusion model).
\
\
**6. Baselines do not reach the level of that paper**\
We think there is a misunderstanding. For the WRN-16-4 model with $\varepsilon=8$, [10] reports a performance of 79.5% on CIFAR-10 (as can be confirmed in Table 3 of their paper). This closely aligns with our reported performance of 78.74%.
Additionally, it is important to clarify Table 1 in our submission PDF: ``Baseline'' in that table refers to vanilla DP-SGD without Self-Aug or any modifications. Elsewhere we sometimes refer to Self-Aug as the baseline, since it is the prior SoTA.
To clarify: both of our methods Mixup-SelfAug and Mixup-Diffusion consistently outperform the Self-Aug [10] baseline in both pre-trained and from scratch settings. For example, Mixup-SelfAug obtains $79.83$% test accuracy in the setting where Self-Aug obtains $78.74$% (and [10] reports $79.5$%). The point of Table 1 in our paper is to show why microbatching will not work for mixup.
\
\
**7. There are some results that are repeated across tables.**\
Yes, we will revise this.
\
\
**8. Have the authors put thought into an appropriate baseline technique in the style of DP-SGD that also makes use of the synthetic data?**\
Yes, we conducted such experiments in the supplementary material (A.2 --- Table 6). We pretrain the model on the synthetic data, which is an appropriate baseline because if the performance boost we get could be obtained using the synthetic data directly (i.e., without mixup) then it may not make sense to use Mixup-Diffusion. However, pretraining on the synthetic data does *NOT* help. This implies that the benefit of Mixup-Diffusion is due to how it combines synthetic data into the mixup augmentations.
\
\
**9. Regarding Figure 2**\
We have reproduced the experiments and clarified this in the general response.
\
\
**10. Have the authors found an explanation for why mixup leads to lower average gradient norms?**\
We do not claim that mixup leads to lower average gradient norms. Rather we observe from Figure 2 (and Figure 1 in the attached 1-page PDF) that gradients for mixup are more concentrated (i.e., lower variance/std). See the general response for more details.
\
\
**11. In L67, couldn't $\delta=0$ be a valid choice, and for that matter $\delta=1$ corresponding to no privacy guarantee?**\
Yes, we will fix this. $\delta=0$ and $\delta=1$ are allowable.
\
\
**12. Does data augmentation affect fairness? What are the biases introduced from the synthetic data? Robustness?**\
This is a question worth investigating. We believe it is outside the scope of our current paper, so we leave it for future work.
---
Rebuttal Comment 1.1:
Title: Response to rebuttals
Comment: I have read all the reviews and rebuttals.
6. I was looking at the WRN-40-4 rather than WRN-16-4 in [10], so this is clear now.
I was unclear about what "Baseline" referred to in Table 1; this could be made more explicit in the work.
However, SelfAug's reported performance of 79.5% in [10] is noticeably different from your number of 78.7% in Table 2, given that Mixup-SelfAug achieves 79.8%. At this level, the approaches are within one standard deviation of each other in performance.
10. Does Fig 2 not show that the average norm is reduced? If the distribution is more concentrated around zero, wouldn't the average norm be reduced?
---
Reply to Comment 1.1.1:
Comment: Thank you for your insights and feedback.
1. We will clarify the meaning of “Baseline” in Table 1 of the revised paper.
2. To ensure a fair comparison, we employ Sander et al.’s (ICML 2023, https://openreview.net/pdf?id=DIkGgI9baJ) reproduction of [10] for both SelfAug and Mixup-SelfAug.
3. Fig. 2 does indeed show that the average norm is reduced – there's a notable reduction from approximately 0.009 to 0.0004 for CIFAR-10 and from 0.001 to 0.0003 for Caltech256.
Strengths: * The paper is well-written.
* Mixup data augmentation is very useful in the non-DP world. However, incorporating that into DP training is non-trivial. The paper is a valuable step towards bringing the benefit of mixup data augmentation to DP training.
Weaknesses: * I have some doubts about whether the experimental comparison is fair.
* More ablation studies are needed.
See "questions" for the details.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors:
My major concern is the experimental comparison between self-aug and the proposed mixup variants in Tables 2 and 3.
* In the experiments of Table 2, did we keep the number of augmented samples the same? To make it more accurate, let's consider Mixup-SelfAug and use the notations in Section 3.2. For a fair comparison, we need to make sure that the "k" utilized in self-aug equals "k+k'" used in Mixup-SelfAug. (The motivation is to keep the computation cost roughly the same.)
* Similarly, for Table 3, did we keep the number of augmented samples the same?
Other questions:
* It is better to show how k, k', and k'' influence the performance of Mixup-SelfAug and Mixup-Diffusion.
* Is there any benefit of combining Mixup-SelfAug and Mixup-Diffusion together?
I will update the score according to the answers to the above questions.
Other minor issues:
* Figure 2: I do not fully understand the figure. It says "The histogram shows average norm of gradients.", but I do not get which values are averaged over. Is the process (1) for each sample, computing the average norms of gradients across all iterations, and (2) plotting the histogram of the above values of all samples? Similarly, how are the lines computed?
* Line 138: "however" -> ". However"
* Line 150: "((x_1,y_1)" -> "(x_1, y_1)"
* Line 249-254: The information in this paragraph is also covered in the method section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations:
The paper did not discuss the limitation or the potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive feedback and detailed comments. We will correct typographical errors and make suggested improvements to our paper. Here are our clarifications:
\
\
**1. The experimental comparison between self-aug and the proposed mixup variants in Tables 2 and 3.**\
For Self-Aug, increasing the number of augmentations $ k $ beyond $ k=16 $ does not increase (and often decreases) performance. We show this in Section 5.3 of our paper, and the original paper from De et al. [10] shows the same phenomenon. To further address your concerns, we have conducted an additional experiment where we increase $ k $ from $ 16 $ to $ 36 $ for both the 'training from scratch' and 'pretrain' settings. Results are shown in Table 6 (attached 1-page PDF). The best performance for Self-Aug is at $ k=16 $. Furthermore, for the same number of augmentations, Mixup-SelfAug outperforms Self-Aug. With $ k+k'=32 $ augmentations, Mixup-SelfAug also outperforms Self-Aug with $ k=36 $ augmentations. This shows that our comparisons are indeed fair. In fact, we give an advantage to Self-Aug in our paper because we report results for that method with the value of $ k $ (i.e., $ k=16 $) that gives the best results for it.
\
\
**2. How $ k $, $ k' $, and $ k'' $ influence the performance of Mixup-SelfAug and Mixup-Diffusion.**\
We have conducted additional experiments to show this. See the table below. Recall: $ k $ is the number of base self-augmentations, $ k' $ is the number of mixups, and $ k'' $ is the number of synthetic diffusion samples used. Overall, selecting $ k''=2 $ or $ k''=4 $ and setting $ k' \leq k $ gives good results. In our paper, we used $ k=k'=16 $ and $ k''=2 $ as this provides good results across many datasets and settings.
\
\
**Table:** Change $ k $, $ k' $, and $ k'' $ for fine-tuning Vit-B-16 model on Caltech 256 with $\varepsilon = 1$ and $\delta = 10^{-5}$. Here $ k $ is the number of augmented data, $ k' $ is the number of mixup processes, and $ k'' $ is the number of diffusion samples.
| $ k $ | $ k' $ | $ k'' $ | Acc. (%) | Rank |
|---|---|---|---|---|
| 8 | 8 | 0 | 80.39 | 20 |
| 8 | 8 | 2 | 85.03 | 9 |
| 8 | 8 | 4 | 87.21 | 1 |
| 8 | 16 | 0 | 77.05 | 27 |
| 8 | 16 | 2 | 84.45 | 11 |
| 8 | 16 | 4 | 87.19 | 2 |
| 8 | 24 | 0 | 77.16 | 26 |
| 8 | 24 | 2 | 84.23 | 12 |
| 8 | 24 | 4 | 86.35 | 4 |
| 16 | 8 | 0 | 77.66 | 24 |
| 16 | 8 | 2 | 84.22 | 13 |
| 16 | 8 | 4 | 86.71 | 3 |
| 16 | 16 | 0 | 81.21 | 19 |
| 16 | 16 | 2 | 83.76 | 14 |
| 16 | 16 | 4 | 86.28 | 5 |
| 16 | 24 | 0 | 77.28 | 25 |
| 16 | 24 | 2 | 82.55 | 16 |
| 16 | 24 | 4 | 85.80 | 6 |
| 24 | 8 | 0 | 77.94 | 22 |
| 24 | 8 | 2 | 82.72 | 15 |
| 24 | 8 | 4 | 85.54 | 7 |
| 24 | 16 | 0 | 77.89 | 23 |
| 24 | 16 | 2 | 82.10 | 17 |
| 24 | 16 | 4 | 85.08 | 8 |
| 24 | 24 | 0 | 80.14 | 21 |
| 24 | 24 | 2 | 81.46 | 18 |
| 24 | 24 | 4 | 84.54 | 10 |
\
**3. Is there any benefit of combining Mixup-SelfAug and Mixup-Diffusion together?**\
First, note that Mixup-SelfAug can be seen as a special case of Mixup-Diffusion where $ k''=0 $. We introduced Mixup-Diffusion to improve upon Mixup-SelfAug by introducing additional data diversity through diffusion samples. To investigate your specific question, we conduct an experiment using what we call ``Pure-Mixup-Diffusion'', i.e., setting $k=0$ for Mixup-Diffusion, meaning that the original training samples are not used at all. In effect Pure-Mixup-Diffusion is simply mixing up the synthetic examples themselves. The table below compares Pure-Mixup-Diffusion to other methods. We can see that Pure-Mixup-Diffusion offers much worse performance than both Mixup-SelfAug and Mixup-Diffusion, although it still offers better performance than Self-Aug due to the beneficial effects of mixup. More generally, we think that Pure-Mixup-Diffusion will tend to worsen an overfitting problem whenever there is a large domain gap between the original training data and the diffusion samples. Mixup-Diffusion does not suffer from this problem because it ensures that (augmented versions) of the original training data samples are seen during training.
**Table:** Pure-Mixup-Diffusion ($k=0$) on CIFAR-100 with the Vit-B-16 model. We set $\delta = 10^{-5}$ and $\varepsilon = 1$. We can observe that Pure-Mixup-Diffusion underperforms both Mixup-SelfAug and Mixup-Diffusion.
| Method | Test accuracy |
|----------------------|------------------------------|
| Self-Aug | 79.28% (±0.18%) |
| Mixup-SelfAug | 81.75% (±0.15%) |
| Mixup-Diffusion | **82.02%** (±0.11%) |
| Pure-Mixup-Diffusion | 80.91% (±0.17%) |
\
**4. Explanation of Figure 2 is not so clear.**\
We acknowledge that our explanations lacked clarity and details. We have produced new figures and provided clarification of this in the general response.
\
\
**5. The paper did not discuss the limitation or the potential negative societal impact.**\
We briefly discuss potential negative societal impact in the supplementary material (see Appendix B).
---
Rebuttal Comment 1.1:
Comment: The rebuttal fully addressed my concerns. Therefore, I increase the score. | Summary: This paper proposes a data augmentation technique for differentially private deep learning using mixup regularization. Mixup is a popular augmentation technique which involves taking linear combinations of training samples to create new samples. However, such a technique cannot be directly applied to differentially private training, as the sensitivity of the private algorithm increases rendering it to be un-useful. Towards this end the authors propose two methods for private mixup augmentation. First, is mixup with self augmentation which involves mixing samples obtained from augmentation of a single sample, and secondly, mixing training samples with samples generated from diffusion models. The paper provides empirical results to support their claim.
Strengths: Mixup is an interesting technique, and exploring it for differentially private training is important.
The technique is simple to implement in practice with easy to prove theoretical guarantees.
Method shows improved empirical performance on the presented datasets.
Weaknesses: The proposed method isn't too different from the method proposed by De et al., which is cited in the paper. It's a minor extension of the augmentation they proposed.
Pre-trained models are trained on large datasets, but fine-tuning is performed only on toy datasets. More empirical evidence is needed on practical datasets (for example, oxfordpets, flowers, and so on).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: For diffusion model based experiments the pre-trained model is trained on 2B images which is likely to contain the entire fine-tuning sets. Does this seem like a reliable public pre-training to choose?
Can you increase the number of samples to mixup from 2? How does the performance change by increasing the number of samples in the convex combination?
What should be the nature of the distribution of samples that are used during mixup? Can out of distribution samples be mixed to obtain meaningful results?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The method is restricted to self-augmentations or augmentations with publicly available data. In private training, having different gradients aligned when training with DP-SGD is important to reduce the effect of the Gaussian noise added to ensure DP. Would it be possible to construct a mixup involving multiple samples (rather than self-mixup or diffusion-model-based mixup) so that the gradients during training are more aligned, thereby increasing the signal-to-noise ratio?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive feedback and detailed comments.
\
\
**1. The proposed method isn't too different from the method proposed by De et al.**\
Our work uses the same insight as De et al. [10] --- by clipping the gradients aggregated over all the augmentations we preserve the DP guarantee. However, the same idea naively applied to mixup (or multi-sample data augmentation) does not work. Our contribution is showing how to make it work and that it yields substantial benefits.
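The clipping insight described above — aggregate the gradients over all augmented (or mixed-up) views derived from a single training sample, then clip once, so the per-sample sensitivity of the clipped-gradient sum is unchanged — can be sketched as follows. This is an illustrative sketch under our own naming (`clipped_per_sample_grad`, `noisy_batch_grad` are hypothetical names), not the authors' code:

```python
# Illustrative sketch (our own naming, not the authors' implementation):
# per-sample gradient handling in DP-SGD when one training sample contributes
# several augmented or mixed-up views. Averaging the view gradients BEFORE
# clipping keeps the sum's sensitivity at clip_norm, because each clipped
# contribution depends on a single training sample.
import numpy as np

def clipped_per_sample_grad(per_view_grads, clip_norm):
    g = np.mean(per_view_grads, axis=0)  # aggregate over this sample's views
    norm = np.linalg.norm(g)
    return g * min(1.0, clip_norm / max(norm, 1e-12))  # standard DP-SGD clipping

def noisy_batch_grad(view_grads_per_sample, clip_norm, noise_multiplier, rng):
    # One clipped contribution per training sample, then Gaussian noise.
    clipped = [clipped_per_sample_grad(v, clip_norm) for v in view_grads_per_sample]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(clipped)
```

By contrast, mixing two *different* training samples into one view would make each clipped contribution depend on two samples, doubling the sensitivity — the problem the rebuttal describes.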
\
\
**2. Pre-trained models are trained on large datasets but fine-tuning is performed only on toy datasets.**\
We extended our evaluation to include three additional datasets: Caltech256, SUN397, and Oxford-IIIT Pet, some of which we already used in supplementary material. See Table 1 in the attached rebuttal PDF. Our proposed methods significantly outperform the current SoTA ("Self-Aug" --- De et al. [10]) across these new datasets and for all values of $\varepsilon$. It is also worth pointing out that prior SoTA (De et al. [10]) not only uses some of the same datasets (e.g., CIFAR-10 and CIFAR-100) but also pre-trains models on very large datasets including ImageNet, JFT-4B, and JFT-300M.
\
\
**3. Pre-trained model is trained on 2B images which is likely to contain the entire fine-tuning sets.**\
No, we don't believe so. As we explain in the general response, we took great care in selecting the pre-trained models and pre-training data to ensure fair comparisons. There are two important points: First, even if there were overlap between the pre-training data and the fine-tuning sets, our methods still outperform Self-Aug when fine-tuning the *exact same pre-trained model*. From this we can conclude that our method provides a boost beyond whatever advantage there could be from the pre-trained models' training data. Second, we show in supplementary material A.2 (Table 6) that fine-tuning the models using synthetic data from a diffusion model trained on the exact same 2B-image dataset does *not* improve performance. If the fine-tuning data were included in the diffusion model's training data, wouldn't we see a substantial performance boost from training on synthetic examples from that very diffusion model?
\
\
**4. Can you increase the number of samples to mixup from 2? How does the performance change by increasing the number of samples in the convex combination?**\
Yes, we increased the number of samples to mixup from 2 to 3 and 4, and present the performance on CIFAR-100 with $\varepsilon=1$ in the table below. Increasing the number of samples for Mixup-Diffusion does not improve accuracy.
| # of samples for Mixup | Test accuracy |
|------------------------|---------------------|
| 2 | 82.02% (±0.11%) |
| 3 | 81.35% (±0.15%) |
| 4 | 81.33% (±0.09%) |
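For readers unfamiliar with the convex combination discussed above, multi-sample mixup can be sketched as follows. This is an illustrative sketch only: the `mixup` function name and the Dirichlet generalization to $n > 2$ samples are our own assumptions, not the authors' implementation.

```python
# Hypothetical sketch of n-sample mixup: a convex combination of inputs and
# (one-hot) labels with random weights summing to 1.
import numpy as np

def mixup(samples, labels, alpha=0.2, rng=None):
    """Mix n samples/labels with Dirichlet(alpha, ..., alpha) weights.

    n = 2 recovers standard mixup, whose mixing weight is Beta(alpha, alpha).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    w = rng.dirichlet([alpha] * len(samples))  # weights sum to 1
    x_mix = sum(wi * np.asarray(xi, dtype=float) for wi, xi in zip(w, samples))
    y_mix = sum(wi * np.asarray(yi, dtype=float) for wi, yi in zip(w, labels))
    return x_mix, y_mix
```

Because the weights sum to 1, a mixed one-hot label is still a valid probability vector, regardless of how many samples enter the combination.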
\
**5. What should be the nature of the distribution of samples that are used during mixup? Can out of distribution samples be mixed to obtain meaningful results?**\
The distribution of synthetic samples used for mixup should not be too dissimilar to the train/test data. See the general response for a discussion of this. We measured FID values to quantify distance between distributions in Tables 2 and 3 (attached 1-page PDF). When the FID value is too large as is the case for EuroSAT, we observe little to no benefits from mixup with synthetic example, although there is still a benefit from "self-mixup" as evidenced by the significantly higher accuracy of Mixup-SelfAug compared to SelfAug. Table 5 in supplementary material (A.1) also discusses this but through a different lens.
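For readers who want to reproduce the FID comparison mentioned above, the Fréchet distance between two Gaussian summaries (mean and covariance of feature embeddings) can be sketched as follows. This is a minimal illustration, not the authors' code; in practice the means and covariances come from Inception features of the two datasets.

```python
# Sketch of the Frechet (FID) distance between two Gaussians N(mu1, S1), N(mu2, S2):
#   FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2}).
# We use the identity Tr((S1 S2)^{1/2}) = Tr((S1^{1/2} S2 S1^{1/2})^{1/2}),
# whose argument is symmetric PSD, so an eigendecomposition suffices.
import numpy as np

def _sqrtm_psd(a):
    # Matrix square root of a symmetric PSD matrix via eigendecomposition.
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0.0, None)  # guard tiny negative eigenvalues
    return (v * np.sqrt(w)) @ v.T

def frechet_distance(mu1, sigma1, mu2, sigma2):
    s1h = _sqrtm_psd(sigma1)
    covmean = _sqrtm_psd(s1h @ sigma2 @ s1h)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * np.trace(covmean))
```

A large value (as the rebuttal reports for EuroSAT) indicates the synthetic distribution is far from the train/test distribution, which is where mixup with synthetic examples is expected to help least.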
\
\
**6. Will it be possible to construct a mixup involving multiple samples (rather than self-mixup or diffusion model based) so that the gradients during training are more aligned, which thereby increases the signal-to-noise ratio?**\
Ideally we would prefer to apply mixup to the original training samples directly as the reviewer suggests, i.e., mixup pairs of samples $(x, y)$ and $(x', y')$ where $(x, y)$ and $(x', y')$ are two randomly selected samples from the training set. As we explain in the paper (Sections 2.1 and 3.1) trying to apply this idea naively does not work. It doubles the sensitivity of the clipped-gradient sum in DP-SGD, which severely degrades the privacy guarantee (or requires doubling the scale of noise to achieve the same guarantee). The obvious way to get around this is to use microbatching as we explain in Section 3.1. But microbatching itself degrades performance because it has the effect of increasing noise [31]. So even though the gradients are more aligned the negative impact of noise results in worse models. We show this experimentally in Table 1 in the submission PDF, where we see that microbatching (of size 2) negatively impacts accuracy so much that it completely negates the benefits of mixup. Note: in that table ``baseline'' refers to vanilla DP-SGD. This is what motivated us to design Mixup-SelfAug and Mixup-Diffusion, which attempt to get the benefits of mixup without the drawback of microbatching (or worsening the DP guarantee). | Rebuttal 1:
Rebuttal: Thank you for the feedback. We would like to clarify a few points.
\
\
**1. Motivation & novelty.**\
Data augmentation has the potential to improve DP-SGD, but naive application of techniques such as mixup compromises privacy. We propose two methods, Mixup-SelfAug and Mixup-Diffusion, to use mixup with DP-SGD *without* worsening privacy. Experimental results show that our methods consistently outperform the current SoTA (``Self-Aug'' from De et al. [10]) which uses single-sample data augmentation and serves as an important baseline.
Table 1 in the 1-page rebuttal PDF shows results on Caltech256, SUN397, and Oxford-IIIT Pet. It shows that Mixup-SelfAug and Mixup-Diffusion outperform Self-Aug consistently across all datasets and privacy budgets tested. In some cases, the absolute test accuracy increase is particularly large (e.g., Caltech256 with $\varepsilon=1$).
\
\
**2. Comparison with De et al. [10].**\
Self-Aug from De et al. [10] is the prior SoTA. They use single-sample data augmentation (SSDA) to improve performance over vanilla DP-SGD. Our proposed methods use a similar clipping idea to ensure DP. However, we show how to support multi-sample data augmentation (MSDA) such as mixup. Our experiments demonstrate that mixup provides substantial improvements over Self-Aug.
\
\
**3. Does more augmentations help?**\
Single-sample augmentations cannot provide benefits comparable to mixup for DP-SGD. For instance, Table 6 (attached 1-page PDF) shows that even with more augmentations, Self-Aug does not reach the performance of our proposed methods. In fact, additional augmentations (beyond $ k=16 $ as recommended in [10]) decrease performance.
\
\
**4. What about different augmentations?**\
Adding augmentations beyond those suggested by De et al. [10] (which are flipping and cropping) hurts performance. We tested adding color jitter to the self-augmentation process of De et al. [10] (we call this Self-Aug+). (The idea of using color jitter is from Sander et al.; we use their parameter configuration. [https://github.com/facebookresearch/tan](https://github.com/facebookresearch/tan)) Results are shown in Table 4 (attached 1-page PDF), where we see that Self-Aug+ provides worse results than Self-Aug.
\
\
**5. Do our techniques benefit from additional augmentations?**\
By contrast Mixup-Diffusion benefits from additional diffusion samples $k'' $ as shown in Table 5 (attached 1-page PDF). For example, this leads to a 9.65% absolute test accuracy increase on Oxford-IIIT Pet.
\
\
**6. Selection of pre-training data, diffusion model and fair comparisons.**\
We took great care to ensure that our experiments lead to a fair comparison between our methods and alternatives such as Self-Aug (prior SoTA). In particular, all methods have access to the exact same training data. We also tune hyperparameters of each method optimally (e.g., $ k $ for Self-Aug). We use the same pre-trained models (Vit-B-16 and ConvNext from OpenClip) to compare our methods to others (Self-Aug and Mixed Ghost Clipping).
Since Mixup-Diffusion uses a diffusion model to generate synthetic examples, this could make the comparison unfair because other methods do not use synthetic samples. To avoid this, we purposefully use the exact same pre-training data (i.e., LAION-2B) to pre-train models as was used to train the diffusion model. This avoids the issue of the synthetic examples somehow ``containing'' data that other methods do not have access to. Moreover, we conducted experiments (supplementary material A.2 --- Table 6) to show that the synthetic examples themselves do **not** boost performance. It is the **way** they are used by Mixup-Diffusion that boosts performance. Finally, out of the six datasets we use for evaluation, none of them overlap with the LAION-2B dataset (to the best of our knowledge).
\
\
**7. Explanations for Figure 2**\
Reviewers raised a valid point regarding the lack of clarity of Figure 2 in the submission PDF. The original was plotted using WRN-16-4 model trained from scratch on CIFAR-10 and did not include Mixup-Diffusion.
To address these concerns, we redid this experiment but in the pre-trained setting on CIFAR-10 and Caltech256 and including Mixup-Diffusion. See Figure 1 (attached 1-page PDF). To clarify: the figure plots the per-parameter gradient magnitude averaged over samples at each epoch. The histogram shows the data averaged over all training epochs and the X\% color lines show that data only for the epoch at X\% of the total training process. There are 10 epochs for this experiment, so for example the line for 20% shows the data for epoch 2 (out of 10).
The figure shows more concentrated values for our methods compared to the Self-Aug baseline, which suggests more stable training and faster convergence. Standard deviations for CIFAR-10 with Self-Aug, Mixup-SelfAug and Mixup-Diffusion are: $2.16 \cdot 10^{-3}$, $9.76 \cdot 10^{-4}$ and $9.59 \cdot 10^{-4}$, respectively. For Caltech256 they are: $1.43 \cdot 10^{-3}$, $1.07 \cdot 10^{-3}$ and $9.32 \cdot 10^{-4}$, respectively. This is consistent with experimental results of test accuracies for each method.
\
\
**8. Why do our techniques work?**\
We believe that data augmentation in general, and mixup in particular, helps generalization. Furthermore, if we view data augmentation as effectively increasing the amount of data available during training, then DP model training will particularly benefit from it. (This view is consistent with arguments from related work [42].)
To confirm our intuition, we perform two types of experiments. The first quantifies the potential benefit from synthetic diffusion examples and is described in the supplementary material (A.1 --- Table 5). The second measures the Fréchet Inception Distance (FID) between train/test data and diffusion data for each dataset and is shown in Tables 2 and 3 (attached 1-page PDF); the results are consistent with the test accuracies across datasets and methods.
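For reference, FID reduces to a closed-form Fréchet distance between Gaussian fits of two feature sets. A minimal numpy sketch of that formula is below; the random feature arrays are stand-ins for Inception features, not the authors' pipeline:

```python
import numpy as np

def _sqrtm_psd(m):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    w = np.clip(w, 0.0, None)
    return (v * np.sqrt(w)) @ v.T

def fid(feats_a, feats_b):
    """Frechet distance between Gaussian fits of two feature sets."""
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Tr((cov_a cov_b)^{1/2}) computed via the symmetric form sqrt(A) B sqrt(A).
    sa = _sqrtm_psd(cov_a)
    cross = np.trace(_sqrtm_psd(sa @ cov_b @ sa))
    return float(np.sum((mu_a - mu_b) ** 2)
                 + np.trace(cov_a) + np.trace(cov_b) - 2.0 * cross)

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 8))
print(fid(x, x))             # identical feature sets give FID ~ 0
print(fid(x, x + 5.0) > 0)   # a shifted copy is measurably farther away
```

Lower FID between diffusion data and train/test data would indicate closer distributions, which is the quantity the tables correlate with test accuracy.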
Pdf: /pdf/ec421991eff064174d2de5b1e52f999c3b254186.pdf | NeurIPS_2023_submissions_huggingface | 2023 | null | null | null | null | null | null | null | null |
ExPT: Synthetic Pretraining for Few-Shot Experimental Design | Accept (poster) | Summary: The authors introduce a novel approach, SynTO, to address a challenging setting: few-shot black-box optimization. Specifically, SynTO adopts a pretraining-adaptation pipeline: it can be pretrained on synthetic functions and then adapted to downstream tasks via in-context learning. Comprehensive experiments demonstrate the effectiveness of the proposed method in multiple settings.
Strengths: 1. The proposed few-shot BBO is more real-world applicable and generalizable to multiple optimization tasks. Generally speaking, the paper is technically solid.
2. It is interesting to pretrain models with synthetic data from families of other functions.
3. The paper writing is clear, and the presentation is satisfying.
Weaknesses: 1. SynTO assumes access to large amounts of unlabeled data, which may cause unfair comparisons, as the other methods are trained with only a small amount of labeled data.
2. In Table 1, despite the extra unlabeled data, SynTO seems to perform worse in some settings.
3. It would be better to report the efficiency of SynTO compared with previous methods, since it introduces an additional pretraining step.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can you explain why SynTO degrades more on Max scores under the poorest setting?
2. In Table 4, can the forward and inverse modeling methods be directly compared regarding their qualitative results, considering the two approaches generate different kinds of outputs?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback and for the recognition of the technical contributions of SynTO and the presentation of our paper. We answer each of the reviewer's concerns below.
> SynTO assumes the access to large amounts of unlabeled data, which may cause unfair comparisons as the other methods are trained only with a few labeled data.
SynTO can make use of the rich unlabeled data for pretraining, while the other methods cannot. We believe this is a strength of SynTO compared to other methods. This assumption holds in many real-world applications where we know the possible designs of an engineering problem but do not know how good they are. For example, in molecular optimization, we have databases of millions of molecules [1, 2, 3], but only the properties of a handful are known [4, 5, 6].
[1] Sunghwan Kim, Paul A Thiessen, Evan E Bolton, Jie Chen, Gang Fu, Asta Gindulyte, Lianyi Han, Jane He, Siqian He, Benjamin A Shoemaker, et al. Pubchem substance and compound databases. Nucleic acids research, 44(D1):D1202–D1213, 2016.
[2] Lorenz C Blum and Jean-Louis Reymond. 970 million druglike small molecules for virtual screening in the chemical universe database gdb-13. Journal of the American Chemical Society, 131(25):8732–8733, 2009.
[3] Lars Ruddigkeit, Ruud Van Deursen, Lorenz C Blum, and Jean-Louis Reymond. Enumeration of 166 billion organic small molecules in the chemical universe database gdb-17. Journal of chemical information and modeling, 52(11):2864–2875, 2012.
[4] John J Irwin and Brian K Shoichet. Zinc- a free database of commercially available compounds for virtual screening. Journal of chemical information and modeling, 45(1):177–182, 2005.
[5] Raghunathan Ramakrishnan, Pavlo O Dral, Matthias Rupp, and O Anatole Von Lilienfeld. Quantum chemistry structures and properties of 134 kilo molecules. Scientific data, 1(1):1–7, 2014.
[6] Anna Gaulton, Anne Hersey, Michał Nowotka, A Patricia Bento, Jon Chambers, David Mendez, Prudence Mutowo, Francis Atkinson, Louisa J Bellis, Elena Cibrián-Uhalte, et al. The chembl database in 2017. Nucleic acids research, 45(D1):D945–D954, 2017.
> In Table 1, albeit with the extra unlabeled data, SynTO seems to perform inferior in some settings.
We note that SynTO is not inferior to the baselines. In Table 1, on all tasks under all metrics, SynTO is either the best or the second-best method, and both the mean rank and the mean score of SynTO are superior to those of the baselines.
> It would be better to give the efficiency of SynTO compared with previous methods, since it further introduces an additional pretraining step.
SynTO has a pretraining step but performs optimization for downstream functions via in-context learning without any gradient updates, which means there is only 1 training stage, similar to the other methods. The only difference is SynTO trains on synthetic data, while the other methods train on real data.
> Can you explain why SynTO degrades more on Max scores under the poorest setting?
Overall, SynTO is still the best method in terms of the Max score metric with respect to mean score and mean rank. Moreover, we believe mean and median scores are better indicators of the actual performance, since they take into account the quality of all proposals instead of only the best proposal. In discrete tasks such as tf8 and tf10, a method can propose a very good point by chance without actually performing well in expectation.
> In Table 4, can the forward and inverse modeling methods be directly compared regarding their qualitative results, considering the two approaches generate different kinds of outputs?
Both the forward and inverse modeling approaches are optimizing f and hence, their final outputs used for evaluation in Table 4 are the same. That is, they both propose x's for black-box evaluation. The difference is while the inverse model generates those proposals directly conditioning on the desired y, the forward approach first learns a surrogate for f and then performs gradient ascent on the x's to propose better points.
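The forward approach described here (fit a surrogate for $f$, then run gradient ascent on $x$) can be sketched on a toy problem. The quadratic "learned" surrogate, step size, and iteration count below are illustrative assumptions:

```python
# Toy "learned" surrogate f_hat(x): a concave quadratic maximized at x = 3.
def f_hat(x):
    return -(x - 3.0) ** 2

def grad_f_hat(x):
    return -2.0 * (x - 3.0)

# Forward approach: start from an existing design and run gradient ascent
# on the surrogate to propose a better point for black-box evaluation.
x = 0.0
for _ in range(200):
    x += 0.05 * grad_f_hat(x)

print(round(x, 3))  # converges toward the surrogate's maximizer, 3.0
```

The inverse approach instead samples proposals directly from a model of $p(x \mid y)$ conditioned on a desired $y$; both ultimately output $x$'s, which is why Table 4 can compare them on equal footing.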
---
Rebuttal Comment 1.1:
Comment: I have read the reviews and responses. Thanks. | Summary: The paper tackles black-box optimization from few-shot examples by pretraining a transformer model on synthetic proxy tasks using in-context learning, and evaluating with the same procedure but with real data. The synthetic tasks are generated by using 1) real unlabeled data and 2) a synthetic generative process that generates different tasks (i.e. input/target pairs $x_i$, $y_i$) given the unlabeled data (inputs) for the task of interest. The procedure studied to generate the synthetic tasks is Gaussian processes with a radial basis function kernel. During training, the model receives D input/target value pairs generated by the process and is tasked with predicting the distribution of the input given a target value ($p(x_i|y_i)$) and the context points using a VAE. At inference time, the model receives the few-shot examples as context points and is tasked with predicting the distribution for the current task.
The approach achieves competitive performance, generally outperforming existing methods.
Strengths: - The method proposed is simple and sound, and incorporates recent progress from transformer-based modelling, such as in-context learning which allows to efficiently adapt the model during inference time without backpropagation.
- Despite its simplicity on how to train the system and use it during inference, it achieves competitive performance on a wide range of benchmarks.
- The experimental section is thorough, giving lots of insights about the method. For example, it gives insights on how the training with GP performs when the (synthetic) tasks during inference are clearly out-of-distribution (Section 4.1), as well as a thorough analysis of alternative design choices such as random vs sorted selection for context/target points, which are shown to be suboptimal.
- The paper proposes pretraining on fully synthetic data, which is easy to construct and has less ethical concerns than pretraining on real data.
Weaknesses:
The main point of the paper is pretraining using synthetically generated data, and more insights on why this works at all would make for a stronger paper. My main concerns are regarding this issue:
- For example, regarding the following sentence:
L50-51: "Our key insight is, if a model learns to perform few-shot learning on a diverse and challenging set of functions, it should be able to adapt quickly to any objective function at test time." Generating tasks that are "diverse and challenging" from the unlabelled data does not seem enough for the method to perform well, and the concepts of "diverse" and "challenging" are not well defined, either theoretically or experimentally, in the paper or its references. For example, one could generate a totally random process without any type of correlations, which would be both "challenging and diverse", and the method would likely not perform well during inference. Making these notions clearer and relating them to the choice of Gaussian processes, either theoretically or through experimental comparisons with pretraining on other types of processes, would make the paper more complete.
- More insights on why the selected random process (Gaussian processes) is good, and analysis with other alternatives would make for a stronger paper. For example, although the functions used for synthetic evaluation in 4.1 are much more simple, it would be good to use them as pretraining at least to illustrate this point.
- If the only requirement is the data being "diverse and challenging", can real data from similar domains (or other domains that can be adapted to match the desired data statistics) be used for pretraining? Why training with a fully synthetic process is an advantage?
- It would be good to perform an analysis of how the statistics of the pretraining data compare to the statistics during inference (e.g. precision/recall as in [1] or similar metrics), to see whether the inference tasks are statistically different from the pretraining tasks. This or a similar analysis would illustrate whether the diversity of the pretraining data covers all modes of the inference tasks (high recall) and/or whether the diversity is too high compared to the distribution of data for the inference task (low precision).
[1] Assessing Generative Models via Precision and Recall. Mehdi S. M. Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, Sylvain Gelly
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Despite the paper showing strong evidence that the proposed method is sound and outperforms alternatives, more insights on why GP works would make for a stronger paper. See weakness for questions.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes, the authors adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback and for the recognition of the simplicity, soundness, strong performance of SynTO, and thoroughness of our experiments. We answer each of the reviewer's concerns below.
> Generating tasks that are "diverse and challenging" from the unlabelled data does not seem enough for the method to perform well, and the concepts of "diversity and challenging" are not well defined in the paper nor in references, neither theoretically nor experimentally. For example, one could generate a totally random process without any type of correlations, which would be both "challenging and diverse" and the method would likely not perform well during inference.
The main idea of synthetic pretraining is to enable the model to generalize to any downstream function given very limited labeled data. Ideally, the pretraining distribution should be representative of the downstream function, however, it is hard to find such a distribution as we only have access to a very small labeled dataset. An alternative solution is pretraining on a diverse set of functions so that the model gets exposed to many different types of functions during pretraining, which facilitates generalization to the downstream function at test time. Diversity can be thought of as having a function distribution that can generate many different kinds of functions such as linear, polynomial, exponential, periodic, etc., and their combination. A Gaussian Process with an RBF kernel is perfectly suited for this purpose since it is a universal approximator to any function [1], which means in theory if we pretrain the model for long enough it will eventually see data from a synthetic function which is close to the downstream function.
We agree that the model will not perform well when we generate data from a totally random process without any type of correlation. In this case, since x and y are independent, the model cannot extract any signal from the context points (x_context, y_context) to predict x_target. Therefore, the model will ignore the conditioning and only learn to model the marginal distribution p(x), which means it will generate x’s with random returns at test time.
In practice, choosing a suitable set of hyperparameters for the RBF kernel helps avoid this degenerate case. We showed this empirically in Appendix C.1, where too large or too small a value of either the variance or the length scale significantly hurts performance.
[1] Micchelli, Charles A., Yuesheng Xu, and Haizhang Zhang. "Universal Kernels." Journal of Machine Learning Research 7.12 (2006).
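Sampling pretraining functions from a GP prior with an RBF kernel, as described above, can be sketched with numpy; the variance and length-scale values below are illustrative, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(xa, xb, variance=1.0, length_scale=0.5):
    """k(x, x') = variance * exp(-(x - x')^2 / (2 * length_scale^2))."""
    sq = (xa[:, None] - xb[None, :]) ** 2
    return variance * np.exp(-sq / (2 * length_scale ** 2))

# Draw a few random 1-D functions from the GP prior on a grid of inputs.
x = np.linspace(-1.0, 1.0, 100)
K = rbf_kernel(x, x) + 1e-6 * np.eye(100)  # jitter for numerical stability
fs = rng.multivariate_normal(np.zeros(100), K, size=3)
print(fs.shape)  # 3 sampled functions, each evaluated at 100 grid points
```

Each sampled row is one synthetic pretraining function; varying the variance and length scale changes how wiggly and how large-amplitude the sampled functions are, which is exactly the hyperparameter sensitivity studied in Appendix C.1.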
> More insights on why the selected random process (Gaussian processes) is good, and analysis with other alternatives would make for a stronger paper.
We performed an ablation study where we pretrain SynTO on different data distributions, including different GP kernels (GP-Cosine, GP-Linear, GP-Periodic), randomly initialized 1-layer neural networks (Random MLP), and neural network checkpoints trained on the few-shot data (Trained MLP). For each neural network generated from Random MLP, we randomly select one of 6 initialization methods (uniform, normal, xavier uniform, xavier normal, kaiming uniform, kaiming normal) to generate the network weights. For Trained MLP, we first initialize neural networks with different depths (2, 3, 4, 5, 6), different hidden dimensions (16, 32, 64, 128, 256, 512, 1024), and different initialization methods (uniform, normal, xavier uniform, xavier normal, kaiming uniform, kaiming normal), and train these networks on the few-shot data; the trained networks are then used to generate data.
Tables 1 and 2 in the attached pdf file show the performance of SynTO on the few-shot random and few-shot poor settings when pretrained with different data distributions. Overall, the model achieves good performance across different data distributions, with GP-RBF being the best in most settings. This ablation study shows that the pretraining data can be generated from other distributions than GPs, as long as they generate a diverse set of synthetic functions for pretraining. This experiment also shows the robustness of SynTO to the pretraining data distribution.
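The Random MLP distribution described in this ablation can be sketched as follows. This is a hedged illustration: the layer width, nonlinearity, and the two initialization schemes shown (of the six listed above) are assumptions for the sketch, not the authors' exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mlp(dim_in, hidden, init):
    """Sample a 1-hidden-layer MLP with randomly initialized weights."""
    if init == "normal":
        w1 = rng.normal(size=(dim_in, hidden))
        w2 = rng.normal(size=(hidden, 1))
    elif init == "xavier_uniform":
        b1 = np.sqrt(6.0 / (dim_in + hidden))
        b2 = np.sqrt(6.0 / (hidden + 1))
        w1 = rng.uniform(-b1, b1, size=(dim_in, hidden))
        w2 = rng.uniform(-b2, b2, size=(hidden, 1))
    else:
        raise ValueError(init)
    return lambda x: np.tanh(x @ w1) @ w2

# Generate synthetic (x, y) pretraining pairs from one sampled network.
f = random_mlp(dim_in=4, hidden=32, init=rng.choice(["normal", "xavier_uniform"]))
x = rng.normal(size=(128, 4))
y = f(x)
print(x.shape, y.shape)
```

Each freshly sampled network defines one synthetic pretraining function, analogous to one draw from the GP prior in the main experiments.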
> Can real data from similar domains (or other domains that can be adapted to match the desired data statistics) be used for pretraining? Why training with a fully synthetic process is an advantage?
Yes, we can use real data from similar domains for pretraining. However, this makes a stronger assumption because real-world data is hard to collect in most engineering problems, and we need multiple such datasets to pretrain SynTO. Using synthetic data for pretraining removes this assumption and allows generalization to real functions with no extra data collection costs.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: The argument that "A Gaussian Process with an RBF kernel [...] is a universal approximator to any function [1], which means "[...]" it will eventually see data from a synthetic function which is close to the downstream function." is weak as a justification for the choice, as in practice, to perform well on downstream tasks, you want both diversity and similarity to the downstream functions. I understand that, for the particular applications studied, a GP with an RBF kernel (within certain hyperparameter ranges, such as the ones found to be good) may be a good choice and likely to yield functions that are similar to the downstream ones, but this comes from human expertise in these domains and is not likely to generalize to other domains. E.g., training with Gaussian noise will also produce a correct image once in a while, but it is a bad choice for training image models. With this, my concerns regarding this process vs. alternatives remain.
For real data from other domains, I refer to real data that is easy to collect (not data from engineering problems), like images/sound/video... that is rasterized and adapted to match simple statistics of the downstream data, like the mean and variance of the process. This would provide data of high diversity and with potentially useful correlations. I understand that this is out of the scope of the paper.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the prompt reply. We agree that a Gaussian Process with an RBF kernel, or any synthetic data distribution, is not likely universally good for every black-box function. In the main experiments, we used GP-generated data and found it to work well in our tasks, but we also conducted additional experiments in which we replaced the GP with other synthetic distributions. These include different GP kernels (GP-Cosine, GP-Linear, GP-Periodic), randomly initialized 1-layer neural networks (Random MLP), and neural network checkpoints trained on the few-shot data (Trained MLP). For each neural network generated from Random MLP, we randomly select one of 6 initialization methods (uniform, normal, xavier uniform, xavier normal, kaiming uniform, kaiming normal) to generate the network weights. For Trained MLP, we first initialize neural networks with different depths (2, 3, 4, 5, 6), different hidden dimensions (16, 32, 64, 128, 256, 512, 1024), and different initialization methods (uniform, normal, xavier uniform, xavier normal, kaiming uniform, kaiming normal), and train these networks on the few-shot data; the trained networks are then used to generate data.
Tables 1 and 2 in the attached pdf file show the performance of SynTO on the few-shot random and few-shot poor settings when pretrained with different data distributions. Overall, the model achieves good performance across different data distributions. This ablation study shows that the pretraining data can be generated from other distributions than GPs, as long as they generate a diverse set of synthetic functions for pretraining. This experiment also shows the robustness of SynTO to the pretraining data distribution.
The Trained MLP distribution can be used to guarantee both diversity and closeness to the downstream function, and is a good option when one is not sure about what pretraining distribution to use. However, we note that this can have two drawbacks. First, it requires training a large set of neural networks on the few-shot data before training SynTO. Second, it does not allow for optimizing other functions in the same domain, as the pretraining functions were trained to mimic a specific function. In other words, a result such as the one in Table 3 would not be possible. | Summary: The paper presents a method for tackling few-shot black-box optimization problems in which the model queries a few hundred data points from the black-box function. The proposed method utilizes synthetic pretraining, where a family of synthetic functions is employed to generate data for in-context learning of a transformer-based model. After pretraining, the model adapts to downstream function using few-shot data.
Strengths: - The paper proposes and addresses the problem of few-shot black-box optimization, a problem with many potential real-world applications.
- The proposed inverse modeling approach is robust to the quality of synthetic data and allows for gradient-free optimization during testing.
- As Gaussian Processes (GPs) are used to generate synthetic functions, there are no extra costs associated with data generation.
- Experimental results show performance comparable to previous state-of-the-art methods, while also demonstrating increased robustness to the quality of few-shot data.
Weaknesses: - There's no guarantee that real-world downstream functions follow a Gaussian Process with a Radial Basis Function (RBF) kernel. In other words, performance might be sensitive to the similarity between downstream functions and those generated by a GP with an RBF kernel. While the authors mention the universal approximation property, it isn't certain that the pretraining stage covers a sufficient range of functions over a sufficient number of training iterations in practice. For instance, the hyperparameters of the RBF kernel used in the experiments are bounded, and the range was heuristically chosen.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - How sensitive is the proposed approach to the choice of synthetic functions? For instance, if we change the kernel of the GP or use randomly initialized neural networks, how is downstream performance affected?
- How does the downstream performance vary with the context size? Additionally, does the proposed method perform well when the downstream context size differs significantly from those used during pretraining?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback and for the recognition of the significance of the paper and the strong performance of SynTO. We answer each of the reviewer's concerns below.
> There's no guarantee that real-world downstream functions follow a Gaussian Process with a Radial Basis Function (RBF) kernel.
We agree with the reviewer. Ideally, the pretraining distribution should be both diverse and close to the true function. However, given a very small labeled dataset, it is hard to find such a distribution. In practice, we found that GP-generated data is sufficient for the model to perform well on most tasks, including both discrete and continuous input spaces.
One way to increase the similarity between pretraining functions and the downstream function is to make use of the few-shot data for generation, e.g., learn multiple neural networks on the few-shot data and use those networks to generate more data. While this is against our original setting where the few-shot data is only available at test time, we still conducted this experiment to gain more insights into the performance of SynTO. We show below that this works well in practice but not better than GP-generated data. Moreover, this has two drawbacks. First, it requires training a large set of neural networks on the few-shot data before training SynTO. Second, it does not allow for optimizing other functions in the same domain, as the pretraining functions were trained to mimic a specific function. In other words, a result such as the one in Table 3 would not be possible.
> For instance, if we change the kernel of the GP or use randomly initialized neural networks, how is downstream performance affected?
We first point the reviewer to Appendix C.1, which studies the importance of the hyperparameters of the RBF kernel for the performance of SynTO. Additionally, we performed an ablation study where we pretrain SynTO on different data distributions, including different GP kernels (GP-Cosine, GP-Linear, GP-Periodic), randomly initialized 1-layer neural networks (Random MLP), and neural network checkpoints trained on the few-shot data (Trained MLP). For each neural network generated from Random MLP, we randomly select one of 6 initialization methods (uniform, normal, xavier uniform, xavier normal, kaiming uniform, kaiming normal) to generate the network weights. For Trained MLP, we first initialize neural networks with different depths (2, 3, 4, 5, 6), different hidden dimensions (16, 32, 64, 128, 256, 512, 1024), and different initialization methods (uniform, normal, xavier uniform, xavier normal, kaiming uniform, kaiming normal), and train these networks on the few-shot data; the trained networks are then used to generate data.
Tables 1 and 2 in the attached file show the performance of SynTO on the few-shot random and few-shot poor settings when pretrained with different data distributions. Overall, the model achieves good performance across different data distributions, with GP-RBF being the best in most settings. This ablation study shows the robustness of SynTO to the pretraining data distribution.
> How does the downstream performance vary with the context size
Tables 5 and 6 in the attached pdf file show the performance of SynTO when we vary the number of context points during pretraining in the few-shot random and few-shot poor settings, respectively. Overall, the model is robust to the context size and achieves good performance across a wide range of context sizes. A larger context size tends to perform better in the random setting, but slightly worse in the poor setting.
> Does the proposed method perform well when the downstream context size differs significantly from those used during pretraining?
Tables 3 and 4 show that when the pretraining context size is much smaller (10, 50) or much larger (200, 500) than the downstream context size (~100), SynTO still performs well, even though the performance does degrade slightly in some settings. | Summary: This paper investigated the problem of few-shot black-box optimization, and presented Synthetically pre-trained Transformer for Optimization (SynTO). By combining synthetic pretraining with in-context learning to enable few-shot generalization, SynTO demonstrate its superior performance on Design-Bench.
Strengths: + Synthetically pre-trained Transformer for Optimization (SynTO).
+ Combining synthetic pretraining with in-context learning to enable few-shot generalization.
+ Demonstrating superior performance on Design-Bench.
+ The paper is well written.
Weaknesses: - Can the problem in this paper be directly solved by few-shot learning methods? If so, some experiments may be required to compare the proposed method with existing few-shot learning methods; otherwise, some discussion may be required to explain this issue.
- It seems that SynTO shares similar structure with BONET. Thus, some discussion is required to explain the performance gain of SynTO w.r.t. BONET.
- Ablation studies are required to clarify which components of SynTO explain the performance superiority.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Can the problem in this paper be directly solved by few-shot learning methods? If so, some experiments may be required to compare the proposed method with existing few-shot learning methods; otherwise, some discussion may be required to explain this issue.
- It seems that SynTO shares similar structure with BONET. Thus, some discussion is required to explain the performance gain of SynTO w.r.t. BONET.
- Ablation studies are required to clarify which components of SynTO explain the performance superiority.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback and for the recognition of the strong performance of SynTO and the good presentation of the paper. We answer each of the reviewer's concerns below.
> Can the problem in this paper be directly solved by few shot learning methods?
While it is true that SynTO is a few-shot learning method and makes use of only a small labeled dataset, few-shot offline BBO is a different setting from the ones addressed by most commonly referenced few-shot learning works. Those works [1, 2, 3] target classification problems, often in computer vision and sometimes natural language processing. This is not applicable to few-shot BBO, where a model is required either to predict a continuous y (forward approaches) or to generate a high-dimensional input x (inverse approaches). The most applicable prior approach for this problem is the family of neural process methods, which were proposed for meta or few-shot regression. In this regard, Table 4 already shows the superior performance of SynTO compared to Transformer Neural Processes (TNPs) [4], the current state-of-the-art neural process model.
[1] Koch, Gregory, Richard Zemel, and Ruslan Salakhutdinov. "Siamese neural networks for one-shot image recognition." ICML deep learning workshop. Vol. 2. No. 1. 2015.
[2] Snell, Jake, Kevin Swersky, and Richard Zemel. "Prototypical networks for few-shot learning." Advances in neural information processing systems 30 (2017).
[3] Sung, Flood, et al. "Learning to compare: Relation network for few-shot learning." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.
[4] Nguyen, Tung, and Aditya Grover. "Transformer neural processes: Uncertainty-aware meta learning via sequence modeling." arXiv preprint arXiv:2207.04179 (2022).
> It seems that SynTO shares similar structure with BONET. Thus, some discussion is required to explain the performance gain of SynTO w.r.t. BONET.
We note that while SynTO and BONET both use the transformer architecture, the two methods differ in many respects:
- BONET was proposed for offline BBO where we assume access to thousands of labeled data samples, while SynTO is specifically designed for few-shot BBO where we only have a few hundred labeled data points.
- BONET is trained in a supervised manner on offline data pairs (x,y), while we pretrain SynTO on synthetic data and adapt to downstream functions using in-context learning.
- BONET first constructs trajectories of the form (x_1, x_2, …, x_N) in which the x’s are sorted by the regrets, and then models the distribution of these sequences. In contrast, SynTO models the conditional distribution of the target given context p(x_target | y_target, x_context, y_context), where the points do not follow any order as the architecture is permutation invariant to the context.
- BONET employs an autoregressive transformer architecture, while SynTO does not.
Empirically, SynTO outperforms BONET on few-shot BBO settings. Only using labeled data for training makes BONET vulnerable to overfitting, while pretraining on a large amount of synthetic data allows better generalization for SynTO.
> Ablation studies are required to clarify which components of SynTO explain the performance superiority.
Pretraining on a large and diverse set of synthetic functions is the key to the superior performance of SynTO in the few-shot setting. In the main results, we pretrained SynTO on data generated from GPs with an RBF kernel, which we found to work well in practice. We showed the importance of the hyperparameters of the RBF kernel on the performance of SynTO in Appendix C.1. Additionally, we performed an ablation study where we pretrain SynTO on different data distributions, including different GP kernels (GP-Cosine, GP-Linear, GP-Periodic), randomly initialized 1-layer neural networks (Random MLP), and neural network checkpoints trained on the few-shot data (Trained MLP). For each network generated from Random MLP, we randomly select one of 6 initialization methods (uniform, normal, xavier uniform, xavier normal, kaiming uniform, kaiming normal) to generate the network weights. For Trained MLP, we first initialize neural networks with different depths (2, 3, 4, 5, 6), different hidden dimensions (16, 32, 64, 128, 256, 512, 1024), and the same six initialization methods, and train these networks on the few-shot data; the trained networks are then used to generate data.
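For concreteness, sampling synthetic pretraining functions from a GP prior with an RBF kernel can be sketched as follows (an illustrative sketch, not the paper's actual generator; the function name `sample_gp_rbf` and its default hyperparameters are our own):

```python
import numpy as np

def sample_gp_rbf(x, lengthscale=0.2, variance=1.0, n_samples=3, seed=0):
    """Draw random functions from a GP prior with an RBF kernel,
    evaluated at the 1-D inputs x. Returns an array of shape
    (n_samples, len(x)); each row is one synthetic function."""
    rng = np.random.default_rng(seed)
    # RBF (squared-exponential) kernel matrix over all input pairs.
    d2 = (x[:, None] - x[None, :]) ** 2
    K = variance * np.exp(-0.5 * d2 / lengthscale ** 2)
    # Small jitter for numerical stability when sampling.
    K += 1e-8 * np.eye(len(x))
    return rng.multivariate_normal(np.zeros(len(x)), K, size=n_samples)

# Each row of ys is one synthetic function y = f(x) usable as pretraining data.
x = np.linspace(0.0, 1.0, 64)
ys = sample_gp_rbf(x)
```

Varying the lengthscale and variance here corresponds to the RBF hyperparameter sensitivity studied in Appendix C.1.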
Tables 1 and 2 in the attached pdf file show the performance of SynTO on the few-shot random and few-shot poor settings when pretrained with different data distributions. Overall, the model achieves good performance across different data distributions, with GP-RBF being the best in most settings. This ablation study shows the robustness of SynTO to the pretraining data distribution.
In addition to the pretraining data distribution, we also conducted an ablation study on the architecture of SynTO, in which we replaced the VAE model with a diffusion model (SynTO-Diffusion). We take the diffusion architecture from [5].
Tables 3 and 4 in the attached pdf file show that SynTO + VAE outperforms SynTO + Diffusion in all tasks and settings. We hypothesize that SynTO with an overly powerful decoder may learn to model only the distribution over x_target and ignore the conditioning variables (x_context, y_context, y_target), which consequently hurts the generalization of the model.
[5] Krishnamoorthy, Siddarth, Satvik Mehul Mashkaria, and Aditya Grover. "Diffusion Models for Black-Box Optimization." arXiv preprint arXiv:2306.07180 (2023).
---
Rebuttal Comment 1.1:
Comment: The rebuttal has addressed most of my concerns. I will raise my rate to: 6: Weak Accept. | Rebuttal 1:
Rebuttal: We conducted additional experiments to gain more insights into the performance of SynTO. The experiments are:
- (Table 1 and 2) Pretraining SynTO on different data distributions, including different GP kernels (GP-Cosine, GP-Linear, GP-Periodic), randomly initialized 1-layer neural networks (Random MLP), and neural network checkpoints trained on the few-shot data (Trained MLP).
- (Table 3 and 4) SynTO with different decoder architectures, i.e., diffusion vs VAE.
- (Table 5 and 6) Pretraining SynTO with different context sizes.
In all experiments, we evaluate the performance of SynTO on all four tasks: Dkitty, Ant, TF8, and TF10. For each setting we train 3 models with 3 different seeds and average the results.
Pdf: /pdf/82aa1c16c1d55f6399472b92400f426c43bf4695.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
MeGraph: Capturing Long-Range Interactions by Alternating Local and Hierarchical Aggregation on Multi-Scaled Graph Hierarchy | Accept (poster) | Summary: The authors propose a novel methodology for capturing long-range interactions in graph-based learning models, based on multi-scale graph construction and merging. Specifically, following the Select-Reduce-Connect framework [1], the authors introduce a novel pooling method called EdgePool that enables refinement of the original graph structure in an arbitrary continuum. They then perform graph pooling to create multi-scale instances of the original graph, and merge the inter-graph and intra-graph connections through the proposed MeGraph framework.
The authors perform extensive experimentation on over 10 datasets. They also show a limited (relative to the number of trainable parameters) ablation study on four synthetic datasets. They also propose a rather novel benchmark, called the Graph Theory Benchmark (testing properties: graph diameter, single-source shortest path, graph eccentricity, maximum connected component), on which they evaluate the prediction capabilities of their model. The results suggest solid performance of the MeGraph framework for tasks that require both shallow and deep architectures.
[1] Daniele Grattarola, Daniele Zambon, Filippo Maria Bianchi, and Cesare Alippi. Understanding pooling in graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 2022.
Strengths: 1. The proposed framework is fairly novel. It shows some dependencies on previous frameworks (i.e. SRC framework needed for graph pooling).
2. The idea and methodology of hierarchical pooling and merging, although not new (e.g. DiffPool), is quite interesting and intuitive for the problem of capturing long-range interactions.
3. The extensive experimentation on a broad set of datasets suggests strong performance of the MeGraph framework.
4. There is a newly introduced benchmark, called Graph Theory Benchmark that can be used for various methods of evaluation.
5. The text is fairly clearly written. However, it can be improved, mainly because the main contribution section is hard to follow. A clearer summarization of the contributions (methodology and experimentation) would significantly improve the clarity of the paper.
6. The problem that MeGraph approaches is quite important, mainly due to the known inability of various graph neural network architectures to perform well in deeper settings.
Weaknesses: 1. I am concerned about the computational complexity of the model. It seems that in the average case the combined computational cost of pooling, merging, and message passing is high compared with simple graph neural network models. The authors propose some variants to alleviate the high computational complexity; however, it becomes unclear how each of these choices impacts the behavior of the model. It seems (based on Tables 2 and 4) that either the best performance comes from the computationally heavy variant of MeGraph or there is no clear impact of the model choices.
2. The model consists of a high number of trainable parameters, and module choices. Although I value the effort of the authors for an ablation study, I think that a further investigation of the parameters of the modules ( especially with respect to the encoder-decoder choices, and the graph pooling methods) would be quite helpful.
3. There is no safe conclusion that long-range interactions are captured because of the multi-scale method the authors propose rather than because of the increased parameter space of MeGraph. The lack of theoretical insight into why the specific choices contribute to capturing long-range interactions makes it harder to convince the reader that the multi-scale approach is appropriate for LRI.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Can the authors pose any theoretical motivations on why the combination of SRC framework with the MeGraph framework can contribute to more effectively capture long-range interactions?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: 1. As I mentioned, I think that there is a lack of theoretical motivation for the choice of the model's modules. It would be quite helpful to provide any information on the positive benefits of the SRC framework, as well as the representations obtained from the intra-graph and inter-graph interactions.
2. The computational complexity is dependent on the choice of pooling methods, and encoder-decoder methods (as well as the message passing steps). It would be quite interesting to investigate on any potential improvement that would balance the increased parameter space with the training time.
3. The written text can be further clarified. The complex architecture of MeGraph requires a clear, easy-to-follow description so that readers can grasp its mechanics.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your invaluable reviews. We provide point-by-point responses below.
>0. The authors propose some variants to alleviate the high computational complexity, however it becomes more unclear how each one of the choices impact the behavior of the model. It seems ...
We'd like to clarify that, when discussing methods to reduce time complexity in L215, we are referring to the variants described in Appendix C.5, not those in the baseline section (h=1 and n=1). The variants from Appendix C.5 primarily replace certain GN blocks with the Identity function to decrease time complexity. These include U-Shaped, Bridge-Shaped, and Staircase-Shaped structures and we present their running speed on the treecycle dataset in Table R7. To prevent confusion, we'll make necessary revisions.
The h=1 and n=1 variants serve as baselines (refer to L224-226). Comparing the MeGraph model to its h=1 variant underscores the significance of using hierarchical (multi-scale) structures. When comparing the MeGraph model to its n=1 variant, it demonstrates the value of repeated information exchange throughout the mega graph hierarchy. We selected these two variants as baselines because it allows for fair comparisons by ensuring identical hyper-parameters, as detailed in Appendix F.1.
>1. I am concerned about the computational complexity of the model. It seems that in the average case the combined computational cost of pooling, merging, and message passing is high, with respect to simple graph neural network models.
In Section 3.4 (L213), we demonstrated that the theoretical complexity of the MeGraph framework is $1/(1-\eta)$ times that of a standard GNN, where $\eta$ is the ratio of the graph size after pooling to the size before pooling. Notably, when $\eta \le 0.5$, $1/(1-\eta) \le 2$, so the overall complexity is at most double that of the GNN when the graph size is halved at each pooling step.
However, in practice, the pooling method often targets the node ratio, but the edge ratio might be higher. To illustrate, we present the node and edge counts in a graph hierarchy from the synthetic treecycle dataset, using a randomized graph pooling method with a $\eta_v=0.3$ node pooling ratio:
*Table R6*: Node and edge counts across hierarchies for a sample graph in the treecycle dataset.
|count|h=1|h=2|h=3|
|-|-|-|-|
|intra-nodes|871|261|78|
|intra-edges|972|964|850|
|inter-nodes|1132|339|/|
|inter-edges|1742|522|/|
The edge count decreases at a slower rate compared to the node count. Furthermore, the overhead of inter-conv becomes nonnegligible when the edge and node counts are similar. We present the running times for edgepool and random-pool for heights 1 to 3 when $n=3$:
*Table R7*: Training time (s) per epoch for various methods with n=3 layers on the treecycle dataset.
|method|h=1|h=2|h=3|h=3 X-Pool|h=3 bridge|h=3 staircase|
|-|-|-|-|-|-|-|
|edgepool|0.026|0.068|0.127|0.078|0.098|0.081|
|random |0.026|0.056|0.111|0.064|0.087|0.070|
To speed up, we can employ the X-Pool variant (see L167), which approximately doubles the speed but at a potential performance cost, as shown in the ablation study on cross update functions (L317). As discussed in Appendix C.5, substituting some GN blocks with identity functions can reduce runtime. The running times of two such variants (bridge and staircase) are displayed above, offering noticeable speed improvements. We leave exploring the balance between speed and performance to future studies.
>2.a They show a limited ablation study on four synthetic datasets.
We'd like to clarify that our ablation study is not limited to four synthetic datasets ([L280] Hierarchy vs. Locality), but also covers the Graph Theory Benchmark (for [L300] Varying S-EdgePool, [L314] Changing cross update function, [L319] Disabling bidirectional pathway) and real-world datasets (for [L310] Varying GN Block).
>2.b ..., I think that a further investigation of the parameters of the modules ( especially with respect to the encoder-decoder choices, and the graph pooling methods) would be quite helpful.
Thanks for your suggestions. As clarified above, we have conducted ablation studies on the graph pooling methods. We adjusted the hyper-parameters $\tau_c$ and $\eta_v$ for S-EdgePool and displayed MeGraph's performance with various poolings in Table 1. As stated in L309, irrespective of the pooling variant used, MeGraph consistently outperforms h = 1 baselines, suggesting the robustness of MeGraph against different structures of the mega graph.
We implemented Louvain pooling [B0] and a random pooling that assigns nodes to a random cluster. The ablation study results for these new graph pooling methods are shown in the **Overall Response**.
For shared components and hyper-parameters in standard GNNs and MeGraph, including encoders and decoders, their choices could influence model performance and the optimal choices are typically task-specific. As detailed in Appendix F.1 (Experimental Protocol), we first optimized the hyper-parameters for GFuN and selected the best configurations for each task. The effects of this tuning are seen in the performance differences between MeGraph(h=1) and GFuN, as shown in Tables 3, 4, 9 and 10-12. We will update Tables 10-12 accordingly.
>3. Can the authors pose any theoretical motivations on why the combination of SRC framework with the MeGraph framework can contribute to more effectively capture long-range interactions?
As discussed in Appendix D.1 (L714), hierarchical architectures like MeGraph require a much smaller number of aggregation steps ($O(\log(|V|))$) to capture long-range interactions than standard message-passing GNNs ($O(|V|)$).
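To make the comparison concrete, a small sketch (ours; it assumes each pooling level keeps a constant fraction $\eta$ of the nodes) computes the hierarchy height that bounds the number of aggregation steps, versus the graph diameter that bounds flat message passing:

```python
import math

def hierarchy_height(num_nodes: int, eta: float = 0.5) -> int:
    """Number of pooling levels until the coarsest graph has one node,
    when each level keeps a fraction eta of the nodes."""
    height = 0
    while num_nodes > 1:
        num_nodes = max(1, math.floor(num_nodes * eta))
        height += 1
    return height

# A path graph with 1024 nodes has diameter 1023, so flat message
# passing needs ~1000 rounds to connect its endpoints; traveling up the
# hierarchy and back down needs only O(log2(1024)) = O(10) levels.
assert hierarchy_height(1024, 0.5) == 10
```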
>4. A more clear summarization of the contributions would significantly improve the clarity of the paper. ... The complex architecture of MeGraph requires a clear, and easy-to-follow description, so that the readers can grasp its mechanics.
Thanks for your suggestion. We will revise accordingly to make the text clearer. | Summary: In summary, the paper proposes a mechanism to learn on graphs in a multiscale and hierarchical manner. This is one of the approaches that can address the long-range problem in graph learning, in which the graph has a large diameter, i.e. the maximum shortest-path length between any pair of nodes is long.
Given the following strengths and weaknesses, I rate this paper as borderline. There is certainly more work to do.
Strengths: I believe this paper asks the right question: Long-range problem in graph learning is certainly important. Existing GNNs based on the conventional message passing have severe limitations with long-range graphs (i.e. graphs with "long" maximum shortest paths). The reason is message passing only allows exchanging of information among neighboring nodes, and consequently we would need many layers / iterations of message passing (one can argue that the number of layers needed is proportional to the size of the graph) so that two distant nodes can reach each other. However, it is computationally infeasible to have many layers for large graphs and it will cause the problem of over-smoothing and potentially other problems with gradients and training.
Therefore, we need a method that allows distant nodes to exchange information via a small number of hops. Constructing a hierarchical and multiscale structure is one reasonable choice.
Weaknesses: * Novelty: The long-range problem has attracted increasing attention from the graph learning community. There are similar ideas / works about multiscale and hierarchical structure learning being developed in parallel with this work.
For example, this work "Multiresolution equivariant graph variational autoencoder" https://iopscience.iop.org/article/10.1088/2632-2153/acc0d8/pdf proposed a similar multiscale / multiresolution method, but the construction of the hierarchy is done via a flexible learning-to-cluster algorithm in data-driven manner. The follow-up paper "Multiresolution Graph Transformers and Wavelet Positional Encoding for Learning Long-Range and Hierarchical Structures" https://arxiv.org/pdf/2302.08647.pdf (accepted at Journal of Chemical Physics) proposed an extension with Graph Transformers and a new positional encoding motivated from multiresolution analysis and wavelet theory, that can efficiently learn long-range and hierarchical graphs.
Furthermore, a recent ICML 2023 paper "On the Connection Between MPNN and Graph Transformer" https://arxiv.org/pdf/2301.11956.pdf suggested that a simple method of adding a virtual node (i.e. a node connecting to all other nodes) can significantly boost the performance of the conventional message passing for the long-range graphs. This work has theoretical analysis that proves the equivalence between adding a virtual node and graph transformer.
* Theoretical analysis: The paper got rejected from ICLR 2023 https://openreview.net/forum?id=Oz0npxjLAsI. One of the main reasons is the lack of theoretical analysis in analyzing the expressive power of their proposal. I suggest the authors to investigate theoretically their proposed model.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N / A
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I have written about other limitations of this work in the "Weaknesses" section.
I think the authors should extend the Graph Theory Benchmark further. For example, the graphs should be much larger; it would be interesting to see how each method behaves on very large and very long-range graphs in terms of efficiency and efficacy. In particular, since this work is about multiscale representation of a graph, the benchmark should include the Stochastic Block Model (SBM) or other graphs with clustering or hierarchical patterns.
If the benchmark is sophisticated enough, the authors can submit to the data & benchmark track. Recently, there is a separate track for data & benchmark in main ML conferences.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your invaluable reviews. We provide point-by-point responses below.
>1. Novelty: The long-range problem has attracted increasing attention from the graph learning community. There are similar ideas / works about multiscale and hierarchical structure learning being developed in parallel with this work.
Thanks for pointing out these related works. Indeed, our work goes beyond multiscale and hierarchical structure learning, and its key idea differs from that of these works.
* As stated in the abstract, the key idea of our work is to integrate the local and hierarchical structures of a multi-scale graph hierarchy into a single mega graph, and we proposed a MeGraph model that consists of multiple layers **alternating between local and hierarchical information aggregation on the mega graph**.
* According to [B1], "The key idea of MGVAE is learning to construct a series of coarsened graphs along with a hierarchy of latent distributions in the encoding process while learning to decode each latent into the corresponding coarsened graph at every resolution level". MeGraph is different from MGVAE as it explicitly builds a mega graph upon the graph hierarchy and alternates local and hierarchical information aggregation on that, which is not seen in MGVAE. The targets are also different where MGVAE aims to generate graphs and MeGraph aims to capture both local and long-range interactions.
* MGT [B2] first obtains a graph representation using GPS [42], builds substructures using the same clustering method as in [B1], and then applies Transformers on the substructures. The original graph and the substructures form a hierarchy of 2 layers. This method, based on a graph transformer, is a different way of capturing long-range interactions compared to ours. The WavePE [B2] also contains multiresolution information but is from the perspective of positional encoding.
* MPNN+VN [B3] mainly focuses on proving the equivalence between adding a virtual node and graph transformer. Adding a virtual (root) node can be regarded as a very special case of our MeGraph architecture when the graph pooling module pools all nodes into a single node.
Moreover, both MGT+WavePE [B2] and MPNN+VN [B3] conducted experiments in the Peptides-func dataset, while their results are inferior compared to ours (MGT+WavePE is 68.17% [B2], GatedGCN+RWSE+VN is 66.85% [B3], where MeGraph is 69.45%).
We would include the discussions and their experimental results in our revision.
[B1] Hy, T. S., & Kondor, R. (2023). Multiresolution equivariant graph variational autoencoder. Machine Learning: Science and Technology, 4(1), 015031.
[B2] Ngo, N. K., Hy, T. S., & Kondor, R. (2023). Multiresolution graph transformers and wavelet positional encoding for learning long-range and hierarchical structures. The Journal of Chemical Physics, 159(3).
[B3] Cai, C., Hy, T. S., Yu, R., & Wang, Y. (2023). On the connection between mpnn and graph transformer. arXiv preprint arXiv:2301.11956.
>2. Theoretical analysis: The paper got rejected from ICLR 2023. One of the main reasons is the lack of theoretical analysis in analyzing the expressive power of their proposal. I suggest the authors to investigate theoretically their proposed model.
We provided supplementary theoretical explanations for the expressive power of MeGraph during that response period, and the reviewer appreciated those efforts. The corresponding analysis is also provided in Appendix D.2 of the current submission. This explanation is further supported empirically by the results of using random poolings, provided in the Graph Poolings Ablation part of the **Overall Response**.
In Appendix D.1, we have also rephrased the analysis provided in [43], which shows hierarchical architectures like MeGraph require a much smaller number of aggregation steps ($O(\log(|V|))$) for capturing long-range interactions than standard message-passing GNNs ($O(|V|)$).
[43] Ladislav Rampášek and Guy Wolf. Hierarchical graph neural nets can capture long-range interactions. In 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP), pages 1–6. IEEE, 2021.
>3. I think the authors should extend the Graph Theory Benchmark further. For example, the graphs should be much larger, it is interesting to see how each method behaves on a very large and very long-range graphs in terms of efficiency and efficacy. In particular, since this work is about multiscale representation of a graph, the benchmark should have Stochastic Block Model (SBM) or some graphs with clustering or hierarchical patterns.
Thanks for your suggestion. We have created another version of our Graph Theory Benchmark. For each graph generation method, the generated dataset contains 500 graphs, each with 100 to 200 nodes. We also add a new graph generation method, the Stochastic Block Model (SBM). The generation details and results are shown in the **Overall Response**.
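For readers unfamiliar with the SBM, a minimal generator can be sketched as follows (an illustrative Python sketch, not the benchmark's actual generation code; `sample_sbm` and its parameters are hypothetical):

```python
import random

def sample_sbm(block_sizes, p_in, p_out, seed=0):
    """Sample an undirected Stochastic Block Model graph.

    Nodes in the same block are connected with probability p_in,
    nodes in different blocks with probability p_out.
    Returns (num_nodes, edge_list)."""
    rng = random.Random(seed)
    # Assign each node its block id.
    block = []
    for b, size in enumerate(block_sizes):
        block.extend([b] * size)
    n = len(block)
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            p = p_in if block[u] == block[v] else p_out
            if rng.random() < p:
                edges.append((u, v))
    return n, edges

# Two dense blocks of 50 nodes each, sparsely interconnected: a graph
# with a natural two-level (clustered) structure.
n, edges = sample_sbm([50, 50], p_in=0.3, p_out=0.01)
```

With `p_in` much larger than `p_out`, the resulting graphs have the clustering structure that hierarchical pooling is expected to exploit.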
Strengths: S1. The paper investigates a fundamental problem: how to capture longe-range interactions in a graph using GNNs without incurring over-smoothing or over-squashing issues.
S2. The proposed solution is clean, well-justified and introduces some novel components (e.g., S-EdgePool, MeeLayer).
S3. The proposed solution matches or exceeds the performance of reference methods in a large variety of diverse benchmarks.
S4. Excellent presentation; writing is very clear despite the sophistication of the method.
S5. Extremely extensive set of experiments (9 out of 16 pages in the appendix contain additional tables/plots).
S6. Code and details of the experimental setup allow great reproducibility.
Weaknesses: W1. There is no discussion regarding the hierarchies discovered by MeGraph for specific datasets/tasks, which could shed some light into the way the method is working.
W2. HGNet also builds a multi-scale hierarchy via graph pooling and was shown to slightly outperform Graph U-Net, but is not included in the comparisons.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Q1. While perturbation-based graph explainability frameworks can be applied with MeGraph, they would not provide immediate insights into how the hierarchy comes into play. Is there any graph explainability frameworks suitable for this task or are there fundamental challenges in deriving explanations for outputs predicted by MeGraph?
Q2. Was there a particular reason not to include a comparison with HGNet?
Q3. Does S-EdgePool tend to create groups of nodes with roughly the same size? If so, how would the performance of MeGraph be affected in graphs where clusters of different sizes arise naturally?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes, the authors have discussed some obvious limitations in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your invaluable reviews. We provide point-by-point responses below.
>1. [W1] There is no discussion regarding the hierarchies discovered by MeGraph for specific datasets/tasks, which could shed some light into the way the method is working. [Q3] Does S-EdgePool tend to create groups of nodes with roughly the same size? If so, how would the performance of MeGraph be affected in graphs where clusters of different sizes arise naturally?
Thanks for your suggestion, we plotted the graph hierarchies discovered by MeGraph in the shortest path tasks of Graph Theory Benchmark, in Figure 2 of the attached pdf (in Overall Response).
For the S-EdgePool with $\eta_v=0.3, \tau_c=4$ (the 1st, 3rd, and 5th hierarchies), the clusters are of roughly the same size (as the cluster size is restricted to at most 4), and the structure of the graph is well preserved after pooling. For the S-EdgePool with $\eta_v=0.3$ (no cluster size limit), the size of the resulting clusters can vary, depending on the structure of the graph. In dense graphs (e.g. generated by the 'geo' method, 4th hierarchy in Figure 2 of the attached pdf), the resulting clusters can be very large while some nodes form singleton clusters. The former S-EdgePool leads to better performance, as indicated in Table 1.
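The effect of the cluster-size cap $\tau_c$ can be illustrated with a small sketch (ours; it omits the learned edge scores and the SRC reduce/connect steps of the actual S-EdgePool): edges are contracted greedily by score, skipping any contraction that would grow a cluster beyond $\tau_c$, until the node ratio $\eta_v$ is reached.

```python
def pool_with_cap(num_nodes, scored_edges, eta_v=0.3, tau_c=4):
    """Greedy edge-contraction pooling with a cluster-size cap.

    scored_edges: list of (score, u, v), contracted best-first.
    Stops once the number of clusters reaches eta_v * num_nodes, or when
    no edge can be contracted without exceeding tau_c nodes per cluster.
    Returns a cluster id per node."""
    parent = list(range(num_nodes))
    size = [1] * num_nodes

    def find(x):  # union-find root with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    clusters = num_nodes
    target = max(1, int(eta_v * num_nodes))
    for _, u, v in sorted(scored_edges, reverse=True):
        if clusters <= target:
            break
        ru, rv = find(u), find(v)
        # Skip contractions that would exceed the cluster-size cap tau_c.
        if ru == rv or size[ru] + size[rv] > tau_c:
            continue
        parent[rv] = ru
        size[ru] += size[rv]
        clusters -= 1
    return [find(x) for x in range(num_nodes)]

# On a 10-node path graph, the cap keeps every cluster at size <= 4.
labels = pool_with_cap(10, [(10 - i, i, i + 1) for i in range(9)])
```

Without the `tau_c` check, the same greedy loop can chain contractions into one very large cluster, which matches the unbounded-cluster behavior described above.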
MeGraph still performs well when clusters of different sizes arise naturally. We use Stochastic Block Model (SBM) to generate graphs with clusters of different sizes to illustrate. In Table R2, MeGraph with S-EdgePool still shows comparable or better performance compared to the baselines. Moreover, as shown in Figure 3 of the attached pdf, S-EdgePool preferentially performs clustering in the clusters of the original graph. For reference, we also show the Louvain pooling result in the right part of Figure 3, which can be regarded as a natural clustering in the original graph.
>2. [W2] HGNet also builds a multi-scale hierarchy via graph pooling and was shown to slightly outperform Graph U-Net, but is not included in the comparisons. [Q2] Was there a particular reason not to include a comparison with HGNet?
Thanks for your question. We thought HGNet has a similar architecture to Graph U-Net and could have similar issues, and therefore we did not compare against HGNet. As suggested, we have now included the comparison; the results are provided in Table R3 in the **Overall Response** and in Tables R4 and R5 below. MeGraph achieves better or comparable results than both Graph U-Net and HGNet, attaining the best performance in most tasks (the best entry is bolded in each table).
*Table R4*: GNN Benchmark results (Corresponding to Table 3)
| dataset | ZINC | AQSOL | MNIST | CIFAR10 | PATTERN | CLUSTER |
| :------ | :-----------------: | :-----------------: | :----------------: | :----------------: | :----------------: | :----------------: |
| MeGraph | **0.2597 ± 0.0053** | **1.0017 ± 0.0210** | **97.860 ± 0.098** | **69.925 ± 0.631** | **86.507 ± 0.067** | **68.603 ± 0.101** |
| U-Net | 0.3320 ± 0.0103 | 1.0629 ± 0.0182 | 97.130 ± 0.227 | 68.567 ± 0.339 | 86.257 ± 0.078 | 50.371 ± 0.243 |
| HGNet | 0.4743 ± 0.0032 | 1.1192 ± 0.0101 | 90.122 ± 1.012 | 60.122 ± 0.428 | 69.448 ± 0.021 | 35.514 ± 0.046 |
*Table R5*: OGB-G results (Corresponding to Table 4)
| dataset | molhiv | molbace | molbbbp | molclintox | molsider |
| :------ | :--------------: | :--------------: | :--------------: | :--------------: | :--------------: |
| MeGraph | 77.20 ± 0.88 | 78.52 ± 2.51 | 69.57 ± 2.33 | **92.04 ± 2.19** | 59.01 ± 1.45 |
| U-Net | **79.48 ± 1.06** | **81.09 ± 1.66** | **71.10 ± 0.52** | 91.67 ± 1.69 | **59.38 ± 0.63** |
| HGNet | 77.96 ± 1.10 | 72.49 ± 0.93 | 70.26 ± 1.79 | 85.90 ± 0.90 | 58.91 ± 0.98 |
| dataset | moltox21 | moltoxcast | molesol | molfreesolv | mollipo |
| :------ | :--------------: | :--------------: | :---------------: | :---------------: | :---------------: |
| MeGraph | **78.11 ± 0.47** | **67.67 ± 0.53** | **0.886 ± 0.024** | 1.876 ± 0.058 | 0.726 ± 0.006 |
| U-Net | 77.85 ± 0.81 | 66.49 ± 0.45 | 1.002 ± 0.036 | 1.885 ± 0.069 | 0.716 ± 0.014 |
| HGNet | 77.85 ± 0.12 | 65.93 ± 0.61 | 0.924 ± 0.020 | **1.870 ± 0.056** | **0.706 ± 0.014** |
>3. [Q1] While perturbation-based graph explainability frameworks can be applied with MeGraph, they would not provide immediate insights into how the hierarchy comes into play. Are there any graph explainability frameworks suitable for this task, or are there fundamental challenges in deriving explanations for outputs predicted by MeGraph?
Thanks for your question. We think perturbation-based graph explainability frameworks like GNNExplainer [57] can still be used. As explained in Section 3.2 (Mega Graph Message Passing), the Mee layer can be regarded as performing message passing over the mega graph. Though the mega graph keeps changing during training, it is fixed at test time. Therefore, we can treat the mega graph as a standard graph and use GNNExplainer to explain the outputs predicted by MeGraph. The explanation would be a subgraph of the mega graph, containing the hierarchy information.
---
Rebuttal Comment 1.1:
Comment: I confirm that I have read the authors' response and reviewed the additional PDF provided as part of the rebuttal. I appreciate the authors' effort in including yet another baseline (HGNet) despite its similarity with the U-Net architecture. All of it reinforces my previous evaluation that this is a well-rounded submission.
---
Reply to Comment 1.1.1:
Comment: We truly appreciate your consideration of our response and additional results. Your invaluable suggestions have helped us greatly in improving our work. | null | null | Rebuttal 1:
Rebuttal: # Overall Response
We thank all reviewers for the consistently positive feedback and invaluable reviews. We provide point-by-point responses below by commenting on each of your reviews.
We report the following new results as suggested by the reviewers.
* As suggested by Reviewer Y5z6, we created another version of the Graph Theory Benchmark with larger graphs, and also include a new random graph generation method Stochastic Block Model (SBM).
* As suggested by Reviewer qUTa, we performed an ablation study on new graph pooling methods.
* As suggested by Reviewer NF28, we visualized the graph hierarchy discovered by MeGraph in the Graph Theory Benchmark. We compared two versions of S-EdgePool and found that the one that leads to better performance also better preserves the graph structure after pooling, suggesting a potential correlation.
For clarity in the response, we use Lxx to refer to Line xx in our submitted manuscript, e.g. L35 means Line 35.
## Larger Graphs in Graph Theory Benchmark
We created another version of our Graph Theory Benchmark. For each graph generation method, the generated dataset contains 500 graphs, and each graph contains 100 to 200 nodes. For clarity in the response, we refer to this version as 'large' and the original version as 'medium', respectively.
We also add a new graph generation method, the Stochastic Block Model (SBM). We sample the size of each block uniformly from (5, 15), the probability of an edge within a block from (0.3, 0.5), and the probability of all other edges from (0.005, 0.01). To keep all tasks well-defined, we filtered out unconnected graphs during generation.
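For concreteness, the sampling procedure described above can be sketched as follows (a minimal illustration using `networkx`; the number of blocks per graph is our assumption, as it is not stated in the rebuttal):

```python
import random
import networkx as nx

def sample_sbm_graph(n_blocks=8, seed=None):
    """Sample one connected SBM graph with the parameter ranges above.
    n_blocks is an assumed value; the rebuttal does not specify it."""
    rng = random.Random(seed)
    while True:
        # Block sizes drawn from (5, 15), edge probabilities as described.
        sizes = [rng.randint(5, 15) for _ in range(n_blocks)]
        p_in = rng.uniform(0.3, 0.5)      # within-block edge probability
        p_out = rng.uniform(0.005, 0.01)  # probability for all other edges
        probs = [[p_in if i == j else p_out for j in range(n_blocks)]
                 for i in range(n_blocks)]
        g = nx.stochastic_block_model(sizes, probs, seed=rng.randint(0, 2**31 - 1))
        if nx.is_connected(g):  # filter out unconnected graphs
            return g

g = sample_sbm_graph(seed=0)
```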
The results are shown in Tables R1-R3 below.
* As shown in Table R1, the MeGraph model significantly outperforms the h=1 version and the Graph-UNets. The conclusion still holds (in Table R3) after adding the SBM graph generation method.
* Table R2 shows the results on the SBM-generated graphs only, where the shortest path between nodes is usually short, since nodes in the same block are usually well connected. In such graphs, Graph-UNets fail to identify local paths that standard GNNs (h=1 setting) can identify. This result matches the argument in our Introduction ([L35] hierarchical information propagation cannot take over the role of local information aggregation). In contrast, MeGraph performs comparably to or better than standard GNNs (h=1 setting).
*Table R1*: Graph Theory Benchmark (large version, averaged over different graph generation methods, excluding SBM generation):
|Category|Model|SP<sub>sssd</sub>|MCC|Diameter|SP<sub>ss</sub>|ECC|
|-------|----|----------------|---|-------|-------------|----|
|MeGraph|$h=1,n=5$|360.813|34.865|208.490|360.017|237.996|
|MeGraph|$h=5,n=5,\eta_v=0.3,\tau_c=4$|26.280|14.619|21.100|16.622|48.626|
|U-Net|$h=5,n=9,\eta_v=0.3,\tau_c=4$|110.998|28.052|43.794|110.998|81.322|
*Table R2*: Graph Theory Benchmark (large version, SBM generation only)
|Category|Model|SP<sub>sssd</sub>|MCC|Diameter|SP<sub>ss</sub>|ECC|
|-------|-----|---------------|-----|-------|-------------|-----|
|MeGraph|$h=1,n=5$|0.023|104.048|0.4486|0.173|0.7500|
|MeGraph|$h=5,n=5,\eta_v=0.3,\tau_c=4$|0.059|47.816|0.4033|0.078|0.4919|
|U-Net|$h=5,n=9,\eta_v=0.3,\tau_c=4$|1.117|62.839|0.6390|1.722|0.6770|
*Table R3*: Graph Theory Benchmark (large version, averaged over different graph generation methods, including SBM generation)
|Category|Model|SP<sub>sssd</sub>|MCC|Diameter|SP<sub>ss</sub>|ECC|
|-------|-----|-----------------|-------|-------|-------------|-------|
|MeGraph|$h=1,n=5$|328.014|39.4772|189.577|324.033|219.746|
|MeGraph|$h=5,n=5,\eta_v=0.3,\tau_c=4$|**23.8963**|**16.8321**|**19.2185**|**14.9676**|**44.9234**|
|U-Net|$h=5,n=9,\eta_v=0.3,\tau_c=4$|101.009|30.3711|39.8708|100.070|75.1185|
|HGNet[43]|$h=5,n=9,\eta_v=0.3,\tau_c=4$|413.793|45.188|299.189|420.884|359.354|
## Graph Poolings Ablation
We implemented Louvain pooling [B0] and a random pooling method that assigns each node to a random cluster. We conducted an ablation study for these two graph pooling methods on the treecycle and treegrid datasets; the results are illustrated in Figure 1 of the attached pdf.
With random pooling, the hierarchical information becomes useless, yet MeGraph still achieves accuracy similar to standard GNNs (h=1 variant), which supports our discussion in Appendix D.2 (MeGraph can degenerate to standard GNNs).
For Louvain pooling on the treecycle dataset, performance is marginally improved compared to EdgePool. This might be because the fixed clustering from the Louvain algorithm is better suited to this task than EdgePool. These observations indicate the MeGraph architecture's robustness across various reasonable pooling methods.
[B0] Fast unfolding of communities in large networks, Vincent D Blondel, Jean-Loup Guillaume, Renaud Lambiotte, Renaud Lefebvre, Journal of Statistical Mechanics: Theory and Experiment 2008(10), P10008 (12pp)
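To make the two pooling baselines concrete, here is a minimal sketch (our own illustration, not the paper's implementation) of random cluster assignment and Louvain-based assignment, with simple mean pooling of node features:

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import louvain_communities

def mean_pool(x, clusters, n_clusters):
    """Mean-pool node features x (n_nodes, d) by per-node cluster ids."""
    pooled = np.zeros((n_clusters, x.shape[1]))
    counts = np.zeros(n_clusters)
    for node, c in enumerate(clusters):
        pooled[c] += x[node]
        counts[c] += 1
    counts[counts == 0] = 1  # guard against empty clusters
    return pooled / counts[:, None]

G = nx.cycle_graph(12)
x = np.random.default_rng(0).normal(size=(G.number_of_nodes(), 4))

# Random pooling: every node is assigned to a random cluster.
rng = np.random.default_rng(0)
rand_ids = rng.integers(0, 4, G.number_of_nodes())
x_rand = mean_pool(x, rand_ids, 4)

# Louvain pooling [B0]: clusters are the Louvain communities.
comms = louvain_communities(G, seed=0)
louvain_ids = np.empty(G.number_of_nodes(), dtype=int)
for cid, comm in enumerate(comms):
    for node in comm:
        louvain_ids[node] = cid
x_louvain = mean_pool(x, louvain_ids, len(comms))
```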
## Plot Graph Hierarchy
We plotted the graph hierarchies discovered by MeGraph in the shortest path tasks of the Graph Theory Benchmark, in Figures 2 and 3 of the attached pdf. In Figure 2, the S-EdgePool with $\eta_v=0.3, \tau_c=4$ well preserves the structure of the graph after pooling, while the S-EdgePool with $\eta_v=0.3$ (no cluster size limit) sometimes pools too many nodes together, breaking the graph structure. The former S-EdgePool leads to better performance, as indicated in Table 1. We also plot the hierarchy for SBM-generated graphs in Figure 3, indicating that EdgePool can handle graphs that naturally contain clusters of different sizes.
Pdf: /pdf/4bfb5335a4134f9699d531474c8af4a100820385.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Dis-inhibitory neuronal circuits can control the sign of synaptic plasticity | Accept (poster) | Summary: The authors extend previous work on deep feedback control (DFC) learning to make the functional form of plasticity a more realistic reflection of what is observed biologically. In particular the authors:
1. Introduce a feedback control mechanism using targeted, neuron-specific inhibition signal that allows for synaptic plasticity that more closely resembles experimentally observed plasticity (e.g. BCM), compared to the error-based learning employed in a wide variety of previous papers.
2. Show that learning with the extended plasticity rule still comes relatively close to backpropagation-level performance on several benchmarks (MNIST, FMNIST).
3. Discuss at length the testable predictions of their extended model.
Strengths: The strengths of the paper are as follows:
1. Reducing an algorithm derived from optimization principles to the point that it can replicate LTD/LTP experiments is quite difficult, and many (but not all) preceding algorithms do not appear to succeed in this regard, relying more on error-based signals that are less clearly related to experimentally observed plasticity phenomena.
2. Even with biophysically motivated modifications, the algorithm still performs quite well on image classification tasks, which is also difficult.
3. The paper is very clearly written and organized.
Weaknesses: The authors themselves identify several key weaknesses, which I will elaborate on below. However, to me the principal weakness is that the contribution is very incremental: previous studies (e.g. Payeur et al. 2021) have demonstrated that related learning algorithms can, in certain regimes, resemble BCM-like learning, and the high performance and 'locality' of the DFC family of algorithms has already been explored extensively in previous studies (e.g. Meulemans et al. 2021 & 2022). Therefore, it seems to me that the key improvement this study demonstrates is that the DFC algorithms can, with some extra tweaks, also resemble this type of Hebbian learning.
Other weaknesses:
1. As the authors note, their current learning scheme involves a controller with access to a highly complex Jacobian. This Jacobian is a function of the fixed-point of the network dynamics, and so as far as I can tell, for every individual stimulus, the network's feedback weights would have to be different in order to match the principled feedback control dynamics. Previous DFC studies demonstrated that it's possible to get away with simpler controllers with fixed weights, but this approximation is not used in this study, and it's not clear why--as is, though the learning rule locally resembles BCM, the feedback signal actually used to achieve high performance on MNIST/FMNIST is essentially almost as complicated as the backpropagation error signal itself.
2. The unrealistic 1-1 mapping between inhibitory and excitatory neurons makes it difficult to pin down where exactly the controller feedback should be expected to be at the level of a cortical microcircuit.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Could you elaborate on the relationship between this algorithm and the algorithm proposed in Payeur et. al 2021? In particular, that algorithm also replicates BCM-like LTD/LTP phenomena--it seems as though the ability to replicate BCM-like learning dynamics is not unique to the DFC family of models. What are the key differences in testable predictions between this model and Payeur 2021? Or older predictive coding-based models like Urbanczik & Senn 2014?
Is the method employed in this study inherently unique to the DFC family of algorithms, or does it apply equally well to the other algorithms as well? E.g., could similar interneuron-modulated plasticity also be applied to predictive coding-based models?
If the network is driven to a near-optimal performance regime by the controller for every stimulus, and this is occurring in a biological system, would there ever be any observable improvements in performance? Or would the animal be performing instantaneously very well, with the only observable progressive change being a reduction in the energy required for the controller? If this is true, which biological systems could this algorithm adequately model?
In this model, is top-down feedback exclusively isolated to inhibitory neurons? Is this compatible or incompatible with models that propose feedback is also (or exclusively) directed to apical dendrites of pyramidal neurons?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: There are no obvious negative societal impacts of this work, and the authors very adequately address the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > (...) to me the principal weakness is that the contribution is very incremental: (...)
While our results are comparable to previous work in terms of performance, we argue that our model offers distinct computational and conceptual advantages:
First, the dis-inhibitory controller we use causes the error signal to be encoded in the inhibitory current received by each neuron. This alleviates the need for the neuron to distinguish between its feed-forward inputs and overall output rate.
Second, our work demonstrates that a Hebbian learning rule with a minor tweak can decode top-down dis-inhibitory error signal and thus allow top-down feedback to steer the sign of plasticity. This learning rule resembles phenomenological plasticity models and thus provides a plausible mechanism through which credit signals can influence local plasticity rules. Thus, our work proposes a concrete circuit-level mechanism for error coding through modulation of recurrent inhibition.
>(... this model) involves a controller with access to a highly complex Jacobian. (...) for every individual stimulus, the network's feedback weights would have to be different (...)
We have performed additional simulations in which the feedback weights are proportional to a stationary Jacobian. This actually improved the stability of our model and shows that our model’s performance is robust to more plausible feedback weights. (cf. Section 1 of the general rebuttal and PDF Tables 1-2 for details).
>The unrealistic 1-1 mapping between inhibitory and excitatory neurons makes it difficult to pin down where exactly the controller feedback should be expected (...)
We agree that the one-to-one connectivity in our model is unrealistic. However, this minimal model is sufficient to illustrate the mechanism of dis-inhibitory control. It is yet unclear where top-down control of cortical circuits originates. In any case, we believe that dis-inhibitory interneurons (such as VIP+ cells) could be directly targeted by a controller. Alternatively, top-down feedback could consist of balanced excitatory and inhibitory currents. In this case, VIP+ interneurons could act as a gating mechanism for feedback control.
>Could you elaborate on the relationship between this algorithm and the algorithm proposed in Payeur et. al 2021? (...)
In Payeur et al. (2021), the sign of plasticity is controlled by deviations from a baseline burst probability. Their proposed mechanism is plausible for cortical L5 neurons with a separate apical dendritic compartment spatially segregating top-down and bottom-up inputs. It is not clear how a burst-dependent mechanism would work for cortical L2/3 neurons or other brain areas that lack segregated dendritic compartments and do not show stereotypical burst firing.
>What are the key differences in testable predictions between this model and Payeur 2021? Or (...) Urbanczik & Senn 2014?
The main difference between these three models is which quantity determines the sign of plasticity. Urbanczik & Senn (2014) postulates that the sign of plasticity depends on differences between local dendritic and somatic voltage. Burstprop (Payeur et al. (2021)) postulates that the sign of plasticity depends on the proportion of bursts in the output of the neuron. Our paper postulates that the amount of inhibitory current into the neuron determines the sign of plasticity.
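As a toy illustration of the third postulate (our own sketch, not the exact rule from the paper), the sign of a Hebbian update can be made to depend on the deviation of the inhibitory current from a baseline:

```python
def delta_w(r_pre, r_post, i_inh, i_baseline=1.0, eta=0.01):
    """Toy inhibition-modulated Hebbian update: dis-inhibition
    (i_inh < i_baseline) potentiates, excess inhibition depresses.
    Illustrative only; the form and parameters are our assumptions."""
    return eta * r_pre * r_post * (i_baseline - i_inh)

assert delta_w(1.0, 1.0, i_inh=0.5) > 0  # dis-inhibition -> LTP
assert delta_w(1.0, 1.0, i_inh=1.5) < 0  # excess inhibition -> LTD
```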
>(...) does (inhibition-modulated plasticity) apply equally well to the other algorithms as well? (...)
Yes! Thanks for asking. Although we derived our model within an adaptive control theory framework with a strong controller (strong nudge), the concept of decoding error signals from changes in inhibitory activity equally applies to models with weak feedback (weak nudge). For weakly nudged networks, our model is directly related to predictive coding models which are usually considered in the weak nudge limit (but see also Song, [...], Bogacz (2023)).
>If the network is driven to a near-optimal performance (...) would there ever be any observable improvements in performance? (...) which biological systems could this algorithm adequately model?
No, we do not expect the animal to perform instantaneously very well. Feedback signals can only be computed once the animal receives feedback for its actions, e.g. a reward.
For simplicity, we considered the idealized regime in which an agent is presented with a completely unambiguous target signal immediately. A behavioral paradigm that comes close to this setting is fear conditioning, where a painful unconditioned stimulus (US) is a strong, unambiguous value target. Learning is expressed as a freezing response to the initially neutral conditioned stimulus (CS). If a control mechanism exists that drives a network initially responsive to the CS to respond to a target value given by the US, neurons that are part of the top-down controller should reduce their response to the US proportionally to the learned response to the CS. This has been observed for VIP+ interneurons in the amygdala (Krabbe et al. (2019)), corroborating a role for dis-inhibitory circuits in learning.
>(...) is top-down feedback exclusively isolated to inhibitory neurons? Is this compatible (...) with models that propose feedback (...) to apical dendrites of pyramidal neurons?
In our model, feedback is exclusive to a specific class of inhibitory neurons and affects excitatory neurons indirectly through dis-inhibition. There are presumably different top-down signals in the brain that may originate from different brain areas and preferentially target different cell types. Therefore, we do not consider this an overly restrictive assumption. There is likely other top-down feedback to excitatory neurons, which we did not address here. An alternative implementation to interneuron-mediated feedback could be that dis-inhibitory circuits provide a gating signal for excitatory top-down feedback, which would otherwise be canceled out by inhibition.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for your very detailed feedback. Though I still believe the results in this paper are incremental relative to previous work, and that this is the principal weakness of the paper, you have done a lot to convince me of the rigor of your analyses. Adding these points (especially your additional figures) will certainly increase the quality of the paper. I will maintain my score, but will increase my confidence (score: 6; conf. 5). | Summary: The authors propose a neural plasticity mechanism with a key role for disinhibition. In particular, they apply a deep feedback control framework whereby feedback-driven inhibitory neurons mediate changes in the feedforward excitatory connections. The proposed rule is argued to hold desirable properties in that it is local, captures/predicts experimentally observed plasticity, and can guide error-modulated learning.
Strengths: - the paper is generally well written (though there are some typos, see below)
- the proposed plasticity is novel and well grounded theoretically
- the experimental predictions are well presented and appear feasible to perform
Weaknesses: - I am not fully convinced of the extent of novelty within this work. It seems to me that the authors made only slight alterations to the setup in [1, 2] such that inhibitory neurons are now included in the architecture (instead of + Qc we now have - (- Qc)) . Of course the brain does include inhibitory neurons, so adding them explicitly is arguably one step closer to plausibility, but I think the authors could present a stronger argument by addressing more the functional/computational differences/benefits with this addition compared to the previous models [1,2]. Comparing it to theorem 2 in [1], is the key difference that the interneurons enable the network to avoid the need to store/discriminate between the feedforward (ff) activity and the total activity of the excitatory neurons? If so I think this could be more clearly conveyed
- I think the authors could do more to relate their work to other computational works that consider the role of inhibitory neurons in plasticity. For example, would [3] make different predictions in Fig 2? Moreover, the authors do not relate their model to [4], which to my knowledge has significant overlap in terms of modeling and possibly predictions.
- though I found the writing generally good, there were still a fair few sloppy errors and places that were unclear to me (see below)
References:
[1] Meulemans et al. 2021, Credit assignment in neural networks through deep feedback control
[2] Meulemans et al. 2022, Minimizing Control for Credit Assignment with Strong Feedback
[3] Sacramento et al. 2018, Dendritic cortical microcircuits approximate the backpropagation algorithm
[4] Greedy et al. 2022, Single-phase deep learning in cortico-cortical networks
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Fig 1a: what's the dfiference between error and feedback signals?
- Fig 1b: what is f and g? Why are they different?
- Fig 1 caption: eqprop not defined
- line 93: "While these error-modulated learning rules prove functionally useful...they fall short of capturing established properties of experimentally observed plasticity, such as a postsynaptic activity threshold". Forgive me if I'm being naive: what is meant here by a postsynaptic activity threshold? It's not clear to me.
- line 99: "However, the model (burstprop) assumes a rigid circuit architecture to decode errors from multiplexed spike trains and thus does not generalize to other neuronal circuits". I don't understand the logic of this sentence: could the authors elaborate?
- line 102: "..necessitate feedback signals to be weak.."; this sentence confused me a bit. It seems to me there's a difference between 1. the strength of feedback signals and 2. whether feedback signals activity influence neuronal activity. For example, in burstprop one might have a very high apical potential (and burst rate) but little/no influence on the event rate
- line 130: define L
- I found the explanation of how optimal feedback weights are found confusing (equation 7). Firstly, J contains L vectors for each u_i, but this is multiplied by only one u_i? Secondly, it seems equation 7 just shows how the change in r relates to any changes in u - this is true in whatever weights are chosen, I don't understand how this is an equation to be 'solved'. Finally, from what I understand in [1] it's true that that if the column space of Q is equal to the row space of J (line 145) then equivalence to Gauss-Newton optimisation is possible, but is this choice of Q necessarily optimal in this case?
- equation 8: use of subindices meaning datapoints here whilst layers elsewhere is confusing
- lines 152-159: it was only clear to me after looking at [2] that minimising the surrogate loss should also minimise the task loss. This should be presented more clearly
- line 156: I don't see the logic of 'as a result' from the previous sentence; doesn't it just follow from equation 5?
- line 167, 169: Eq. not equation
- Fig 2: is the x axis post-synaptic rates? this should be in caption. Same with frequency in 2d
- Fig 2b: inset diagram is unclear; is the black line the true inverse function? I am also curious to see an imperfect approximation where the approximation is above/below the true values at low/high values respectively
- Fig 2 caption: 'resembles experimentally observed plasticity in simulated in vitro condition' - this seems a strong statement when only one paradigm (an isolated neuron) is actually shown
- line 197: "The dis-inhibitory feedback signal creates a stable equilibrium state for the weight dynamics that coincides with the postsynaptic target computed by the feedback controller". I'm sorry, I didn't understand this sentence
- for the student-teacher task in section 5, what is f? is it the output of a randomly initialised teacher network of the same architecture as the student network but without feedback? If without feedback, why does the solution for H in 3c bottom not go to zero? Also, is it necessary to set the feedback weights as the transposed Jacobian? Would it also work if its column space was equal to the row space of J?
- For simulations in image tasks, n=3 seeds seems quite low to me. Could you repeat for a higher number? (like n=10)
- For readability I'd recommend rotating table 2 (perhaps switching rows with columns is necessary)
typos:
line 105. Full stop after references.
equation 8: bold Q
line 203: appendix section 'xyz' (also correct in appendix itself)
line 207: poor grammar
line 244: appendix C and D
Table 1 caption: full stop at end
line 284: comma after 'fashion-MNIST'
line 336: has -> have
References:
see refs for weaknesses above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations were well addressed by authors
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >I am not fully convinced of the extent of novelty within this work. (...) is the key difference that the interneurons enable the network to avoid the need to discriminate between the feedforward (ff) activity and the total activity (...)
We realize that we did not sufficiently explain the advantage of a dis-inhibitory feedback circuit. The core *computational* advantage is that encoding the error in inhibitory activity removes the need for a neuron to store a copy of its feed-forward inputs (e.g. Sacramento et al. (2018), Meulemans et al. (2021, 2022)).
Our model also improves upon existing work *conceptually* by demonstrating that a learning rule resembling phenomenological models of Hebbian plasticity can be used to decode the error and thus allow feedback connections to steer the sign of plasticity. This builds a plausible bridge between prior studies on credit assignment and the translation of credit signals into tangible local weight adjustments.
>I think the authors could more in relating their work to other computational works (...).
We agree that the comparison to existing models of bio-plausible credit assignment was not sufficient. We have repeated the simulated *in vitro* slice experiments for several related models (see Section 2 of our general rebuttal and Fig. 1 in the attached PDF)
>Fig 1a: what's the dfiference between error and feedback signals?
We interpreted error signals as explicit relay of an error while feedback signals influence neuronal activity.
>Fig 1b: what is f and g? Why are they different?
We mean two nonlinear functions of the postsynaptic activity. Phenomenological models usually assume a calcium trace. Models of credit assignment usually use the derivative of postsynaptic activity.
>line 93: (...) what is meant here postsynaptic activity threshold?
Experiments on Hebbian plasticity have found an activity threshold above which long-term depression changes to potentiation for co-activated synapses. In phenomenological models, this threshold is usually expressed in terms of a postsynaptic quantity (calcium / voltage / rate).
For example, Artola et al. (1990) found that the sign of plasticity was determined by postsynaptic voltage. Sjöström et al. (2011) observed that the sign of plasticity in an STDP protocol depended on postsynaptic firing rate. Lim et al. (2015) found evidence for a postsynaptic activity threshold by investigating *in vivo* firing rate distributions.
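For reference, the classic BCM rule is the simplest phenomenological model with such a threshold; a minimal sketch (standard textbook form, not tied to any of the cited experiments):

```python
def bcm_update(r_pre, r_post, theta, eta=0.01):
    """BCM update: depression when r_post < theta, potentiation above it."""
    return eta * r_pre * r_post * (r_post - theta)

assert bcm_update(1.0, 2.0, theta=5.0) < 0  # below threshold -> LTD
assert bcm_update(1.0, 8.0, theta=5.0) > 0  # above threshold -> LTP
```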
>line 99: (...) I don't understand the logic othis sentence (...)
We realize this sentence was not clear. What we meant is that burstprop assumes electrically segregated dendritic trees that are limited to L5 pyramidal cortical neurons. It is unclear whether a similar mechanism would work for L2/3 neurons or other brain areas that do not show such a segregation.
>line 102: (...) It seems to me there's a difference between 1. the strength of feedback signals and 2. whether feedback signals activity influence neuronal activity (...)
Thanks for raising this point. We will restructure our introduction section taking into account this categorization.
>I found the explanation of how optimal feedback weights are found confusing (equation 7). (...)
Thank you for pointing out the lack of clarity. Given the changes to the feedback weights we use (see general rebuttal Section 1), we will update the whole paragraph on the choice of feedback weights $Q$.
>(...) if the column space of Q is equal to the row space of J (line 145) then equivalence to Gauss-Newton optimisation is possible, but is this choice of Q necessarily optimal in this case?
Indeed, the $Q$ we use is just one of many feedback weight matrices fulfilling this condition. In the final manuscript, we'll avoid referring to feedback weights as “optimal” to prevent misinterpretation.
>Fig 2: is the x axis post-synaptic rates? (...)
Yes. We will make this clear in the figure caption.
>Fig 2b: (...) is the black line the true inverse function?
Yes. We will edit the figure caption.
>I am also curious to see an imperfect approximation where the approximation is above/below the true values at low/high values respectively
The impact of imperfect approximations is demonstrated in Figure 4 of the Supplementary Material in our original submission. We will make sure to highlight this supplementary Figure more prominently.
>Fig 2 caption: 'resembles experimentally observed plasticity in simulated in vitro condition' - this seems a strong statement (...)
We will revise the sentence to:
“(...) resembles plasticity observed in single-neuron electrophysiology experiments”
>line 197: (...). I'm sorry, I didn't understand this sentence
Apologies for the confusion. We mean that the top-down controller influences the learning rule so that a stable fixed point exists when the neuron’s output equals the target.
>for the student-teacher task in section 5, what is f? (...)
$f$ is the output of a randomly initialized teacher network with a smaller hidden layer than the student. The teacher network is a conventional ANN without inhibitory units or feedback. One possible reason the network does not reach exactly 0 error is the time delay of the control signal in the online learning paradigm.
>Also, is it necessary to set the feedback weights as the transposed Jacobian? (...)
We have reproduced our results using more relaxed assumptions on the feedback weights (see Section 1 of the general rebuttal and Tables 1-2 in the attached PDF).
### Other suggestions
Thank you for many more helpful suggestions and finding several typos. We will address all typos and implement the following changes in the revised manuscript:
- Define EqProp in Figure 1 caption
- (line 156) change the phrase to “Thus, following Eq. (5)...”
- (line 130) define $\mathcal{L}$
- repeat all simulations n=10 times
- rotate table 2
- change the sub-indices in Eq. (8)
- (lines 152-159) add a section on the relationship between $\mathcal{H}$ and $\mathcal{L}$
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for the detailed and informative response.
I'm impressed with the work the authors have done in the rebuttal and my main concerns have been addressed. Overall, I would not call this model a massive leap forward from the Meulemans et al. works, but I think the authors do make sufficiently novel and interesting predictions for neuroscience, a field in which gains are typically incremental. I will upgrade my score to 7. | Summary: This paper uses adaptive control theory to derive plasticity rules for a fairly plausible model of multi-layer networks in the brain, which are capable of matching the performance of backpropagation without restrictive assumptions such as mirrored or very weak feedback connections. Specifically, each excitatory neuron has a dedicated inhibitory neuron, which is in turn inhibited by the error signal propagated along feedback connections. The plasticity rule for excitatory units is essentially Hebbian but modulated by the inhibitory signal, matching in vitro plasticity experiments. A brief theoretical derivation of the rule is augmented with experimental results on both toy and relevant problems.
Strengths: * This work presents a relatively well-founded model for feedback of error in the brain, which seems to me one of the most biologically-plausible models of backprop to date.
* All of its presentation is clear and should be accessible to readers with a wide variety of related backgrounds.
Weaknesses: * There are some lingering issues of biological plausibility: There are still significant constraints placed on the feedback weights; each excitatory neuron is assumed to have its own inhibitory neuron (which doesn't match the ratios found in the brain); and Dale's law is not enforced.
* The experimental evaluation is a bit terse; I would like to see the performance of this learning rule on a broader range of tasks (especially a non-toy problem with some temporal structure).
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: * Line 150: learning is duplicated
* Figure 2(a): The inset mentioned in the caption seems to be missing
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: All of the concerns I had were addressed thoroughly in the discussion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >There are still significant constraints placed on the feedback weights
We acknowledge that there are constraints on the feedback weight matrix. Specifically, we used the transpose of the network Jacobian to obtain the feedback weights, which caused feedback weights to vary over time. To address these constraints, we have repeated our simulations with feedback weights derived from a stationary Jacobian. In fact, this has improved our results. Please refer to Point 1 in the general rebuttal and the attached PDF for a thorough discussion of these simulations.
> each excitatory neuron is assumed to have its own inhibitory neuron (which doesn't match the ratios found in the brain)
The proposed model indeed relies on (1) biologically unrealistic one-to-one connectivity and (2) unrealistic ratios between excitatory and inhibitory neurons. Reducing the number of interneurons to reflect neurobiological ratios has complex implications for the dimensionality of the credit signal, which are beyond the scope of this paper to address. However, we have some exciting preliminary data on low-rank feedback signals (see Section 3 in our general rebuttal and Figure 2 in the attached PDF) that suggest these issues can be overcome. We intend to address this question in the future.
>Dale's law is not enforced
Dale’s law is enforced within each layer, where the E-I connections and I-E connections are purely excitatory or inhibitory, respectively. Output connections from one layer to the next are indeed not Dalian and can be both negative and positive. Such between-layer negative connections should be interpreted as being mediated through feed-forward inhibition.
>The experimental evaluation is a bit terse; I would like to see the performance of this learning rule on a broader range of tasks (especially a non-toy problem with some temporal structure).
Thank you for this suggestion. We have not yet been able to perform simulations on additional datasets due to their computational demand. We will try to train networks with our learning rule on several datasets with temporal structure, such as the Google Speech Commands dataset for keyword spotting and the Heidelberg Digits or TIDIGITS datasets for classification.
> Line 150: learning is duplicated.
> Figure 2(a): The inset mentioned in the caption seems to be missing
Thank you for spotting these typos. We will fix the mistakes in the revised manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response, and especially the additional simulations! Although I do maintain that one-to-one connectivity prevents these results from being a true breakthrough in biologically-plausible backprop, I believe this is a good paper that makes tangible progress and stand by my original score. | Summary: This paper adds recurrent inhibition to each layer of a DNN architecture in order to facilitate a more biologically-plausible form of credit assignment. It shows how this circuit can explain some of the features of plasticity found in vitro and that DNNs with this circuit can learn to perform simple visual tasks.
Strengths: The microcircuit is biologically motivated
The insights provided into how the artificial experimental conditions in plasticity studies lead to specific results is helpful
Weaknesses: The motivation and innovation was not entirely clear to me. The background is focused on how credit assignment requires distant neurons to interact, yet the problem of how information reaches each layer in the network is not what is tackled here. There is also discussion of the microcircuit "decoding" the credit signal, but it seems like the credit signal is fairly directly given to the layer, and so the microcircuit is more of a relay than a decoder.
One of the main results seems to be that a linear approximation to the inverse activation function can make the weight update rule slightly more biologically realistic and still works fairly well. This is fine, though not a very impactful result.
I found some of the results descriptions confusing (see below)
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Why is there no weight matrix for the recurrent connections?
Can the authors elaborate on how their formulation supports stability? For example, I didn't fully understand this claim "On the other hand, our model suggests that a rapid compensatory mechanism could be implemented as a combination of recurrent inhibitory microcircuits and a linear inhibitory threshold in the synaptic plasticity rule"
"Experiments on excitatory plasticity in vitro are commonly performed in the presence of Tetrodotoxin, a sodium channel blocker, to minimize interference of inhibitory activity" Doesn't TTX block all activity, not just inhibitory?
"Specifically, we block recurrent inhibition " From the diagram it looks like you block recurrent excitation, not inhibition (There would be no point in varying the activity of the inhibitory neuron if its outward connections were blocked).
Is "in the absence of any inhibitory input" supposed to say "absence of input to the inhibitory cells" ?
It's not clear to me why the "postsynaptic" term in equation 9 drops out when the E->I connection is dropped
"we manually set the feedback weights to the transposed network Jacobian at each time step" does this mean that the feedback is different at each timestep? That is not very biologically-plausible.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Already mentioned above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The motivation and innovation was not entirely clear to me. (...)
We realize that we did not explain the paper's motivation clearly. What our article contributes is a plausible, experimentally consistent mechanism for how neuronal circuits can translate credit signals into weight changes. We do not propose a new strategy for how these credit signals are propagated. Thus our article is more about how feedback afferents can control the sign of plasticity at individual neurons.
We will rewrite the relevant parts in the introduction and discussion sections and propose a new title: “Dis-inhibitory neuronal circuits can control the sign of plasticity”, inspired by Richards & Lillicrap (2018).
> (...) it seems like the credit signal is fairly directly given to the layer, and so the microcircuit is more of a relay than a decoder.
Yes and no. We agree that the feedback is given directly to the layer. But for a neuron to use this feedback for plasticity it needs to estimate the error signal from the magnitude of inhibitory current. Specifically, it needs to correct for its own contribution to the received recurrent inhibition. In that sense, we do think the term “decode” for both the learning rule and the circuit is justified. However, we will work this point out more clearly in our Discussion.
>(...) a linear approximation to the inverse activation function can make the weight update rule slightly more biologically realistic and still works fairly well. This is fine, though not a very impactful result.
We respectfully disagree with this assessment. We think it is impactful because it provides a missing piece of the puzzle of how existing works on credit assignment could plausibly translate credit signals into local weight changes. Most previous models did not link their work to phenomenological learning rules observed in neurobiology.
>Why is there no weight matrix for the recurrent connections?
Our model adopts a one-to-one connectivity structure between excitatory and inhibitory neurons. Although biologically implausible, this minimal architecture is sufficient to demonstrate the central notion of our article: credit signals could be relayed through dis-inhibitory afferents, encoded within inhibitory currents and subsequently decoded using Hebbian plasticity mechanisms.
Other prominent models, such as Sacramento et al. (2018), have employed similar 1-1 connectivity paradigms, underlining their utility in specific contexts. While we have some promising preliminary results on more realistic E-I connectivity patterns (see Section 3 of the general rebuttal and Fig. 2 of the attached PDF), a detailed study of this topic goes beyond the scope of this article.
>Can the authors elaborate on how their formulation supports stability? (...)"
In traditional Hebbian plasticity rules, runaway LTP can lead to unstable dynamics. As highlighted by theoretical studies (e.g., Zenke & Gerstner 2017, Yger & Gilson 2015), plasticity should be accompanied by a rapid compensatory mechanism, i.e. the learning curve should dip down into the LTD region at high firing rates or synaptic weights, to counterbalance the positive feedback loop. However, there is inconclusive evidence for such a mechanism from pair-based recordings. Interestingly, the model presented in this article shows the desired behavior, but it arises as a circuit phenomenon (see Figure 2a-b in the manuscript) from an interplay between two features:
- Recurrent Connectivity: Increases in excitatory firing rate lead to increases in inhibitory current due to recurrent connectivity.
- Linear error decoding: Without a top-down error signal to decode, the linear model decoding the error from inhibitory activity can overestimate the control signal, causing LTD. The linearization parameters play a key role in the stability of the learning rule, as illustrated in Fig. 4 of the Supplementary Material.
>"Experiments (...) are commonly performed in the presence of Tetrodotoxin (...) to minimize interference of inhibitory activity" Doesn't TTX block all activity, not just inhibitory?
Thank you for pointing this out. TTX is commonly used in glutamate uncaging experiments to reduce interference from background activity. In paired patch-clamp recordings, experimenters instead often apply GABA antagonists, which specifically block inhibitory currents, or discard recordings in which background activity was present, in order to isolate the synapse under investigation.
We will change the sentence to:
“Experiments on excitatory plasticity are commonly performed under conditions designed to minimize the interference of inhibitory activity, for example by applying GABA antagonists.”
>(...) From the diagram it looks like you block recurrent excitation, not inhibition (...).
We will change the sentence to:
"(...) specifically, we block recurrent connections from excitatory to inhibitory neurons, so that inhibitory activity is independent of excitatory activity (...)"
>Is "in the absence of any inhibitory input" supposed to say "absence of input to the inhibitory cells" ?
We will replace this phrase with “In the absence of any inhibitory activity, i.e. $r^I = 0$, (...) ”.
>It's not clear to me why the "postsynaptic" term in equation 9 drops out when the E->I connection is dropped
Thank you for spotting this error. When the inhibitory firing rate equals zero, the learning rule reduces to $\Delta w \propto r_{\text{pre}} \left(r_{\text{post}} - \theta \right) \phi’(r_{\text{post}})$.
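For illustration, the sign structure of this reduced rule can be sketched numerically. This is only a toy check, not part of our simulations; the concrete form of $\phi'$ below is a hypothetical placeholder, since only the sign of the update matters here:

```python
def dw(r_pre, r_post, theta=1.0, phi_prime=lambda r: 1.0 / (1.0 + r)):
    # Reduced rule when inhibitory activity is zero (r^I = 0):
    # depression below the threshold theta, potentiation above it.
    return r_pre * (r_post - theta) * phi_prime(r_post)

assert dw(r_pre=1.0, r_post=0.5) < 0    # below threshold: LTD
assert dw(r_pre=1.0, r_post=2.0) > 0    # above threshold: LTP
assert dw(r_pre=0.0, r_post=2.0) == 0   # no presynaptic activity, no change
```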
>"we manually set the feedback weights to the transposed network Jacobian at each time step" does this mean that the feedback is different at each timestep? (...).
Thanks for raising this point. Indeed, in our initial experiments, the feedback weights were dynamically adapted at each timestep based on the transposed network Jacobian. We have conducted additional simulations using static feedback weights (see Section 1 in the general rebuttal and Tables 1&2 in the attached PDF).
---
Rebuttal Comment 1.1:
Comment: I appreciate the clarifications and corrections from the authors. The intended impact of the work is now clearer. I will increase my score by 1. | Rebuttal 1:
Rebuttal: We’d like to thank all reviewers for their helpful and constructive comments on our manuscript. We have responded to each of you individually in our rebuttals below. Based on their collective feedback, we have performed additional simulations which we think strengthen the manuscript. We discuss their outcomes in the following and we include new tables and figures in the attached PDF document.
### 1. Inhibitory control works with a simpler controller
Several reviewers pointed out that our feedback weights were implausible because they changed at every timestep, i.e. $Q_i(t) = \frac{\partial r_L}{\partial u^I_i(t)}$. We agree that this may look like too much of a constraint.
To address this, we derived the feedback weights from the network Jacobian at the uncontrolled (i.e. $c=0$) steady state. Specifically, we set $Q_i$ proportional to $\frac{\partial r_L}{\partial \tilde u^I_i}$ where $\tilde u^I_i$ are the inhibitory membrane potentials of layer $i$ at the uncontrolled steady state. Thus, the feedback weights are static within a single trial. We reproduced all our simulations on computer vision benchmarks in this setting and summarized the results in Table 1 of the attached PDF. All our results still hold in this setting. In fact, using these stationary feedback weights sped up our simulations due to reduced computational demand and slightly increased the performance of our learning algorithm. We thus plan to replace Table 1 in the final manuscript with these new results with $n=10$ independent runs each.
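To illustrate the construction schematically, the sketch below derives static feedback weights as the transposed Jacobian of the output with respect to a hidden-layer perturbation, estimated by finite differences. The feedforward stand-in network, its sizes, and the perturbation variable are purely illustrative and do not reproduce the recurrent E-I dynamics of our actual model:

```python
import numpy as np

phi = np.tanh

def toy_forward(x, W1, W2, u_I=None):
    # Toy feedforward pass; u_I perturbs the hidden layer, standing in
    # for the inhibitory membrane potentials at the uncontrolled state.
    h = phi(W1 @ x + (0 if u_I is None else u_I))
    return phi(W2 @ h)

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
x = rng.normal(size=3)

# Finite-difference Jacobian d r_L / d u_I at the uncontrolled (c = 0) state.
eps, n_hidden, n_out = 1e-6, 4, 2
J = np.zeros((n_out, n_hidden))
for i in range(n_hidden):
    d = np.zeros(n_hidden)
    d[i] = eps
    J[:, i] = (toy_forward(x, W1, W2, d) - toy_forward(x, W1, W2)) / eps

Q = J.T  # static (per-input) feedback weights, proportional to the transposed Jacobian
```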
Using the open-loop steady state Jacobian as described removes the temporal dynamics of the feedback weights. However, the Jacobian is still input-dependent. Although our focus was not to solve the control problem, we acknowledge that related work used a similar feedback control mechanism (Meulemans 2021, 2022) and demonstrated learning with feedback weights that are not input-dependent and instead reflect the *average* transpose of the Jacobian across the dataset. To test whether our model would also support this static feedback weight regime, we performed the same simulations as in Table 1, but averaged the calculated feedback weights for each mini-batch. The results of these simulations are summarized in Table 2 of the attached PDF. Such feedback weights reflect the *average* Jacobian. They could be learned online from noise correlations, as shown by Meulemans et al. (2021).
Together these findings indicate that our model is robust to the details of the feedback weights. We plan to include these results in the final manuscript. Furthermore, we expect that existing online learning algorithms for the feedback weights could be adapted to our dis-inhibitory control model in the future and will discuss this possibility in the revised discussion section.
### 2. More comprehensive comparison of in-vitro slice experiments with previous models
Some reviewers were concerned that we did not clarify sufficiently how our experimental predictions differ from existing bio-plausible credit assignment models. We agree that we could have done a better job at describing the commonalities and differences.
To address this point, we repeated the same experiments for the microcircuits and associated learning rules presented in related papers on credit assignment. Specifically, we compare to dendritic error rules (PDF Fig. 1a–c; Gilra & Gerstner (2018), Urbanczik & Senn (2014), Sacramento et al. (2018)), deep feedback control (PDF Fig. 1d,e; Meulemans et al. (2021 & 2022)), and burst multiplexing (PDF Fig. 1f,g; Payeur et al. (2021), Greedy et al. (2022)). Most models cannot reproduce the experimentally observed BCM-like learning curve. While burst-multiplexing models could reproduce a learning rule with a BCM-like moving threshold if output spikes are Poisson distributed, the proposed underlying mechanism requires a spatially well-separated apical dendrite, specific to cortical L5 neurons. It is, therefore, difficult to see how L2/3 cortical neurons or, for instance, neurons in the basolateral amygdala could use a similar mechanism. In both cases, however, prominent disinhibitory circuit motifs are known to exist in neurobiology. In reality, it is entirely conceivable that burst multiplexing and inhibitory control work side by side. We will clarify these commonalities and differences by adding a discussion and a similar comparison, along with a summarizing figure, in the supplementary material of our final manuscript.
### 3. Preliminary results on learning with more biologically plausible inhibitory connectivity
Some reviewers raised a concern about the plausibility of the one-to-one inhibitory connectivity. We agree with the reviewers that this connectivity is biologically implausible. We made this simplifying model choice because it allowed us to directly relate the model to existing frameworks with neuron-specific error signals. Reducing the number of interneurons to reflect realistic ratios requires feedback to be low-dimensional, which opens up exciting new research questions beyond the scope of the present article. We have started to empirically test the notion of learning with low-dimensional feedback and obtained promising preliminary results (see Figure 2 of the attached PDF). Specifically, we investigated training ANNs with low-rank feedback weights in a Direct Feedback Alignment setting (Nøkland, 2016). While these results indicate that learning with low-dimensional feedback signals (and thus a realistic number of interneurons) is possible, relating these findings to a recurrent architecture comprising excitatory and inhibitory neurons requires additional work that is beyond the scope of the current paper.
Pdf: /pdf/ce61160dc29b48334d84843ed6b1f87881384d66.pdf | NeurIPS_2023_submissions_huggingface | 2023
Energy-Efficient Scheduling with Predictions | Accept (poster) | Summary: This paper studies an energy-efficient scheduling problem under the setting of prediction. In this problem, each job has a release time and processing time. Jobs arrive online and the algorithm can determine the speed of the machine. A higher speed means a higher energy cost. The total energy cost integrates the speed cost over all time points. Besides the energy cost, the objective contains a job cost function that relies on the schedule. In this paper, the authors assume this cost function is subadditive to capture more applications.
The schedule is defined as the speed of the machine at each point in time. The goal is to find a feasible schedule with the minimum total cost.
This paper considers the above problem under the setting of prediction, where their prediction is the release time and processing time of each job. The main contribution of this work is a learning-augmented algorithm that achieves (1+\epsilon) consistency and bounded robustness. The authors also give the ratio function depending on the prediction error. Finally, the authors verify the performance of the proposed algorithm in some real datasets.
Strengths: The submission is carefully written and structured, so reads well given the technicality of the material. Especially, the authors provide sufficient intuitions to help understand the algorithm.
The proposed learning-augmented algorithm can be applied to several problems. Although one still needs to utilize the classical online and offline algorithms for each specific problem, the proposed framework provides some intuitions to show how to combine these two algorithms.
Weaknesses: I am one of the reviewers of the previous version of this paper. My previous major concerns are: (1) the authors didn’t provide a comparison of the proposed framework and the existing works, thus it is unclear how good this framework is if we apply it to some specific problem; (2) the considered problem is too general and may not admit a polynomial time algorithm.
In this version, the author addresses these two points appropriately. For the first point, the authors added Table 1 to discuss four related special cases. For the second point, the authors considered the approximation ratio in the proof of Theorem 3.4.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I would appreciate it if the authors add a discussion about how to compute the error function given two instances.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This is a theoretical paper, there is no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful and positive review.
$\bullet$ “I am one of the reviewers of the previous version of this paper. My previous major concerns are […] In this version, the author addresses these two points appropriately.”\
We are happy to hear that the previous weaknesses were addressed appropriately.
$\bullet$ “I would appreciate it if the authors add a discussion about how to compute the error function given two instances.”\
First, we would like to emphasize that our algorithm does not require knowledge of the prediction error $\eta$. Nevertheless, if one wanted to compute this prediction error, they would have to compute the optimal offline cost OPT of three different instances. There are previous works that have studied the offline version of energy-efficient scheduling. In particular, [28] gives an optimal offline algorithm for the energy minimization with deadlines problem and [2] gives an optimal offline algorithm for the energy plus flow time problem in the case of unweighted unit-size jobs.
---
Rebuttal Comment 1.1:
Comment: I understand that the algorithm does not require to compute the error function. But, my point is that the error function is important for training predictors, and it can be viewed as an initial version of the loss function. Besides this, the error function can also be used to mark the good and bad predictions during the training process. So it is useful to give an algorithm to compute the error function.
---
Reply to Comment 1.1.1:
Comment: We agree with the reviewer that computing the prediction error can be helpful during the training process. We will include a discussion about computing the prediction error in the paper. | Summary: The authors study energy-efficient scheduling with predictions. There are already previous works on minimizing energy consumption in a setting with deadlines. The authors provide a unified way to address both scheduling with deadlines as well as (weighted) flow time plus energy cost. It was already known that no algorithm can be both 1-consistent and robust, and therefore one has to aim for a trade-off between consistency and robustness. They provide consistency and robustness trade-offs which scale roughly with (1/lambda), where lambda is the trade-off parameter. They also show a lower bound on robustness scaling with sqrt(1+1/lambda), i.e., there is still space for improvement.
They also discuss the deterioration of their bounds between consistency and robustness as a function of a prediction error defined in a natural way, especially with their extension to approximately correct predictions.
I did not notice any lower bounds on the dependence on prediction error; also, the formula stating the dependence is not easy to understand (maybe a plot would help?). But overall, I find the contribution of this paper solid and I will be happy to see it accepted.
Strengths: they provide an unified framework for a broad range of problems in energy efficient scheduling
Weaknesses: * resulting bounds for the Deadline setting are weaker than in previous works
* results not tight yet
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * you could elaborate a bit on how does your competitive ratio behave as a dependency on prediction error, if possible to give a clean answer.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: * Authors stated their theoretical results formally, including all assumptions made
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful and positive review.
$\bullet$ “elaborate a bit on how does your competitive ratio behave as a dependency on prediction error”\
Such a discussion is provided in Appendix G.1. We would be happy to move that discussion to the main body of the paper. A slightly edited version of this discussion is given below for convenience.
The competitive ratio smoothly approaches the consistency bound as the prediction error tends to 0. In addition, it distinguishes the effect of two possible sources of errors:
(1) when there is a growing number of predicted jobs that do not arrive ($\eta_1 = 0$ and $\eta_2$ goes to 1), the upper bound degrades monotonically to $O(1/\lambda)$.
(2) when there is a growing number of unpredicted jobs that arrive online ($\eta_1$ goes to infinity and $\eta_2 = 0$), the competitive ratio first deteriorates and then improves, with an asymptotic rate equal to the competitive ratio $\gamma_{on}$ achieved by the online algorithm OnlineAlg. This behavior arises because our algorithm mostly follows the online algorithm when the cost of the additional jobs dominates.
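To convey the intuition behind this fallback behavior, here is a deliberately simplified toy sketch. It is not our actual algorithm; the function name, the explicit budget parameter, and the string labels are purely illustrative of the generic pattern of trusting the prediction-based schedule while its realized cost stays within a budget (e.g. a $(1+1/\lambda)$-multiple of the predicted optimum) and falling back to the online algorithm afterwards:

```python
def robust_plan(step_costs, budget):
    """step_costs[t]: cost incurred if we keep following the prediction
    at step t; budget: how much we are willing to spend on trusting the
    prediction before falling back to the online algorithm."""
    spent, plan = 0.0, []
    for c in step_costs:
        if spent + c <= budget:   # prediction still within budget
            plan.append("prediction")
            spent += c
        else:                     # fall back for robustness
            plan.append("online")
    return plan

# With budget 2.5 and unit step costs, the plan trusts the prediction
# for two steps and then switches to the online algorithm.
assert robust_plan([1, 1, 1, 1], 2.5) == ["prediction", "prediction", "online", "online"]
```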
---
Rebuttal Comment 1.1:
Comment: thank you for your explanation. | Summary: The paper considers speed scaling scheduling with learning augmented predictions. In contrast to previous works that considered the deadline-based version of the problem, the current paper studies a more general model that allows for different quality of service objectives to be optimised alongside the energy consumption of the schedule. These include for example the well studied total flow time objective.
Similarly to other results in the learning augmented algorithms literature, the employed approach is that of combining at runtime an offline and an online algorithm for the problem.
Finally, experimental results over real and synthetic datasets are provided.
Strengths: The biggest strength of the paper is the generality of the considered model and how it implies algorithms with predictions for a number of different speed-scaling settings.
Other strengths include that the paper is well written, and obtains a clean and general framework.
Weaknesses: The prediction error is quite unnatural in my personal opinion. But in its favor is that it generalizes the prediction error considered in [7].
The major weakness is that jobs are considered to be predicted correctly only if the prediction for the job is 100% accurate. This is a bit unnatural, and it could be argued that if the predicted parameters of every job are very close to the real ones then the predictions are adequate and should be useful. This is tackled in Section 4 (and mainly Appendix C) but, to my understanding, it only allows for small errors per job -- which is a weakness compared to other papers in the area in general and specifically previous papers on speed scaling with predictions.
The technique employed in the paper is quite standard and I couldn't identify any really new idea. Nevertheless, the results are not obvious and had to be worked out.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: I am wondering if there is a way to simplify the competitive ratio formula in Theorem 3.4 -- even perhaps at a slight loss of the proven guarantee? I did check the discussion in the appendix which is helpful but still does not give a very clear picture of the obtained result.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: No conceivable limitations or potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful review. We believe there is a misunderstanding regarding the major weakness raised by the reviewer, which we address first below. Please let us know if this is not the case; we would be happy to also answer any follow-up questions.
$\bullet$ “This is tackled in Section 4 (and mainly Appendix C) but to my understanding it only allows for small errors per job -- which is a weakness compared to other papers in the area in general and specifically previous papers on speed scaling with predictions.” \
The competitive ratio we achieve gracefully degrades as a function of the (per job) shift tolerance parameter $\eta^{shift}$, which is chosen by the algorithm designer depending on the instance. In particular, $\eta^{shift}$ can be set to be large to allow for large errors per job (at the cost of a worse competitive ratio). Thus, we disagree with the claim that our algorithm only allows for small errors per job.
$\bullet$ “The technique employed in the paper is quite standard and I couldn't identify any really new idea. Nevertheless the results are not obvious and had to be worked out.”\
We respectfully disagree with the claim that our techniques are standard. In particular, many algorithms with predictions first trust the predictions and then, if the cost becomes too large, switch to an online algorithm that ignores the predictions. Our approach is to instead first run an online algorithm that ignores the predictions and then switch to trusting the predictions. We are not aware of previous work that has used this approach. If the reviewer has a reference to a paper that uses such an approach, we would be grateful if they shared it.
$\bullet$ “I am wondering if there is a way to simplify the competitive ratio formula in Theorem 3.4 -- even perhaps at a slight loss of the proven guarantee? I did check the discussion in the appendix which is helpful but still does not give a very clear picture of the obtained result.”\
We attempted to further simplify the formula but, unfortunately, weren’t able to do so. We agree with the reviewer that the formula is complicated. We are happy to move some of the discussion about this formula from the appendix to the main body of the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response and for clarifying that the algorithm allows for larger errors.
I find it quite far-fetched to consider it a different technique whether one starts by following the prediction or a robust algorithm, or vice versa. And it is also not novel: many papers in the area combine an algorithm following the predictions and a robust algorithm in an "experts" sense, so that they compare well against the in-hindsight best of the two. If the robust algorithm has better cost at the beginning of the instance, then the algorithm with predictions will start by following it. See "Online Metric Algorithms with Untrusted Predictions" for an example. Another class of problems where that can be the case is those where one cannot "recover" from a wrong decision. For instance, in the secretary problem, any reasonable algorithm with predictions would not hire the secretary at the early stages even if the predictions suggest so.
My evaluation remains unchanged. | Summary: This paper adds to the literature on energy efficient scheduling by defining an algorithm that extends the problems of "energy minimization with deadlines" and "energy plus flow time minimization" to the case where predictions about future jobs are available. The paper assumes an existing algorithm for online and offline energy efficient scheduling (with respect to a particular objective function) as well as an algorithm that makes predictions. In this case they give consistency - how far the cost of the with-predictions variant deviates from the without-predictions variant - and robustness - how bad things can get when predictions are very wrong - bounds for their algorithm and compare them to existing results in the literature.
They explore several other properties of these algorithms and give an extension of the algorithm to small deviations - i.e. the case where predictions don't have to be exactly correct to be considered "good".
Experiments are done on two synthetic and one real (SNAP College Message) dataset.
Strengths: The paper tackles an important problem, where improved algorithms will have a positive environmental impact.
The paper contributes a new bound on a problem that hasn't been studied yet. It also adds insight into previously studied variants of the problem.
Relevant claims seem to be justified with proofs - although I was unable to fully check everything, but what I did check was consistent.
Weaknesses: This paper is abstract and jargon / notation heavy and will be difficult for someone without deep familiarity with the problem domain to read. One place where I had trouble was understanding how speed is assigned to a job.
The variable lambda is prominently used, but is not given an intuitive definition until line 195. I would like to see a definition of this "confidence level" variable near where the intuitive definition of alpha is given.
Around line 196, I would like to see examples or citation of what some of these ONLINE and OFFLINE algorithms are, as well as an example of or reference to an algorithm for prediction.
Line 29: Replacing: "is equal to speed to some power" with "is equal to speed *raised* to some power" would make this easier to understand. It does make sense as-is, but the added clarification would have helped me.
Line 174: Using eta as a function and a numeric variable is confusing.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: I don't understand on line 162, prediction error is defined with an alpha-norm, but perhaps reading the relevant citation more carefully could help there.
I'm wondering about the assumption that the speed of the machine is equal to the sum of the speeds of all jobs currently executing. Is overhead from context switching / cache misses negligible in real life, or is this just a simplification for theoretical analysis?
Have the authors considered codifying their proofs in proof assistant software (e.g. Lean, Coq)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The discussion is sufficient.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful and overall positive review. We hope the answers below address the reviewer’s concerns. Please let us know if this is not the case; we would be happy to also answer any follow-up questions.
$\bullet$ “One place that I had trouble were understanding how speed is assigned to a job.”\
The assignment of speed $s_j(t)$ to a job $j$ at time $t$ is the main technical part of our algorithm, and it is indeed subtle as it depends on both $j$ and $t$. More precisely, if time $t$ is during the first phase of the algorithm, then $s_j(t)$ is the speed according to the online algorithm OnlineAlg for all jobs $j$. If $t$ is during the second phase of the algorithm, then $s_j(t)$ is assigned according to the offline algorithm OfflineAlg for the correctly predicted jobs $j$, and it is assigned according to OnlineAlg for the incorrectly predicted jobs $j$. We will add some discussion in the description of the algorithm to clarify this.
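A minimal sketch of this two-phase dispatch rule (the function name, the `phase_switch_time` parameter, and the policy signatures are illustrative assumptions, not the paper's notation):

```python
def job_speed(j, t, phase_switch_time, online_speed, offline_speed, correctly_predicted):
    """Hypothetical two-phase speed assignment mirroring the rule described above."""
    if t < phase_switch_time:
        # Phase 1: every job runs at the speed chosen by OnlineAlg.
        return online_speed(j, t)
    if j in correctly_predicted:
        # Phase 2: correctly predicted jobs follow OfflineAlg.
        return offline_speed(j, t)
    # Phase 2: mispredicted jobs fall back to OnlineAlg.
    return online_speed(j, t)
```

The per-job branching in phase 2 is what lets the algorithm trust the predictions only where they turned out to be correct.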
$\bullet$ “The variable lambda is prominently used, but is not given an intuitive definition until line 195.”\
lambda is the parameter that controls the tradeoff between consistency and robustness in our results. Intuitively, the more we trust the predictions, the smaller lambda should be chosen. On page 2, we define lambda as being a parameter such that our results hold “for any lambda in (0, 1]” (see caption of Table 1). We will clarify this in the next version of the paper.
$\bullet$ “Around line 196, I would like to see examples or citation of what some of these ONLINE and OFFLINE algorithms are, as well as an example of or reference to an algorithm for prediction.”\
In Section 3.3, we provide a discussion of such algorithms, as well as many references. We would be happy to move some of that discussion to earlier in the paper. Regarding a “reference to an algorithm for prediction”, we provide several references in the introduction for relevant algorithms with predictions.
$\bullet$ “I don't understand on line 162, prediction error is defined with an alpha-norm, but perhaps reading the relevant citation more carefully could help there.”\
We emphasize that this prediction error on line 162 is not the prediction error we use in this paper but the one considered in [7], so we do not provide a detailed discussion of it. This other definition follows from the impossibility result shown in [7] (Appendix C).
$\bullet$ “Is overhead from context switching / cache misses negligible in real life, or is this just a simplification for theoretical analysis?”\
It is common in the scheduling literature (with or without predictions) to assume that context switching has a negligible cost and to ignore it. This assumption is justified in many, but not all computing environments. However, there are some environments that have a high cost for preemption (such as those that involve a physical process or those that depend upon a massive amount of data), and typically this is handled by not allowing any preemptions. There have been some works that assign a cost to preemption and/or limit the number of preemptions. Our work does not handle these cases, but it is a good future research direction.
$\bullet$ “Have the authors considered codifying their proofs in proof assistant software (e.g. Lean, Coq)?”\
No, we have not. We strongly believe that using such software is not standard for papers in this area at NeurIPS.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications.
> “Have the authors considered codifying their proofs in proof assistant software (e.g. Lean, Coq)?”
> No, we have not. We strongly believe that using such software is not standard for papers in this area at NeurIPS.
I agree it is not a requirement. I still recommend looking into it as a way to increase the quality and interoperability of any theoretical work and remove margins of error.
---
Reply to Comment 1.1.1:
Comment: Thank you for the recommendation, we will look into it. Is there a particular software you recommend? | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
AMAG: Additive, Multiplicative and Adaptive Graph Neural Network For Forecasting Neuron Activity | Accept (poster) | Summary: This manuscript looks at time series forecasting in neural multi-channel electrical signals. They propose a new graph neural network based approach called AMAG that infers connectivity between channels to improve the ability to forecast over competing approaches, most of which are designed to infer latent dynamics or predict relevant task related behavior. There are multiple steps in this approach, and the entire system is trained to maximize multi-step prediction. Experimental results suggest the ability to capture relevant connectivity.
Strengths: This approach appears to accurately capture connectivity in synthetic signals, which appears to largely hold up in real world data as well.
Predictive performance shows large improvements in the forecasting (both one-step and multi-step) problem compared to competing approaches.
An ablation study implies that all components of the network are beneficial, motivating each component.
Weaknesses: The major evaluation metric is forecasting in neural time series. It is unclear how important this metric is by itself, whereas one often wants to evaluate how well we can extract relevant information (such as neural decoding) from the neural signals. There is an implication that improving forecast of neural signals would improve these other problems, but it would be much stronger to explicitly show that.
My biggest hesitation is that it is unclear where the predictive improvements are coming from. Most of the compared methods have performed well in many contexts, and the proposed structure is not so different from these existing methods. The largest difference seems to be the GNN operating in the latent space of the system, whereas other methods capture these relationships in other ways, such as in the encoder function. The ablation study does not elucidate such differences clearly enough, as all variants of the model seem to be extremely highly performing. (I am assuming that Table 3 corresponds to Monkey C since the AMAG result matches up, but this isn't explicitly stated.) In fact, the ablated models still seem to be state-of-the-art, which is confusing. I would encourage the authors to really explore the differences between their approach and existing methods and clearly show where the majority of the performance improvement happens.
There's a number of fuzzy details that need additional explanation throughout, such as the aforementioned ablation not clearly describing the data, and explicitly stating what the training of reconstruction and reconstruction with mask are in Figure 2.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: What component is responsible for the major performance improvements compared to existing approaches?
How does this approach do at capturing other information such as neural decoding? Is there evidence that improving at this forecasting task will really improve other downstream tasks?
For AM-G with the non-learnable adjacency, do you use the initialized correlation? How sensitive is performance to that choice?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The only thing that I would add is that the authors should discuss more clearly that their improvements and results are focused on forecasting in the discussion, and that results are not shown for many of the other tasks where competing models have been effective. Other than that, the comments seem fair.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**. We thank the reviewer for pointing out the need for further information regarding the forecasting metric versus additional metrics. We focused on forecasting since the metric indicates the extent to which future signals, assumed unknown, could be predicted. For perfect forecasting accuracy, the unknown signals would be fully recovered. Note that in one-step forecasting, the accuracy in terms of R2 is close to this ideal, while for multi-step forecasting, there is still a significant gap to overcome. There are also direct applications of estimating future activity, such as anomaly detection in neural recordings or reducing the latency of the BCI system, especially when the behavior signal itself is insufficient to predict future behavior or there is no observable behavior. Examples include sleep spindles, epilepsy detection, and emotion detection, to name a few. We also investigated additional metrics, such as the reconstruction of connectivity.
We agree that behavior decoding could be another useful metric. We examined how the forecasted signal (multi-step) can be used for behavior decoding. We trained a behavior decoder using linear regression on ground truth neural and behavior signals applied to Monkey C. Then we applied the decoder to AMAG and other baselines' forecasted signals to examine how well their signals can be used for behavior decoding. Our results show improvement in decoding compared to the baselines (below). We will add these results and discussion to the revised version of the manuscript.
| | GT | GraphS4mer | DCRNN | GWNet | AMAG |
|----|-------|------------|-------|-------|-------|
| R2 | 0.656 | 0.350 | 0.44 | 0.453 | 0.555 |
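As a sketch, the evaluation pipeline described above (fit a linear decoder on ground-truth signals, then score each method's forecasts with the frozen decoder) could look like the following; the array shapes and function names are our assumptions, not the authors' code:

```python
import numpy as np

def fit_linear_decoder(neural_gt, behavior_gt):
    # Least-squares weights mapping neural signals (T, C) to behavior (T, B),
    # with a bias column appended.
    X = np.hstack([neural_gt, np.ones((len(neural_gt), 1))])
    W, *_ = np.linalg.lstsq(X, behavior_gt, rcond=None)
    return W

def decode_r2(W, neural_signals, behavior_gt):
    # Apply the frozen decoder to (possibly forecasted) neural signals and
    # score the prediction with R2 against the true behavior.
    X = np.hstack([neural_signals, np.ones((len(neural_signals), 1))])
    pred = X @ W
    ss_res = np.sum((behavior_gt - pred) ** 2)
    ss_tot = np.sum((behavior_gt - behavior_gt.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot
```

Because the decoder is trained only once on ground truth, differences in `decode_r2` across methods reflect the quality of the forecasted neural signals rather than the decoder itself.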
**W2**. As suggested by the reviewer, we included new variants in the ablation study by removing Self-connection (amAG) and removing both Add + Modulator (--AG) modules, i.e., the graph within AMAG, to elucidate the major component leading to SOTA forecasting (table below). The performance significantly drops for both amAG and –AG variants compared to full AMAG (R2=0.658).
Combining the results with Table 3 in the main paper, we find that the two components, the Self-connection module (amAG) and Add/Modulator (--AG), are both essential for AMAG performance. Variants with only one module of the graph, i.e., only Add or only Modulator, along with Self-connection, lead to reasonable performance (-MAG=0.611, A-AG=0.616). The combination of all of them together provides the best accuracy. We will add these results and discussion to the revised version of the manuscript.
| | R2 | CORR | MSE |
|------|-------|-------|--------|
| amAG | 0.425 | 0.652 | 0.0486 |
| —AG | 0.423 | 0.649 | 0.0488 |
**W3**. In the ablation study, all results were obtained for the multi-step forecasting task (Monkey C).
The reconstruction in Fig. 2 refers to: given an input neural signal, the model is trained to learn to reconstruct the exact input.
Reconstruction with masking indicates that the input into the model is masked (i.e., has missing data) such that the model learns to both reconstruct the input and fill in the missing information. Please see Appendix Section 2.1 for further details re. the experimental design.
**Q1**. We thank the reviewer for W1 and this question to elucidate the components responsible for the major functionality of AMAG. As discussed in W2, both Self-connection and at least one module of the graph: Add or Modulator, are essential. The combination of all three provides the best performance as Add and Modulator modules supplement each other.
**Q2**. Indeed, the forecasting task is a fundamental task and the recovery of unknown future signals can facilitate related applications and decoding, e.g., behavior decoding. As shown in reply to W1, AMAG forecasted signal facilitates a more accurate prediction of future behavior (R2=0.555) than other methods (R2~=0.45).
**Q3**. We do use correlation initialization in AM-G, and it plays a role when the adjacency matrix is non-learnable. For example, when we substitute correlation initialization with random initialization, R2 drops from 0.623 to 0.585 (see below). Furthermore, fixed random initialization is more sensitive to the choice than fixed correlation initialization as we show for two random initialization variants in the table below (Rand Init 1 and Rand Init 2) compared to correlation initialization (Corr Init).
| AMAG versions | R2 | CORR | MSE |
|--------------------|-------|-------|--------|
| AM-G (Rand Init 1) | 0.585 | 0.774 | 0.0346 |
| AM-G (Rand Init 2) | 0.596 | 0.773 | 0.0336 |
| AM-G (Corr Init) | 0.623 | 0.811 | 0.0285 |
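For concreteness, a minimal sketch of what correlation-based adjacency initialization could look like (our assumption; the paper may normalize, sign, or scale the matrix differently):

```python
import numpy as np

def correlation_init(signals):
    """signals: (channels, time) array of neural recordings.
    Returns |Pearson correlation| between channels as an initial adjacency;
    the diagonal is zeroed since self-connections are modeled separately."""
    adj = np.abs(np.corrcoef(signals))
    np.fill_diagonal(adj, 0.0)
    return adj
```

Unlike a random initialization, this matrix already encodes which channel pairs co-vary in the data, which is consistent with the observation above that a fixed (non-learnable) correlation initialization outperforms a fixed random one.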
---
Rebuttal Comment 1.1:
Title: Thank you for your clarifications
Comment: These additional ablations are helpful. Thank you.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your positive feedback and your dedicated time to review our paper. We are pleased to know that we have successfully addressed your concerns. | Summary: This paper presents AMAG — a graph neural network for modeling and forecasting neural population dynamics. The graph neural network has mechanism to describe the additive and multiplicative interactions between neurons, and a sample-dependent matrix to adjust the additive component. Experiments are carried out on both synthetic data and real neural signals, in comparison with several baselines.
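To make the additive and multiplicative interactions described in this summary concrete, here is an illustrative message-passing step (our sketch, not the paper's exact formulation; the function name and the use of S as a purely additive adjustment are assumptions):

```python
import numpy as np

def spatial_interaction(H, A_add, A_mod, S):
    """One hypothetical AMAG-style SI step.
    H: (C, d) per-channel embeddings; A_add, A_mod: (C, C) learned adjacencies;
    S: (C, C) sample-dependent adjustment to the additive component."""
    additive = (A_add + S) @ H            # Add: weighted sum of neighbor embeddings
    multiplicative = (A_mod @ H) * H      # Modulator: neighbor message gates each channel
    return H + additive + multiplicative  # self-connection retains the channel's own state
```

With all adjacencies zero this reduces to the identity, i.e., only the self-connection remains, which matches the role of the Self-connection module discussed in the ablations.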
Strengths: This paper is well written and easy to follow. The rationale behind AMAG design is clearly described, especially the basis of utilizing prior knowledge about additive and multiplicative processes.
The results overall show an improvement of forecasting performance by AMAG, and the experimental results especially those on actual monkey data are very interesting.
The selected results on visualization of interaction strength and the neural population trajectory embedding, i.e, those in Fig 4, are also very informative and interesting.
Weaknesses: 1. The quantitative results listed (e.g., Table 2) lack sufficient statistics, especially considering that the margin of improvements in some metrics are quite small. Some measure of std would be necessary.
2. Other than the quantitative improvements (as shown in Table 1/Fig 3), the significance of such improvements can be better explained. E.g., in Fig 3, indeed it is evident that AMAG showed closer results to the true trajectory in the highlighted areas of the curve, however, in the remaining parts of the curve, it also falls short of capturing the true trajectories by a quite large margin (similar to the other methods), such as the big deflection in front of the highlighted area in the first curve of panel 2, or the deflection following the highlighted area in the second curve of the same panel -- Overall, it would seem that the improvements over other baselines are less significant or on par with the other major errors of the results. For signals that have a lot of temporal fluctuations, it would be good to understand what are the significance in minor to moderate level improvements in signal details when there are still substantial errors in the rest of the signals.
3. While the key innovation of the paper is motivated by leveraging prior knowledge about the additive and multiplicative interaction among neurons, the results provided limited insights into whether this modeling hypothesis could be verified from the results. There are ablation studies to show that both components are important (which is a strength of the paper), but there lack additional analyses into what are the different interactions being learned (e.g., by looking into the two adjacency matrices).
4. Effect of the initialization on the two adjacency matrices should be elaborated with additional results (what happens when a different correlation matrix is used). What are the differences between A_a and A_m after being initialized by the same matrix?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: In addition to address the comments raised in the weakness section above, please also considering clarifying the following questions.
1. For the synthetic dataset presented in 4.1, what type of interaction mechanisms is used in the ground truth generation of data, i.e.., the function of G? Is it only additive, or multiplicative? The adjacency matrix A defines which interaction?
2. Similarly, in the investigations based on the adjacency matrix in 4.2, which A's are being examined (A_a for additive interaction, or A_m for multiplicative interaction)?
3. Overall, as mentioned in the weakness section, since the use of these two different interaction mechanisms is a key contribution in this paper, the authors should not only make it clear which A's are being considered (both in generation of data and in results), but it'd also be interesting if the authors add more analyses and results on how A_a and A_m may look like, and what are the underlying insights such differences or commonality could offer.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors discussed briefly the limitations associated with the study and future works necessary to address these limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**. We thank the reviewer for recommending including statistics in Table 2. We agree that adding multiple runs would elucidate the extent of variation and robustness of the results. Originally, we included runs as separate tables (one in the main paper and one in the Appendix). Following the reviewer’s recommendation, we included standard deviation of three runs for multi-step forecasting methods (As shown in Table R2 in the attached pdf). The improvement remains significant considering the standard deviation. We will update Table 2 and other tables with statistics in the revised paper.
**W2**. The quantitative evaluation we examined, presented in Tables 1 and 3, is the collective evaluation comparing how predicted signals match the ground truth across multiple samples and multiple time points. Under the scenario, each sample and each time point equally contribute to the reported accuracy.
The examples shown in the right panel of Fig 3 are forecasted signals in the multi-step forecasting scenario (Monkey C), where AMAG results in R2=0.658. LRNN and NDT, in this case, result in 0.507 and 0.567, respectively. Considering that all three methods do not match the ground truth perfectly, there could be deflection in each of them.
However, when comparing within the highlighted time window of the first curve of panel 2, AMAG is closer to the ground truth (GT) than other methods. Whereas, at an earlier time to the highlighted time window, there is indeed a deflection.
This deflection occurs for all three methods, and close inspection shows that AMAG is slightly closer to GT than the other methods. Similarly, for the second curve of panel two, AMAG is significantly better in the highlighted window, and in the window that follows it, all three methods appear to perform similarly. Overall, we observe that while there could be some windows in which all methods are similarly close or similarly deflect, there are windows in which AMAG has a significant advantage (these are the windows we highlighted). These improvements lead to an overall major improvement in the evaluation metric.
**W3**. We thank the reviewer’s comment regarding further examination of the adjacency matrices for elucidation of primary components of AMAG. As suggested, we additionally visualized the learned adjacency matrices for both the Add module (Aa) and the Modulator module (Am) when AMAG is trained to perform one-step forecasting (Fig. R2 left panel) and multi-step forecasting (Fig. R2 right panel) on Monkey A. For each of the cases, we examined two scenarios to exclude initialization sensitivity: when both Aa and Am are initialized with the correlation matrix (Fig. R2 top) or both matrices are initialized randomly (Fig. R2 bottom). In the visualized matrix, the i-th row shows other channels’ contributions to the i-th channel.
These figures show that each matrix by itself (Aa or Am) appears consistent across initialization and forecasting length. However, when comparing Aa with Am, while there is resemblance in their global pattern (which supports that a single module could generate reasonable prediction), each has distinct local patterns. Am appears to be more dense, while Aa is more sparse. Looking at specific target channels, some channels appear to contribute more in the additive module, while other channels could be more important in the modulator module.
**W4**. We thank the reviewer for suggesting studying the effect of different initialization. As described in W3, Aa and Am are different after the learning regardless of initialization methods. For example, when both two matrices are initialized with the correlation matrix for one-step forecasting ( top row of Fig. R2 left panel), Aa and Am show different patterns (as discussed in W3). The conclusion generalizes to the random initialized weight matrix and multi-step forecasting cases.
Quantitatively, we experimented using random initialization of Aa and Am in Appendix (Fig.A2), showing that AMAG achieves similar performance using different initialization (0.659 with random init vs. 0.658 using correlation init), but the learning process of correlation initialized AMAG can be more stable (Fig. A2 in Appendix).
In additional experiments where Aa and Am are non-adaptable after initialization (AM-G), correlation initialization achieves better performance (AM-G Corr Init, R2=0.623) than random initialization (AM-G Rand Init 1, R2=0.585) (see below).
| AMAG versions | R2 | CORR | MSE |
|--------------------|-------|-------|--------|
| AM-G (Rand Init 1) | 0.585 | 0.774 | 0.0346 |
| AM-G (Rand Init 2) | 0.596 | 0.773 | 0.0336 |
| AM-G (Corr Init) | 0.623 | 0.811 | 0.0285 |
In summary, the adaptable Aa and Am will converge to different matrices and can achieve similar performance with different initialization. However, when Aa and Am are non-adaptable (AM-G), the performance of the model can be affected by initialization methods.
**Q1**. To generate the synthetic data, we only use the additive adjacency matrix (Aa) as described in Eq. 2 of the main paper. In this case, adjacency matrix A defines the additive interaction.
**Q2**. For the analysis investigating the adjacency matrix learned by AMAG, we examined Aa, i.e., the additive interaction.
**Q3**. As discussed in Q2, the analysis of the adjacency matrix in 4.2 is based on Aa for the additive interaction. We also visualized Aa and Am learned for one-step and multi-step forecasting with both randomly initialized and correlation-initialized matrices in the attached pdf (Fig R2) (as discussed in W3 and W4). We find that, regardless of the initialization method, the Aa matrix converges to similar patterns in the one-step forecasting scenario, and Am likewise converges to similar patterns; however, while Aa and Am share similar global patterns, they differ in their local patterns. This indicates that different channels contribute differently to the additive and multiplicative interactions.
Strengths: 1. There is some originality in the design of the additive and multiplicative message-passing operations in the SI module.
2. The methods are technically sound.
3. Overall this manuscript is easy to understand.
Weaknesses: 1. A major weakness of the manuscript is in the experimental design: the authors only split the data into train and test sets, and the hyperparameters and best model were chosen based on the test set (Appendix 2.3). The model could have been overfitted on the test set, making the reported results questionable, even though the authors showed results on a different train-test split in the Appendix.
2. Comparisons to GNN baselines are not very fair. For instance, fully connected graphs are used for DCRNN and GraphS4mer. In the original papers for DCRNN (Li et al., 2017) and GraphS4mer (Tang et al., 2022), the graphs are pruned using a threshold and are therefore sparse. The authors also replace the S4 component in the baseline GraphS4mer with GRU, which is not the original GraphS4mer architecture.
3. Given that there is a major issue in the experimental design, I would not consider the contribution significant.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Experimental design: Model selection should be done on a separate validation set, and the test set should be held-out for reporting results only. Please follow this best practice for experiments.
2. Baselines: Please have a more fair comparison for GNN baselines. e.g., use sparse graphs and use the original GraphS4mer architecture (S4 instead of GRU).
3. Why is $A_a$ needed in the Add module? Would the sample-dependent $S$ be sufficient? It would be interesting to see an ablation experiment where $A_a$ is not included.
4. Are the main results based on GRU or Transformer for TE/TR modules? Please clarify.
5. Why aren’t GNN baselines included in Table 1 for one-step forecasting?
6. Figure 4: Please explain how the trajectory in panel C is obtained.
7. Please show comparisons to the strongest baseline(s) in Figure 3 and Figure 4A.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, limitations are discussed in Discussion. The authors do not foresee any negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1** & **Q1**. We thank the reviewer for raising this concern about the experimental setup. As the reviewer noted, we provided additional experiments in the Appendix on a second train-test split for all four datasets. The test set of the first experiment is effectively the validation set, since we trained the model on the second train-test split with the same hyperparameters as the first, without any additional hyperparameter search. Since we did not search for hyperparameters when the model was trained on the second training and testing sets, the results shown in the Appendix should not be overfitted to the second test set.
The results in the Appendix have the same trend as the results shown in the main paper, demonstrating that AMAG successfully generalizes to a different set of training and testing data.
Also, we performed hyperparameter search similar to AMAG for other methods and used the same set of hyperparameters when training the model on the second train-test split. This is to ensure that the results are comparable.
We will add an explanation of these details in the revised version of the paper.
**W2** & **Q2**. We thank the reviewer for pointing out the need for further details regarding which graphs were used in the baseline methods. For both DCRNN and GraphS4mer we did use K-nearest-neighbor (KNN) pruned graphs (as mentioned in the GraphS4mer paper).
In GraphS4mer we pruned the graph such that each neuron is connected to half of the other neurons. We also tried threshold pruning (also mentioned in GraphS4mer) but found the threshold selection to be sensitive: with a large similarity threshold, the feature smoothing loss could become "NaN."
For consistency, we pruned the graphs for DCRNN with KNN pruning. We additionally ran experiments using threshold pruning with DCRNN on Monkey C with similarity thresholds of 0.5 and 0.8, yielding test R2=0.573 and R2=0.445, respectively (compared to R2=0.618 for DCRNN with KNN pruning).
We will add these details and further discussion of graph pruning in the revised version of the paper.
We wish to point out that, with regard to pruning, the advantage of AMAG is that it automatically learns which connections to keep rather than requiring the sparsity of the model to be set manually.
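For concreteness, KNN pruning of a similarity graph as described above can be sketched as follows (a numpy illustration of the general technique, not the authors' code):

```python
import numpy as np

def knn_prune(similarity, k):
    """Keep only the k largest entries in each row of a similarity
    matrix (excluding self-similarity); zero out the rest."""
    n = similarity.shape[0]
    pruned = np.zeros_like(similarity)
    for i in range(n):
        row = similarity[i].copy()
        row[i] = -np.inf                    # never keep the self-edge
        keep = np.argsort(row)[-k:]         # indices of the k largest
        pruned[i, keep] = similarity[i, keep]
    return pruned
```

Note that the pruned matrix is in general no longer symmetric (node i may keep node j without the reverse), which is one reason threshold pruning is sometimes preferred despite its sensitivity.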
The reason we did not use S4 is that S4 is suited to long sequences; in our experiment, performance in terms of R2 drops to 0.556 (GraphS4mer-S4) when using S4, compared to 0.579 (GraphS4mer-GRU). In the final version of the paper we will also include variations of the baselines and their results, and add further discussion of each baseline.
**Q3**. We thank the reviewer for the question. Aa is the primary weight matrix, initialized with a correlation matrix, that controls message passing between the channels without any limitation on its range. In contrast, each element in S is the output of a sigmoid function, ranging from 0 to 1 and adapted to individual samples. S is viewed as a regularization term that adjusts the weights in the Aa matrix based on each input sample, and is not sufficient by itself.
When keeping S only, we observed that the performance is very similar to removing both S and A in the included ablation results (approximately R2=0.605 on the Monkey C dataset). We will add this ablation variant to Table 3 in the revised version of the paper.
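The relationship between $A_a$ and $S$ described above can be illustrated with a minimal numpy sketch (our own simplification; the gate parameterization `W_s` is a made-up stand-in for however $S$ is computed in the model): $A_a$ is an unconstrained shared matrix, while $S$ is a per-sample sigmoid gate in $(0,1)$ that can only rescale it.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def additive_message(x, A_a, W_s):
    """x: (n_channels, d) per-sample features.
    A_a: (n, n) unconstrained shared weights (e.g. correlation-initialized).
    S is a sample-dependent gate in (0, 1), so it can only attenuate or
    pass through A_a's weights -- which is why S alone is not sufficient."""
    S = sigmoid(x @ W_s @ x.T)      # (n, n), sample-dependent gate
    return (S * A_a) @ x            # gated additive message passing
```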
**Q4**. We appreciate the reviewer's question regarding applying GNN baseline methods, such as DCRNN, GWNET, and GraphS4mer, to one-step forecasting. These were originally designed for the multi-step forecasting scenario, with the results in the original papers reported for multi-step forecasting. Furthermore, since AMAG accuracy (R2, CORR) turned out to be close to 1, the one-step scenario would not necessarily be a decisive test, since other methods applied in this scenario could be either close to AMAG or lower. We thus did not include the comparison of these methods with AMAG and focused on the more challenging (in terms of accuracy) scenario of multi-step forecasting. We will add these notes to the final version of the paper.
As a result of the reviewer's question, we tried to adapt the GNN baselines to one-step forecasting. Since GraphS4mer learns a dynamic graph for each time window (several steps), performing one-step prediction requires learning a dynamic graph for each step, which could be redundant. DCRNN and GWNET could be adapted (results in the attached pdf, Table R1). We find that all three compared methods obtain approximately similar results: AMAG is better on Monkeys A and B, while GWNET is better on Monkeys M and C. In addition, to achieve similar performance, AMAG used 0.12 million parameters while GWNET used 2.2 million parameters. This discussion and these results will be incorporated into the final version of the paper.
**Q5**. The results in the main paper are based on GRU for one-step and Transformer for multi-step forecasting.
**Q6**. First, we obtain per-step features using either the original 96-channel neuronal recordings (96 dimensions) or the hidden states (64*96-dimensional outputs of SI). Then we perform PCA on the concatenated features of all timesteps and all samples. Figure 4C shows the first 2 dimensions in PC space for the original neuronal recordings (left) and the hidden states (right).
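The projection described in Q6 amounts to standard PCA over concatenated per-step features; a minimal numpy sketch (shapes are illustrative, not the authors' code):

```python
import numpy as np

def pca_trajectory(features, n_components=2):
    """features: (timesteps * samples, dim) matrix of per-step features
    (raw channel recordings or SI hidden states), concatenated over all
    timesteps and samples. Returns the projection onto the top PCs."""
    centered = features - features.mean(axis=0, keepdims=True)
    # SVD of the centered data gives principal directions as rows of Vt
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:n_components].T   # (rows, n_components)
```

Reshaping the projected rows back to (samples, timesteps, 2) then yields one 2-D trajectory per trial, as in Figure 4C.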
**Q7**. We replot Figure 3 in the attached pdf, Fig. R1(A).
The purpose of Figure 4A is to show that the SI module of AMAG is capable of selecting important channels. Specifically, we test whether masking channels with higher weight in the adjacency matrix (high importance) causes a larger performance drop than masking channels with lower weight (low importance); comparisons are thus made per method. In addition, we examined the effect of masking the same set of channels in other methods, especially when prior knowledge of channel interaction is not explicitly included in the model design, i.e., NDT, which was chosen because it has the same temporal structure as AMAG (using a Transformer for temporal encoding).
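The masking test described here can be sketched generically (our own illustration; the importance measure here is a simple column sum of the adjacency matrix, which may differ from the exact measure used in the paper):

```python
import numpy as np

def mask_by_importance(x, A, k, high=True):
    """x: (t, n_channels) signal; A: (n, n) learned adjacency matrix.
    Zero out the k channels with the largest (high=True) or smallest
    (high=False) total adjacency weight, then compare the resulting
    forecasting performance drop between the two conditions."""
    importance = np.abs(A).sum(axis=0)      # per-channel total weight
    order = np.argsort(importance)
    chans = order[-k:] if high else order[:k]
    masked = x.copy()
    masked[:, chans] = 0.0
    return masked, chans
```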
---
Rebuttal Comment 1.1:
Comment: Thank you for the replies and additional experiments.
Re: response to Q1 & W1, I’m still not convinced that a second train-test split is sufficient. If there is an overlap in test set data between the first and the second train-test splits, the selected model hyperparameters based on the first test set could still work well for the second test set. And this is not the best practice in ML research.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the follow-up.
Our intent in using two train-test sets, as described, was to make sure that, for variable-size datasets with a limited number of samples, there would be enough samples for training. As we observed earlier, non-graph-based methods such as NDT can easily overfit to the training set, and while regularization such as weight decay could help, including more training samples is more advantageous. Thus two train-test splits were chosen for evaluating these methods instead of a train-val-test split, and other models followed the same evaluation strategy.
Considering the inherent nature of the random split, there may indeed be an overlap between the first and second test sets (test-1 and test-2). The extent of the overlap accounts for approximately 10% of the testing data across all four datasets. Since the overlap proportion is relatively small, the performance on test-2 primarily reflects the models' performance on unseen data, which constitutes roughly 90% of test-2. Notably, this portion was not utilized for hyperparameter selection. Therefore, AMAG's accuracy on test-2 is unlikely to be attributable to hyperparameters chosen based on the first test set.
We further investigated this point by validating AMAG and GNN-based baselines with a train-val-test split. Namely, the validation set in this split is the same as test-1, i.e., val=test-1, such that the val set is the set on which hyperparameters were tuned. From the remaining samples, we randomly selected train and test sets, with the test set having the same number of samples as the val set.
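The split construction described above can be sketched as follows (a numpy illustration; function and argument names are ours):

```python
import numpy as np

def make_train_val_test(n, test1_idx, seed=0):
    """val is fixed to the original first test split (test-1), the set on
    which hyperparameters were tuned; a new, disjoint test set of the
    same size is drawn at random from the remaining samples."""
    rng = np.random.default_rng(seed)
    val = np.asarray(test1_idx)
    rest = rng.permutation(np.setdiff1d(np.arange(n), val))
    test = rest[:len(val)]
    train = rest[len(val):]
    return train, val, test
```

By construction the new test set shares no samples with the validation (test-1) set, which addresses the overlap concern raised in Comment 1.1.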
In Table D1, we report the performance on the new test set from this train-val-test split for multi-step forecasting. The table is to be compared with Table 2 (or Table R2, which includes variance) and Table A2. Comparing across the three tables, the metrics do vary depending on which test set was used for evaluation. This variation is per dataset, and the relative ordering of the methods remains mostly consistent. E.g., for Monkey M, all metrics across methods worsen in Table D1 and Table A2 in comparison to Table 2 (R2); for Monkey A, all metrics across methods improve in Table D1 and Table A2 in comparison to Table 2 (R2).
While there is variation between test sets in the values of the metrics, AMAG consistently achieves better accuracy than other methods regardless of which test set the methods are evaluated on. This holds on the train-val-test split as well.
---
Table D1: Multi-step Forecasting on the New Test Set
---
| | Monkey M | | | Monkey C | | | Monkey B | | | Monkey A | | |
|------------|:--------------:|:--------------:|:---------------:|:--------------:|:--------------:|:---------------:|:--------------:|:--------------:|:---------------:|:--------------:|:--------------:|:---------------:|
| | R2 | Corr | MSE | R2 | Corr | MSE | R2 | Corr | MSE | R2 | Corr | MSE |
| GWNET | 0.272$\pm$8e-3 | 0.524$\pm$1e-2 | 0.0721$\pm$8e-4 | 0.606$\pm$4e-3 | 0.779$\pm$3e-3 | 0.0309$\pm$4e-4 | 0.588$\pm$2e-3 | 0.769$\pm$2e-3 | 0.0242$\pm$4e-4 | 0.724$\pm$1e-3 | 0.851$\pm$2e-4 | 0.0168$\pm$9e-5 |
| GraphS4mer | 0.267$\pm$3e-3 | 0.531$\pm$3e-3 | 0.0731$\pm$2e-4 | 0.586$\pm$7e-3 | 0.769$\pm$4e-3 | 0.0322$\pm$6e-4 | 0.659$\pm$3e-3 | 0.812$\pm$3e-3 | 0.0194$\pm$1e-4 | 0.753$\pm$8e-4 | 0.869$\pm$6e-4 | 0.0149$\pm$5e-5 |
| DCRNN | 0.288$\pm$3e-3 | 0.545$\pm$4e-3 | 0.0707$\pm$7e-4 | 0.606$\pm$2e-3 | 0.782$\pm$2e-3 | 0.0302$\pm$2e-4 | 0.635$\pm$4e-3 | 0.797$\pm$2e-3 | 0.0208$\pm$3e-4 | 0.756$\pm$2e-3 | 0.870$\pm$9e-4 | 0.0148$\pm$1e-4 |
| AMAG | **0.331$\pm$4e-3** | **0.575$\pm$8e-4** | **0.0694$\pm$4e-4** | **0.657$\pm$2e-3** | **0.811$\pm$2e-3** | **0.0266$\pm$2e-4** | **0.665$\pm$2e-3** | **0.817$\pm$1e-3** | **0.0192$\pm$3e-4** | **0.763$\pm$4e-3** | **0.874$\pm$2e-3** | **0.0144$\pm$2e-4** |
In terms of the model-readout, it incorporates both additive and multiplicative operations, with both being necessary to attain the demonstrated accuracy. This observation is supported by an ablation study.
Strengths: - impressive enhancements in one-step forecasting performance
- notable improvements are observed in multi-step forecasting, although to a lesser extent
- explicit spatial interactions aid interpretability
Weaknesses: - there is a presence of numerous non-standard acronyms in the paper that are not introduced upon their first mention
- the utilization of graph neural networks for analyzing neural signals is not entirely novel; this study showcases somewhat incremental progress in this field
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - What is PSID, what is DCRNN? After the introduction section, subsequent acronyms are not explicitly introduced or defined.
- TE and TR can be transformer or GRU. Which one did you use for the results in the paper?
- Why is the improvement in multi-step forecasting comparatively moderate despite the impressive enhancements in one-step forecasting? Intuitively, one would expect the performance gap between methods to increase with an increasing number of forecasting steps.
- How many steps are used in multi-step forecasting?
- RoBERTa in Table 2 is not mentioned in the caption
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have discussed limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**. We appreciate the reviewer pointing out the use of non-standard acronyms in the paper. These are names of related methods introduced by their authors. We will make sure to include the full name of these methods at their first mention along with explanations of their origin, and provide citations in the revised version of our paper.
**W2**. We agree with the reviewer that there have been previous works using graph neural networks for neural signals. However, most of the previous works focus on neural signals recorded from EEG and fMRI, whose temporal dynamics could differ from the field potential (Local Field Potential and ECoG) datasets that we study in this work. In addition, most of the previous works address classification tasks, e.g., emotion detection or disease detection, whereas we focus on the forecasting task.
Forecasting of neural signals is important for both scientific understanding and applications. Indeed, as demonstrated in Section 4.1 of the main paper, through learning to forecast, AMAG learns to recover the underlying interaction between channels. Furthermore, learning to forecast can be used for anomaly detection and monitoring of neuronal recordings, and for reducing the latency of Brain-Computer Interfaces (BCI), especially when future behavior is not simply related to past behavior. We show in this work that AMAG achieves SOTA forecasting performance with a novel GNN architecture featuring both additive and multiplicative message-passing operations.
In addition to forecasting, we demonstrate that AMAG facilitates connectivity estimation (on the synthetic dataset), and the adjacency matrix learned on the experimental recordings reflects a meaningful ECoG arrangement (Figure 4 in the main paper).
**Q1**. We thank the reviewer for pointing out the need to define the acronyms early in the paper.
PSID is the abbreviation of the Preferential Subspace IDentification algorithm [54, 55], which identifies the behaviorally relevant and irrelevant neural subspaces by learning, in three stages, to predict the next step of the neural and behavioral signals. RNN PSID extends the algorithm with an RNN structure [55].
DCRNN refers to the Diffusion Convolutional Recurrent Neural Network [39]. The model combines a diffusion graph with a GRU to perform traffic forecasting tasks. These methods were introduced and discussed in the Related Work section, which follows the Introduction. In the revised paper, we will make sure to define them earlier to avoid confusion.
References from the main paper:
[39] Yaguang Li, Rose Yu, Cyrus Shahabi, and Yan Liu. "Diffusion convolutional recurrent neural network: Data-driven traffic forecasting". In: arXiv preprint arXiv:1707.01926 (2017).
[54] Omid G Sani, Hamidreza Abbaspourazad, Yan T Wong, Bijan Pesaran, and Maryam M Shanechi. "Modeling behaviorally relevant neural dynamics enabled by preferential subspace identification". In: Nature Neuroscience 24.1 (2021), pp. 140–149.
[55] Omid G Sani, Bijan Pesaran, and Maryam M Shanechi. "Where is all the nonlinearity: flexible nonlinear modeling of behaviorally relevant neural dynamics using recurrent neural networks". In: bioRxiv (2021).
**Q2**. For one-step prediction, the results reported in the main paper are with both TE and TR as GRUs. For multi-step prediction, TE and TR are Transformers. The result for multi-step forecasting with GRU is R2=0.658, as shown in Appendix Table A3. We additionally ran one-step forecasting using a Transformer and obtained R2=0.969. These results led us to choose TE/TR differently for one-step and multi-step forecasting.
**Q3**. We would like to clarify that the improvement of AMAG over non-GNN methods on one-step forecasting ranges from 0.06 to 0.09 in terms of R2 and is comparable to that of other GNN methods. The improvement for multi-step forecasting is dataset-dependent, ranging from 0.01 to 0.09 in terms of R2 (comparing the same set of methods as in one-step forecasting, Table 1).
In most cases, the improvement is comparable between multi-step and one-step forecasting, except on Monkey A dataset, in which improvement of AMAG over other non-graph methods is more limited for multi-step forecasting.
Such a situation is possible since better one-step forecasting does not guarantee better multi-step forecasting. One can think of multi-step forecasting as a greedy search: even if the optimal solution is obtained at each step, a globally optimal multi-step solution is not guaranteed, although in many situations the result comes close.
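The greedy view above corresponds to the standard autoregressive rollout, where a one-step forecaster is fed its own predictions and errors can compound; a minimal sketch (ours, not the authors' implementation):

```python
import numpy as np

def rollout(one_step_model, history, n_steps):
    """Iteratively apply a one-step forecaster to its own outputs.
    history: (t, n_channels) past signal; one_step_model maps the full
    window so far to the next step. Returns (n_steps, n_channels)."""
    window = list(history)
    preds = []
    for _ in range(n_steps):
        nxt = one_step_model(np.asarray(window))
        preds.append(nxt)
        window.append(nxt)          # feed the prediction back in
    return np.asarray(preds)
```

Even a model with near-perfect one-step accuracy can drift over many steps, since each small error changes the input to the next prediction.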
**Q4**. For all four datasets, in multi-step forecasting we predict the future 15 steps, corresponding to the period when the monkeys move the cursor from the center target to the surrounding target.
**Q5**. We thank the reviewer for noticing this typo. The label should be TERN (RoBERTa is the name that was used in an earlier work [31], while in a later work a similar model was renamed to TERN [46].) The label will be corrected in the revised version of the paper.
[31] Bryan Jimenez. "Neural Network Models For Neurophysiology Data". PhD thesis. Purdue University Graduate School, 2022.
[46] Ganga Meghanath, Bryan Jimenez, and Joseph G Makin. "Inferring Population Dynamics in Macaque Cortex". In: arXiv preprint arXiv:2304.06040 (2023).
---
Rebuttal Comment 1.1:
Comment: Thank you for providing clarifications. I will await the discussion with the other reviewers before deciding whether to modify or maintain my score.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer’s feedback and will include the clarifications that the reviewer pointed out in the revised version of the manuscript. | Rebuttal 1:
Rebuttal: We thank all reviewers for their insightful feedback.
We tried to address all the questions from each reviewer in the rebuttal sections below, with references to weaknesses (**W**) and questions (**Q**).
Pdf: /pdf/e8004cffab42285c7a63be8c205084e339ccb2c7.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes a graph neural network-based model to forecast neural activities, which advances DNN technology for neural understanding. It also proposes a method to leverage the causal structure of the signal in the model design, generalizing the neural reconstruction task.
Strengths: The paper uses a graph neural network to do future prediction tasks, which is more complex than a reconstruction task.
The method has been verified on neural signals from monkeys and shown to outperform other state-of-the-art methods, including LFADS and TERN.
Weaknesses: Some analysis of the importance of each module (self-connection, Add module, and Modulator module) needs to be discussed. For example, why are they all necessary?
Furthermore, a study of the trade-off between model complexity and other SOTA methods is worth investigating.
Also, what is the computation bottleneck of the method, and how well does it scale?
Is there any theoretical guarantee that the model can predict the neural activity if the trajectory satisfies some regularity?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: It would be nice if the authors provided some theoretical analysis of why a GNN-based model can outperform non-GNN-based models for neural forecasting.
Also, architecture complexity analysis is an interesting discussion.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: limitations have been addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**. We thank the reviewer for suggesting additional analysis of the roles of Self-connection, Add and Modulator modules. These three modules are motivated by typical components in modeling neural activity, i.e. current activity, external additive input and gain modulation and according to our experiments are necessary to achieve AMAG accuracy as shown in the ablation study in Table 3 (main paper; Monkey C multi-step forecasting task). In particular, when Add or Modulator modules are ablated, R2=0.611 and R2=0.616, respectively, compared to R2=0.658 for full AMAG.
We also ablated the Self-connection in AMAG (amAG); in this variant, R2 drops to 0.425, elucidating the importance of the Self-connection module. This analysis will be added to the final version of the paper.
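A minimal sketch of how the three ablated components (self-connection, additive input, gain modulation) could combine in one update step (our own simplification using tanh gating, not the authors' implementation):

```python
import numpy as np

def si_update(x, A_a, A_m):
    """One simplified spatial-interaction step on features x: (n, d).
    The self-connection keeps each channel's own state, A_a adds input
    from other channels, and A_m multiplicatively modulates the gain."""
    self_term = x                      # self-connection
    additive = A_a @ x                 # additive message passing
    gain = 1.0 + np.tanh(A_m @ x)      # multiplicative gain modulation
    return gain * (self_term + additive)
```

Ablating a component corresponds to zeroing its term: setting `A_m = 0` fixes the gain at 1 (no Modulator), setting `A_a = 0` removes additive messages, and dropping `self_term` removes the self-connection.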
**W2**. We thank the reviewer for suggesting that we estimate and compare AMAG's complexity. We estimated the number of parameters, training time, and maximum memory cost (Table R3 in the attached pdf). Graph-based methods use far fewer parameters than non-GNN methods, with AMAG consistently using a minimal or comparable (to DCRNN for one-step forecasting) number of parameters among the graph model baselines (DCRNN, GWNET, GraphS4mer). Fewer parameters come at the expense of computation time and memory cost. We will include these estimates and the discussion in W3 below in the revised version of the paper.
**W3**. As presented above in W2, AMAG generally requires fewer parameters and less training time (compared to graph-based methods). The major bottleneck of AMAG is the requirement for larger memory, thus, scaling AMAG could be limited by the availability of working memory.
**W4**. We appreciate the reviewer raising the important point of analyzing how the proposed AMAG model predicts neural activity. The analysis of complex, non-linear models such as GNNs/AMAG for time-series data is not yet well established, though some initial interpretations and explorations have been made [1, 2].
While a thorough analysis requires novel analytical tools and is outside the scope of our work, an analytical interpretation can be provided under some simplifications: we restrict AMAG to be linear, include only additive message passing, and assume that future signals depend on the two prior steps. We also constrain the L2 norm of all weight matrices to be bounded by 1 for the stability of the linear RNN. The upper bound on the prediction error at step $t+1$ can then be expressed as
$\| \hat{\boldsymbol{X}}\_{t+1} - \boldsymbol{X}\_{t+1} \|\_2 \leq 3\|A\|\_2 \| \boldsymbol{X}\_{t-1} \|\_2 + \| A \|\_2 \| \delta\_1 \|\_2 + \| \boldsymbol{X}\_{t-1} \|\_2 + \| \delta\_t \|\_2 + \| \delta\_{t+1} \|\_2$
Here, $A$ represents the adjacency matrix and $\delta_t$ denotes the variation in the signal between consecutive time steps. Thus, if channel interactions remain small (small $\| A \|_2$) and the signal exhibits smooth transitions (small $\| \delta_t \|_2$) at each step, then the linear model will likely produce smaller prediction errors.
We will include such simplifying scenarios in the Appendix of the revised version.
[1] Agarwal, Chirag, et al., 2022.
[2] Li, Yiqiao, Jianlong Zhou et al., (2022).
**W5** & **Q1**. We agree with the reviewer that such analysis could be instrumental in further interpreting GNNs and AMAG. While a rigorous analysis of such high-dimensional and nonlinear systems requires novel, advanced analysis tools and is out of the scope of the current study, we identify empirically that the GNNs' advantage lies primarily in the ability to capture the topology of the activity.
Indeed, ECoG recordings exhibit spatial-temporal interactions among recording channels which can be represented as a graph. GNNs, by design, explicitly model the electrode interactions, thus incorporating the inherent topology of the dataset as a prior. This prior knowledge could constrain the GNN to be closer to the optimum. Indeed, if we completely remove the graph structure in AMAG (amAG), the accuracy drops to R2=0.423 (Monkey C; multi-step forecasting).
In non-GNN-based methods, channel signals are simply concatenated without topology information. The topology needs to be learned, and thus these methods typically require more training samples. In the datasets that we consider, the number of samples ranges from 874 to 1605. For such amounts of data, non-GNN methods, e.g., NDT, can overfit to the training data. Adding regularization terms, e.g., weight decay, could help, but would also limit the model's capacity. We demonstrate this in Fig. R2 (B) of the attached pdf by visualizing how R2 is affected by the attention dimension and the weight decay. Adding weight decay to NDT training can improve performance (orange triangle vs. purple circle). However, as the attention dimension increases, the effect of regularization diminishes, and for dim>1024 weight decay does not contribute to improvement.
In contrast, GNN-based methods can be viewed as a type of ‘adaptive regularization’ which constrains the model learning trajectory. We will add this discussion to the revised version of the manuscript.
**Q2**. We measured the complexity of the model in terms of the number of parameters, training efficiency, and memory cost. As discussed in reply to W2, graph-based models typically require fewer parameters but longer training time compared to non-graph-based methods.
The major computational bottleneck for AMAG is the requirement for larger working memory. One way to reduce memory cost and computation time would be to compress the original graph into a smaller hidden graph in the embedding space, with each node in the embedding representing multiple nodes in the original graph; this is plausible considering that the ECoG array records shared source inputs. Pruning could also reduce the graph size and the memory cost; however, such methods require manually setting the pruning threshold and other hyperparameters.
A Sublinear-Time Spectral Clustering Oracle with Improved Preprocessing Time | Accept (poster) | Summary: This paper studies the problem of constructing a spectral clustering oracle with sublinear pre-processing and query time complexity. The paper introduces a new algorithm which improves on previous methods with respect to the running time at the expense of a slightly worse approximation guarantee. In contrast with previous methods, the new algorithm is more practical as demonstrated by experiments on synthetic data.
Strengths: On the theoretical side, this paper introduces a new sublinear-time clustering oracle with guarantees which improve on the previous state of the art in terms of running time. The new proof techniques are introduced in quite a natural way and the intuition is clearly explained. An important contribution of this paper is that the proposed algorithm is practical and admits an implementation.
Weaknesses: The theoretical improvements over the previous algorithms are quite small (in particular, the improvement over [30]), however I don't view this as a major weakness given that the new algorithm is more practical.
The statement of Theorem 1 is quite difficult to follow. For example, the parameter $\xi$ doesn't seem to be introduced or explained intuitively.
The experimental evaluation is quite limited - it does not include comparison with any other method, and does not report the running time of the algorithm. If the algorithm cannot be compared with that of [30] or [14], consider stating why this is the case in section 4.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What is the purpose of $\xi$ in Theorem 1? As far as I can see from the Theorem statement, it only appears in the running times, and always in the denominator - why not always set it to $1$?
Is it possible to experimentally compare the algorithm with [30] and [14]?
### Typos
* Line 133 - space missing after '.'
* Line 144 - space missing after reference [7]
* Line 145 - Noted -> Note
* Line 246 - dose -> does
* Line 262 - space missing after 'Theorem'
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the careful reading and the helpful comments. We fixed the typos in the updated manuscript. We have briefly summarized your questions and provide detailed answers below.
**Question 1: The statement of Theorem 1 is quite difficult to follow. What is the purpose of $\xi$ in Theorem 1? It only appears in the running times, and always in the denominator - why not always set it to $1$?**
(1) The primary intention behind the incorporation of $\xi$ into Theorem 1 was to more effectively illustrate the tradeoff between preprocessing/query time and the precision of the approximation for the dot product of spectral embeddings. This approximation forms a pivotal component of our clustering oracle. However, upon further consideration, we acknowledge that including $\xi$ in Theorem 1 is not essential (see below).
(2) There is a typo for the upper bound of $\xi$ in the statement of Theorem 1, that is, the correct range of $\xi$ is $(\frac{1}{n^5},\frac{\gamma k}{5}]$ (instead of $\xi\in (\frac{1}{n^5},1)$). This range can be seen from the proof of Theorem 1 on page 8.
(3) We concur that incorporating $\xi$ into Theorem 1 is not essential. Instead, it is sufficient to substitute $\xi$ with the upper bound $\frac{\gamma k}{5}$ and subsequently adjust the associated running times accordingly. In the next version of our work, we intend to implement this modification by directly replacing $\xi$ in Theorem 1 with the aforementioned upper bound.
**Question 2: Is it possible to experimentally compare with the algorithm in [30] and [14]? If it cannot be compared with that of [30] or [14], consider stating why this is the case in section 4.**
The algorithm in [14] is hard to implement. As highlighted in our paper, the algorithm in [14] initially approximates the $k$ cluster centers by sampling around $O(1/\varepsilon\cdot k^4\log k)$ vertices, and subsequently undertakes the enumeration of approximately $2^{O(1/\varepsilon\cdot k^4\log^2 k)}$ potential $k$-partitions (Algorithm 10 in [14]). This enumeration process is extremely time-intensive and becomes impractical even for modest values of $k$. As suggested, we will explicitly state this factor in Section 4 to provide a clear rationale for our approach.
Regarding the algorithm introduced in [30], it appears that its implementation is likely viable. However, our primary advancement over [30] is evident in the significantly reduced conductance gap we achieve. To thoroughly explore the comparative performance of our algorithm against [30], we aim to conduct new experiments. These experiments will focus on determining the specific graph ranges within which our algorithm outperforms [30]. In the upcoming version of our work, we are committed to implementing the algorithm from [30] and conducting the suggested experiments to provide a more comprehensive understanding of the algorithms' respective capabilities.
**Question 3: The experimental evaluation is quite limited - it does not include comparison with any other method, and does not report the running time of the algorithm.**
We are pleased to share that we have generated new experimental results that enrich our evaluation efforts. We kindly request your review of the attached PDF file within our rebuttal, where these results are detailed. Notably, these additions encompass experiments focusing on the algorithm's running time and its robustness, providing a more comprehensive assessment of its performance. Your consideration of these findings is greatly appreciated.
* [Running time experiments] Please see Table 1 in the PDF file. The experimental results show that the running time of a single query is between 0.5-0.7 seconds, and the pre-processing time of our oracle is between 15.9-18.2 seconds.
* [Robustness experiments] Please see Table 2 in the PDF file. We evaluate our robust algorithm on an SBM graph after deleting delNum edges in each cluster (chosen randomly), where delNum is a parameter. We found that as long as delNum is not too large, our oracle has a small misclassification error, e.g. for 50 edge deletions in each cluster, the error is only 0.8‰.
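To make this edge-deletion setup concrete, here is a minimal pure-Python sketch: sample an SBM-style graph, then delete a fixed number of randomly chosen intra-cluster edges per cluster. Block sizes, probabilities, and helper names are illustrative, not the settings used in the actual experiments.

```python
import random

def sbm_edges(sizes, p_in, p_out, seed=0):
    """Sample an SBM-style graph as a set of undirected edges (u < v)."""
    rng = random.Random(seed)
    # Block label for each vertex; block b owns a contiguous vertex range.
    block = [b for b, s in enumerate(sizes) for _ in range(s)]
    n = len(block)
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            p = p_in if block[u] == block[v] else p_out
            if rng.random() < p:
                edges.add((u, v))
    return edges, block

def delete_intra_edges(edges, block, del_num, seed=0):
    """Delete del_num randomly chosen intra-cluster edges in each cluster."""
    rng = random.Random(seed)
    out = set(edges)
    for b in sorted(set(block)):
        intra = sorted(e for e in out if block[e[0]] == block[e[1]] == b)
        for e in rng.sample(intra, min(del_num, len(intra))):
            out.remove(e)
    return out

edges, block = sbm_edges([50, 50], p_in=0.3, p_out=0.02, seed=1)
pruned = delete_intra_edges(edges, block, del_num=50, seed=1)
```

With two blocks and `del_num=50`, exactly 100 intra-cluster edges are removed (each block carries far more than 50 intra-cluster edges at these densities).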
---
Rebuttal Comment 1.1:
Comment: Many thanks for your detailed response. I am pleased to see that you will implement some of my suggested improvements. I will keep my current (positive) score. | Summary: This paper studies oracles for spectral graph clustering, i.e., local algorithms that answer membership queries for single nodes in a spectral clustering of a graph. There is a line of research on testing cluster structure in degree-bounded graphs, and recently, the learning version of the problem studied in this paper has become popular. Besides a result for robust clustering oracles by Peng, this work is closely related to a paper by Gluch et al. Compared to the work of Gluch et al., this submission improves the preprocessing time to $O(n^{1/2 + O(\epsilon / \phi^2)})$, which is better by a factor of approximately $2^{poly(k/\epsilon)}$, at the expense of requiring a conductance gap of approximately $\Omega(1/poly(k))$, which is worse by approximately a $poly(k)/\log(k)$ factor, and a misclassification error of $O(poly(k) \epsilon)$, which is worse by approximately a $k / \log(k)$ factor. The misclassification error is the fraction of vertices that are assigned to the wrong cluster (compared to the ground-truth clustering). The query time of the two algorithms is roughly the same. In a nutshell, the result in this submission trades an additional polynomial dependency in the conductance gap and misclassification error against the removal of an exponential dependency in the preprocessing time. The authors experimentally confirm the misclassification error and query complexity proven in their theorems.
This work builds on the dot product oracle introduced by Gluch et al. The algorithm in the latter work estimates the means of the clusters (in an embedding space) and uses the dot product oracle to estimate the closest cluster center (mean) for a query node. The exponential preprocessing time arises from the former part. In the present submission, the authors propose an algorithm that doesn't estimate the cluster means, but instead compares the dot products between node embeddings directly. Intuitively, if two nodes belong to the same cluster, they both have a large dot product with their cluster mean, and so they should also have a large dot product with each other. This modification also results in the aforementioned trade-off, i.e., the increased misclassification error and the stronger requirement on cluster separation.
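This intra- vs. inter-cluster dot-product intuition is easy to check numerically on a toy two-block graph. The sketch below (illustrative sizes and edge probabilities, not taken from the paper) embeds nodes via the bottom eigenvectors of the normalized Laplacian and compares dot products of embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
n, half, k = 40, 20, 2

# Toy two-block graph: dense within blocks, sparse across them.
P = np.full((n, n), 0.05)
P[:half, :half] = 0.6
P[half:, half:] = 0.6
A = np.triu((rng.random((n, n)) < P).astype(float), 1)
A = A + A.T  # undirected, no self-loops

d = A.sum(axis=1)
L = np.eye(n) - A / np.sqrt(np.outer(d, d))  # normalized Laplacian
_, V = np.linalg.eigh(L)                     # eigenvalues in ascending order
F = V[:, :k]                                 # rows = node spectral embeddings

dots = F @ F.T
same = [dots[i, j] for i in range(half) for j in range(i + 1, half)]
diff = [dots[i, j] for i in range(half) for j in range(half, n)]
# On average, intra-cluster dot products exceed inter-cluster ones.
```

This is only a sanity check of the intuition; the actual algorithm estimates these dot products sublinearly via random walks rather than computing the full eigendecomposition.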
The question answered by this paper arises naturally from the work of Gluch et al.: Do we need to compute the cluster means explicitly? While the answer may not be surprising (no, but there is a trade-off), it requires some work to actually prove this, as the formal argument is not simple and obvious. The intention of the experiments is not clear to me, as there is no comparison with other algorithms, nor insights into how the theoretical algorithm needs to be modified and tuned for applications.
Rebuttal: Rating changed from weak accept to accept due to authors' rebuttal responses.
Strengths: * The question explored in this paper is natural.
* The paper confirms an intuitive concept of spectral embeddings for clustering.
Weaknesses: * The result in this paper is not very surprising or better than one would expect; it seems more like a reasonable trade-off.
* The experiments seem currently not very useful.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Note: It seems to me that a comparison of the conductance gap of your result and [14] is missing.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for taking the time to review our paper. We address your concerns in the following.
**Summary: The intention of the experiments is not clear to me, as there is no comparison with other algorithms or insights on how the theoretical algorithm needs to be modified and tuned for applications.**
**S1: The intention of our experiments.**
(1) We aim to elevate our oracle beyond its theoretical framework. The outcomes of our experimentation, presented in Table 1 within our manuscript, distinctly demonstrate that our oracle's misclassification error remains notably minimal in instances where the input graph showcases an underlying latent cluster structure. This empirical validation reinforces the practical utility and efficacy of our oracle beyond theoretical conjecture.
(2) We note that there is a tradeoff between computational cost and clustering quality. The main reason that we are adding the experiment on the query complexity, presented in Table 2 within our manuscript, is to show that for a small target misclassification error, our algorithms only require a **sublinear amount** of data, which is often critical when analyzing large social networks, since one typically does not have access to the entire network.
**S2: Why there is no comparison with other algorithms?**
We briefly comment on the implementation of the two most relevant sublinear-time clustering oracles given in [14] and [30].
(1) Implementing the algorithm from [14] poses challenges. As highlighted in our paper, the algorithm in [14] initially approximates the $k$ cluster centers by sampling around $O(1/\varepsilon\cdot k^4\log k)$ vertices, and subsequently undertakes the enumeration of approximately $2^{O(1/\varepsilon\cdot k^4\log^2 k)}$ potential $k$-partitions (Algorithm 10 in [14]). This enumeration process is extremely time-intensive and becomes impractical even for modest values of $k$. We will explicitly state this factor in Section 4 to provide a clear rationale for our approach.
(2) Regarding the algorithm introduced in [30], it appears that its implementation is likely viable. However, our primary advancement over [30] is evident in the significantly reduced conductance gap we achieve. To thoroughly explore the comparative performance of our algorithm against [30], we aim to conduct new experiments. These experiments will focus on determining the specific graph ranges within which our algorithm outperforms [30]. In the upcoming version of our work, we are committed to implementing the algorithm from [30] and conducting the suggested experiments to provide a more comprehensive understanding of the algorithms' respective capabilities.
**S3: How the theoretical algorithm needs to be modified and tuned for applications?**
In the upcoming version, we'll provide the following detailed steps to translate the theoretical algorithm into a practical one.
To adapt the dot product oracle parameters, such as $t$ (random walk length), $s$ (sampling set size), and $R$ (number of random walks), we exploit the theoretical gap between intra-cluster and inter-cluster dot products in clusterable graphs. By constructing the oracle with various parameter settings and calculating intra-cluster and inter-cluster dot products, we generate density graphs. The setting with the most prominent gap in the density graph is selected (see Figure 1 in the attached PDF file within our rebuttal). Our misclassification error experiments adopt $t=25$, $s=20$, and $R=200$.
Determining the appropriate threshold $\theta$ (lines 2, 8, 9 of Algorithm 1, and line 3 of Algorithm 2) is the next step. By observing the density graph linked to the chosen dot product oracle parameters, we identify the fitting $\theta$. In our misclassification error experiments, $\theta$ is set at $0.0006$ and $0.00053$ for $p=0.02$ and $0.0225$, respectively. For all other $p$ values in the range $[0.025, 0.06]$, $\theta$ is fixed at $0.0005$.
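The threshold choice described above amounts to placing $\theta$ inside the gap between the inter- and intra-cluster dot-product populations read off the density graph. A minimal sketch, with a hypothetical helper `pick_threshold` and made-up sample values:

```python
def pick_threshold(intra, inter):
    """Place the threshold in the gap between inter- and intra-cluster
    dot products (as read off the density graph for a fixed t, s, R)."""
    lo, hi = max(inter), min(intra)
    if hi <= lo:
        return None  # populations overlap: try another oracle setting
    return (lo + hi) / 2.0

# Made-up sampled dot products for illustration.
theta = pick_threshold(intra=[0.0009, 0.0011, 0.0010],
                       inter=[0.0001, 0.0002])
```

Returning `None` on overlap mirrors the tuning loop above: if no clean gap appears, a different $(t, s, R)$ setting for the dot product oracle is tried first.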
Furthermore, for a WhichCluster($G,x$) query, the theoretical algorithm randomly selects an index if vertex $x$ belongs to multiple components of similarity graph $H$ (Algorithm 2). In practice, we return the index of the first component to which $x$ belongs. This adjustment enhances the algorithm's practical usability.
**Weakness: The experiments seem currently not very useful.**
Thanks for pointing out this. We are pleased to share that we have generated new experimental results that enrich our evaluation efforts. We kindly request your review of the attached PDF file within our rebuttal, where these results are detailed. Notably, these additions encompass experiments focusing on the algorithm's running time and its robustness, providing a more comprehensive assessment of its performance. Your consideration of these findings is greatly appreciated.
* [Running time experiments] Please see Table 1 in the PDF file. The experimental results show that the running time of a single query is between 0.5-0.7 seconds, and the pre-processing time of our oracle is between 15.9-18.2 seconds.
* [Robustness experiments] Please see Table 2 in the PDF file. We evaluate our robust algorithm on an SBM graph after deleting delNum edges in each cluster (chosen randomly), where delNum is a parameter. We found that as long as delNum is not too large, our oracle has a small misclassification error, e.g. for 50 edge deletions in each cluster, the error is only 0.8‰.
**Question: It seems to me that a comparison of the conductance gap of your result and [14] is missing.**
Thanks for pointing out the missing comparison. We will fix this in our manuscript. We also provide the comparison of the conductance gap between the two results in the following.
In [14], the conductance gap is $\varepsilon\ll O(\varphi^3/\log(k))$, and our conductance gap is $\varepsilon \ll O(\varphi^2/{\rm poly}(k))$.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response!
With the improvements outlined by the authors in all rebuttal comments and their commitment to incorporate them in the final version, I'm convinced that the paper becomes stronger, especially in the experimental part, and gives more significant insight into how the algorithm can be applied in practice. I bump my rating from weak accept to accept. | Summary: This paper proposes a spectral clustering oracle with sublinear pre-processing time and query time. The query is in the form of $(G, x)$ where $G$ is a graph with underlying clusters and $x \in V$ is a vertex. The goal is to (1) construct the oracle efficiently, (2) report which cluster vertex $x$ belongs to efficiently.
Compared to the previous work, the main contribution is an improvement in the pre-processing time, which reduces the dependence on $O(k/\varepsilon)$ from exponential to polynomial, but blows up the misclassification error from $\log k$ to $\text{poly}(k)$, while also slightly relaxing an assumption on the gap between inner and outer conductance. The query time is asymptotically the same.
The main technique is to replace the exhaustive search over sampled vertices (used to decide whether a vertex $x$ belongs to a cluster with center $\mu$) with an estimate of the inner product of their spectral embeddings. It is proved that the magnitude of the inner product roughly indicates whether two vertices belong to the same cluster.
Strengths: - The paper is clearly written and well organized.
- The result is neat, and the proposed algorithms are easier to implement compared to previous ones.
- I think for a theory paper, having experiments is always a plus. However, my mentality is: either do it well or don't do it at all. The experiments can be improved, or at least clarified better.
Weaknesses: - The major concern is that the contribution of the main result is quite limited. Yes, $O(k/\varepsilon)$ is an important factor, but it is still in the $\tilde{\Omega}(\sqrt{n})$ regime, not to mention the compromises on other parameters. I actually like the robustness result better.
- On the experiments, if the authors want to keep and improve the section, I would suggest:
- (1) Clarify the evaluation. I believe the theoretical result states the error in terms of the number of query instances? The current report does not seem to match that; is it the fraction?
- (2) The issue of query complexity is not mentioned before; it comes out of the blue. Explain why you do it.
- (3) Add the robustness experiments.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: For the oracle results, there is another measure of interest: the space. It would be good to report this and compare with previous results.
Just some curiosity on the lower bound. Is there any result stating something like: to obtain a small error, at least some space / time is needed?
Another suggestion is to give some conceptual description of good and bad vertices before the definition, maybe in the main technique part.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: See above.
My reason for giving the current assessment is mainly the limited technical novelty.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for taking the time to review our paper. We are happy to know that you like our robustness result. We address your concerns in the following.
**The major concern is that the contribution of the main result is quite limited. Yes, polynomial on k/eps is an important factor, but it still in the Omega(\sqrt{n}) regime, not to mention compromise on others.**
Since the running time of the spectral clustering oracle is in the $f(k)\cdot\Omega(\sqrt{n})$ regime, improving the $f(k)$ factor from $\exp(k/\varepsilon)$ to ${\rm poly}(k/\varepsilon)$ may seem like a modest improvement. We wish to highlight that achieving this improvement was not straightforward, and that our algorithm is moreover more practical and conceptually simpler. Our improvement necessitated the intricate application of spectral embeddings and the acquisition of certain insights, which appear deceptively simple *in hindsight*. Remarkably, during a presentation in the Bangalore Theory Seminars, one of the authors of [14], M. Kapralov, raised an open question (refer to the segment commencing at 54:44 of the YouTube video): "if one can get polynomial dependency on $k$ or avoid enumerating over the candidate cluster centers..." and "get sublinear time for all $k=o(n)$".
Our result made progress towards solving this question, and our algorithm does not enumerate over the candidate cluster centers and has sublinear running time for a much broader range of $k$ (with a slightly worse conductance gap than [14]).
**Clarify the evaluation. I believe the theoretical result on the error is the number of query instances? Then the current report does not look like so, is it the fraction?**
Thanks, we will clarify this in the upcoming version. In the context of our study, the misclassification error pertains to the **fraction** of inaccurately categorized vertices within each cluster. In Theorem 1, we employ the assertion $|U_{\pi(i)}\triangle C_i|\le O({\rm poly}(k\cdot\varepsilon))|C_i|$ as a metric for quantifying this error. This can also be equivalently expressed as the fraction of misclassified vertices being $O({\rm poly}(k\cdot \varepsilon))$ within each individual cluster $C_i$. In our experiments, we directly report this error fraction, as exemplified in Table 1 of our submitted manuscript.
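For concreteness, this per-cluster error is just a normalized symmetric difference; a minimal sketch, assuming the permutation $\pi$ matching recovered clusters to ground-truth clusters is already fixed:

```python
def misclassification_fraction(U, C):
    """Fraction |U Δ C| / |C| for a recovered cluster U and its
    ground-truth cluster C (both are sets of vertex ids)."""
    return len(set(U) ^ set(C)) / len(C)

# Vertex 4 is missed and vertex 9 is wrongly included: 2 / 4 = 0.5
err = misclassification_fraction({1, 2, 3, 9}, {1, 2, 3, 4})
```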
**The issue of query complexity is not mentioned before. It is out of blue. Explain why you do it.**
We note that there is a tradeoff between computational cost and clustering quality. The main reason that we are adding the experiment on the query complexity is to show that for a small target misclassification error, our algorithms only require a **sublinear amount** of data, which is often critical when analyzing large social networks, since one typically does not have access to the entire network.
**Add the robustness experiments.**
Please see the attached PDF file (Table 2) within our rebuttal for our additional experiments on the robustness of our spectral clustering oracle.
For example, we evaluate our robust algorithm on an SBM graph after deleting delNum edges in each cluster (chosen randomly), where delNum is a parameter. We found that as long as delNum is not too large, our oracle has a small misclassification error, e.g. for 50 edge deletions in each cluster, the error is only 0.8‰.
**For the oracle results, there is another measure-of interest on the space. It would be good to report and compare with previous ones on this.**
Thanks for the suggestion. Upon further consideration, we find that our data structure uses much smaller space than the one in [14] for some interesting regime of parameters. We will add the following clarification in the next version of our manuscript.
The space complexity of our data structure is $O\left({\rm poly}(k)\cdot n^{1/2+O(\varepsilon/\varphi^2)}\cdot {\rm poly}(\log n)\right)$, while the oracle in [14] needs $O\left({\rm poly}(k/\varepsilon)\cdot n^{1/2+O(\varepsilon/\varphi^2)}\cdot {\rm poly}(\log n)\right)$. That is, the first term in our space complexity does **not** depend on $\varepsilon$ in comparison to the one in [14]. The primary factor behind this enhancement lies in our ability to approximate the dot product of spectral embeddings with only an additive error of $1/{\rm poly}(k)$, whereas [14] necessitates achieving an additive error of ${\rm poly}(\varepsilon/\varphi)$.
It's worth highlighting that our space complexity significantly outperforms that of [14], particularly in cases where $k$ is fixed and $\varepsilon$ takes on exceptionally small values, such as $\varepsilon=1/n^{c}$, for sufficiently small constant $c>0$.
**Is there any result stating something like: to obtain a small error, at least some space/time is needed?**
Thanks for the question. We will add the following clarification in the next version of our manuscript.
We note that there is no clustering oracle that allows both $o(\sqrt{n})$ preprocessing time and $o(\sqrt{n})$ query time. The reason is as follows: in [GR97], the authors show that one needs $\Omega(\sqrt{n})$ queries to distinguish an expander with $n$ vertices from a graph that is a union of two disjoint expanders each with $n/2$ vertices. Note that if there exists an oracle with both $o(\sqrt{n})$ preprocessing time and $o(\sqrt{n})$ query time, then one can use this oracle to distinguish the above two graphs, by checking if the corresponding similarity graphs have one or two connected components. This will be a contradiction to the $\Omega(\sqrt{n})$ lower bound [GR97].
Reference: [GR97] O. Goldreich, D. Ron. Property testing in bounded degree graphs. STOC 1997.
**Another suggestion is to give some conceptual description of good and bad vertices before the definition, maybe in the main technique part.**
Thanks, we will give a brief informal description of good and bad vertices in Section 3.1 (our techniques) as you suggested.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the detailed response. I am happy to see that the space complexity also roughly shaves off an $\frac{1}{\varepsilon}$ factor. Together with the lower bound, the results make this manuscript a more complete work. As for the experiments, please just make sure the setup is clearly explained.
Since my major concern is on the technical depth and the authors have shown this improvement answers to a standing open problem. I am raising the score to 6. Thank you! | null | null | Rebuttal 1:
Rebuttal: In response to the review comments, we have incorporated two additional experimental outcomes and included a figure outlining the parameter tuning process for the theoretical algorithm. We invite you to refer to the attached PDF file for both the experimental results and the figure explanation. Your review of the attached PDF file is greatly appreciated. Thank you.
Pdf: /pdf/c0b19546ebdc667f0426afde56ce80e880236aeb.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Structure from Duplicates: Neural Inverse Graphics from a Pile of Objects | Accept (poster) | Summary: The method proposed in Structure from Duplicates leverages multiple appearances of an identical object in a single image to reconstruct the geometry and material properties of this object. Each instance is assigned a virtual camera, such that the shared object representation is aligned in the same space. Therefore, multi-view reconstruction can be applied, reconstructing each instance as a separate view.
First, each object is identified using a segmentation method and cropped from the image. Initial camera poses are estimated by running the Structure-from-Motion (SfM) method COLMAP on the cropped images of the segmented objects. To avoid the issues that extreme viewpoint differences cause for SfM methods, images of instances are rotated and optimal pairs are identified.
From the estimated viewpoints and image pairs, the surface geometry (following NeuS) and the object's decomposed BRDF and environmental lighting (following InvRender) are optimized using known inverse rendering approaches.
Results on synthetic data show quality comparable to multi-view reconstruction approaches while using only duplicates of the same object in a single image, and better performance in the multi-view case by leveraging virtual views from duplicates. Additional results on real data are presented that validate the findings on synthetic data.
Strengths: The authors present an interesting solution to a common real-world setting. The introduction and related work motivate this setting of multiple instances well; it is an interesting setting in itself that has not been explored by prior work in a similar way. The method explanation is thorough on all major details. Building on well-known and established decomposed neural rendering architectures and ideas (NeuS, simplified BRDF) is acceptable, because this representation is not the main focus of the paper and, to my understanding, the method does not suffer for it.
Presented results justify most claims made in the beginning and ablation studies validate design choices. Especially showing that explicitly modelling lighting, geometry and material properties has an impact on the reconstruction quality over multi-view reconstruction on a single object is an important insight of the presented work.
One could argue that this paper lacks novelty and just applies known concepts, such as instances from repeating object classes in the same image, and builds a prior from that. In my opinion, forgoing a pre-trained prior is a strong advantage of the paper.
Weaknesses: A major weakness of the presented paper is the variety in the results. The synthetic and real dataset are both on the smaller side and pretty homogeneous, as elaborated in the following:
- While the motivation is convincing and Fig. 2 shows a good variety of examples for the presented setting, the results in the main paper lack some of the variety and complexity seen in the presented real-world examples, like the screws and chairs. The main paper presents only simple, closed object geometries, such as bottles, cans, apples, or the toy plane in the synthetic data, with a lack of thin structures and complex textures. On the other hand, the alarm clock and coffee maker in the supplementary of this work show that the reconstruction can lead to overly smooth geometry on complex structures. This is expected given the limited number of views, but not properly addressed in the main paper.
- Not purely a weakness, but as a suggestion to strengthen the claims, I would ask the authors to add results on more complex real geometries and scenes. To my best understanding this can be a complex task to capture and reconstruct, but it would have a positive impact on the limited scope provided by cans, bottles, and packaging.
Ablation studies on the number of objects (from 2 to 10) on synthetic data show that a higher number of objects significantly improves almost all metrics of the reconstructed object. I would argue that an even higher number could improve results further, and with respect to the presented motivation, such experiments are not unlikely in the real world. Therefore, an upper bound for improvements could further strengthen the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Partially addressed in the weakness sections:
- Were ablation studies performed on all objects (e.g. 2-10 instances per object category), please clarify?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I think the authors adequately addressed all major limitations, such as a required instance segmentation and the reliance on COLMAP poses. Additionally they addressed potential solutions. I would suggest to add a short paragraph on the scope of the paper with respect to the experiments, that I suggested in the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Clarification on ablation studies**: The analysis of the number of instances was performed on a randomly selected scene. To validate that our analysis holds for other scenes, we conduct the same experiments on all objects. Due to resource constraints, we have so far only managed to finish training on two more *randomly selected* objects; the rest is still running in the background. As shown in the tables below (Table 3 and Table 4), the performance of our approach scales with the number of instances. We will include the numbers for all objects in the final version.
**Overly-smoothed geometry**: We conjecture this limitation stems from the representational power of NeuS [1], which forms the basis of our geometry backbone. We note, however, that our pipeline is backbone-agnostic: we can always replace existing components with the latest and greatest models. We originally chose NeuS because of its simplicity. We are currently exploring more advanced representations (*e.g.*, [2]) and will include our investigations in the final version.
**Expanding the limitation section**: Thanks for the great suggestion. We will include an additional paragraph discussing the scope of the paper in the final version.
| Cleaner | Rendering | | | Albedo | | | Roughness | Relighting | | | Env Light | Geometry | | | |
|---------|-----------|--------|---------|---------|--------|---------|-----------|------------|--------|----------|-----------|----------|-------------|----------|---------|
| Num | PSNR ↑ | SSIM ↑ | LPIPS ↓ | PSNR ↑ | SSIM ↑ | LPIPS ↓ | MSE ↓ | PSNR ↑ | SSIM ↑ | LPIPS ↓ | MSE ↓ | CD ↓ | Precision ↑ | Recall ↑ | F1 ↑ |
| 2 | 14.390 | 0.457 | 0.439 | 12.358 | 0.412 | 0.504 | 0.077 | 14.090 | 0.423 | 0.457 | 0.045 | 0.050 | 0.808 | 0.576 | 0.673 |
| 4 | 15.957 | 0.538 | 0.337 | 13.244 | 0.487 | 0.401 | 0.081 | 15.318 | 0.510 | 0.357 | 0.052 | 0.025 | 0.780 | 0.828 | 0.803 |
| 6 | 17.260 | 0.630 | 0.290 | 14.052 | 0.551 | 0.354 | 0.100 | 16.542 | 0.608 | 0.297 | 0.048 | 0.029 | 0.744 | 0.762 | 0.753 |
| 8 | 19.381 | 0.689 | 0.242 | 15.370 | 0.607 | 0.304 | 0.159 | 18.083 | 0.661 | 0.256 | 0.064 | 0.018 | 0.872 | 0.967 | 0.917 |
Table 3: Experimental results for different numbers of duplicated objects (Cleaner).
| Fire | Rendering | | | Albedo | | | Roughness | Relighting | | | Env Light | Geometry | | | |
|------|-----------|--------|---------|---------|--------|---------|-----------|------------|--------|----------|-----------|----------|-------------|----------|---------|
| Num | PSNR ↑ | SSIM ↑ | LPIPS ↓ | PSNR ↑ | SSIM ↑ | LPIPS ↓ | MSE ↓ | PSNR ↑ | SSIM ↑ | LPIPS ↓ | MSE ↓ | CD ↓ | Precision ↑ | Recall ↑ | F1 ↑ |
| 2 | 16.902 | 0.430 | 0.345 | 13.788 | 0.380 | 0.410 | 0.306 | 16.082 | 0.402 | 0.340 | 0.092 | 0.038 | 0.692 | 0.600 | 0.643 |
| 4 | 18.157 | 0.531 | 0.271 | 14.799 | 0.461 | 0.325 | 0.311 | 17.224 | 0.500 | 0.272 | 0.080 | 0.029 | 0.713 | 0.781 | 0.745 |
| 6 | 19.086 | 0.560 | 0.252 | 15.674 | 0.502 | 0.287 | 0.233 | 18.088 | 0.531 | 0.242 | 0.057 | 0.025 | 0.774 | 0.825 | 0.799 |
| 8 | 21.029 | 0.622 | 0.248 | 16.632 | 0.531 | 0.301 | 0.305 | 19.602 | 0.582 | 0.238 | 0.072 | 0.028 | 0.847 | 0.747 | 0.794 |
Table 4: Experimental results for different numbers of duplicated objects (Fire extinguisher).
---
Rebuttal Comment 1.1:
Comment: **Additional results on ablation studies** I still wonder if more than 10 objects (results in the main paper) would improve the reconstruction results significantly and if there is a number of objects that can count as an upper bound for reconstruction improvements. Do you have any insights on that?
**Representation agnostic method** Clarifying, that the presented approach is agnostic of the 3D representation and can be improved with more advanced models makes sense. Looking forward to the Neuralangelo [2] results!
---
Reply to Comment 1.1.1:
Comment: To understand how the performance of our model scales with the number of instances, we increase the number of objects to 15 and 20. Due to time constraints, we only test on the new scene of boxes. The preliminary results are shown below.
We will include a more comprehensive investigation (e.g., more than 20 objects, more categories) in the final version. Our current hypothesis is that there does exist a "sweet spot" number, due to the accumulated 6 DoF pose error brought on by heavier occlusion, the limited capacity of the visibility field, and the limited image resolution.
| | Albedo | Roughness | Env Light | Relight | Env Light | Normal error |
|-----|---------|-----------|-----------|--------------|-----------|---------------|
| Num | PSNR ↑ | MSE ↓ | MSE ↓ | PSNR ↑ | MSE ↓ | °↓ |
| 5 | 20.053 | 0.086 | 0.086 | 19.229 | 0.086 | 3.329 |
| 10 | 19.517 | 0.095 | 0.095 | 19.021 | 0.095 | 2.578 |
| 15 | 20.522 | 0.093 | 0.093 | 18.769 | 0.093 | 2.433 |
| 20 | 22.530 | 0.077 | 0.077 | 19.560 | 0.077 | 2.279 | | Summary: This paper presents a pipeline for recovering object shape and surface materials from a single image of multiple identical duplicate instances. The pipeline first extracts instance masks, registers camera poses using COLMAP (with in-plane rotation augmentation), and recovers shape, materials, and environment lighting through inverse rendering.
Strengths: ### S1 - A sound inverse rendering pipeline
- The proposed pipeline is carefully designed, starting from instance segmentation from Mask2former, to a camera pose estimation step using COLMAP with in-plane rotation augmentation, and finally to inverse rendering of shape, BRDF material and SG env map similar to [65].
- Given that the instances are mostly identical, this pipeline in general works well.
### S2 - Good writing
- The paper is well written, with clear descriptions of the problem setup and most technical details.
Weaknesses: ### W1 - Quite a trivial task assuming *identical* instances
- Fig. 1: "the world is full of identical objects" - it might be difficult to argue that different instances that appear in one scene are *identical*. This is in contrast to the claim in [66], which says "no two roses in the natural world are identical".
- This paper focuses more on manufactured and synthetic objects, which appear more similar thanks to the consistent manufacturing process; but in reality, different objects often vary in both shape and appearance due to wear or natural variation, for instance, multiple apples in a basket.
- This is the fundamental assumption for COLMAP to work on these instances. It can easily fail even if there is a slight variation in geometry and appearance, which raises significant concern about the method's robustness to instance variations.
- Once the multiple instances are registered, the rest of the pipeline seems to be a standard inverse rendering task, and is heavily inspired by existing work, which I do not find sufficiently interesting.
### W2 - A special case of the rose paper [66]
- Line 111: "independently and concurrently, Zhang et al. [66]" - NeurIPS has a two-month rule for claiming concurrent work, including arXiv papers. [66] appeared on arXiv in Dec 2022, which is almost half a year from the NeurIPS submission deadline. It might be difficult to claim concurrent work.
- Essentially, duplicates are an extreme case of the definition of similarity in [66]. The instance variation is ultimately what makes [66] interesting, as it breaks the assumption for a standard multi-view stereo approach, hence the generative modeling.
- If the proposed method can handle instance variations well, such as different rose instances, without assuming a prior pose distribution, it will make a much more compelling argument.
### Other comments
- I suppose the material network is inherited from [65], as the "latent code" $\mathbf{\rho}$ and its regularizer are the same as in [65]. But this is not mentioned in the paper.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: It might be hard to change my opinion on this paper, as most of the technical aspects are clear to me. Rather, I find the task somewhat trivial, especially given that [66] has demonstrated results in a more generic case. Yet, I'm still open to objections.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 1 poor
Limitations: The paper attributes the challenge of modeling instance variation to the shape/material model, but the fundamental challenge is rather the pose estimation with COLMAP to begin with. COLMAP generally fails terribly with deformable objects, which is also the reason that recent deformable NeRFs typically mask out deformable objects when running COLMAP to register cameras.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **The task is NOT trivial**: **We respectfully disagree with the statement that "the task is trivial."** Making inverse graphics/3D reconstruction more robust under more extreme scenarios is a challenging and longstanding problem in computer vision. In this work, we take a step forward by exploring the potential of performing structure from motion and recovering object intrinsics and environmental extrinsics *from a single image without pre-trained priors*. Specifically, we focus on scenarios where there are multiple (near-)identical objects within the scene. By carefully formulating a duality between multiple copies of an object in a single image and multiple views of a single object, we are able to resolve the ambiguities in 3D and effectively recover the properties of interest. **While the idea may seem straightforward in hindsight, designing the low-level details and making it work in practice requires careful insight (as acknowledged by other reviewers).** We urge the reviewer to acknowledge this.
To our knowledge, ours is the first attempt to conduct structure from motion from a single image. We agree with the reviewers that the setup may not be as common as others in practice. However, it is important to emphasize that the problem, in its essence, is profoundly ill-posed and cannot be resolved without relying on certain priors. While some methods, such as [3], may ease the "identical" constraint, they also introduce other strong assumptions (e.g., knowledge of the camera distribution, and vast amounts of objects for the generative approach), which we discuss in greater detail below. In contrast, our method takes a distinct approach, addressing the issue from a purely multi-view geometric perspective. Moreover, our preliminary findings, detailed below, suggest that our methodology can also accommodate minor variations.
**[3] is NOT a superset of our approach**:
We agree that the setups of the two papers are indeed similar. However, rather than one paper being a superset of the other, the two approaches are **orthogonal**.
- **Assumptions**: First and foremost, while [3] is able to model the variations among the instances, they impose other strong assumptions such as knowing the camera distribution in advance. The strong camera assumption allows them to sidestep the pose estimation step (*i.e.*, SfM) and focus on modeling the variation. In contrast, we assume no knowledge about the poses and attempt to solve for the full inverse rendering pipeline from the beginning. We thus resort to the (near-)identical instances to recover the exact 6 DoF poses.
- **Approaches**: Secondly, [3] tackle the task through ***generative modeling***. Since they need to train a generative model *per scene*, their approach is very data-hungry. In contrast, our approach mainly exploits ***multi-view geometry*** to recover the underlying intrinsic and extrinsic properties. By explicitly baking the constraints into the modeling procedure, our approach becomes much more data-efficient. To validate our conjecture, we train [3] on three randomly selected scenes from our dataset, each of which has 10 identical instances. As shown in the PDF, the generative model failed to recover any of them. For comparison, we also test our approach on the `crane` image that [3] provided (the only publicly available data), where each instance is slightly different. By augmenting our geometry backbone with an instance-specific deformation field, we are able to reconstruct reasonable poses and recover sensible shape and material.
- **Extrinsics**: Thirdly, [3] assume a simple Phong shading model and a dominant directional light, whereas we parameterize our materials with PBR materials and the lighting with an environment map, allowing us to model complex real-world scenarios more effectively. Finally, it is unclear how to extend [3] to a multi-view setup. In contrast, our method is naturally compatible with multi-view observations.
**Concurrent to [3]**: We are aware of the NeurIPS policy and we do acknowledge that [3] appeared on arXiv more than two months before the submission deadline. The two works were developed independently, and as evident from our paper, our design choices were not influenced by [3]. The authors did see the paper and were very excited that other researchers share similar interests and have been working on similar topics. In our submission, rather than overlooking [3], we carefully discussed the differences between the two studies in the related-work section. We tried our best to put [3] in the light it deserves, even though, at the time of submission, the paper had not yet been accepted and the code had not been released. Moreover, now that the code for [3] is available, we are eager to test our method on their dataset and vice versa, aiming for a comprehensive analysis and discussion in the final paper (preliminary results for both can be found in the PDF). Finally, it is crucial to highlight that, while the contexts of the two studies are similar, our approaches to addressing the problem are significantly different, as elaborated above.
### Reference
[1] Wang, Y., Skorokhodov, I., & Wonka, P. (2023). PET-NeuS: Positional Encoding Tri-Planes for Neural Surfaces. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 12598-12607).
[2] Li, Z., Müller, T., Evans, A., Taylor, R. H., Unberath, M., Liu, M. Y., & Lin, C. H. (2023). Neuralangelo: High-Fidelity Neural Surface Reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 8456-8465).
[3] Zhang, Y., Wu, S., Snavely, N., & Wu, J. (2023). Seeing a Rose in Five Thousand Ways. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 962-971).
---
Rebuttal Comment 1.1:
Title: Thanks for the detailed responses
Comment: I would like to thank the authors for preparing the detailed responses. I have posted my response in the general thread above as I was called out there.
One additional comment: it would be very helpful if the authors could summarize, once the pose is registered using COLMAP, what are the key differences / technical contributions between the proposed inverse rendering pipeline and [65]?
---
Reply to Comment 1.1.1:
Title: Difference to [65]
Comment: **Contributions from the inverse graphics perspective:**
We have expanded the scope of the inverse graphics family by introducing a novel "single-view duplicate objects" setting. While [65] follows the traditional multi-view single-instance (M-S) framework, our focus lies in a single-view multi-instance (S-M) scenario. It's important to note that these two problem formulations are very distinct.
Specifically, our proposed setting brings several advantages compared to the existing setup. In the conventional configuration, the relative poses between the objects and the lighting sources remain fixed. This poses a significant challenge when attempting to distinguish an object's albedo from its illumination: for instance, it is difficult to tell whether a yellow directional light on the left is illuminating a white ball, or whether the ball itself is half-yellow and half-white. In contrast, in our setup, the relative poses between the object and the light sources differ for each "virtual" view (*e.g.*, the env light is rotating in SO(3) space). This allows us to more effectively recover geometry, materials, shadow, and illumination while utilizing the same number of instance observations as [65]. Experimental results supporting this claim can be found in Table 1.
[65] Modeling indirect illumination for inverse rendering
**Key differences in execution:**
- *Backbone*: In our approach, we utilize NeuS as our neural surface model; for the visibility field, we opt for Siren instead of ReLU. (We will highlight the distinction in the final version.)
- *Metallic*: We additionally reason for metallic materials (refer to Line 209).
- *MLP distillation*: We distill the geometry MLP into a smaller one for fast classification.
- *Self-occlusion and inter-occlusion*: Since we have multiple instances in our setup, it is essential to model both self-cast shadows and inter-object occlusions. Our model goes beyond a simple object-centric representation.
**Remark on “Colmap”:** We stress that our pose estimation process IS NOT just a simple application of Colmap. Merely employing Colmap would not be sufficient, as we have pointed out in the paper (Line 152). Part of our contribution lies in how we jointly reason about the 6 DoF poses with a carefully designed matching scheme and how we integrate it into the BA framework. We only use Colmap for its BA optimization. | Summary: This paper tackles the inverse graphics task of predicting the geometry, material, and illumination from a single image containing multiple identical objects. The key insight is to leverage the multiple instances depicted in the image to recast this single-view multi-object reconstruction problem as a better-constrained multi-view single-object reconstruction problem. Specifically, the proposed approach (dubbed SfD) first identifies and crops the instances in the image, then estimates their 6DoF poses using SfM to create a set of calibrated multi-view images. Finally, a geometric reconstruction module based on prior MVS works is optimized using the resulting multi-views. The framework is validated through experiments conducted on a custom dataset dubbed Dup, introduced in this work.
Strengths: S1. Leveraging duplicated instances to recast a single-view multi-object reconstruction problem as a multi-view single-object one is an interesting idea. Besides, the idea is rather novel; it has only been studied by the concurrent CVPR'23 work of [66].
S2. Experiment section looks strong; the compared methods are relevant and the reported performances are better than prior works
S3. The presentation is very clear and well written. It is easy to walk through the method formulation, which does not contain any major technical flaw. Figures are neat, well designed, and help the understanding.
Weaknesses: **W1. New problem with low applicability**
Although conceptually interesting, this work tackles a new problem which, by its very nature, has limited applicability to real-world scenarios, which calls its usefulness into question. Indeed, I see very few cases where the approach developed here could be successful, and quite a lot of failure cases: for example, I think the proposed approach would fail on most of the examples advertised in Figure 2 (see W2 for details).
**W2. Lack of analysis**
The current presentation lacks some crucial analyses to better understand the proposed approach. In particular, I would expect experiments outlining:
- the contribution of each key component (quantitative ablation study): each of the 6 loss terms, rotation aware data augmentation, etc.
- its robustness to real data: in particular not exact duplicates like leaves of a tree
- its robustness to noisy input: an image with duplicates plus some other objects (e.g. chairs and tables in Fig. 2), or an image with only one-sided instance views (e.g. cars in Fig. 2)
- its robustness to high number of duplicates (e.g. 100/1000 objects like the pile of screws in Fig. 2)
- the failure cases in the general setting (e.g., from the video in the supmat, it seems the geometry of the coke can is actually not so great, why?)
**W3. Highly engineered method**
In addition to the low applicability of the formulated problem, the proposed approach is heavily engineered (6 loss terms, a two-level approach including an optimization level with multiple stages, etc.). Such an amount of engineering typically makes it hard to apply the method to other cases, or even to modify and build on top of it to improve it.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Table 1 is overflowing and appears way too early in the presentation (harms the overall readability)
- L142: for completeness, I would suggest formulating the changes in the intrinsic matrix when cropping is performed (in the supplementary material)
- Eq. (4) and (5): are there no scaling factors to weight the different loss terms?
- in general, the paper looks crowded because the spacing has been modified; I think the presentation would gain in readability by leveraging the extra page to let it breathe a bit more (this is especially true for the tables)
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There is a small limitation section, which I think should be more detailed and illustrated to help the reader better understand the proposed method (see W2)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Importance of the objectives**: To better understand the contribution of the loss terms, we start from the full model and remove each loss in turn. As shown in the table below, removing any component degrades the performance.
- *Metallic loss*: Since the metallicness of natural materials is usually binary, incorporating the metallic loss properly regularizes the underlying metallic component and prevents overfitting.
- *Eikonal loss and mask loss*: The two objectives help us better constrain the surface and the boundary of the objects, making them more accurate and smooth. Removing either term significantly affects the reconstructed geometry and hence the relighting performance.
- *Pretrained normal*: The pre-trained surface normals provide a strong geometry prior, allowing us to reduce the ambiguity of sparse-view inverse rendering. Removing it degrades the performance on all aspects.
| | Rendering | | | Albedo | | | Roughness | Metallic | Relighting | | |
|------------------------|-----------|--------|---------|---------|--------|---------|-----------|----------|------------|--------|-----------|
| | PSNR ↑ | SSIM ↑ | LPIPS ↓ | PSNR ↑ | SSIM ↑ | LPIPS ↓ | MSE ↓ | MSE ↓ | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
| full model | 21.348 | 0.621 | 0.227 | 17.267 | 0.546 | 0.265 | 0.304 | 0.021 | 19.948 | 0.582 | 0.232 |
| w/o binary metal loss | 21.511 | 0.624 | 0.230 | 17.088 | 0.514 | 0.312 | 0.195 | 0.081 | 19.025 | 0.561 | 0.250 |
| w/o latent smooth | 21.313 | 0.620 | 0.245 | 17.001 | 0.543 | 0.284 | 0.230 | 0.021 | 19.710 | 0.575 | 0.245 |
| w/o pretrained normal | 20.467 | 0.584 | 0.241 | 16.512 | 0.516 | 0.261 | 0.420 | 0.971 | 18.806 | 0.547 | 0.257 |
| w/o eik loss | 20.844 | 0.576 | 0.367 | 16.790 | 0.483 | 0.420 | 0.508 | 0.971 | 19.310 | 0.555 | 0.298 |
| w/o mask loss | 19.609 | 0.503 | 0.472 | 15.138 | 0.386 | 0.556 | 0.286 | 0.021 | 18.114 | 0.480 | 0.378 |
Table 2: Ablation study for the contribution of each loss term.
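As background, the eikonal term referenced above is the standard regularizer that encourages a signed distance field to have unit gradient norm. A minimal numerical sketch (illustrative only, not the paper's implementation; the SDF and function names here are assumptions):

```python
import numpy as np

def sphere_sdf(p, radius=1.0):
    # Exact signed distance to a sphere; a true SDF has unit gradient norm.
    return np.linalg.norm(p, axis=-1) - radius

def eikonal_loss(sdf, points, eps=1e-4):
    # Mean squared deviation of the SDF gradient norm from 1,
    # estimated here with central finite differences.
    grads = np.stack(
        [(sdf(points + eps * np.eye(3)[i]) - sdf(points - eps * np.eye(3)[i])) / (2 * eps)
         for i in range(3)],
        axis=-1,
    )
    return np.mean((np.linalg.norm(grads, axis=-1) - 1.0) ** 2)

rng = np.random.default_rng(0)
pts = rng.normal(size=(1024, 3))
loss_true_sdf = eikonal_loss(sphere_sdf, pts)                    # near 0 for an exact SDF
loss_bad_field = eikonal_loss(lambda p: 2 * sphere_sdf(p), pts)  # near 1: gradient norm is 2
```

In a neural-field pipeline the gradient would be computed with automatic differentiation rather than finite differences, and the term is added to the photometric objective with a small weight.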
**Applicability and complexity**:
Over the years, the community has been actively investigating how to harness multi-view information from videos or sparse, extreme-view images to push forward the frontier of 3D reconstruction and inverse graphics. Our work can be seen as a step in this direction. To our knowledge, this is the first effort to conduct structure from motion from a single image. We agree with the reviewers that the setup may not be as common as others in practice. We note, however, that the problem by itself is an interesting scientific attempt. Furthermore, based on our preliminary experiments, our approach also has the potential to deal with slight variations. Specifically, we test our approach on the `crane` image that [3] provided, where each instance is slightly different. By augmenting our geometry backbone with an instance-specific deformation field, we are able to reconstruct reasonable poses and recover sensible shape and material (see the PDF). Finally, while our full pipeline consists of multiple components, *why they are used* and *how they are used* are all carefully designed (*e.g.*, enforcing geometry/texture sharing through re-parameterizing the query space). The pipeline is also backbone-agnostic, allowing us to replace individual backbones with the latest models. For instance, we can replace the geometry backbone from NeuS to [2] and potentially improve the geometry. We will investigate further and include the results in the final version.
**Robustness**:
- *Robustness to non-identical objects*: We applied our method to the crane image provided by [3], where each instance varies slightly. By augmenting our geometry backbone with an instance-specific deformation field, we successfully reconstruct reasonable poses and recover coherent shape and material properties (see the PDF).
- *Robustness to a high number of objects*: We evaluated our method on an image taken in the real world featuring 70 duplicated objects. The results indicate precise reconstruction and high-quality rendering (see the PDF).
- *Duplicates + background objects*: Our instance segmentation is used to mask out background objects, enabling the method to concentrate on the objects of interest.
**Missing scaling factors in Eq. 4 and Eq. 5**: Thanks for the catch. We will fix them.
**Adjust the formulation of the intrinsic matrix explicitly**: In the main paper, we deliberately omit the changes for the sake of simplicity and readability (as noted in the footnote). Nonetheless, we totally agree with the reviewer that it will be helpful to explicitly formulate the changes in the supplementary material. We will revise it.
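For reference, the standard adjustment (general pinhole camera geometry, not specific to this paper) is that cropping leaves the focal lengths untouched and only shifts the principal point by the crop offset. A minimal sketch:

```python
import numpy as np

def crop_intrinsics(K, x0, y0):
    # Pinhole intrinsics after cropping the image at top-left corner (x0, y0):
    # focal lengths and skew are unchanged; the principal point shifts by the offset.
    K_new = K.astype(float).copy()
    K_new[0, 2] -= x0
    K_new[1, 2] -= y0
    return K_new

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
K_crop = crop_intrinsics(K, 100, 50)  # principal point moves from (320, 240) to (220, 190)
```

If the crop is followed by a resize, the first two rows of the resulting matrix are additionally scaled by the resize factors.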
**Expanding the limitation section**: Thanks for the suggestion! We will incorporate more comprehensive discussions and analyses (*e.g.*, the ones raised by the reviewers and those addressed during the rebuttal process) in the final version.
**Paper formatting**: We will adjust the tables and spacing to make the paper less crowded in the final version.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: I thank the authors for the detailed rebuttal, which I have read along with the other reviews. In general, it addresses most of my concerns and I still think this paper presents valuable insights for our community. I strongly encourage the authors to include these discussions in the final version to further increase its quality. I will keep my rating and recommend to accept this paper.
---
Reply to Comment 1.1.1:
Comment: Thanks for your comments; we will discuss all the results in detail in the final version. | Summary: The paper presents a method for reconstructing the geometry, material, and illumination of an object using as input an image containing multiple copies of the object. The method leverages the appearance of multiple instances of an object in a single image to essentially create a multi-view supervision signal. It first segments the different instances of the object in the image, then performs 6DoF pose estimation followed by a structure-from-motion pipeline to create an "artificial" multi-view setup. The object geometry and appearance are modeled with neural fields following NeuS. The whole pipeline is supervised using a photometric loss between the renderings and the original image.
Strengths: 1. I find this paper particularly interesting. I really like the idea of formulating a duality between multiple copies of an object in a single image and multiple views of a single object. It is a very nice example of thinking outside the box.
2. I enjoyed reading the paper. It is well-written and easy to follow.
3. The individual components used are properly justified and the engineering behind the method is also really good. It's a lot of individual components (segmentation, pose estimation, SfM, geometry and light representation) stitched together and it requires a lot of effort to make them work.
4. The qualitative and quantitative results (Tables 2 & 3) are very good.
I would like to see this paper get accepted.
Weaknesses: 1. I can't find any particularly important weakness. One issue is that the problem the paper attempts to solve is not something someone will encounter very often in real-world scenarios, but it's definitely very interesting from a scientific perspective.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: I would like to see what are some failure cases of the method. I couldn't find any in the main paper or supplementary material.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper has a decent discussion on the limitations of the proposed method. I could not identify any immediate potential societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your recognition! The authors were extremely thrilled when first coming up with the idea of formulating a duality between multiple copies of an object in a single image and multiple views of a single object, and with the idea of leveraging it for inverse graphics. We are delighted that the reviewer enjoyed reading our paper and shares a similar fascination.
**Failure cases**:
- Since our approach is category-agnostic and does not rely on pre-trained priors, our model cannot effectively model/constrain the geometry of unseen regions (similar to existing neural field methods, *e.g.*, NeRF and NeuS). For instance, the bottom of the fire extinguisher should be hollow in practice, but since the region is not visible in the image, our model predicts a convex shape instead. One potential solution is to pretrain the networks on a large corpus of data/objects to equip the model with priors over the world. This, however, would implicitly introduce additional assumptions regarding the objects we will be facing. We leave the trade-off for future study.
- Furthermore, while our approach significantly improves the performance over the baselines, it sometimes still struggles with reconstructing intricate, fine-grained details. We conjecture that the limitation stems from the representation power of NeuS [1], which forms the basis of our geometry backbone. We note, however, that our pipeline is backbone-agnostic. As the field progresses, our framework can readily incorporate more powerful neural representations, such as Neuralangelo [2].
---
Rebuttal 2:
Comment: I read the other reviews and the rebuttal. It seems that the reviewers are split, with the paper getting 5 different ratings (SA, A, WA, BR, R).
I understand the objections that the reviewers raised about the potential usefulness of the method; it's hard to find many real-world examples where one would encounter this scenario. Also the reviewers highlighted (including me) that the method is mainly about engineering and putting together a lot of already well-known components. I also understand the limitations that other reviewers highlighted, such as that COLMAP fails in the presence of non-rigid deformations, but this is not immediately applicable in the use cases presented in this paper, because most objects as Reviewer 4hmW noted are industrially manufactured, so there is little variation between instances.
There are other papers that have explored somewhat constrained or "artificial" setups such as:
- Learning the Depths of Moving People by Watching Frozen People (CVPR 2019)
- Reconstructing 3D Human Pose by Watching Humans in the Mirror (CVPR 2021)
- Through-Wall Human Pose Estimation Using Radio Signals (CVPR 2018)
Overall, I still believe this is an interesting problem from a scientific perspective, and I would like to see the paper get accepted.
---
Rebuttal Comment 2.1:
Comment: We appreciate the reviewer's thorough review of our paper and the subsequent responses to our rebuttals. We would like to thank the reviewer for recognizing the positive contributions of our work from a scientific perspective. We will also cite the three papers the reviewer mentioned and discuss them in the final version. | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful comments and valuable suggestions. We are very excited that the reviewers appreciated the novelty of our approach [`Reviewer sRgJ`, `Reviewer qVSc`], found the idea particularly interesting (*e.g.*, "a nice example of thinking outside the box") [`Reviewer sRgJ`, `Reviewer qVSc`, `Reviewer vbZQ`], acknowledged our extensive evaluation and impressive results [`Reviewer vUuZ`, `Reviewer sRgJ`, `Reviewer qVSc`], and enjoyed our presentation [`Reviewer sRgJ`, `Reviewer qVSc`, `Reviewer 4hmW`, `Reviewer vbZQ`].
---
**Novelty and contributions**
How to make inverse graphics/3D reconstruction more robust and work under more extreme scenarios is a challenging and longstanding problem in computer vision. In this work, we take a step forward by exploring the potential of performing structure from motion and recovering object intrinsics and environmental extrinsics *from a single image without pre-trained priors*. Specifically, we focus on the scenarios where there are multiple (near-)identical objects within the scene. By carefully formulating a duality between multiple copies of an object in a single image and multiple views of a single object, we are able to resolve the ambiguities in 3D and effectively recover the properties of interest.
Over the years, the community has been actively investigating how to harness multi-view information from videos or sparse, extreme-view images, and push forward the frontier of 3D reconstruction and inverse graphics. Our work can be seen as a step in this direction. To our knowledge, this is the first effort to conduct structure from motion from a single image. We agree with the reviewers that the setup may not be as common as others in practice. We note, however, that the problem itself is an interesting scientific endeavor (as acknowledged by several reviewers). Furthermore, based on our preliminary experiments, our approach also has the potential to deal with slight variations. Specifically, we test our approach on the `crane` image that [3] provided, where each instance is slightly different. By augmenting our geometry backbone with an instance-specific deformation field, we are able to reconstruct reasonable poses and recover sensible shape and material. We hope this can shed light on future research along similar directions, such as handling articulated objects or objects with large deformations.
Finally, while our full pipeline is inspired by multiple existing components, *why they are used* and *how they are used* are all carefully designed (*e.g.*, enforcing geometry/texture sharing through re-parameterizing the query space). It is neither a simple extension nor a naive composition, and it is far from trivial. Exploiting existing algorithms to realize a novel idea does not mean there is no technical contribution. We hope the reviewers, in particular `Reviewer 4hmW`, can acknowledge this.
---
**Additional experiments and visualizations**
Based on the reviewers' requests, we have included new experimental results in the attached pdf and outlined them below. For instance, we provide a more comprehensive analysis of the robustness of our approach (w.r.t. camera noise, segmentation error, and the number of instances). We also present an additional comparison with prior art [3] and incorporate more detailed ablation studies. We thank the reviewers for the great suggestions, and we strongly encourage the reviewers to take a look at the additional analyses.
---
We now address the concerns of each reviewer individually.
Pdf: /pdf/f73b54901df47027ebcf6d323f9bb1e549eefd38.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper presents "Structure from Duplicates" (SfD), a novel inverse graphics framework introduced to reconstruct the 3D structure, material, and illumination of multiple identical objects from a single image. The key steps include identifying these duplicate objects in an image and estimating their 6DoF pose. Then, the model applies an inverse graphics pipeline to deduce information about the objects' shape, material, and the lighting of the scene, while taking into account that these objects share the same geometry and material properties.
Strengths: In my opinion, the key strengths of the proposed inverse rendering method are:
- The method outperforms baseline techniques in single image multi-object setup and multi-view inverse rendering scenarios, as shown in the experiments.
- The approach can leverage duplicate objects to constrain underlying geometry and better disentangle the effects of lighting from materials.
- It can be extended to multi-view setups.
- The model makes effective use of duplicate objects, with accuracy improving as the number of duplicates increases.
- It performs well with real-world data, achieving comparable performance to multi-view baselines when trained using only a single view.
- The model supports various scene edits - once the material and geometry of the objects are recovered, along with the scene's illumination, it can faithfully relight existing objects, edit their materials, and insert new objects into the environment.
Weaknesses: The potential weaknesses seem to be:
- Dependence on Similarity. A significant limitation is that the instances in each image need to be nearly identical; the method struggles when there are substantial deformations between different objects.
- Pose Estimation Errors. The model currently requires accurate 6 DoF poses as input and keeps the poses fixed, which may limit its applicability in certain scenarios.
- Potential Segmentation Errors. SfD begins by identifying multiple instances of an object within an image. This is prone to errors when the segmentation is inaccurate, which is more of a problem for glossy and specular objects.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - How would the model perform when applied to real-world scenarios with more noise, variability, and less regularity in object structures?
- Could you discuss any strategies for mitigating the potential overfitting that might occur if you increase the capacity of the model to allow for instance-wise variations?
- Could you elaborate on the potential of integrating your approach with BARF? What kind of improvements or challenges do you expect?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations are discussed above in the weaknesses section. Overall I am on the fence due to the limitation of the proposed approach, which is highly dependent on the similarity of the multi-objects in a single image.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Dependence on object similarity**: Over the years, the community has been actively investigating how to harness multi-view information from videos or sparse, extreme-view images, and push forward the frontier of 3D reconstruction and inverse graphics. Our work can be seen as a step in this direction. To our knowledge, this is the first effort to conduct structure from motion from a single image. We agree with the reviewer that the setup may not be as common as others in practice. We note, however, that the problem itself is an interesting scientific endeavor. Furthermore, based on our preliminary experiments, our approach also has the potential to deal with slight variations. Specifically, we test our approach on the `crane` image that [3] provided, where each instance is slightly different. By augmenting our geometry backbone with an instance-specific deformation field, we are able to reconstruct reasonable poses and recover sensible shape and material with high-quality reconstruction (see Figure 1 in the rebuttal pdf). We will explore this further and include our investigations in the final version.
**Robustness to pose estimation error**: As we have stated in the limitation section, one potential solution is to combine our approach with BARF and optimize the 6 DoF poses jointly with the underlying representations. To validate the conjecture, we test our joint optimization on a real-world image containing 70+ instances (see 2nd row of Figure 3 in pdf). Since each instance only takes a fraction of the pixels, the poses derived from SfM are very noisy. However, with the help of BARF, we are able to significantly reduce the pose error and recover the underlying object intrinsics and environmental extrinsics.
**Robustness to segmentation error**: To understand the effect of the segmentation mask, we compare the performance of our approach using the ground-truth (GT) mask and the estimated mask on a randomly selected scene. As shown in the table below and in the pdf, the results are comparable, indicating that our model is robust to segmentation errors to a certain degree.
| | Rendering | | | Albedo | | | Roughness | Metallic | Relighting | | |
|------------------------|-----------|--------|---------|---------|--------|---------|-----------|----------|------------|--------|-----------|
| | PSNR ↑ | SSIM ↑ | LPIPS ↓ | PSNR ↑ | SSIM ↑ | LPIPS ↓ | MSE ↓ | MSE ↓ | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
| full model | 21.348 | 0.621 | 0.227 | 17.267 | 0.546 | 0.265 | 0.304 | 0.021 | 19.948 | 0.582 | 0.232 |
| w/o clean segmentation | 20.700 | 0.602 | 0.228 | 16.681 | 0.509 | 0.266 | 0.232 | 0.133 | 19.124 | 0.565 | 0.227 |
Table 1: Robustness test for noisy segmentation. | null | null | null | null | null | null |
A Theory of Link Prediction via Relational Weisfeiler-Leman on Knowledge Graphs | Accept (poster) | Summary: This paper focuses on graph neural networks (GNNs) for link prediction on knowledge graphs. It introduces the concept of conditional message-passing neural networks, which compute pairwise node representations based on a source node and a query relation. The paper analyzes the expressive power of these networks with a relational Weisfeiler-Leman algorithm and a logical characterization. Experimental analysis is conducted to validate the theoretical findings.
Strengths: 1. The paper offers a systematic understanding of conditional message-passing neural networks for link prediction on knowledge graphs.
2. The paper provides insights into the expressive power of conditional message-passing neural networks over the general message-passing neural networks.
3. The paper conducts experimental analysis to empirically validate the theoretical findings. It explores the impact of different model choices, e.g., initialization, global readout functions, providing practical insights into the performance of graph neural networks for link prediction on knowledge graphs.
Weaknesses: 1. The title and some lines in the introduction seem a little overclaimed. For instance, the title "a theory of link prediction" seems to claim the general link prediction task, while the main discussion in the paper is about knowledge graph link prediction. Besides, it is hard for me to understand lines 32-36. I can see from this paragraph that the paper focuses on the NBFNet-based model with conditional message passing. Nonetheless, it is not clear to me what a higher-order message-passing neural network that computes pairwise node representations is. I would suggest the authors strengthen the focus of the paper at the beginning and then detail the differences between the RGCN and CompGCN models and NBFNet.
2. The second bullet of the contribution part at line 52 claims that the paper designed a relational Weisfeiler-Leman algorithm. This could mislead readers into thinking the paper is the first to propose a relational Weisfeiler-Leman algorithm. However, as discussed at line 97 in the related work, this relationship is already covered in 'Weisfeiler and Leman go relational'. I would suggest the authors provide a clear comparison with the related work on existing relational Weisfeiler-Leman algorithms.
3. The intuition for the logical characterization is not clear. It is hard for me to understand the logical characterization, and how it contributes to the expressiveness results is not clear. I would suggest the authors add more discussion of the motivation and of why this contribution is significant.
4. Lack of explanation of why the paper focuses only on the inductive link prediction setting. Although NBFNet typically shows strong ability in the inductive setting, the algorithm itself does not seem specific to the transductive or inductive setting. I still think experiments in both transductive and inductive settings should be considered.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See in Weakness
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and your comments on our paper. We address each below.
>“The title and some lines in the introduction seem to be a little bit overclaimed. For instance, the title named a theory of link prediction seems to be claimed on the general link prediction task while the main discussion in the main paper is about the Knowledge Graph link prediction.”
We do not mean to overclaim our findings: we state link prediction in the title, because link prediction on simple graphs can be seen as a special case of link prediction on KGs, where only one relation type is allowed. In fact, all of the studied models and the presented theory equally apply to link prediction on single-relational graphs. The empirical focus is on KGs, and we make this clear in the paper. If there is any ambiguity regarding this, we will do our best to tone things down accordingly to avoid confusion.
>“Besides, it is hard for me to understand lines 32-36. I could see from this paragraph that the paper focuses on the NBFNet-based model with conditional message passing. Nonetheless, it seems not so clear to me what higher-order messaging passing neural network computing the pairwise representation on the node.”
This statement refers to the following: In principle, one could design a relational variant of higher-order GNNs, especially those with expressive power matching that of folklore 2-WL. These models compute strong pairwise node invariants, which is crucial for link prediction. However, they are inherently unscalable, which hinders their use in practice. This is precisely what motivates a trade-off between the expressive power and scalability: our paper studies models which are scalable while also being able to compute pairwise node invariants. We can improve this explanation also in the paper.
>“The second bullet of the contribution part at line 52 claims that the paper designed a relational Weisfeiler-Lemma algorithm. It leads to the potential misleading that the paper is the first to propose the relational Weisfeiler-Lemma algorithm. Nonetheless, as discussed in line 97 in related work, the relationship is already included in the discussion in 'Weisfeiler and Leman go relational'.”
We will rephrase this sentence to avoid any misleading statements as:
*“We define a relational Weisfeiler-Leman algorithm (building on similar works such as Barceló et al [1]), and prove that conditional message passing neural networks can match the expressive power of this algorithm.”*
As you kindly acknowledge, we give appropriate credit to this work in multiple places in the paper. Specifically, we cover this in the related work section, and separately, on line 484 as *“...generalizes results from Barceló et al [1]”*, and finally, in Appendix A.1 as: *“The expressive power of R-MPNNs has been recently characterized in terms of a relational variant of the Weisfeiler–Leman test [1]”*.
>“The intuition for logical characterization is not so clear.…How it contributes to the expressiveness is not so clear.”
Logical characterizations are by now a well-established topic related to the expressiveness of GNNs (starting from the work of Barceló et al [2]). Unlike the characterization of GNNs based on the WL test, these logical characterizations are *uniform*, in the sense that they hold over the set of all graphs. This means that for any unary formula F(x) in the logical language there is a GNN A such that, *over every graph G*, the set of nodes from G that satisfy F(x) is the same as the set of nodes from G that are classified positively by A. We believe that this logical characterization provides a clear picture of what C-MPNNs can do. On the one hand, the logic is “guarded”, i.e., it is only possible to existentially quantify over nodes that are directly linked to our given node x. This represents the “local” nature of C-MPNNs. On the other hand, the logic has counting quantifiers, which represents the fact that C-MPNNs can distinguish how many neighbors of x satisfy a certain property.
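For illustration, a small formula of our own devising (not taken from the paper) shows the flavor of such guarded counting formulas over a knowledge graph with relations $r$ and $s$:

```latex
% "x has at least 3 r-neighbors y, each of which has some s-neighbor z"
% Quantification over y is guarded by the atom r(x, y), mirroring the
% locality of message passing; the counting quantifier mirrors the
% multiset aggregation over neighbors.
\varphi(x) \;=\; \exists^{\geq 3} y \,\bigl( r(x, y) \wedge \exists z\, s(y, z) \bigr)
```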
Logical characterizations are also important since they provide a link between the procedural behavior of GNNs and the more declarative and better understood formalisms of logic. For example, and as established by Barcelo et al [2], *“if one proves that two GNN architectures are captured with two logics, then one can immediately transfer all the knowledge about the relationships between those logics, such as equivalence or incomparability of expressiveness, to the GNN setting.”* As an application of this, we have shown in the paper that C-MPNNs with global readout are strictly more powerful than standard C-MPNNs for expressing logical classifiers: we have done so by showing that there is a logical classifier that the former can recognize but the latter cannot.
>“Lack of explanation on why only focuses on the inductive Link Prediction setting. Despite the NBFNet shows strong ability typically inductive setting. Nonetheless, the algorithm seems not so relevant in the transductive or inductive setting. I still think the experiments on both transductive and inductive settings should be taken into consideration.”
Yes, the proposed algorithms are not limited to the inductive setting. As a matter of fact, we did take this into account in our paper, and also reported transductive link prediction experiments in Appendix B.3 of our original submission.
Following the suggestions, we further experimented on transductive link prediction on two biomedical knowledge graphs, **Hetionet** and **OGB-Biokg**, with results reported in Table 3 of **rebuttal.pdf**. These empirical results are very much in line with the presented theory.
*[1] Pablo Barceló, Mikhail Galkin, Christopher Morris, and Miguel Romero. Weisfeiler and leman go relational. LoG, 2022.*
*[2] Pablo Barceló, Egor V. Kostylev, Mikaël Monet, Jorge Pérez, Juan L. Reutter, and Juan Pablo Silva. The logical expressiveness of graph neural networks., ICLR, 2020.*
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks for your response. I still think link prediction and KGC are different problems, and they should not be discussed together, not from a theoretical perspective but from a practical one. I would raise my score if this W1 is resolved.
---
Reply to Comment 1.1.1:
Title: Adjusting the title
Comment: Thanks for your comment! We agree with your comment regarding the practical aspect. We acknowledged in our response that the empirical focus is on KGs. To make this also explicit in the title, we propose the following title: A Theory of Link Prediction via Relational Weisfeiler-Leman on Knowledge Graphs. We hope this addresses your concern and clears up the ambiguity. We also hope that the other concerns are addressed in our rebuttal. | Summary: The paper investigates graph neural networks (GNNs) for link prediction in knowledge graphs. The authors propose a GNN that generalizes several existing techniques based on the labeling trick and NBFNets and investigate its expressive power by relating it to the Weisfeiler-Leman method and giving a logical characterization. Different design choices of the architecture are explored experimentally.
Strengths: * A highly relevant problem with various applications is addressed and neural architectures for related tasks are thoroughly analyzed, giving new theoretical insights. As the number of papers published in this area grows rapidly, it is of utmost importance to explain the advantage of practical design choices and to work out the benefits of specific architectures. The paper is a solid contribution in this direction.
* The authors give a good overview of related work, including several recent results, and place their work well in the context of existing approaches. The paper continues recent work and generalizes it, which is a valuable contribution to fostering future work.
* The paper is well-written and enjoyable to read; the presentation is on a high level.
Weaknesses: * The experimental comparison focuses on the design choices of the proposed architecture but does not compare to existing works. This would be desirable, in particular, since higher-order GNNs subsume some of the techniques (as stated in the paper). The trade-off between efficiency and expressivity appears to be crucial but is neither investigated theoretically in terms of time complexity nor experimentally in terms of measured runtimes.
* The presentation of conditional message passing remains abstract. It would be helpful to come up with an illustrative example, if possible.
Minor comments:
* l176: $\epsilon_u$ should be explained
* l186: $\mathbf{z}_q$ as argument of MSG should be just $q$ following the notation introduced earlier
* l370: "Thus, thus"
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: * How does the approach compare to NBFNets, architecture using the labeling trick, and higher-order WL-type GNNs regarding (i) accuracy and (ii) running time?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations are addressed sufficiently.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and your comments on our paper. We address each below.
>“The experimental comparison focuses on the design choices of the proposed architecture but does not compare to existing works. This would be desirable, in particular, since higher-order GNNs subsume some of the techniques (as stated in the paper). The trade-off between efficiency and expressivity appears to be crucial but is neither investigated theoretically in terms of time complexity nor experimentally in terms of measured runtimes.”
Thanks for raising this. Following your suggestion, we present a time-complexity analysis for C-MPNNs and other inductive link prediction models in Table 4 of **rebuttal.pdf**. This also includes the amortized time per query which can better reflect the practical use cases and the advantages stemming from parallelizability. Regarding higher-order GNNs we consider the neural architecture from Barceló et al. [1] corresponding to rwl$_2$ (and mentioned in line 259 of our paper). This higher-order model requires $O(|V||E|d + |V|^2d^2)$ time in each layer to update all pairwise representations, and it cannot be parallelized, hence remains prohibitive in practice.
>“The presentation of conditional messages passing remains abstract. It would be helpful to come up with an illustrative example, if possible.”
The main difference from standard message passing stems from the conditioning on the source node, which we make explicit in our approach. We agree this is more intricate than standard message passing, and hence we provide a visualization of C-MPNN and R-MPNN in Figure 1 of **Rebuttal.pdf** with the following explanation: for the basic C-MPNN model, in order to compute a query fact $q(u,v)$, where $u$ is the source node and $v$ is the target node, we first initialize all node representations with the zero vector except the representation of the node $u$ which is assigned a learnable query vector $\mathbf{z}_q$. Following the initialization, we carry out relational message passing and decode the hidden state of the target node $v$ to obtain the output, which yields the representation of $v$ conditioned on $u$.
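To make this concrete, here is a minimal, illustrative sketch (ours, not the authors' actual architecture; the function name, the toy per-relation linear messages, and sum aggregation are simplifying assumptions) of the conditional initialization and message-passing loop described above:

```python
import numpy as np

def cmpnn_forward(num_nodes, edges, source, z_q, num_layers=3):
    """Toy conditional message passing for a query q(source, ?).

    edges: list of (head, relation, tail) triples.
    source: the conditioned source node u.
    z_q: the (learnable, here fixed) query vector for relation q.
    """
    dim = len(z_q)
    rng = np.random.default_rng(0)
    # Conditional initialization: all nodes start at the zero vector,
    # except the source node u, which receives the query vector z_q.
    h = np.zeros((num_nodes, dim))
    h[source] = z_q
    # Hypothetical relation-specific transforms (learned in practice).
    W = {r: rng.standard_normal((dim, dim)) * 0.1
         for r in {r for (_, r, _) in edges}}
    for _ in range(num_layers):
        new_h = h.copy()  # history function f(t) = t keeps the old state
        for (u, r, v) in edges:
            new_h[v] += h[u] @ W[r]  # relation-aware message u -> v
        h = np.tanh(new_h)
    # h[v] is the representation of v conditioned on (source, q); a
    # decoder would score the query fact q(source, v) from it.
    return h
```

Because only the source is labeled, one forward pass yields conditioned representations for every candidate target at once.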
>“Minor comments: l176: $\epsilon_u$ should be explained. ”
$\epsilon_u$ is explained in line 184: *“adding an error vector $\epsilon_{u}$ sampled from $\mathcal{N}(0,1)$ to the conditioned node's initialization”*. Here, $\epsilon_{u}$ is a fixed perturbation given that node $u$ is chosen.
>“Minor comments: …l186: $\mathbf{z}_q$ as argument of MSG should be just $q$ following the notation introduced earlier. ”
Many thanks for this! Indeed, this is a slight abuse of notation. We will pass $\mathbf{z}_q$ as an argument to MSG on line 150, where this is first introduced to fix this.
>“Minor comments: …l370: "Thus, thus".”
Thank you, we have corrected this.
>“How does the approach compare to NBFNets, architecture using the labeling trick, and higher-order WL-type GNNs regarding (i) accuracy and (ii) running time?”
We answer your questions regarding (i) accuracy and (ii) runtime, separately for each model:
- **Comparison with NBFNets**: Let us note that C-MPNNs subsume NBFNets. Indeed, if we consider C-MPNNs without readout and set the history function as $f(t) = 0$, we obtain (a slight generalization of) NBFNets, as stated in the paper. In terms of runtime, C-MPNNs and NBFNets are comparable: even though the readout component of C-MPNNs incurs a linear overhead, this is dominated by other factors. In terms of accuracy, the picture is more complex: our results show that the choice of history function is not critical, which is empirically confirmed: $f(t) = t$ and $f(t) = 0$ yield similar results (Table 1 of the main paper). On the other hand, the (uniform) logical characterization shows the theoretical benefit of using a global readout component: this is empirically observed on FB15K-237, where the model using a readout component showed substantial gains (leading to state-of-the-art results) compared to not using one (Table 2 of the main paper).
- **Comparison with architectures using the labeling trick**: GraIL is an architecture using the labeling trick, and we focus on it to make the comparison concrete. In terms of runtime, architectures that rely on the labeling trick are typically slower, since they need to label both the source and the target node, as we state in the paper: *“... the Labeling Trick only applies when both the source $u$ and target nodes $v$ are labeled”*. This yields worse runtime complexity for these models, and particularly for GraIL: we report the runtimes in Table 3 of **rebuttal.pdf**. In C-MPNNs, we only label the source $u$, which allows parallel computation for all queries of the form $q(u,?)$: a single forward pass computes the hidden representations for all of these queries. With the labeling trick, $|V|$ forward passes need to be carried out, since each time both the source $u$ and the target $v$ need to be specified. In terms of accuracy, we added more baselines in **rebuttal.pdf**, including GraIL, showing that C-MPNNs outperform these models. We will integrate these discussions into our paper.
- **Comparison with higher-order GNNs**: Regarding higher-order GNNs we consider the neural architecture from Barceló et al. [1] corresponding to rwl$_2$, as we stated in a response to a related question. These higher-order models are computationally prohibitive in practice, and hence it is hard for us to conclude anything in terms of the accuracy of these models on knowledge graphs. Theoretically, they can compute stronger binary invariants, and hence we conjecture that they would perform strongly if it would have been possible to scale them up. The trade-off between expressive power and scalability is perhaps the most intriguing aspect of the study of link prediction.
*[1] Pablo Barceló, Mikhail Galkin, Christopher Morris, and Miguel Romero. Weisfeiler and leman go relational. LoG, 2022.*
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed reply. My questions regarding the trade-off between expressivity and complexity have been addressed sufficiently.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for going through our rebuttal, and for your continued positive evaluation of our paper. | Summary: The authors note that while GNNs are understood well in the context of simple graphs, there is a lack of comprehensive understanding when it comes to knowledge graphs. This study aims to systematically explore the use of GNNs in knowledge graphs, specifically in relation to the task of link prediction. The research includes a unifying view on different, seemingly unrelated models, potentially revealing new ones. The study evaluates the expressive power of different models using the relational Weisfeiler-Leman algorithm. The theoretical findings help clarify the advantages of some commonly used practical design choices, and these theories are supported by empirical validation.
Strengths: I have read the appendix of the paper. The technical details are clearly articulated, and the proof statements are clear and persuasive.
The authors introduce conditional message-passing neural networks and define a relational Weisfeiler-Leman algorithm to substantiate that these neural networks can match the expressive power of this algorithm.
The initialization progress is interesting.
Weaknesses: In the background section, the authors first mention G = (V, E, R, c) but change it to G = (V, E, R, x) later. I know the authors say it usually should be x instead of c, but could G = (V, E, R, x) be used directly from the start to prevent confusion?
More GNN baseline methods should be included in the experiments.
The proofs are commendable, but would it be possible to draw conclusions from them and incorporate those into the main text instead of the appendix?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Can the author add more baselines?
Although the authors have already provided two datasets, each with four different versions, I think this is adequate, so I have not listed it as a weakness. However, it would be good if the authors could include one or two additional datasets.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: As the authors mentioned, the method is limited to binary tasks such as link prediction.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and your comments on our paper. We address each below.
>“In the background section, the authors first mention $G = (V, E, R, c)$ but change it to $G = (V, E, R, \mathbf{x})$ later. I know the authors say it usually should be $\mathbf{x}$ instead of $c$. But could $G = (V, E, R, \mathbf{x})$ be used directly from the start to prevent confusion?”
We understand the source of the confusion, but let us note that $G = (V, E, R, c)$ is indeed a more general definition compared to $G = (V, E, R, \mathbf{x})$. This is indicated in the background section as follows: *“When $D=\mathbb{R}^d$, we also say that $c$ is a $d$-dimensional *feature map*, and typically use $\mathbf{x}$ instead of $c$”*. The differences between $G = (V, E, R, c)$ and $G = (V, E, R, \mathbf{x})$ may appear rather subtle but they are important. In our context, we use $c$ to denote a discrete set of colors used in the WL test, whereas $\mathbf{x}$ are the continuous features belonging to $\mathbb{R}^d$ in the corresponding neural counterpart. We carefully distinguish between these two notations on knowledge graphs for mathematical clarity as the lack of such a notational convention can present problems. We will highlight this point to avoid the confusion of readers.
>“More GNN baseline methods should be included in the experiments.”
We added more baseline methods from GraIL paper [1] which are reported in Table 1 of the **rebuttal.pdf** as part of our global response. We also provided the transductive link prediction baseline methods from DRUM paper [2], which are reported in Table 2 of the **rebuttal.pdf**.
>“The proof process is commendable, but could it be possible to draw a conclusion from it and incorporate it into the main text instead of the appendix?”
Thanks for going through the proofs. We are glad you find the presentation of the technical content commendable. The key theoretical contributions are presented in Theorems 5.1, 5.2, and 5.3, along with brief explanations. Figure 1 serves as a summary of all the lemmas presented in Section 5.3. We can elaborate on these details further (given more space): we are happy to add a discussion of the high-level ideas behind these proofs in the body of the paper.
>“Although the authors have already provided two datasets, each with four different versions, I think this is adequate, so I have not listed it as a weakness. However, it would be good if the authors could include one or two additional datasets.”
Thanks for this suggestion! We agree that diverse datasets are beneficial to strengthen our point. Following your suggestion, we provide transductive link prediction experiments on two additional datasets: **Hetionet** and **OGB-Biokg**. The experiments are reported in Table 3 of **rebuttal.pdf** as part of our global response. The results are reassuring as they show very similar trends to those observed on the other datasets. Please let us know whether these points clarify your concerns.
*[1] Komal Teru, Etienne Denis, and Will Hamilton. Inductive relation prediction by subgraph reasoning. ICML, 2020.*
*[2] Ali Sadeghian, Mohammadreza Armandpour, Patrick Ding, and Daisy Zhe Wang. Drum: End-to-end differentiable rule mining on knowledge graphs. NeurIPS, 2019.*
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. You have addressed some of my concerns, and I'd like to raise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for going through the rebuttal, and for raising your score. Could you please let us know if there are any remaining concerns? This would allow us to hopefully clarify these concerns and get your final feedback before the rebuttal window closes. To the best of our understanding, your main concern was regarding the lack of a high-level presentation of the proof sketches in the body of the paper: we are happy to integrate these explanations into the main paper to present our results at different granularities, so that the reader gets a better idea of the proofs without going through them in detail. We hope this suggested change is satisfactory and fully addresses your concerns. Of course, we are happy to elaborate based on your input. | Summary: This paper explores the expressive power of several GNNs designed for link prediction in knowledge graphs. The authors propose a conditional message passing framework with various designs for each component, a generalization of NBFNets. They also prove that the proposed framework can match the expressive power of a relational Weisfeiler-Leman algorithm. The experimental results demonstrate the impact of different model choices.
Strengths: 1. The paper provides a theoretical understanding of some relational GNNs for knowledge graphs, such as NBFNets.
2. The proposed C-MPNN is reasonable, generalizing GNNs that can compute pairwise representations.
3. The authors study the effect of different model architectures, adding depth to the exploration of GNNs.
Weaknesses: 1. Using relational asymmetric local 2-WL (rawl2) to measure the expressive power of C-MPNN is a little awkward. The definition of rawl2 is quite similar to C-MPNN, leading to potential confusion.
2. The experiments do not verify some theoretical findings, focusing mainly on different choices of model design. Certain theoretical discoveries, such as logical characterization, are not confirmed.
3. The experiments are primarily conducted on two datasets with differing data splits, leading to inconsistencies. More datasets and deeper analysis may be necessary to solidify the findings.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and your comments on our paper. We address each below.
>“Using relational asymmetric local 2-WL (rawl$_2$) to measure the expressive power of C-MPNN is a little awkward. The definition of rawl$_2$ is quite similar to C-MPNN, leading to potential confusion.”
The color refinement algorithm rawl$_2$ is introduced to exactly capture the theoretical expressiveness of the corresponding neural architecture, C-MPNNs. These algorithms are therefore closely related to each other, which can perhaps be better understood in analogy to the relation between the 1-WL algorithm and standard GNNs. Fundamentally, rawl$_2$ is a non-parameterized color refinement algorithm, whereas C-MPNNs are a class of neural networks with trainable parameters. In this sense, C-MPNNs can be viewed as a learnable, continuous, differentiable version of rawl$_2$. We are happy to elaborate further if this does not address the reviewer's concerns.
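To make the analogy concrete, here is a minimal, hypothetical Python sketch of plain 1-WL colour refinement on a simple (non-relational) graph. This is not rawl$_2$ itself, which additionally refines pairwise colours conditioned on a source node and relations, but the non-parameterised refine-and-compress loop is of the same flavour:

```python
# Minimal sketch of 1-WL colour refinement on a simple (non-relational) graph,
# to illustrate what a non-parameterised colour refinement algorithm looks like.
def wl_refine(adj, rounds=3):
    """adj: dict mapping node -> list of neighbours. Returns a colour per node."""
    colors = {v: 0 for v in adj}  # uniform initial colouring
    for _ in range(rounds):
        # new colour = (own colour, sorted multiset of neighbour colours)
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # compress signatures back into small integer colours
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return colors
```

For example, on the path graph 0–1–2, the two endpoints receive the same final colour while the middle node receives a different one.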
>“The experiments do not verify some theoretical findings, focusing mainly on different choices of model design. Certain theoretical discoveries, such as logical characterization, are not confirmed.”
There are two main conclusions which follow from the presented expressiveness study and logical characterization and are empirically validated in the appendix of the paper:
1. It follows that C-MPNNs are more expressive than R-MPNNs via the abstraction given by rawl$_2$ and rwl$_1$, respectively. To empirically validate this, we conducted transductive link prediction experiments which are reported in Appendix B.3 (as part of the supplementary material). The presented results suggest a clear trend that C-MPNN has an overall better performance compared to R-MPNNs. These experiments are transductive, as R-MPNNs (i.e., R-GCN) only apply in this setup.
1. It follows from the logical characterization that the addition of global readout yields more expressive power in the class of queries that can be captured. We conducted a dedicated synthetic experiment to validate this, which is reported in Appendix B.1. The C-MPNN model without readout achieved random accuracy on this task, whereas the C-MPNN model with readout solved this task almost perfectly. Please note that this is in addition to the experiments on real-world data which are presented in the body of the paper.
We wish to note that we prioritized inductive experiments in the body of the paper in order to validate the other aspects of the presented theory (i.e., the choice of history should not matter). We are happy to include more details regarding the experiments from the appendix if they need to be better highlighted.
>“The experiments are primarily conducted on two datasets with differing data splits, leading to inconsistencies.”
We are using the exact same data splits from the existing literature, and our results align with the presented theory. It is therefore unclear to us where the inconsistency lies: could you please be more specific about which inconsistencies are present in the paper so that we can clarify them?
>“More datasets and deeper analysis may be necessary to solidify the findings.”
We would like to re-emphasize the experiments reported in the appendix of the paper. Following your feedback, we carried out additional experiments for transductive link prediction on biomedical datasets **Hetionet** and **OGB-Biokg** to further solidify the findings on diverse datasets. These are reported in Table 3 of **rebuttal.pdf** as part of our global response. These results are reassuring, as they show very similar trends to those observed in the other datasets. Please let us know whether these points clarify your concerns.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification from the authors. I still have one question: does the logical characterization refer to the global readout?
---
Reply to Comment 1.1.1:
Comment: Thanks for going through our rebuttal! We answer your question below in two parts:
***What does the logical characterization achieve?*** When studying the expressive power of neural networks, one is typically interested in characterizing the class of functions that can be captured (or, approximated) by the neural network. In the context of graph machine learning, we are interested in the exact same question, with the difference that the functions are defined over graphs. For the task of link prediction, this means the following: for each knowledge graph $G$, we are interested in functions of the form $f(G): R \times V \times V \to \{0,1\}$, because we want to quantify whether a link $r(a,b)$ is true or not, where $r \in R$ and $a,b \in V$.
This is precisely what is achieved by logical characterizations: if we can show that a GNN architecture can capture a logic L, then it means that this GNN architecture can capture all functions which can be expressed in this logic. This is very important, because if a GNN architecture A captures a *strictly larger logic* than another GNN architecture B, then we can conclude that A is *more expressive than* B. One implication is indeed regarding the use of global readout: specifically, we obtain that C-MPNNs with global readout are strictly more powerful than standard C-MPNNs. This means that there is *at least* one function that the former can capture but the latter cannot, which is empirically confirmed in Appendix B.1.
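As an illustration (with entirely hypothetical relation names `r1`, `r2`), a link-prediction function of this form can be as simple as a two-hop rule; a logic that expresses such rules, and hence a GNN architecture capturing that logic, captures this function:

```python
# Hypothetical toy example of a link-prediction function f(G): R x V x V -> {0,1}:
# r(a, b) is predicted to hold iff some node c satisfies r1(a, c) and r2(c, b).
def two_hop_rule(edges, a, b):
    """edges: a set of (relation, head, tail) triples; returns 0 or 1."""
    nodes = {h for (_, h, _) in edges} | {t for (_, _, t) in edges}
    return int(any(("r1", a, c) in edges and ("r2", c, b) in edges
                   for c in nodes))
```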
***How does this compare to WL-based characterizations?*** Logical characterizations of GNNs date back to the work of Barceló et al. [1]. Unlike the characterizations of GNNs based on the WL test, these logical characterizations are uniform, in the sense that they hold over the set of all graphs. Therefore, these studies are harder to obtain and yield more powerful characterizations, because they require a single parametrized model which applies uniformly to all graphs. The uniformity condition is much more realistic, as it applies to graphs of any size/structure, and is hence desirable in theoretical analysis. Non-uniform WL characterisations are indeed weaker than logical characterisations: one example is that they cannot recognise the expressiveness gain of using a global readout, which is recognised by the logical characterization, as shown in Barceló et al. [1].
Please let us know if this answers your question. These technical differences are intricate and easy to overlook, and we are happy to explain these more in our paper, if you agree these explanations are useful.
[1] Pablo Barceló, Egor V. Kostylev, Mikaël Monet, Jorge Pérez, Juan L. Reutter, and Juan Pablo Silva. The logical expressiveness of graph neural networks. ICLR, 2020. | Rebuttal 1:
Rebuttal: We thank the reviewers for their comments. We have responded to each concern in detail in our individual responses. In addition, we include a **rebuttal.pdf** to this post containing the results of all additional experiments.
Here is a summary of the changes to be made to the paper in light of the reviews received.
- **New experiments on biomedical datasets**: We added additional transductive knowledge graph completion experiments on biomedical datasets: **Hetionet** and **OGB-Biokg** and reported them in Table 3. (Reviewer cjW9, Reviewer kRBD, Reviewer 1Kyu)
- **Visualization of models**: We have provided a visualization of C-MPNN and R-MPNN in Figure 1. (Reviewer 7NY6)
- **Complexity analysis**: We presented the complexity analysis of the models in Table 4. (Reviewer 7NY6)
- **Baselines**: We added more baselines to inductive relation prediction experiments and reported these in Table 1 (Reviewer kRBD). Similarly, we added more baselines to transductive knowledge graph completion experiments and reported these in Table 2 (Reviewer kRBD).
- **Results from the appendix**: Based on reviewer feedback, we noticed that some of our experiments which are originally reported in the appendix of the paper could be better highlighted in the body of the paper. We have incorporated these changes.
We look forward to hearing your comments and feedback during the discussion period.
Pdf: /pdf/61db12f00af68aa06597e07c68af8a69a2e46444.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Physics-Driven ML-Based Modelling for Correcting Inverse Estimation | Accept (spotlight) | Summary: This paper builds on recent advances in black-box optimization for science and engineering inverse problems to find a satisfactory state with small physical error while decreasing the query times to the physical evaluation. The authors propose an error correction method GEESE, which is composed of a hybrid surrogate error model and a generative twin state selection approach. The hybrid surrogate error model is defined by both explicit and implicit errors, with the implicit error modeled as an ensemble of fully-connected neural networks. The generative twin state selection method adopts an exploration and exploitation approach, reducing the cost associated with physical error collection by not directly choosing states via a comprehensive space search. Experiments in real-world engineering inverse design settings demonstrate that GEESE decreases the number of queries needed to find satisfactory states with low error tolerance.
Strengths: The paper introduces a novel correction algorithm, GEESE, to find a qualified state within an error tolerance threshold after querying the physical evaluations as few times as possible. The overall architecture is novel, and the twin selection strategy, its key component, which simultaneously proposes a potentially near-optimal state and a potentially informative state, is effective in decreasing the number of queries.
The experiments demonstrate that the proposed method outperforms the baselines in terms of both failure and query times.
Weaknesses: It is still unclear how the existing ML model in the ML branch (in Figure 1) is incorporated into the GEESE algorithm. Even though a state estimated by the existing ML model is used to activate the GEESE algorithm, what else is the ML model used for? Is the model also used to generate the initial data $D_{0}$? (If so, does the performance of the ML model affect the failure times and query times in the experiments?) If the ML branch is used only for activating the GEESE branch, what is the reason for having it? In this case, as long as there is a pair of an observation and an estimated state (not necessarily a state estimated by the ML model), GEESE is applicable to the pair. In other words, GEESE is applicable regardless of which ML models or estimators we have.
**Minor comments**
line 149: Taking advantage of the robustness of nsemble learning [52, 53] … (typo)
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Is there any theoretical relationship between the estimated state that activates GEESE and the satisfactory state found by GEESE, such as the distance between them?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes, the limitations are discussed in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the insightful comments. We address the comments by grouping them into two categories: questions and discussion about weaknesses.
### Replies to questions
We would like to sincerely thank the reviewer for this excellent question. Let us assume an observation vector $\mathbf{y}$, and during training, its associated state vector $\mathbf{x}$, and corresponding state vector estimate $\hat{\mathbf{x}}$. We would like to emphasize that during testing, when GEESE is deployed, the ground truth state vector $\mathbf{x}$ for an observation $\mathbf{y}$ is unknown, and thus, it is not possible to assess whether the estimated state $\hat{\mathbf{x}}$ is satisfactory or not by calculating a distance metric in state space, i.e., $d(\mathbf{x}, \hat{\mathbf{x}})$, where $d$ is a suitable distance metric.
To remediate this, the error system including physical models and metrics is used to project the distance assessment in state space into the error space, i.e., $e(\hat{\mathbf{x}})=\phi(d(\hat{\mathbf{x}}, \mathbf{x}))$. Function $\phi$ is the known physical model/metrics. By doing this, although we acknowledge the existence of $\mathbf{x}$, it is masked in the error function. Notably, such a projection is nonlinear and depends on the physical process. Therefore, the distance in state space, although correlated to the error, is not directly proportional to the error. Theoretically, it is difficult to assess whether a state estimate is satisfactory or not via the direct measurement in state space. However, in practice, we can approximately assess it through an error tolerance threshold $\epsilon$, which is set empirically. Given a sufficiently small $\epsilon$, when $e(\hat{\mathbf{x}})<\epsilon$, we can assume $d(\mathbf{x},\hat{\mathbf{x}})<k\epsilon$ for some constant $k$. Therefore, we can say that the state estimate $\hat{\mathbf{x}}$ is satisfactory if satisfying the condition of $e(\hat{\mathbf{x}})<\epsilon$. This thresholding approach is currently used to activate/continue the run of GEESE.
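As an entirely hypothetical numerical sketch of this acceptance test (with a toy forward model `forward_phi` standing in for the real physical model, and an L2 metric as the distance):

```python
import math

# Toy illustration of the acceptance test described above: the ground-truth
# state x is unknown at test time, so a state estimate x_hat is judged by
# projecting into error space through the known physical model and a metric.
def forward_phi(x):
    """Stand-in for the known physical forward model y = phi(x)."""
    return [x[0] + x[1], x[0] * x[1]]

def physical_error(x_hat, y):
    """e(x_hat): distance between the simulated output and the observation y."""
    y_hat = forward_phi(x_hat)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_hat, y)))

def is_satisfactory(x_hat, y, eps=1e-3):
    """Empirical test e(x_hat) < eps used to activate/continue GEESE."""
    return physical_error(x_hat, y) < eps
```

For instance, with `y = forward_phi([1.0, 2.0])`, the estimate `[1.0, 2.0]` is satisfactory while `[1.5, 2.0]` is not, even though the two estimates are close in state space; the error system, not a state-space distance, makes the call.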
There are certainly deeper answers to your very interesting question, which we intend to pursue as future work: specifically, performing geometric learning in state space, studying the geometry (e.g., distance and manifold structure) in state space that is compatible with the physical error.
### Discussion about weaknesses
**1** We agree with the reviewer that GEESE can be applied within a general estimation framework, without requiring the use of an ML branch. As explained in the introduction (Section 1), data-driven machine learning methods have resulted in substantial advances in the solution of inverse problems (please check lines 25-27). However, these approaches lack reliability and trustworthiness, especially for applications with low error tolerance. Indeed, this is the motivation for introducing GEESE as a state correction framework to enhance the reliability and trustworthiness of state estimations, while capitalizing on the efficiency of data-driven machine learning approaches. Regarding the query on the initial dataset $D_0$, this is randomly chosen as explained in line 3 of the algorithm pseudocode. Lastly, the reason for introducing the ML branch in Figure 1 was to emphasize that GEESE is a state correction framework for ML estimators. Following the reviewer's comment, we removed the branch in question, and revised Figure 1, as shown in the pdf attachment.
**2** Thank you for pointing out the typo. We have corrected it in the revised manuscript.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for providing detailed explanations of my questions. The explanation of the difficulty in estimating the distance is very easy to follow. However, it still lacks theoretical soundness, such as a reason why GEESE can reduce the number of corrections and physical evaluations, as well as an estimate of the distance between $\bf{x}$ and $\hat{\bf{x}}$. Also, now that I have read the reviews and the authors have updated Figure 1 (thanks for the update anyway!), I'm a bit unsure whether the term "correction" in the title still makes sense, because I don't see which function in the GEESE algorithm is actually responsible for "correcting" $\bf{x}$, and the proposed method looks more like a method for solving inverse problems in physics estimation. It would be great if the authors could provide insight on the use of "correction."
Regardless of the lack of theoretical soundness, the empirical results are significant and I’d be happy to support acceptance.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for supporting the acceptance of this work. In order to clarify the role of GEESE, we would like to highlight the context of its application. We commence our analysis by considering the use of an ML-based inverse model, which provides an estimate of the system state $\hat{\mathbf{x}}$ (at iteration 1). The estimated state is evaluated by the independent error module $E_p$, as shown in Eq. (1). When the disparity between $\mathbf{x}$ and $\hat{\mathbf{x}}$ exceeds an error tolerance threshold $\epsilon$, the state correction framework of GEESE comes into play, with the aim of correcting the state estimate so that it is within the prescribed error tolerance (within a finite number of iterations). This process is explained in lines 228-232 of the original submission; however, it will be further emphasized in the revised draft. We acknowledge the reviewer's point of view that it is entirely possible for GEESE to be directly used as the inverse solver; however, this would impact its convergence speed (i.e., starting from a random or zero state). Given the computational complexity of iterative optimization, and the deployment requirements of engineering applications (e.g., problem 3: Pulse-width Modulation of 13-level Inverters), an appropriate solution is one that capitalizes on the computational efficiency of ML-based inverse models, and the ad hoc utilization of GEESE for improving the reliability of system state estimation, as explained in lines 27-31 and 43-46. Thus, we feel that the use of the "correction" term in the title of our contribution is well-justified and conveys the context of using GEESE.
We are grateful to the reviewer for appreciating the significance of our empirical results, which resulted from strategically selecting only two states for physical evaluation at each iteration, and from the efficient training of the robust hybrid error model to support the selection process. These designs aim at a reduced number of physical evaluations. Development of a theoretical proof for the reduction in the number of queries to the physical evaluator is planned as part of our future research, and will be highlighted in the revised draft. | Summary: This paper studies the problem of estimating states from observations (an inverse problem). The problem is important since there are many applications, such as engine design for aircraft. The current solution usually uses a black-box simulator to simulate the observations given the states estimated by the model and compares them with the ground-truth observations to adjust the state estimate. This is based on gradient-based methods. However, the current state-of-the-art approach is time-consuming, in that it requires many queries and gradient update steps to find a good solution. As a result, this paper aims to increase the sample efficiency of such a method.
The proposed method is called "Generative Exploitation and Exploration guided by hybrid Surrogate Error (GEESE)". It consists of several parts: (1) a surrogate error model that consists of an ensemble of neural networks to provide a fast estimation of the physical error, and (2) a generative twin state selection algorithm that consists of two neural networks for generating state distributions. The algorithm proceeds as follows: it first proposes an "exploitation" state that aims to search for a near-optimal solution, and then an "exploration" state that aims to explore other states that might lead to a better solution. The proposed method is tested on three real-world inverse problems and shows competitive performance.
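As we understand the summary above, the iteration structure can be sketched as follows. This is a deliberately simplified, hypothetical stand-in (a nearest-neighbour lookup replaces the ensemble surrogate, and random candidate sampling replaces the generative networks), not the authors' implementation:

```python
import random

def true_error(x):
    """Toy stand-in for an expensive simulator query; minimum at x = 0.7."""
    return (x - 0.7) ** 2

def geese_like_loop(eps=1e-2, max_iters=300, seed=0):
    rng = random.Random(seed)
    # small initial batch of state-error pairs to activate surrogate training
    data = [(x, true_error(x)) for x in (rng.random() for _ in range(8))]
    for _ in range(max_iters):
        # "surrogate": predict a state's error from its nearest queried state
        surrogate = lambda x: min(data, key=lambda p: abs(p[0] - x))[1]
        candidates = [rng.random() for _ in range(64)]
        exploit = min(candidates, key=surrogate)  # potentially near-optimal state
        explore = max(candidates, key=surrogate)  # potentially informative state
        for x in (exploit, explore):              # at most two queries per step
            e = true_error(x)                     # the only simulator calls
            if e < eps:
                return x                          # satisfactory state found
            data.append((x, e))                   # grow the training set online
    return None
```

The twin selection is what keeps the simulator budget small: only the exploitation and exploration states are ever evaluated physically, and every evaluation is reused to refine the surrogate.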
Strengths: 1. The writing is very clear, and the notation is great.
2.The method is simple and intuitive, and it shows good results based on Table 1.
Weaknesses: 1. The method requires training data to train the neural networks that estimate the error. Collecting high-quality data would require considerable computational cost and time.
2. The proposed method essentially tries to learn a "system dynamics model" using neural networks and exploits this neural network to do optimization. As a result, you do not need to query the simulator many times.
3. Following on the above point, this means that the quality estimation heavily relies on the performance of the hybrid surrogate model. How many datasets do you use? If I understand correctly, first you "pre-train" the model, and then use that for getting the results in Table 1?
4. There is no explanation of the tasks presented in Table 1. I know it states that they are in the appendix, but reviewers are not required to read the appendix. So it would be nice to include some details about the tasks.
5. The observation dimension in the paper is quite low, compared to some prior works that estimate the state from image observations (https://proceedings.neurips.cc/paper_files/paper/2022/file/6d5e035724687454549b97d6c805dc84-Paper-Conference.pdf, https://arxiv.org/abs/2102.06794).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In Table 1, there is a metric called query times. Could you please report the wall-clock time in seconds? I am curious to see whether these three problems' simulators actually require an hour/minute to run.
2. How will the model perform if the size of the observation is on the order of thousands?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: See the weakness section. Overall, I think this paper needs some further discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the insightful comments. We address the comments by grouping them into two categories: questions and discussion about weaknesses.
### Replies to questions
**1** Thank you for the question. In practice, different types of simulators exist for the same problem. For instance, for problem 1, a simulator mentioned in [A] with high fidelity takes 10.3 seconds $\times$ 309 iterations $\approx$ 53 minutes to obtain a converged simulation, while another simulator in [B] with even higher fidelity takes two weeks to get a converged simulation. Since we aim at a research scenario that attempts to search for a feasible state without querying too much data and without setting a high standard on data quality, we choose to use faster simulators with lower but reasonable fidelity in our experiments. For the three studied problems, each simulator takes no more than five seconds to run. Since the computing time varies when changing simulators, we report the performance in terms of query times. We will include such information in the revised draft for better clarification.
[A] Zhang, Xiaobo, Zhanxue Wang, and Li Zhou. "An integrated modeling approach for variable cycle engine performance analysis." Journal of the Franklin Institute 360.8 (2023): 5549-5563.
[B] Claus, Russell W., and Scott Townsend. "A review of high fidelity, gas turbine engine simulations." Proc. 27th Int. Congress of the Aeronautical Sciences, Nice, France, 19-24 September 2010. Bonn, Germany: International Council of the Aeronautical Sciences, 2010.
**2** Thank you for raising this question. An observation with thousands of dimensions will not affect the calculation much. As we mentioned in lines 118-121, and demonstrated in the original Figure 1, an observation, denoted by $\mathbf{y}$, is used to calculate the error in GEESE and serves as a constant when correcting the state estimation. Its size only affects the error calculation, but not the optimization in state space. For instance, the choice of the distance function used to measure the difference between the ground truth observation $\mathbf{y}$ and the simulated one $\hat{\mathbf{y}}$ using the estimated state $\hat{\mathbf{x}}$ should depend on the observation nature and size.
### Discussion about weaknesses
**1** We agree with the reviewer that training data (state-error pairs in our study) is needed to train the surrogate error model (an ensemble neural network in our study). But our research objective is to reduce the cost of collecting high-quality training data, since we collect data online instead of having a ready dataset. We collect a small amount of data to activate the training, and then incrementally collect as small an amount of new data as possible at each iteration, aiming at effective online learning. To avoid the use of a large pre-collected dataset and the high computational cost of data collection, GEESE uses a twin state selection approach to query the simulator at most twice in each iteration (please refer to Sec. 3.2). In experiments, for the studied problems, we only randomly collect 64 state-error pairs to activate the training (please refer to line 274), and by the time a feasible state is found, the number of new state-error pairs incrementally collected is between 3-63 (see Table 1).
**2** Thank you for this comment. In GEESE, the surrogate model training (a learning process) and the optimal state search/exploration (a search process) are conducted in an alternating manner at every iteration. This incremental learning manner and alternating nature do require accessing the simulator multiple times. Specifically, unlike the conventional surrogate-model-based optimization mentioned in lines 85-98, the surrogate model in GEESE is not pre-trained using a readily collected large dataset, but trained from scratch (please refer to Sec. 3.2), where, at each iteration, the states selected by the search process are used by the learning process to update the surrogate model. In addition, the surrogate model may provide unreliable estimations, which is critical for applications with low error tolerance, as discussed in lines 95-98, so we also need to query the simulator to assess the quality of the corrected state estimation. Therefore, the simulator is necessarily queried multiple times.
**3** We agree with the reviewer that the estimation quality heavily relies on the performance of the hybrid surrogate model. We would like to re-iterate that the surrogate model is not pre-trained via pre-collected datasets. Instead, the data is collected via online querying the simulator. As mentioned before, our training is performed incrementally and aims at using as few training data as possible, which, for the studied problems, include 64 randomly collected state-error pairs and another 3-63 carefully selected pairs.
**4** Thank you for the useful suggestion. A task description will be incorporated after the first sentence of Section 4 in the revised draft.
**5** We are thankful to the reviewer for sharing these two papers. We would like to clarify that our research aims at reducing data usage through more effective online search strategies for optimization supported by a surrogate error model trained from scratch, and also aims at reducing data usage in surrogate model training by selecting more useful states to query. This is in contrast to the approaches reported in these two works, which are based on data-hungry, pre-trained models. As stated in our previous response under Question 2, the observation size is not of concern in the proposed approach. However, the state size matters, and this is discussed in lines 342-346, together with the potential limitations. | Summary: This paper proposes a sample-efficient method for correcting failed states in surrogate inverse problems. This is achieved through error estimation of optimized surrogate states, where some computationally expensive errors are approximated with a neural network, while the simple errors are computed explicitly. Exploitation and exploration networks sample new states to verify errors for, finding feasible states that will not lead to failure. Several science and engineering applications are shown for the proposed method, and it outperforms the baselines for all tasks in query time and failure times.
Strengths: Well written abstract, concise, showing core idea and significance in the field. The proposed method is well explained, and easy to understand. The authors did a thorough comparison with baselines, and ablation study to investigate the different components of the work.
Weaknesses: 1) Minor note: The introduction could be shorter, its purpose is very clear in that it leads to the contributions and methodology of the proposed work in the last paragraph, but there might be too much information leading up to that.
2) The D-dimensional state space first mentioned in Line 151 lacks a practical introduction, the reader might find it hard to imagine what such a state space could look like. Is it a meshed airfoil shape, is it the 3D position and velocity vectors of some object, etc. This is mainly important since in Line 183 the authors explain the generative model for a candidate set of these states, which is then again more difficult to understand without knowing what exactly these states are.
3) The implicit errors should be more thoroughly explained: it is unclear how "expensive to collect" is defined, and, once again, a practical example of what these look like would be appreciated. Most important of all, it is unclear why these should be estimated with a surrogate in the first place. If it is due to computational complexity, an analysis should be done on how slow the computation is and how much of a speedup the surrogate can offer, all the while showing the limitations of the surrogate in terms of generalizability, since the error estimation likely worsens when the explored states are far from existing data.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) In line 99, for the related works paragraph on RL, is it correct to assume that there are no practical benefits of RL for inverse problems as opposed to the black-box optimization mentioned in the paragraph above? At least there do not seem to be any advantages mentioned in this paragraph on RL for inverse problems. Is RL able to find better optima, but just resource-intensive?
2) Why is the exploration network using a generative model at all? Is the state space poorly parametrized, i.e., too high dimensional? Otherwise the maximization in Eq. 10 could just be performed over all states, and not the generative model of the states.
3) Did the authors tune hyperparameters for the baselines as well for a fair comparison?
4) In Line 285 the authors simplified the setting of X=Z, is it correct to assume there is no generative model in this case?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are clear, although understanding why the surrogate for implicit errors is necessary should be elaborated. This was introduced in a too vague manner.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the insightful comments. We address the comments by grouping them into two categories: questions and discussion about weaknesses.
### Replies to questions
**1** Thank you for starting this interesting discussion. From our perspective, the biggest difference between RL and black-box optimization (BBO) is that the emphasis of RL is to train a policy/agent model that can serve all scenarios, while BBO searches for the optimal solution for a given scenario. Such a general policy/agent model requires more iterations for training, which is more data- and time-consuming than BBO. In our experiments, we have compared SVPEN [43], an algorithm inheriting the idea of RL, with BBO (see Table 1). The results show that it does query more data. Although RL is not efficient as an online optimization tool, it can train powerful models, and one can use GEESE to further improve an estimated state obtained by RL. In addition, RL can tackle very complex inverse problems thanks to its policy model, one example being AlphaTensor [A], whereas BBO may not be suitable. We will enrich the related work paragraph on RL in the revised draft.
**2** Thank you for raising this question. We would like to point out that, in most practical cases, the state space is determined by the application, rather than being parameterized as an algorithmic choice. The reason to use a generative model is to enable a maximization over a sampled set of states rather than the whole state space, as we aim at finding a largely disagreed state instead of the most disagreed state. As we empirically observed during algorithm development, the inclusion of the most disagreed state can cause instability in the search process and slow it down. The goal of performing exploration over the whole space is to find the most disagreed state, for which the exploration via disagreement (EVD) approach used in RL [57,58] serves as a tool by investing effort in training the exploration generator. We have conducted an ablation study comparing EVD with our approach, which corresponds to Ablation Study (2) in Table 2. The results show that our approach performs better with fewer queries. We will explain the above motivation for using an exploration generator in the revised draft.
**3** Yes, we did. Specifically, we tuned the kappa parameter of the utility function in Bayesian optimization, the learning rate of SVPEN, and the success rule of ISRES. As for the hyperparameters in GA-style algorithms, such as population size and offspring size, they are fixed to the batch size of GEESE, because it would be unfair for GA to use a much larger population size than GEESE, which would exhaust the query budget in one run. In addition, problem 1 is directly inherited from the paper of SVPEN [43], so we took the default setting of SVPEN for it. We will include this information in Section B of the supplementary material.
**4** Thank you very much for noticing the potential misunderstanding that could be caused by the expression $X=Z$, and we apologize for not being clear in line 285. It means that we set the latent space $Z$ to be the state space $X$, without using any neural network to transform between them. We randomly sample an initial state set $X_{IT}^{0}$ containing $K$ states drawn from the state space. The exploitation state is directly optimized iteratively, e.g., by a gradient descent approach, based on Eq. (7) (a slightly modified expression is shown in our submitted pdf file). The different states in $X_{IT}^{0}$ are used to initialize the optimization and result in $K$ solution states. The one with the smallest objective function value is selected as the exploitation state.
### Discussion about weaknesses
**1** We are happy to shorten the introduction in the revised draft.
**2** When we introduce inverse problems and states at the beginning of the paper, we give some practical examples of states in lines 21-23. But we agree that it will benefit the readers' understanding to use examples when explaining the method. As suggested, we will add the following text after the sentence in line 151: "*An example of such a state space is a $D=2$ dimensional space, where the two dimensions correspond to the temperature and concentration states from spectroscopy. Another example is a state space of $D=11$ dimensions with each state corresponding to a design parameter for aeroengine design, for which we provide more details in Section 4 and Section A of supplementary material.*"
**3** There are two types of implicit error that drive us to construct a surrogate model to estimate them. As pointed out by the reviewer, one type is the expensive error that takes time to calculate. An example of this is a high-fidelity engine simulator in [B] that takes two weeks to reach a converged simulation. But, more importantly, a stronger driver for constructing surrogate models is the existence of non-differentiable errors produced by non-differentiable simulators, which mostly consist of databases and maps. Examples of such simulators include those for spectroscopy [17] and gas turbines [11]. The error gradient information is important for optimizing Eq. (6). Therefore, although there is a risk of obtaining inaccurate estimations, the surrogate model is used to estimate the errors and their gradients. We do aim at improving the estimation accuracy in our design. For instance, we use ensemble learning to improve model robustness, and propose the twin state selection approach to query more useful states in order to improve the data quality for training the surrogate model. We will explain more about implicit errors and why we estimate them in the revised draft, as suggested by the reviewer.
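As a toy illustration of the gradient issue (entirely hypothetical code, not our actual surrogate), the "simulator" below snaps states to a grid, like a database or map lookup would, so its error is piecewise constant and its gradient is zero almost everywhere; a smooth model fitted to a few queried state-error pairs supplies a usable analytic gradient instead:

```python
def table_simulator(x):
    # Non-differentiable stand-in simulator: snaps the state to a 0.5-spaced
    # grid (as a lookup table would), so d(error)/dx is 0 almost everywhere.
    key = round(x / 0.5) * 0.5
    return (key - 2.0) ** 2

def quad_gradient(p0, p1, p2):
    # Fit an exact quadratic through three queried (state, error) pairs and
    # return its analytic derivative, which the raw simulator cannot provide.
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    d01 = (y1 - y0) / (x1 - x0)
    d12 = (y2 - y1) / (x2 - x1)
    a = (d12 - d01) / (x2 - x0)
    b = d01 - a * (x0 + x1)
    return lambda x: 2 * a * x + b

samples = [(x, table_simulator(x)) for x in (-1.0, 0.0, 1.0)]
grad = quad_gradient(*samples)
state = 0.0
for _ in range(50):          # gradient descent on the surrogate, not the simulator
    state -= 0.1 * grad(state)
```

Descending on the fitted surrogate's gradient drives the state towards the simulator's minimum even though the simulator itself offers no gradient signal.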
[A] Fawzi, Alhussein, et al. "Discovering faster matrix multiplication algorithms with reinforcement learning." Nature, 2022.
[B] Claus, Russell W., and Scott Townsend. "A review of high fidelity, gas turbine engine simulations.", 2010.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the rebuttal. The responses do strengthen the paper, knowing what practical applications can be expected, the comparison between RL and BBO, and why implicit errors are very much needed. There are no more questions from my side, the score has been increased since the motivation for this paper is much stronger with this knowledge.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your positive feedback and for increasing the score. | Summary: The paper is very good and interesting - and on a relatively novel topic that has so far recently seen very less attention in the AI community, and is only starting to see more attention with the recent questions on fairness and trust in AI models. The proposed GEESE algorithm is interesting, the foundation of the paper is mathematically sound, and this reviewer appreciates the idea of the authors to use the algorithm to build trust and confidence in traditional AI models.
Strengths: The contribution to the world of physics + AI combined is in itself very useful for the AI community. The major USP of the paper is its clear thought process on how, in complex real-world engineering systems, the error tolerance is often very low, and therefore one obviously cannot trust traditional optimisation techniques in ML. The idea of using physics-informed optimisation is appealing.
Weaknesses: The only issue with the paper is its title - it is very generic. This reviewer would suggest that the authors make the title more specific (e.g. mention ML models in the title, or something similar to show that it pertains to utility in ML applications, as is the spirit of NeurIPS!).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations have been clearly addressed in the conclusions section/discussions of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for appreciating our work and the very useful suggestion. Indeed the title should link better with the machine learning community. The new title will be **"A physics-driven machine learning framework for correcting inverse estimation"**. | Rebuttal 1:
Rebuttal: Dear reviewers,
We are attaching a PDF file here for providing the information that we mentioned in separate rebuttals. Thank you very much.
Pdf: /pdf/f45825ea32f490649393571e1153664333fe2534.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The article presents a new method of solving inverse problems using what they call a grey-box method. They introduce key concepts of using generative methods to reduce the number of objective function invocations in an optimization problem. The presented results are very good and show a lot of promise for the techniques, although there are a few points that are not addressed properly; they are mentioned in the sections below.
Another good aspect of the method is that exploration can continue in parallel, although this is not discussed in the paper. But on this point, I have other concerns that are mentioned in the Questions section.
Strengths: The methodology is explained in detail. The appendices are used adequately to describe the required concepts and proofs in detail. The code is provided in the supplementary material, but I could not execute it because not much guidance was provided on how to run the code.
Weaknesses: The method of explanation is problematic. In lines 126 to 130, the authors have explained the algorithm briefly, but important details are missing, which creates misunderstanding. A better way would have been to introduce a complete sketch of the algorithm, presented as pseudocode, here. Then each part of the algorithm could be explained later, for example, Hybrid Neural Surrogate Error Models and Twin State Selection. Another way could be to first embed these fundamental concepts for the reader and then explain the algorithm in one go. Personally, I would recommend the first approach because it is iterative and reinforcing.
Following are some language-related issues.
- Line 83
A grey-box setting is thus more suitable in practice, where ones do not
[Correction]
A grey-box setting is thus more suitable in practice, where one does not
- Line 141:
followed by the twin state selection for charactering the probability
[Correction]
followed by the twin state selection for characterizing the probability
- Line 257
for evaluatation are explained
[Correction]
for evaluation are explained
- Line 307
general, hile outperforming other methods
[Correction]
general, while outperforming other methods
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Line 207: if you are not training the generator, then it can simply be sampled from an appropriate distribution. How are random weights going to help you when they are not being trained?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have mentioned the limitations of complex systems with a large number of states. They have mentioned that they will work on this problem in the future. They will look into how they can solve the problem of extended training time and data requirements.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the insightful comments. We address the comments by grouping them into two categories: questions and discussion about weaknesses.
### Replies to questions
**Q1** Thank you for asking this interesting question. The purpose of the exploration generator $\mathbf{G}_R$ is to randomly sample a set of candidate states, from which one state is selected by applying the criterion in Eq. (10). This is an improved selection strategy over the exploration via disagreement (EVD) approach commonly used in RL (see references [57,58] in the submitted paper). The difference is that EVD selects from the whole space, while ours selects from the candidate set sampled by $\mathbf{G}_R$ to reduce computational cost. Theoretically, any probability distribution can be used as $\mathbf{G}_R$. To encourage diversity, different distributions are used at different iterations by adopting a parametric probability distribution function and varying its parameters at each iteration. In our design, we use a neural network to formulate such a parametric distribution and vary its parameters by using random network weights. An alternative option is to use a multivariate Gaussian distribution and assign different mean vectors and covariance matrices at different iterations. In our practice, the neural-network-based distribution is sufficient to encourage diversity and select an effective exploration state.
We are happy to enrich the discussion of our exploration generator design in the revised manuscript.
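As a toy illustration of this selection rule (the "ensemble" of perturbed error models, the random linear generator standing in for $\mathbf{G}_R$, and all dimensions are hypothetical, not our implementation), the exploration state is the most disagreed state within a sampled candidate set only, never the whole space:

```python
import random

def make_random_generator(dim, seed):
    # G_R analogue: a fixed, untrained random linear map from latent noise to states.
    rng = random.Random(seed)
    weights = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(dim)]
    def generate(n):
        states = []
        for _ in range(n):
            z = [rng.gauss(0, 1) for _ in range(dim)]
            states.append([sum(w * zi for w, zi in zip(row, z)) for row in weights])
        return states
    return generate

def disagreement(state, ensemble):
    # Variance of the ensemble members' error predictions for one state.
    preds = [model(state) for model in ensemble]
    mean = sum(preds) / len(preds)
    return sum((p - mean) ** 2 for p in preds) / len(preds)

def select_exploration_state(ensemble, dim=3, n_candidates=64, seed=0):
    # Maximise disagreement over a sampled candidate set, not the whole space.
    candidates = make_random_generator(dim, seed)(n_candidates)
    return max(candidates, key=lambda s: disagreement(s, ensemble))

# Toy ensemble: three error models that agree except for a perturbation on s[0].
ensemble = [lambda s, b=b: sum(v * v for v in s) + b * s[0] for b in (-1.0, 0.0, 1.0)]
chosen = select_exploration_state(ensemble)
```

Re-seeding or re-drawing the generator's weights at each iteration would yield a different candidate distribution, which is the diversity mechanism described above.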
### Discussion about weaknesses
**1** Thank you for the valuable suggestion. In lines 126 to 130, we started by explaining the iterative framework commonly used by classical black-box optimization techniques, rather than our proposed algorithm. It would be clearer to include a sketch of our algorithm after this. We also prefer the first approach suggested by the reviewer, and will include an algorithm sketch in the revised manuscript and then expand on each part of the algorithm. We have prepared such a sketch and attached it in the submitted pdf file as Algorithm 1.
**2** Thank you for carefully reading our paper and helping improve the writing. We have made the corrections and polished the English in the revised manuscript.
---
Rebuttal Comment 1.1:
Title: Acknowledgement
Comment: Thank you for clarification.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for the acknowledgement. | null | null | null | null | null | null |
Unsupervised Optical Flow Estimation with Dynamic Timing Representation for Spike Camera | Accept (poster) | Summary: The paper titled "Unsupervised Optical Flow Estimation with Dynamic Timing Representation for Spike Camera" presents a novel approach for unsupervised optical flow estimation using spike cameras. The authors propose a method that leverages dynamic timing representation to estimate optical flow without the need for explicit supervision. They demonstrate the effectiveness of their approach through extensive experiments and comparisons with existing methods.
Strengths: 1. The paper introduces a unique and innovative approach to unsupervised optical flow estimation. By utilizing spike cameras and dynamic timing representation, the authors provide a fresh perspective on tackling this problem. This can potentially open new avenues for research and development in the field.
2. The paper presents the methodology in a clear and concise manner. The authors provide sufficient details and explanations, making it easier for readers to understand the technical aspects of their approach. They also include relevant diagrams and figures to aid in comprehension.
Weaknesses: 1. The paper lacks a thorough discussion of the limitations of the proposed method. While the authors demonstrate its effectiveness through experiments, it would be beneficial to address potential challenges or scenarios where the method might not perform optimally. This would provide a more comprehensive understanding of the approach.
2. The authors primarily focus on benchmark datasets and quantitative evaluations, but they do not provide real-world examples or scenarios where their method could be applied. Including such examples would enhance the practical relevance of the research and highlight its potential impact.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1.How does the proposed method handle occlusions or complex scenes where multiple objects are moving simultaneously? Are there any specific limitations or challenges in such scenarios?
2. How does the dynamic timing representation used in the spike camera contribute to the accuracy of optical flow estimation compared to traditional frame-based approaches?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The proposed method's performance and generalizability might be limited to specific datasets or conditions. It would be valuable to explore its effectiveness on a wider range of datasets, including more challenging and diverse scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your precious time and reviews.
**Q1. Lack a thorough discussion of the limitations.**\
We fix the data length of the spike streams that participate in the calculation of our unsupervised loss function. In scenes with normal brightness, this fixed-length spike data can provide sufficient information for inferring light intensity. However, the intervals between spikes are relatively large in ultra-dark scenes, and light intensity cannot be accurately inferred using spike streams of this fixed data length. We observe relevant examples in real-world data. For example, when estimating the optical flow for a dark car in the shadow of an overpass, we find that different positions of the car are assigned different optical flows. In future work, we plan to perform interval analysis on the input spike data and dynamically select spike data of an appropriate length to participate in the unsupervised loss function, according to the statistical distribution of the interval lengths.
**Q2. The author does not provide real-world examples or scenarios where their method could be applied.**\
Our method is suitable for high-speed motion scenes with normal brightness in the real world. Figure 4 shows the test results of USFlow and the comparison methods on real spike data captured by a spike camera in the real world. The capture device we use is the first-generation spike camera, with a spatial resolution of 250 × 400 and a frequency of 20 kHz. More detailed descriptions of these real spike data are given in the supplementary material. However, in outdoor scenes, obtaining the ground truth of optical flow can only rely on LiDAR, and the ground truth collected in this way has measurement and calibration errors, so we cannot collect accurate ground truth in these real street scenes. We can only demonstrate the performance of our method by visually comparing it with other spike-based methods on these real spike data.
**Q3. How to handle the occlusions or complex scenes where multiple objects are moving simultaneously? Are there any specific limitations or challenges?**\
The spike camera has an ultra-high sampling frequency and can continuously record high-speed motion scenes. In applications of the spike camera, a complex motion is usually divided into multiple small motions along the time dimension. Note that the time span corresponding to each small motion is very small, so the occluded area changes very little within each small time span. The spike data within these short time spans are then used to estimate the corresponding small motions separately. Finally, all these estimated small motions make up the complete complex motion. Therefore, the influence of the occlusion problem is smaller in spike-based methods than in frame-based methods, and we did not design a particular structure for this problem. The proposed method currently does not deal well with occlusion caused by the simultaneous movement of multiple objects; it addresses the main issues of spike-based methods: the effective representation of spike data and the lack of sufficient datasets with ground truth. In future work, we will consider designing a structure to handle the occlusion issue.
**Q4. Compared to frame-based methods, how does the dynamic timing representation used in the spike camera contribute to the accuracy of optical flow estimation?**\
Compared with traditional cameras, the spike camera has an ultra-high sampling frequency and the ability to continuously record high-speed motion with high temporal resolution. Given two adjacent RGB frames, frame-based methods usually model the motion between the two frames as linear motion. However, with the help of the continuous spike data between these two frames, the motion to be estimated can be divided into multiple small motions along the time dimension and estimated separately; these estimated small motions are then used to compose the complete motion. Therefore, nonlinear and non-uniform complex motions can be estimated using spike data. For spike-based methods, how to represent the spike data is the key point, because the structure of spike data differs from that of RGB frames. In spike representation, selecting an appropriate data length is pivotal: a too-long spike stream is not suitable for high-speed regions, since time-offset accumulation introduces more redundant information, while a too-short one cannot represent light intensity precisely with so few binary samples. The proposed dynamic timing representation breaks the limitation of a fixed time window: it dynamically selects spike data of the appropriate length according to the motion speed at each pixel position. This improves the quality of the representation and thus the accuracy of the optical flow.
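As a toy sketch of this idea (the window-growing rule and the numbers below are illustrative assumptions, not the exact representation in the paper), a per-pixel window can be grown backwards in time until it holds enough spikes to estimate intensity, so densely spiking (fast, bright) regions get short windows and sparsely spiking regions get long ones:

```python
def choose_window(spikes, min_len=4, target_spikes=3):
    # Grow the window (counted back from the latest timestep) until it holds
    # enough spikes, capped at the full stream length.
    length = min_len
    while length < len(spikes) and sum(spikes[-length:]) < target_spikes:
        length *= 2
    return min(length, len(spikes))

def estimate_intensity(spikes):
    # Light intensity is proportional to the firing rate over the chosen window.
    length = choose_window(spikes)
    return sum(spikes[-length:]) / length

fast = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]   # dense spikes: a short window suffices
slow = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0]   # sparse spikes: a longer window is needed
```

Here the dense stream is resolved with a window of 4 timesteps, while the sparse stream needs the full 10, mirroring the trade-off between redundancy in high-speed regions and imprecision with too few binary samples.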
**Q5. It would be valuable to explore its effectiveness on a wider range of datasets.**\
The benchmark dataset used for testing in our experiments is the PHM dataset, which is generated by a graphics-based simulator. It is the most common dataset for the spike-based optical flow estimation task. For other tasks, there are some spike datasets in which the spike data are generated from RGB frame sequences by simulating the working mechanism of the spike camera. The frame rates of these RGB frame sequences are much lower than that of the spike camera, so the motion information between two adjacent RGB frames can only be generated by frame-based methods. Thus, there is usually a gap between the real spike data captured by the spike camera and the simulated spike data generated by algorithms that mimic the spike camera's working mechanism. In order to verify the effectiveness of the proposed method on a wider range of data, we use a spike camera to capture a series of real spike data from different angles in the real world, and then test and compare the performance of each method on these real data. The visualization results are shown in Figure 4 of our paper.
---
Rebuttal Comment 1.1:
Title: Thanks for the responses
Comment: I really appreciate the responses. The authors have successfully addressed all of my concerns. While I am happy to raise the score, I have to stress that the paper focuses more on an engineering problem rather than a generic algorithmic challenge. I think this manuscript can be a very strong ICRA/IROS paper but may not provide much value in AI community. With that said, my conclusion is weak acceptance.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: We are glad to address all your concerns successfully. Thank you for your insightful comments and raising your score to borderline acceptance. | Summary: The paper proposes a method for estimating optical flow from event streams captured with a spike camera. The approach is based on unsupervised learning and achieves good results in the comparison to other works in the SotA, including conventional and event-based approaches. Authors collect their own dataset and evaluate on it, specially, in an attempt to show the advantage of finely estimating for low and high-speed regions (although, this is not clearly demonstrated with the current experimental section). Finally, authors also include a section with an ablation study to assess the impact of the different parts of the model (loss terms) and the supervised vs. unsupervised learning approaches.
Strengths: + Authors prepare and collect a dataset with is always a hard but valuable task
+ The unsupervised learning approach provides a valuable alternative for optical flow estimation and shows good results on the evaluated datasets
+ It is unsupervised, which is another good keypoint to bear in mind
+ The ablation study on the loss terms impact and the comparison between supervised vs. unsupervised are very valuable
Weaknesses: - After reading the work, I don't think the authors make clear the differences between spike and event cameras. There is confusion throughout the introduction and methodology, and this makes me (and the reader) hesitate about the relevance of this work.
- The work needs extensive English proofreading, some sentences are hard to understand.
- The conclusions should not be a summary of the work
- In the experimental section, authors compare on PHM and propose their own dataset, but they do not compare on the widely used MVSEC and DSEC datasets (which are the most common for event-based processing). Doing so would improve not only completeness but also the comparison to other works in the state of the art.
- From the paper and since authors propose an artificially generated dataset, it is not clear if 1) the spike camera is a real sensor and 2) in case it is, if they have access to real sequences recorded with it.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: - Regarding the difference between event and spike cameras: authors should make clear what kind of data they are handling. Event cameras produce asynchronous events, and in fact one of the advantages of these cameras is the lack of need for reconstructing the dynamics of the scene, due to their high temporal resolution. This is great e.g. for optical flow estimation. However, it seems that spike cameras produce frames at a very high rate (something closer to a high-speed camera). This could also be interesting, but it carries the burden of analyzing many full frames, which cannot be ignored as a drawback. I really think that, since they are comparing to both conventional and event cameras, this should be clarified as soon as possible in the paper.
In fact, this is mixed up in the paper, for example, at some point authors mention something about spike frames but then, they point out that pixels respond asynchronously (?)
- When comparing to event camera processing, performance should be part of the comparison as well (or latency).
- In the abstract, lns 12-14 do not make reference to the dataset (I think this should be added here as well)
- In page 2, I believe the correct term is luminance (not illuminance)
- In line 68, the authors mention "shows visually impressive performance"; what do you mean? Are those qualitative results? I guess the authors mean that they are visually appealing? Are they only qualitative?
- The sentence in Lns 107-108 seems incomplete
- In Ln 182, are 40 and 100 in ms or in frames?
- In ln. 196 "different numbers of intervals"
- Lsmooth is not defined
- There are some recent works that use SNNs such as "Optical flow estimation from event-based cameras and spiking neural networks", from J. Cuadrado et al. that could help to improve the state of the art section.
- Citations are weirdly formatted. Authors should make sure that they are using the conference format
- In Section 5.3 (unsupervised loss), the authors mention Figure 4 when they mean Table 4.
- What is the contribution of the LA module? According to Fig. 6 it seems it is not helping at all. In such a case, why do the authors keep it? Is there another case in which the LA module does add a significant contribution?
- At the end of Section 5.4, the authors discuss results on the SSES dataset, but they mention earlier that they also use some real-world sequences; where are the results on these sequences?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: The work includes a limitations section that seems adequate
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your precious time and reviews.
**Q1. Authors do not make clear the differences between spike and event camera.**\
Both the spike camera and the event camera have ultra-high frequency, and they can continuously record high-speed motions. However, their working mechanisms are different. In the spike camera, when the accumulated photons reach a preset threshold, a spike is fired. The event camera records light-intensity change information: an event is emitted at a pixel whenever its light-intensity change exceeds a threshold. Spike cameras and event cameras have their own application advantages. In the analysis of high-speed motion scenes, there are indeed some related event-based methods. The method proposed in this paper uses spike data output by the spike camera to estimate optical flow, and it belongs to the family of spike-based methods.
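For illustration, the accumulate-then-fire mechanism described above can be sketched as a toy simulation (this is only an illustrative model, not the actual camera pipeline; the rates, threshold, and time units are made up):

```python
import numpy as np

def simulate_spike_stream(rate, n_steps, threshold=1.0):
    """Toy integrate-and-fire model of a spike camera pixel array.

    rate: 2D array of photon accumulation per time step (arbitrary units).
    Returns a binary spike stream of shape (n_steps, H, W).
    """
    acc = np.zeros_like(rate, dtype=float)
    spikes = np.zeros((n_steps,) + rate.shape, dtype=np.uint8)
    for t in range(n_steps):
        acc += rate                 # accumulate photons
        fired = acc >= threshold    # pixels that reached the preset threshold
        spikes[t][fired] = 1        # fire a spike
        acc[fired] -= threshold     # reset, keeping any residual charge
    return spikes

# Brighter pixels fire proportionally more often over the same window.
bright = simulate_spike_stream(np.full((1, 1), 0.25), n_steps=100).sum()
dark = simulate_spike_stream(np.full((1, 1), 0.0625), n_steps=100).sum()
```

This also makes the contrast with event cameras concrete: here every pixel reports photon accumulation at full resolution, whereas an event camera would only report pixels whose intensity *changes*.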
**Q2. Authors should make clear what kind of data they are handling. Spike camera carries the burden of analyzing a lot of full frames.**\
The proposed method uses the spike data output by the spike camera to estimate optical flow. Spike cameras and event cameras have their own advantages. The event camera records motion information according to relative changes in light intensity and outputs relatively sparse event streams. Thus, the event camera has great application value in tasks such as motion estimation and object tracking. The spike camera outputs binary spike data at each pixel position according to photon accumulation. This full-pixel recording feature makes the spike camera suitable for pixel-dense tasks, for example, high-speed motion scene reconstruction. When using a spike camera to reconstruct a high-speed motion scene, optical flow estimation is usually involved, so there is a need to estimate optical flow based on spike data. Thank you for your suggestions. We will clarify in the paper that the spike camera records scene information at full resolution.
**Q3. In the experimental section, authors compared on PHM and propose their own dataset but they are not comparing to the widely used MVSEC and DSEC datasets (that are the most common for event-based processing).**\
This paper focuses on using spike data which is output by the spike camera to estimate optical flow. The input of the whole model is spike data. The MVSEC and DSEC datasets do not contain spike data, so these two datasets cannot be used as test datasets for the spike-based methods.
**Q4. if the spike camera is a real sensor? In case it is, if they have access to real sequences recorded with it.**\
The spike camera is a real camera. We use the first-generation spike camera (20 kHz, 250 pixels × 400 pixels) to collect a series of real spike data in the real world. In our paper, Figure 4 shows some test results on these real spike data. The SSES dataset is generated by the CARLA simulator. We plan to release all new datasets used in our paper in the future.
**Q5. When comparing to event camera processing, performance should be part of the comparison as well (or latency).**\
It is not the goal of this paper to compare spike camera with event camera in optical flow estimation. Since spike camera applications usually involve motion estimation, this paper aims to study how to estimate more accurate optical flow from spike data which is output by spike camera.
**Q6. In the abstract, lns 12-14 do not make reference to the dataset.**\
We will make reference to PHM dataset here.
**Q7. What does "shows visually impressive performance" mean?**\
It means the visualization results of USFlow on real spike data are significantly better than other methods.
**Q8. The sentence in Lns 107-108 seems incomplete.**\
At the end of the subsection that introduces various types of representation works, Lns 107-108 are a preview of the proposed representation method.
**Q9. In Ln 182, are 40 and 100 ms? or frames?**\
40 and 100 represent 40 spike frames and 100 spike frames, respectively.
**Q10. What does "different numbers of intervals" mean?**\
The interval is defined in Lns 186-188. Multiple intervals are spliced to form a time span, and we use multiple such time spans to infer light intensity. These time spans contain (2k-1) intervals, where k is a hyper-parameter in the experiments.
**Q11. Smooth loss function is not defined.**\
The $L_{\rm smooth}$ used in our method is a basic version, defined as $\mathcal{L}_{\rm smooth}\left( f, f' \right) = \frac{1}{HW} \sum_{\mathbf{x}} \left( \vert \nabla f (\mathbf{x}) \vert + \vert \nabla f' (\mathbf{x}) \vert \right)$, where $f$ and $f'$ are the bidirectional estimated optical flows, $\nabla$ is the difference operator, and $H$ and $W$ are the height and width of the optical flow.
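As a concrete illustration, this loss could be sketched in NumPy as below (our sketch, assuming forward differences for the gradient operator; the exact discretization in the paper may differ):

```python
import numpy as np

def smoothness_loss(f, f_prime):
    """First-order smoothness loss over bidirectional flow fields.

    f, f_prime: arrays of shape (H, W, 2). The gradient magnitude is
    approximated with forward differences along both spatial axes.
    """
    H, W = f.shape[:2]

    def grad_l1(flow):
        dx = np.abs(np.diff(flow, axis=1)).sum()  # horizontal differences
        dy = np.abs(np.diff(flow, axis=0)).sum()  # vertical differences
        return dx + dy

    return (grad_l1(f) + grad_l1(f_prime)) / (H * W)
```

A spatially constant flow field yields zero loss, while any spatial variation in either flow direction is penalized in L1 norm.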
**Q12. There are some recent works that use SNNs. These works could help to improve the state-of-the-art section.**\
The proposed USFlow estimates optical flow based on spike data. Although the output of spike camera and event camera have different data structures, some event-based methods and SNN works may indeed be instructive. We will keep our attention on such methods.
**Q13. What is the contribution of LA module?**\
Although the LA module cannot bring a huge improvement in performance, it can make the network converge faster and the training process more stable.
**Q14. Where are the results on real spike data captured by the spike camera?**\
In Figure 4, we show some results on real spike data. We capture these real spike data by a first-generation spike camera (20 kHz, 250 pixels × 400 pixels) from different angles.
**Q15. Writing advice.**\
Thanks for your concern. We will correct these issues.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: I thank the authors for their rebuttal. Now, I have a clearer picture of the work. I think authors should carefully proofread the work for the final version. Also, and regarding the mentioned SNN-based works, I suggest some recent ones such as:
- Optical flow estimation from event-based cameras and spiking neural networks
- Taming Contrast Maximization for Learning Sequential, Low-latency, Event-based Optical Flow
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: Thank you for your valuable suggestion. We will certainly add discussions of these works in our paper. | Summary: This paper proposes an unsupervised learning framework for spike-based optical flow estimation, built mainly on a spike input representation and a spike loss function. In general, I think the method has merit, but the experiments are unconvincing due to their lack of clarity.
Strengths: 1. Propose a lightweight end-to-end spike data representation with the function of a dynamic time window.
2. Propose a two-stage unsupervised loss to model light intensity in regions with different motion speeds.
Weaknesses: 1. In Fig. 5, there are significant quantization/discontinuities in the optical flow gt of the SSES dataset, concentrated in the ground area. To my knowledge, this is not present in other optical flow datasets. The reliability of the optical flow ground-truth is a concern, please provide an explanation.
2. In Fig. 4 and the supplementary video, the results of these methods using only spikes as input look bad. The major problem is that the results are concentrated only on the textures and are not consistent for the same object, which cannot be called "dense" optical flow. To my knowledge, this phenomenon is not present in unsupervised two-frame optical flow estimation methods. I suggest comparing the outputs of two-frame methods and analyzing the spike-based failure cases.
3. The performance of using spike cameras for optical flow estimation is of concern. L27 claims that since the spike camera can record details of high-speed moving objects, it has enormous potential for estimating more accurate optical flow in high-speed scenes. But compared to state-of-the-art optical flow estimation methods using image cameras, I believe this claim is not convincing. The visualization results of existing spike-based methods, including the authors', are significantly worse than those of existing image-based methods.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. L158, Blindly fusing the output of all layers may impair the learning process. It is desirable to have experiments to support this belief.
4. L170 seems to duplicate L122.
5. I suggest completing the experiments with the RAFT backbone model.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: 1. L132, an advantage of neuromorphic vision is low latency. However, the authors do not present a runtime analysis of the proposed method.
2. See weaknesses above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your precious time and reviews.
**Q1. About the reliability of the optical flow ground-truth.**\
The SSES dataset is a verification dataset containing various corner cases in the autonomous driving field, and it is generated by CARLA. CARLA is an open-source simulator for autonomous driving research, and it can provide the ground truth of optical flow. The proposed SSES dataset is used to verify the effectiveness of the methods in extreme cases. However, the temporal resolution of spike data is ultra-high, which results in relatively small motions between spike frames. Therefore, although the generated ground-truth optical flow field is smooth, it is limited by the quantization accuracy of CARLA, and quantization artifacts appear in the visualization of the ground truth. For high-speed motion scenes, collecting optical flow using LiDAR can circumvent the need for simulators. However, the frequency of LiDAR is relatively low, and the process of calibration and measurement can introduce larger errors into the ground truth.
**Q2. The performance of spike-based optical flow estimation methods is not better than frame-based methods.**\
The frequency of spike cameras can reach up to 40 kHz, so spike cameras can continuously record high-speed motion scenes. The spike camera serves to compensate for the drawbacks of traditional cameras in high-speed motion scenes, where information loss arises from insufficient frame rates. If there is complex nonlinear and non-uniform motion between two adjacent RGB frames, the common frame-based methods usually approximate the complex motion as linear motion. However, with the help of spike data containing continuous motion information, the complex motion can be estimated. Specifically, the spike data containing the information of complex motion is divided into multiple spike data with shorter data lengths. Then corresponding small motions can be estimated from these spike data with shorter data lengths. Finally, all these small motions are merged into a complete complex nonlinear and non-uniform motion.
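The merging of small motions described above amounts to chaining displacement fields. A hypothetical sketch of such flow composition (our illustration, not the authors' implementation; nearest-neighbor sampling is used for simplicity, whereas a real pipeline would interpolate):

```python
import numpy as np

def compose_flows(f01, f12):
    """Chain two small displacement fields into one larger one.

    f01, f12: (H, W, 2) flow fields storing (dx, dy) per pixel.
    The second flow is sampled at the positions reached under the first,
    so the composed flow maps pixels directly from frame 0 to frame 2.
    """
    H, W = f01.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    # Positions each pixel moves to under f01, clipped to the image bounds.
    x1 = np.clip(np.rint(xs + f01[..., 0]).astype(int), 0, W - 1)
    y1 = np.clip(np.rint(ys + f01[..., 1]).astype(int), 0, H - 1)
    return f01 + f12[y1, x1]
```

Repeating this over many short time spans approximates an arbitrarily curved motion trajectory, which a single linear two-frame estimate cannot capture.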
**Q3. Questions about the quality of the results shown in Figure 4.**\
Directly estimating optical flow on spike data using unsupervised learning is a new and difficult topic. It is very challenging to infer light intensity from spike data, because many factors must be considered, such as the brightness of scenes and the motion speed of different objects. Hence, directly estimating optical flow on spike data is more challenging than estimating optical flow on RGB images. The data corresponding to Figure 4 were captured by a spike camera in the real world and contain only spike data. Limited by the frame rate of traditional cameras, we don't have RGB frames corresponding to these real spike data. The scenes in the left column of Figure 4 are reconstructed by a professional reconstruction method, and these reconstructed RGB images contain more noise than real RGB images. When these reconstructed RGB images are used in frame-based methods, the accuracy of estimation is affected. Hence, the real spike data corresponding to Figure 4 cannot be used to compare the performance of spike-based methods and frame-based methods. We reconstruct the scenes only to show readers what contents are in the scenes. In the practical applications of spike cameras, only spike data can be used. As shown in Figure 4, the proposed method shows significant advantages over other spike-based methods, and it also promotes the exploration of unsupervised learning in the spike-based optical flow estimation task. For spike-based unsupervised methods, there is still space for exploration. We may further study this issue in our future works.
**Q4. It is desirable to have experiments to support L158.**\
The "blindly fusing" here means not using layer attention, but simply averaging the output of all layers. In the ablation study on the layer attention module, simply averaging all layer outputs is used to replace the layer attention module. As shown by the orange curve in **Figure 6**, it can be seen that simply averaging all layer outputs does impair the learning process.
**Q5. L170 seems duplicate to L122.**\
In order to start the discussion of unsupervised loss function design, L170 briefly recalls the problem setting and clarifies again that the proposed method doesn’t need ground truth in training. Thank you for your advice, we will simplify the expression.
**Q6. Suggestion to complete the experiments with the RAFT backbone model.**\
We tested USFlow(raft) trained by unsupervised learning on the PHM dataset, and the AEEs are shown below. Compared with Table 2, it can be observed that USFlow(raft) and USFlow(pwc) show similar performance.
| $\Delta$ t | method | Ball | Cook | Dice | Doll | Fan | Hand | Jump | Poker | Top | Mean |
| :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: |
| $\Delta$ t=10 | USFlow(raft) | 0.760 | 1.902 | 1.923 | 0.631 | 0.724 | 2.439 | 0.490 | 1.217 | 2.556 | 1.405 |
| $\Delta$ t=20 | USFlow(raft) | 1.249 | 3.507 | 2.980 | 0.964 | 1.112 | 3.440 | 0.624 | 2.216 | 4.628 | 2.303 |
**Q7. About runtime.**\
During the process of designing the network, one of our goals is to shorten the runtime by designing a lightweight model. Estimating the optical flow between two timestamps on a 3090 Ti GPU, the runtime of our proposed USFlow is 90.6ms, and the runtime of the comparison method SCFlow is 236.9ms.
---
Rebuttal Comment 1.1:
Comment: I have read the authors' responses as well as the comments of other reviewers. I agree that the methods are somewhat contributing, but I still maintain that the evaluation presented in this paper is limited, so I will lower my rating.
I believe that all three of the weaknesses that concerned me have not been definitively addressed by the authors.
For Q1, is the simulated optical flow reliable? The authors mentioned the limited quantization accuracy of CARLA as a problem, so does this problem still exist with the optical flow data stored in the simulated dataset? I think a conclusive answer is needed.
For Q2, the authors start the paper by claiming that the spike camera has enormous potential for estimating more accurate optical flow in high-speed scenes (from line 26). But the results presented in their paper fall short of my expectations for the performance of state-of-the-art optical flow estimation methods. The authors do not answer why the optical flow estimated from spike data is not spatially dense.
For Q3, the reason given by the authors for not comparing the two-frame method is the noise contained in the images, which does not convince me. From Figure 4, I think the image quality is acceptable and I believe that the existing two-frame methods can easily provide better optical flow estimation results.
In addition, I think 90ms is still slow for low latency neuromorphic vision, but of course this is not a problem that should be addressed by every frontier study.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: We will give clearer clarifications to your concerns below.
For the concern of Q1, we checked the unique values of the ground truth, sorted them, and looked at the differences between consecutive unique values. The difference is exactly 1/1024, which points to a 10-bit quantization. Hence, the dataset is reliable, and its accuracy is 1/1024. We will release this dataset for others to check.
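The described check can be reproduced in a few lines (illustrative sketch; the actual SSES files are not available here, so a synthetic 10-bit field stands in for the ground truth):

```python
import numpy as np

def quantization_step(gt_flow):
    """Smallest gap between distinct values in a flow field.

    For data quantized to n bits over a unit range, this gap should
    be exactly 1 / 2**n (1/1024 for 10-bit quantization).
    """
    vals = np.unique(gt_flow)   # np.unique returns sorted unique values
    return np.diff(vals).min()

# A synthetic field covering all 10-bit levels reproduces the 1/1024 step.
field = np.arange(1025) / 1024.0
```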
For the concern of Q2 and Q3, we would change the claim to $\textit{it has enormous potential for estimating optical flow in high-speed scenes}$. We admit some existing two-frame methods can provide better optical flow estimation results.
However, we want to clarify that when the motion to be estimated occurs during an extremely short time, a conventional RGB camera cannot provide the corresponding RGB frames due to its low frequency. In other words, if we cannot obtain data in some high-speed scenarios, it is hard to obtain optical flow. Hence, the spike camera and conventional RGB cameras have different application scenarios.
The spike camera is only a complement for various vision tasks. Since it is not a replacement for the RGB camera, it cannot surpass frame-based work in all scenarios.
In addition, as a counterpart, the event camera also cannot surpass the RGB camera in optical flow estimation if both cameras have data for a certain scenario. However, that does not mean this direction is not worth investigating, and it is indeed a thriving field at the current stage.
The unsupervised fine-tuning dataset is small, which leads to the phenomenon that the estimated optical flow is not spatially dense. Note that the results are spatially dense in Figure 3.
As mentioned by another reviewer, unsupervised spike-based optical flow estimation is a new avenue. However, unsupervised frame-based methods have been studied for a long time, and they have many better training datasets, which boost generalization. We admit that, at the current stage, our method has limitations, but our results are not bad and are supported by other reviewers. In addition, only if this new avenue is encouraged can more robust datasets and more works emerge to promote the development of this field.
Strengths: 1. The paper is well-motivated by introducing the importance of optical flow estimation for high-frequency inputs collected from latest developed sensors like spike cameras. This topic has not been extensively studied so far.
2. The paper is overall clear and well-structured.
3. The proposed method is reasonable. Explanations on each module design are provided.
4. The results are compelling.
Weaknesses: 1. There are still some confusions in the text or figures that need to be clarified. See additional comments below.
2. It will be better to polish the language of the paper before publication. Correct grammatical errors and use more formal language. Avoid informal expressions like "a bunch of" (Line 138), "won't" (Line 140), etc. See additional comments below.
3. The authors are encouraged to share their code and generated datasets for better reproducibility. Such a plan has not been declared in the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. If I understand it correctly, the method uses the predicted optical flow to determine whether each pixel belongs to a low-speed or high-speed region and then uses different ways to approximate light intensities $\tilde{I}$. At the starting iterations of training, the predictions could be totally off, so it is likely that the method will pick a wrong way to approximate intensities. Did you try to tackle this issue?
2. The data used in Fig 4 do not have ground-truth labels, right? Only qualitative evaluation is possible, so this evaluation may be weak.
3. There is also a large part of supervised results, so stating "unsupervised" in the title could be misleading.
4. I am curious how a normal RGB-based optical flow network would perform as a comparison. For example, a naïve approach is to first convert spike inputs to RGB and then apply a state-of-the-art RGB-based optical flow network of similar size (parameters, memory cost). What would the results be like? This could be a baseline to strengthen the motivation of developing optical flow networks specific to spike camera inputs.
Additional comments:
1. Grammatical errors: run-on sentences (Line 103-105, line 140-141, Line 163-165, Line 310-311).
2. Eq 1: The "mod" operator is traditionally only defined for integer division, but the value here is clearly continuous numbers, so it is better to define "mod" in the text in the continuous context.
3. Chapter 3.1: There are some confusions here. (1) I assume $A(\mathbf x, t)$ is continuous, but Line 119 says the output stream is binary, so there is a gap in between that needs explanation. Maybe adding a definition, something like $S_t^N(\mathbf x) = \mathbb I(\sup_{\tau\in[t-\Delta t, t+\Delta t]} A(\mathbf x, \tau) =\theta)$ will make it clear. (2) Line 113-114 say the camera "fires a spike" when the cumulative electrons hit the threshold, so I assume it is asynchronous for each pixel, but then Line 117-118 say the camera "fires a spike" at time T, which is synchronous and periodic. Maybe you could change the expression here to avoid confusion.
4. Line 130: "officially" -> "official".
5. Line 135: "dependence" -> "dependency".
6. Fig 2: Text is too small. It is better to make text size close to that of the main paper so that readers can read the figure clearly without needing to zoom in.
7. Line 147: It is better to add that your convolution is 1D on the time dimension to avoid confusion.
8. Line 182-183: Does your method guarantee $\omega_s + \omega_l = 1$? Eq 6: Does $\sum_k\omega_k=1$? If so, state so explicitly.
9. Eq 7 could be merged into the text. It does not have to be an equation.
10. Line 175-184: What do you mean by "speed"? The frequency of camera readings, or fast motion? Please clarify.
11. Line 207: The notation $\omega_s, \omega_l, \omega_{k=1}, \omega_{k=2}$ is messy. Try not to use both textual abbreviations ("s", "l") and values ($k$) as subscripts for the same letter $\omega$. Use other letters like $\alpha_s, \alpha_l$, $\beta_k$ instead; or, you could do superscripts with parentheses $\omega^{(s)}, \omega^{(l)}, \omega_k$.
12. Line 209: "approximate" -> "approximated".
13. Line 212: define or cite "Charbonnier loss"; Line 213: define or cite "smoothness loss" specifically. There are many ways to define "smoothness".
14. Table 1 and 2: It is better to add citations into the table, so we know which data are generated from your experiments and which data are cited from other papers. The current table assumes that all data are credited to yourself.
15. Table 1 and 2: I recommend adding a vertical line between the "Top" and "Mean" columns, so we know "Top" is the name of the scene instead of the top error across all scenes.
16. Line 277: "through" -> "though".
17. Line 284: "Figure 4" -> "Table 4".
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your precious time and reviews.
**Q1. Do authors have plan to share their code and generated datasets ?**\
We will release the code, models and datasets.
**Q2. The problem about the network may choose wrong way to approximate light intensity at the starting iterations of training.**\
In high-speed regions, the light intensity estimated by the method which is suitable for low-speed regions will be affected by motion blur. In low-speed regions, the light intensity estimated by the method which is suitable for high-speed regions will contain noise. While the quality of these light intensities estimated in the wrong ways is not good, they are still sufficient for initiating the training of the network.
**Q3. The real spike streams output by spike camera lack ground truth.**\
There are two approaches to obtaining the ground truth of optical flow from real-world data: 1. The first method is used for indoor scenes. The optical flow can be obtained with the assistance of ultraviolet (UV) light. 2. The second method is used for outdoor scenes. It uses LiDAR to collect optical flow. However, this method will introduce calibration and measurement errors, and the collected optical flow field is sparse. Obtaining the ground truth of optical flow in outdoor scenes is challenging. Therefore, we show the performance of USFlow by comparing the visualization results of USFlow and other methods on real spike data.
**Q4. The paper shows some supervised results, so stating "unsupervised" in the title could be misleading.**\
In Table 1, the comparative methods are all supervised methods. Therefore, in Table 1, we also employ supervised training in order to compare the performance of the representation module. The method presented in Table 1 is not the final method proposed in this paper. After validating the effectiveness of our representation module in Table 1, we proceed to Table 2 where our USFlow is trained by unsupervised learning. Our proposed method includes a novel unsupervised loss function, and the USFlow trained using unsupervised learning is our proposed method in this paper.
**Q5. First convert spike inputs to RGB and then apply SOTA RGB-based optical flow estimation networks. What would the results be like?**\
Reconstructing spike data into RGB images is currently a developing research topic, involving intricate considerations. Simple principle-based methods for reconstructing RGB images from spike data often have issues with noise and blurriness. Even if state-of-the-art frame-based optical flow estimation methods use such poor-quality reconstructed RGB images as input, the accuracy of the estimated optical flow remains limited. A well-designed reconstruction method is highly complex; for example, Zhao's Spk2imgnet needs to consider many complex factors, and its reconstruction process involves a motion analysis module. First reconstructing the spike data into RGB images and then estimating the optical flow is one kind of solution, with large space for exploration. Our proposed method, which directly estimates optical flow on spike data, belongs to another kind of solution.
**Q6. In L175 and L184, what do you mean by "speed"?**\
The "speed" at here means motion speed.
**Q7. "Charbonnier loss" and "smoothness loss".**\
The Charbonnier loss we use is the basic version: $\rho(x)=(x^2+\epsilon^2)^r$. The smoothness loss we use is also a simple version, defined as $\mathcal{L}_{\rm smooth}\left( f, f' \right) = \frac{1}{HW} \sum_{\mathbf{x}} \left( \vert \nabla f (\mathbf{x}) \vert + \vert \nabla f' (\mathbf{x}) \vert \right)$, where $f$ and $f'$ are the bidirectional estimated optical flows, $\nabla$ is the difference operator, and $H$ and $W$ are the height and width of the optical flow. We will add definitions of both.
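For concreteness, the basic Charbonnier penalty can be written as follows (a sketch; the $\epsilon$ and $r$ values here are typical choices, not necessarily the ones used in the paper):

```python
import numpy as np

def charbonnier(x, eps=1e-3, r=0.5):
    """Basic Charbonnier penalty: rho(x) = (x**2 + eps**2) ** r.

    For r = 0.5 this is a smooth approximation of |x| that remains
    differentiable at x = 0, unlike the plain L1 penalty.
    """
    return (np.asarray(x) ** 2 + eps ** 2) ** r
```

At zero the penalty equals roughly $\epsilon$, and for large residuals it behaves like $|x|$, which is why it is a common robust photometric loss in unsupervised flow training.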
**Q8. Additional comments.**\
Thank you for your concern. We will correct grammatical errors, add citations in tables and clarify confusion.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I don't have more questions for now.
---
Reply to Comment 1.1.1:
Comment: Thank you for your approval of our work. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper introduces a method for optical flow estimation for spike cameras. Because of the high temporal resolution of the camera, a preprocessing step based on temporal dilated convolution and attention layers is used to automatically select the best temporal scale for a given sequence.
The authors also introduce an unsupervised loss based on the reconstruction of intensity values from spike data, combined with the photometric loss.
Strengths: The main strengths of the paper are the temporal dilated convolution preprocessing, which allows for automatic temporal scale selection, and the unsupervised loss.
The description of the method is clear and the experimental section is complete, including ablation studies and comparisons with previous works.
Weaknesses: The paper does not address the question of latency and computational time, which are key aspects of the spike camera.
Also, since frames are available in the simulated datasets, it would have been interesting to compare the accuracy of the proposed method on standard RGB frames, to quantify the benefits of using a high-temporal-resolution sensor for optical flow estimation.
Finally, for people not familiar with the spike camera, a more complete introduction to this type of sensor would be beneficial. There is no reference in the paper to any publications explaining what a spike camera is, its HW implementation, and its characteristics.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: please refer to the "Weaknesses" section
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors could add a general discussion of advantages and limitations of the spike camera and of optical flow from spike cameras.
For example, the challenges related to data rate, computational time and power consumption of these devices.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your precious time and reviews.
**Q1. The latency and computational time of the proposed method.**\
In USFlow, to estimate the optical flow from timestamp t0 to timestamp t1, a spike stream should be collected containing (100 + dt) spike frames from t0, where dt represents the number of spike frames between t0 and t1. For instance, using a first-generation spike camera with a frequency of 20 kHz, when dt is set to 10, the necessary time to collect the spike stream for optical flow estimation would be approximately 5.5 ms. During inference, when utilizing a 3090 Ti GPU to estimate the optical flow between two timestamps, the computational time of our USFlow is 90.6 ms, whereas the computational time of the comparative method SCFlow is 236.9 ms.
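The collection-time figure above can be checked with one line of arithmetic (all numbers are taken from the reply itself):

```python
# (100 + dt) spike frames must be collected; a first-generation spike
# camera runs at 20 kHz, and the example above uses dt = 10.
freq_hz = 20_000
dt = 10
frames_needed = 100 + dt
collection_time_ms = frames_needed / freq_hz * 1_000
print(collection_time_ms)  # 5.5 (ms), matching the figure in the reply
```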
**Q2. Comparing the accuracy of the proposed method on standard RGB frames to show the benefits of using a high temporal resolution sensor for optical flow estimation.**\
Typical RGB cameras usually operate at frequencies between 30 Hz and 120 Hz. The spike stream output by the spike camera takes advantage of its ultra-high temporal resolution, which can compensate for the deficiencies of low-frame-rate RGB frame sequences in the optical flow estimation task. Taking a 120 Hz RGB camera as an example, within a time span of (1/120) s, the RGB camera can only capture two RGB frames. Estimating nonlinear, intricate motion between these two adjacent RGB frames with frame-based methods is often challenging. However, the spike stream, with its inherent advantage of extremely high temporal resolution, is capable of continuously recording dynamic processes. Hence, the spike stream can be used to estimate nonlinear and non-uniform motion. Specifically, the (1/120) s can be divided into multiple short time spans. The motion within each short time span can be estimated on the corresponding spike stream. Eventually, by combining the motion information within these short time spans, the complete intricate motion within the (1/120) s can be estimated.
**Q3. For people not familiar with the spike camera, a more complete introduction to this type of sensor would be beneficial.**\
In Section 3.1, we introduce the fundamental working mechanism of the spike camera. Some previous works, such as [a] and [b], provide a more comprehensive introduction to the spike camera. Thank you for your suggestions; we will cite these references containing detailed introductions of the spike camera.\
[a] J. Zhao, et al. Reconstructing Clear Image for High-Speed Motion Scene With a Retina-Inspired Spike Camera. TCI 2022.\
[b] T. Huang, et al. 1000x Faster Camera and Machine Vision with Ordinary Devices. Engineering 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answers. I still believe that the points raised in the initial review are relevant and their discussion should be added to the article.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: Thank you for your suggestions. The points you raised in the initial review are important, and we will add these discussions to the paper as you suggested. | null | null | null | null | null | null |
Multi-Agent Learning with Heterogeneous Linear Contextual Bandits | Accept (poster) | Summary: This paper studies the heterogeneous multi-agent contextual bandit problems. They propose the H-LinUCB algorithm, where agents coordinate at the beginning stage by pooling and synchronizing the data, until a certain time determined by the dissimilarity. They show the algorithm is optimal when the tasks are highly similar or highly dissimilar.
Strengths: The paper is well-written and organized in general. Multi-agent learning, especially in the heterogeneous setting, is of great interest in the literature. The scenario considered is hard, as they don't add special structures for the parameters other than assuming the dissimilarity is known. Yet the results are strong and seem to be theoretically rigorous. Moreover, this work improves DISLINUCB by discarding coordination when the dissimilarity level is too high and the coordination hurts the regret.
Weaknesses: The experiment part is relatively weak to me. Currently, the authors consider three settings of different levels of dissimilarity to highlight the advantage of the proposed H-LINUCB in achieving good regret in all settings. However, there isn't a setting where H-LINUCB dominates the other two algorithms. This makes H-LINUCB seem more like an interpolation of the fully communicating and fully independent methods. I hope the authors could at least show some experiments where H-LINUCB uniquely succeeds or achieves the best regret, or discuss why this is impossible or hard.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Although both plots (c) and (d) correspond to regime (ii), H-LINUCB performs differently in the two settings. It resembles Ind-OFUL in (c) but not in (d). What's the key component that leads to the different behavior? Is there a threshold for $\epsilon$ above which H-LINUCB could be shown to copy Ind-OFUL?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I don't see any limitations or potential negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your feedback. We address your concern about the experiments as follows:
> The experiment part is relatively weak to me. Currently, the authors consider three settings of different levels of dissimilarity to highlight the advantage of the proposed H-LINUCB in achieving good regret in all settings. However, there isn't a setting where H-LINUCB dominates the other two algorithms. This makes H-LINUCB seem more like an interpolation of the fully communicating and fully independent methods. I hope the authors could at least show some experiments where H-LINUCB uniquely succeeds or achieves the best regret, or discuss why this is impossible or hard.
We believe that the experiments effectively demonstrate our objectives. Our goal was to show that H-LinUCB essentially behaves as a "best of both worlds" approach. Specifically, our experiment aims to show that H-LinUCB can improve regret when there are opportunities for collaboration, and does not collaborate if it gains no benefit. Therefore, it behaves as DisLinUCB when it is beneficial to collaborate and behaves as Ind-OFUL otherwise.
In contrast, Ind-OFUL is unable to leverage the similarity structure between bandits to learn faster, whereas DisLinUCB incurs linear regret in the case where collaboration only hurts performance. Thus, DisLinUCB and Ind-OFUL both fail in different ways.
Regarding your question on the interpolation, for small $\epsilon$, we would see that H-LinUCB performs worse than DisLinUCB due to the artifact of the extra term $\alpha \epsilon Mt$ to handle the dissimilarity (Lemma 4.1).
> Although both plots (c) and (d) correspond to regime (ii), H-LINUCB performs differently in the two settings. It resembles Ind-OFUL in (c) but not in (d). What's the key component that leads to the different behavior?
The reason is that $\epsilon$ increases significantly from plots (c) to (d). In plot (d), H-LinUCB only collaborates for a small number of rounds, and switches to independent learning; the plot therefore looks nearly identical to Ind-OFUL.
> Is there a threshold for $\epsilon$ above which H-LINUCB could be shown to copy Ind-OFUL?
From a theoretical viewpoint, for $\epsilon=1$, H-LinUCB will behave like Ind-OFUL. However, experimentally, it is not clear what that threshold is; it would vary from experiment to experiment due to the randomly generated set of parameters. For some large enough $\epsilon$ (which could be smaller than 1, e.g., plot (d)), H-LinUCB only allows agents to collaborate for a small number of rounds and then switches to learning independently, so the regret of H-LinUCB is nearly identical to that of Ind-OFUL by just looking at the plot.
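As a concrete illustration of this switching behavior, here is a sketch of the collaboration horizon $\tau = \min(\lfloor\epsilon^{-2}\rfloor, T)$ stated in the paper's summary; the $\epsilon = 0$ branch and the example values are our own illustrative conventions (powers of two are used so the floor is exact in floating point).

```python
import math

def collaboration_rounds(eps, T):
    """Rounds of data pooling before switching to independent learning,
    following tau = min(floor(eps^-2), T)."""
    if eps == 0:
        return T  # identical parameters: collaborate over the whole horizon
    return min(math.floor(eps ** -2), T)

T = 10_000
print(collaboration_rounds(0.0, T))      # 10000: behaves like DisLinUCB
print(collaboration_rounds(0.5, T))      # 4 rounds of pooling, then Ind-OFUL-like
print(collaboration_rounds(1.0, T))      # 1 round: essentially independent from the start
```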
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns! I'll keep my original score. | Summary: This paper considers a multi-agent linear contextual bandit (with central server coordination) setting where $M$ agents each has (possibly different) unknown but fixed $d$-dimensional bandit parameter $\theta_m$, and the $\ell_2$-norm bound on the bandit parameters $\epsilon$ (i.e., $||\theta_i - \theta_j||_2 \leq \epsilon, \forall i, j \in [M]$) is known to the agents. The authors propose a UCB-based algorithm that collaborates for the first $\epsilon^{-2}$ time slots and then learns individually. The authors also present the related regret upper bound and lower bound and run experiments on synthetic data.
Strengths: - Heterogeneous collaborative multi-agent setting is an upcoming area of interest, so this paper is likely to be relevant to the community.
- The idea of adjusting the length of collaboration by the dissimilarity (i.e., $\tau = \min(\lfloor\epsilon^{-2}\rfloor, T)$) is neat and intuitive.
- The code used for the experiments is also included and the results should be easily replicable.
Weaknesses: - The introduction of the proposed algorithm (Section 4.1) is not easy to follow and understand, many new variables are used without explanations
- The notion of heterogeneity and the part of the algorithm handling collaboration with dissimilarity seem to be straightforward extensions of prior works; the new technical challenges in analyzing the proposed model and algorithm are not clearly discussed
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - How does the algorithm perform if the given $\epsilon$ is off from the real $\epsilon$ of the environment?
- How does the result in this paper generalize to linear (non-contextual) bandit?
- How important is the dissimilarity defined on $\ell_2$-norm? Why didn't the authors consider $\ell_\infty$ norm as prior work?
- (minor point) It is not clear what kind of norm the one in line 42 is.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors discuss some limitations (that $\epsilon$ is assumed known and that the proposed algorithm is not optimal for some values of $\epsilon$) in the conclusion section. In my opinion, the primary limitation of this work is the practicality of the proposed algorithm as $\epsilon$ is often not precisely known in real application.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your suggestion. We address your concerns as follow:
> The introduction of the proposed algorithm (Section 4.1) is not easy to follow and understand, many new variables are used without explanations
In line 196, we define these variables as sufficient statistics. Specifically, $V$ is the gram matrix, and $b$ is the sum of the product of reward and context vector, which we use to form a regularized least squares solution and construct the confidence bound for our linear bandit problem.
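As a sketch of how these sufficient statistics are used (not the exact algorithm), $V$ and $b$ yield the regularized least-squares estimate $\hat\theta = (V + \lambda I)^{-1} b$; the data-generating details below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lam = 4, 500, 1.0
theta_star = rng.normal(size=d)
X = rng.normal(size=(n, d))                      # context (arm) vectors
y = X @ theta_star + 0.1 * rng.normal(size=n)    # noisy linear rewards

V = X.T @ X      # gram matrix of contexts
b = X.T @ y      # sum of reward-weighted contexts

# Regularized least-squares estimate theta_hat = (V + lam*I)^{-1} b
theta_hat = np.linalg.solve(V + lam * np.eye(d), b)
err = np.linalg.norm(theta_hat - theta_star)
print(err < 0.1)  # estimate is close to the true parameter
```

Confidence bounds in LinUCB-style algorithms are then ellipsoids around `theta_hat` shaped by `V`.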
> How does the algorithm perform if the given $\epsilon$ is off from the real $\epsilon$ of the environment?
As we discussed in our submission, the current scope of our paper is the case where $\epsilon$ is known, and we leave the case of unknown $\epsilon$ to future work.
> How does the result in this paper generalize to linear (non-contextual) bandit?
Our setting is linear bandit. The arms in the decision set are also referred to as the contexts in the literature.
> How important is the dissimilarity defined on $\ell_2$-norm? Why didn't the authors consider $\ell_\infty$-norm as prior work?
It is not important. There is no particular reason why we chose $\ell_2$ over $\ell_\infty$; it is just natural to use the $\ell_2$-norm to define the distance between two vectors. When using the $\ell_\infty$-norm, the results would change by a factor of $\sqrt{d}$. Even if we used the $\ell_\infty$-norm, our result would still not be comparable to the prior work of Wang et al. 2021, because (1) $\epsilon$ hides inside the subpar-arm set, and (2) it is not possible for us to define such a subpar arm set in our setting when the arm sets can be generated arbitrarily.
> (minor point) It is not clear what kind of norm the one in line 42 is.
It is $\ell_2$-norm. We'll add the subscript in the revision.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response to my questions and concerns.
I would encourage the authors to polish Section 4.1 and make the definitions of variables clearer.
I maintain my recommendation for this paper.
---
Reply to Comment 1.1.1:
Comment: We thank you for the suggestions. We will ensure that the definitions of variables are clearer in the next revision. | Summary: This paper studies multi-agent linear stochastic bandit under a model of heterogeneity of the agents. Concretely, the model assumes a centralized controller who can communicate information to and from all of the M different agents in the network. At each time, every agent in the network, will play an arm from a set of possible arms (that are adversarially chosen). The choice of which arm an agent plays is a function of only the past information at that agent -- namely the agent's past arms pulled and rewards observed and the information communicated by the central controller to the agent. Subsequently, the agent observes a noisy stochastic reward where the noise is independent across agents and time.
To facilitate group learning, the paper assumes structure among the heterogeneity. Concretely, the paper assumes that the unknown parameter for any pair of agents is at-most \epsilon away in L2 distance. This modeling assumes that although the agents are different, nonetheless have a latent structure which can enable faster learning.
Under this structure, the paper proposes an algorithm such that the sum regret of all agents scale sub-linearly in M and T. The paper also gives lower bound evidencing that their upper bounds are order wise tight.
Strengths: Well articulated problem, algorithm, intuition and the key steps of the analysis.
Weaknesses: The main weakness is the positioning and contributions of the paper in comparison to "Multi-agent heterogeneous stochastic linear bandits, Ghosh et al., ECML-PKDD 2022", which the submission cites.
Specifically, the submission claims that the model of heterogeneity studied is different from Ghosh et al. (Line 280 of the submission). However, a cursory examination indicates that the assumption of the submission is stronger than Ghosh et al.'s, in the sense that Defn 3.1 of the submission implies that the average parameter is close to any agent's parameter by \epsilon, which is the assumption in Ghosh et al. Does this imply that the assumptions in the present paper with regard to heterogeneity are stronger than Ghosh et al.'s?
A somewhat less satisfying feature of the algorithm is also the known \epsilon, which seems not to be needed by Ghosh et al.?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The main question for the authors is to elucidate their contributions in comparison to Ghosh et al.
Specifically,
1) The authors make a stronger assumption on the heterogeneity (Defn 3.1) compared to Ghosh et al. Is this stronger assumption on heterogeneity what enables the authors to allow for an adversarially generated set of arms?
2) The algorithm of Ghosh et al. does not need \epsilon, while the present paper assumes knowledge of \epsilon. Can the authors comment on the necessity of this information?
3) In the case of stochastic arms, is the weaker result of the submission compared to Ghosh et al. the price needed to pay for having an adversarial arm setup?
**Post Rebuttal:** Thank you for your response and I have updated my score.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your insightful feedback and questions. We hope to clarify our contributions compared to Ghosh et al. as follows:
Besides the stochastic assumption, Ghosh et al. also make a **strong distributional assumption** on how the arm is being generated. Essentially, they require a lower bound on the smallest eigenvalue of the covariance matrix of context vectors. Formally, they need that $E_{t-1}[\beta_{i,t}\beta_{i,t}^\top] \geq \rho_{\min} I$ ; see Equation~1 of Ghosh et al.
This assumption is crucial for them as it allows them to estimate the average parameter and motivates the algorithm design.
We do not make any such distributional assumption since it is often unrealistic in practical settings. On the contrary, we consider an adversarial setting where the arm sets at each round can be selected by an (oblivious) adversary and the size of the arm sets can even be infinite. Our setting renders the algorithmic design and guarantees of Ghosh et al. inapplicable as far as we can tell, thus requiring a completely new treatment.
> Specifically, the submission claims that the model of heterogeneity studied is different from Ghosh et al. (Line 280 of the submission). However, a cursory examination indicates that the assumption of the submission is stronger than Ghosh et al.'s, in the sense that Defn 3.1 of the submission implies that the average parameter is close to any agent's parameter by \epsilon, which is the assumption in Ghosh et al. Does this imply that the assumptions in the present paper with regard to heterogeneity are stronger than Ghosh et al.'s?
Yes, our notion of heterogeneity is slightly stronger. However, we could work with the same heterogeneity assumption as in Ghosh et al.; this is simply a matter of presentation. In particular, note that for their personalized framework, Ghosh et al. give a regret bound for individual agents; it is, therefore, natural for them to have a more fine-grained notion of heterogeneity by defining the dissimilarity between the individual parameters and the average parameter. However, our work emphasizes the collaboration among the agents, and we are interested in the _group_ regret instead of individual regret. Therefore, the notion of heterogeneity we propose is a natural way to capture the heterogeneity among the agents. Again, these differences are merely cosmetic. We could work with the same assumption as Ghosh et al. and present bounds on the individual agents' regret.
> The authors make a stronger assumption on the heterogeneity (Defn 3.1) compared to Ghosh et al. Is this stronger assumption on heterogeneity what enables the authors to allow for an adversarially generated set of arms?
No, as we say in the remarks above, the difference between the two notions of heterogeneity is not significant. We do not rely on the heterogeneity assumption to allow for an adversarially generated set of arms.
> The algorithm of Ghosh et al. does not need \epsilon, while the present paper assumes knowledge of \epsilon. Can the authors comment on the necessity of this information?
The knowledge of (an upper bound on) $\epsilon$ facilitates our algorithmic design in constructing the confidence bound and controlling the collaboration between tasks. Adapting to the unknown $\epsilon$ in an adversarial setting that we consider here would require additional work. It is important to note that the adaptation strategy of Ghosh et al. (to unknown $\epsilon$) _fails_ in an adversarial setting.
> In the case of stochastic arms, is the weaker result of the submission compared to Ghosh et al. the price needed to pay for having an adversarial arm setup?
Yes. However, we would like to reiterate that the two settings are simply _incomparable_. Again, Ghosh et al. need a strong distributional assumption to obtain their bounds. In fact, the $O(T^{1 /4})$ bound that they achieve (for $\epsilon = 0$) is not even possible information-theoretically in an adversarial setting. The minimax lower bound in such settings is $O(\sqrt{T})$ which we match.
---
Rebuttal Comment 1.1:
Title: Thank you for the response. Increasing my score
Comment: Thank you for this comparison. I would encourage the authors to update Remark 4.1 of the draft with the response provided here -- namely that the present paper improves on the restrictive stochastic-arm assumption, but pays the price for doing so by needing \epsilon and having a worse regret in the stochastic case compared to Ghosh et al. However, unlike Ghosh et al., the present paper yields sub-linear regret bounds in M and T even under adversarially generated arms.
PS: I also increased my score from 4 --> 6.
---
Reply to Comment 1.1.1:
Comment: We thank you for the valuable feedback and re-evaluating the score. We will integrate the discussion in our next revision. | Summary: This work considers a multi-agent linear contextual bandit model with heterogeneity among the agents. The authors propose a novel algorithm called H-LinUCB to minimize the group cumulative regret when agents communicate through a central server. When the level of heterogeneity is known to the agents, they show that H-LinUCB is provably optimal in regimes when the agents are highly similar or dissimilar.
Strengths: Originality:
------------
- The authors introduced the notion of heterogeneity as an $\epsilon$-MALCB problem, inspired by (Wang et al. (2021))
- The model considered in this paper is well-motivated
- Related work covered in detail
Quality:
-----------
- The theoretical analysis seems to be concrete
- Numerical simulations are provided to validate the theoretical results
Clarity:
-----------
- The paper is well-written and easy to follow, with the exception of some proofs in the supplementary material
Significance:
-----------
- The proposed notion of heterogeneity could be of interest to researchers, particularly for the case when the level of heterogeneity is unknown
Weaknesses: - The algorithmic ideas and the regret analysis are known extensions of the work in (He et al. (2022), Wang et al. (2020), Dubey et al. (2020))
- The lower bound proof is extended from the ideas in (Lattimore and Csaba (2020))
- The algorithm depends on the assumption that the level of heterogeneity $\epsilon$ is known, which need not be true in realistic scenarios
- The proof of Theorem 4.1 in Appendix A.2 is not easy to follow, because the steps aren't explained in detail. Even though the regret analysis is extended from (He et al. (2022), Wang et al. (2020), Dubey et al. (2020)), I believe a paper should be self-contained.
- The lower bound seems to be loose, because it is off from the regret upper bound by a factor of $d$ in one of the terms. Furthermore, it is not applicable in the regime $\epsilon \in (1/\sqrt{MT}, d/\sqrt{T})$
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Some of the notation isn't defined in Appendix A.2. What is $V_p$ in line 498? How is the condition on the ratio of determinants obtained below line 499? I was hoping if the authors could re-write the proof of Theorem 4.1 so that it is easy to follow
----
Post rebuttal: I have raised the score from 5 to 6.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: I don't see any potential negative societal impact of their work
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your time and effort to review our submission. We hope to address your concerns as follows.
> The proof of Theorem 4.1 in Appendix A.2 is not easy to follow, because the steps aren't explained in detail. Even though the regret analysis is extended from (He et al. (2022), Wang et al. (2020), Dubey et al. (2020)), I believe a paper should be self-contained.
> Some of the notation isn't defined in Appendix A.2. What is $V_p$ in line 498?
On line 196, we denote the matrix $V_{sync}$ as a sufficient statistic needed to form a least-squares estimate. Essentially, the matrix $V_{sync}$ contains all the samples after the synchronization (line 20 of Algorithm 1). In line 495, we have $P$ as the total number of epochs where synchronization happens. Here, $V_P$ is a matrix that contains all the samples of the agents at the last synchronization epoch. We apologize for not clearly introducing this bit of notation; we will fix that in the next revision.
> How is the condition on the ratio of determinants obtained below line 499? I was hoping if the authors could re-write the proof of Theorem 4.1 so that it is easy to follow
Sure, we will be happy to provide those details. Essentially,
by telescoping, we have that $\log \frac{\det(V_P)}{\det(V_0)}=\log \frac{\det(V_P) \cdot \det(V_{P-1}) \cdots \det(V_{1})}{\det(V_{P-1}) \cdots \det(V_{1}) \cdot \det(V_0)}=\sum_{p=1}^P\log \frac{\det(V_p)}{\det (V_{p-1})}$.
Furthermore, we have the following condition (which we state above line 499):
$\log \frac{\det(V_P)}{\det(V_0)}\leq d \log ( 1+ \frac{M\tau }{\lambda d } )$
Therefore, we have at most $R = \lceil d \log ( 1+ \frac{M\tau }{\lambda d } )\rceil$ epochs such that $\log \frac{\det(V_p)}{\det(V_{p-1})}\geq 1$; otherwise, we have $\log \frac{\det(V_P)}{\det(V_0)}> d \log ( 1+ \frac{M\tau }{\lambda d } )$, which violates the condition above. This implies that for all but $R$ epochs, we have that
$0 \leq \log\frac{\det(V_p)}{\det(V_{p-1})}\leq 1$, which is $1 \leq \frac{\det(V_p)}{\det(V_{p-1})}\leq 2$.
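The counting argument can also be checked numerically. The sketch below builds cumulative regularized Gram matrices from random unit-norm contexts (our own illustrative setup, using the natural logarithm via `slogdet`; other bases change the count only by a constant factor), verifies the telescoping identity, and confirms that the number of epochs whose log-determinant ratio is at least 1 cannot exceed the total log-ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
d, lam, P = 5, 1.0, 30

V = lam * np.eye(d)                      # V_0 = lambda * I
logdet0 = np.linalg.slogdet(V)[1]
prev, log_ratios = logdet0, []
for _ in range(P):                       # one epoch = a batch of new samples
    X = rng.normal(size=(int(rng.integers(1, 8)), d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    V += X.T @ X                         # V_p = V_{p-1} + sum of x x^T
    cur = np.linalg.slogdet(V)[1]
    log_ratios.append(cur - prev)        # log det(V_p)/det(V_{p-1}) >= 0
    prev = cur

total = prev - logdet0                   # log det(V_P)/det(V_0)
assert np.isclose(sum(log_ratios), total)   # telescoping identity
big = sum(r >= 1.0 for r in log_ratios)     # epochs with log-ratio >= 1
assert big <= total                         # at most `total` such epochs
```

Since every per-epoch log-ratio is nonnegative, any epoch contributing at least 1 forces the total up by at least 1, which is exactly the counting step in the proof.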
We thank you for your suggestion. We will add the explanations to the proof of Theorem 4.1 in our revision.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: I thank the authors for answering my questions. A minor comment: it seems that the $\log$ in this submission is the natural logarithm; in that case, shouldn't $\frac{\det(V_p)}{\det(V_{p-1})} \leq e$?
I raise my score from 5 to 6, and it would be greatly appreciated if the authors add the explanations to the proof of Theorem 4.1 in their revision.
---
Reply to Comment 1.1.1:
Comment: We thank you for going through the discussion and raising the score. Your suggestion would indeed make the proof easier to follow.
Regarding your question on the ratio, we use logarithm base 2 for that determinant ratio. Using the natural logarithm would only change the bound by a constant factor; we will add this clarification in the revision.
Strengths: + The paper provides solid theoretical results including both regret upper bound and regret lower bound. The discussion of the small and large $\epsilon$ cases are insightful.
+ The paper provides experimental results to validate their theoretical results.
Weaknesses: The major weakness is the design of the stopping criterion. Please see the questions section for a detailed discussion.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: It seems like we can have a simpler criterion when $\epsilon$ is known. For example, if $\epsilon\leq 1/\sqrt{T}$, we deploy H-LinUCB with $\tau=T$, and when $\epsilon\geq 1/\sqrt{T}$, we deploy an individual OFUL algorithm for each agent. The regret upper bounds are $O(\epsilon MT + d\sqrt{MT})$ and $dM\sqrt{T}$ respectively, which scale the same as those provided in the paper. In my opinion, this rule is simple and may have lower communication and computation cost, since there is no communication when $\epsilon$ is large.
Can the authors discuss whether this method works?
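The suggested criterion is simple enough to state in code. The function below only restates the rule proposed in this question ($\tau = T$ for full collaboration, $\tau = 0$ for fully independent OFUL); it is not something from the paper.

```python
import math

def collaboration_horizon(eps, T):
    """Reviewer-suggested rule: collaborate for all T rounds when
    eps <= 1/sqrt(T); otherwise run one independent OFUL per agent."""
    return T if eps <= 1.0 / math.sqrt(T) else 0

T = 1_000_000                          # threshold 1/sqrt(T) = 1e-3
print(collaboration_horizon(1e-4, T))  # 1000000: deploy H-LinUCB with tau = T
print(collaboration_horizon(1e-2, T))  # 0: no communication at all
```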
Another question: is it possible to detect the magnitude of $\epsilon$ when it is unknown, so that the algorithm can automatically “stop” aggregating?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your suggestions. We address your concerns and questions as follows:
> .... Can the authors discuss whether this method works?
We appreciate your discussion and thank you for raising an insightful question. Mathematically, in the scenario where $\epsilon$ is known, your proposed rule would indeed yield the same statistical guarantees. However, our objective was to design a generic algorithm that would be adaptive to an unknown $\epsilon$, even though in our current results, we only worked in the known $\epsilon$ setting. Specifically, our stopping rule adjusts the collaboration based on the $\epsilon$, enabling us to extend the current algorithm to the case of unknown $\epsilon$.
> Another question: is it possible to detect the magnitude of $\epsilon$ when it is unknown, so that the algorithm can automatically “stop” aggregating?
That is also an excellent question, something we leave for future work. We do believe that it is possible to estimate $\epsilon$ (by doubling trick, corralling, etc.) using data collected during early rounds. Depending on the estimate, the algorithm would determine whether it will continue to collaborate or cease it. We note, though, that what we really require is any upper bound on $\epsilon.$
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: Thank you for answering my previous questions. I have decided to keep my original score. | null | null | null | null | null | null |
Information Maximizing Curriculum: A Curriculum-Based Approach for Learning Versatile Skills | Accept (poster) | Summary: This paper proposes an imitation learning method IMC that can model multi-modal behaviors. IMC avoids the mode-averaging issue with an objective similar to reverse-KL. To cover all modes in the dataset, IMC further introduces a mixture model with multiple components, each focusing on different data distribution it specializes to. The mixture model is optimized with the EM algorithm. The authors provide extensive experiments over simulated control environments and demonstrate IMC's superior modeling abilities.
Strengths: 1. IMC is a well-motivated method for multi-modal density estimation and provides an elegant reverse KL-based solution.
2. The experiments in low-dimensional control environments are extensive. IMC is compared against major generative models (with maximum likelihood objective) and addresses the mode-averaging issue better.
3. The visualization of learned trajectories clearly demonstrates the learned modes of different methods and the natural mode-ignorance ability of IMC.
Weaknesses: The presentation in the experiment section could be improved. Currently, the subsections for different environments repeat mostly the same conclusions that IMC achieves high success rates and covers diverse modes. I think it is better to emphasize anything special for different environments or different baselines.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How sensitive is the performance of IMC with respect to different choices of $\eta$?
2. Can IMC be scaled up to scenarios with high-dimensional input?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > 1. IMC is a well-motivated method for multi-modal density estimation and provides an elegant reverse KL-based solution.
> 2. The experiments in low-dimensional control environments are extensive. IMC is compared against major generative models (with maximum likelihood objective) and addresses the mode-averaging issue better.
> 3. The visualization of learned trajectories clearly demonstrates the learned modes of different methods and the natural mode-ignorance ability of IMC.
We thank the reviewers for their positive feedback. We are glad the reviewers recognize our contribution in addressing the mode-averaging problem. We now aim to thoroughly address the concerns raised by the reviewers.
> The presentation in the experiment section could be improved. Currently, the subsections for different environments repeat mostly the same conclusions that IMC achieves high success rates and covers diverse modes. I think it is better to emphasize anything special for different environments or different baselines.
We are grateful to the reviewers for their valuable suggestion. We agree that the content in the experiment section currently provides limited additional insights beyond what's presented in the tables and figures within the paper. Consequently, we are committed to enhancing the experiment section by placing a stronger emphasis on distinct characteristics of various environments.
> How sensitive is the performance of IMC with respect to different choices of $\eta$?
To address your question thoroughly, we have included a comprehensive analysis of IMC's performance sensitivity in the supplementary material of the paper (Appendix E.2). In that section, we discuss the impact of various values of $\eta$ on the model's performance on the Obstacle Avoidance, Table Tennis and Franka Kitchen tasks.
> Can IMC be scaled up to scenarios with high-dimensional input?
Indeed! IMC can be effectively scaled up to scenarios with high-dimensional input. A viable approach to achieve this scalability is by utilizing an encoder, such as a Convolutional Neural Network (CNN) encoder, to map the high-dimensional inputs (such as images) to a lower-dimensional space before passing them to the inference network or experts.
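A minimal sketch of the encoder-based pipeline described above, with a random linear projection standing in for a learned CNN encoder; all shapes, weights, and names are illustrative assumptions rather than the paper's architecture:

```python
import numpy as np

# Hypothetical sketch: scaling a mixture-of-experts policy to image
# observations by encoding them into a low-dimensional feature vector
# before the gating (inference) network and experts.

rng = np.random.default_rng(0)

IMG_SHAPE = (32, 32)   # high-dimensional observation (e.g. an image)
FEAT_DIM = 16          # low-dimensional feature space
N_EXPERTS = 4
ACT_DIM = 2

W_enc = rng.normal(size=(IMG_SHAPE[0] * IMG_SHAPE[1], FEAT_DIM))
W_gate = rng.normal(size=(FEAT_DIM, N_EXPERTS))
W_experts = rng.normal(size=(N_EXPERTS, FEAT_DIM, ACT_DIM))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def act(image):
    feat = image.reshape(-1) @ W_enc      # encoder: image -> features
    probs = softmax(feat @ W_gate)        # gating distribution over z
    z = rng.choice(N_EXPERTS, p=probs)    # sample a component
    return W_experts[z].T @ feat          # expert maps features to action

action = act(rng.normal(size=IMG_SHAPE))
assert action.shape == (ACT_DIM,)
```

In practice the random projection would be replaced by a trained CNN, but the pipeline shape (encode once, then gate and query a single expert) is the point of the sketch.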
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I appreciate the newly added sensitivity analysis and discussions. I am keeping my original score. | Summary: The authors study a method, Information Maximizing Curriculum (IMC), that performs behavioral cloning by having the model selectively choose a learned weighing of the demonstration data for which the model is best at predicting (via minimizing the reverse KL divergence). To avoid the mode-seeking behavior of this approach, the authors extend the method to make use of K such model components, leading to a mixture of experts (MoE) approach, whereby each component selectively models the distribution in this way, while maximizing their joint information projection over the dataset (via simultaneously maximizing the entropy of the MoE distribution). Experiments in two robotics simulators show it outperforms several other baselines based on generative models and basic MoE methods trained via expectation-maximization and backpropagation.
Strengths: - The paper is overall well written. The method and its motivations are clearly communicated.
- The authors compare to several important baselines, spanning generative models, energy-based models, and MoE methods, and show strong performance improvements.
Weaknesses: The main weakness I see is that this work seems almost identical to Li, et al (2023), which proposes largely the same method. Moreover, this prior work looks at a very similar set of environments as this paper. While the authors cite this prior work, they do not directly compare to it. Given the extremely close similarity between the two methods, it seems important to compare to this work to justify the contribution in this paper, which in a sense, is an extension of the method in Li, et al (2023) by incorporating neural networks.
I see two main ways to show improvement over this prior work: The authors can either (1) show their method outperforms the approach in Li, et al (2023) on the tasks studied, or (2) Demonstrate clearly how IMC can scale to environments in which the method of Li, et al (2023) cannot, thereby clearly justifying the strengths of this extension. I imagine neither of these aims is too difficult, but such a comparison seems sorely missing, given the strong similarity between the two works.
Lastly, I find it odd that the authors couch their method as “curriculum learning,” ignore the field of active learning altogether, and proceed to imply that their method is novel in not requiring an a priori difficulty metric for each datapoint. The authors should relate their method to active learning, which is a mature field of study, and most active learning methods can be viewed as “curriculum learning” without any a priori notion of task difficulty.
### References
Maximilian Xiling Li, Onur Celik, Philipp Becker, Denis Blessing, Rudolf Lioutikov, and Gerhard Neumann. Curriculum-based imitation of versatile skills. arXiv preprint arXiv:2304.05171,
2023.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - This study primarily focuses on continuous control robotics simulators. Do you have a sense of whether IMC can be made to work in discrete control settings, e.g. Atari, chip design, web navigation.
- Are there cases where maximizing mode coverage is disadvantageous? Often demonstration datasets contain a mix of approximately optimal and very suboptimal trajectories. Could IMC lead to learning MoE that perform worse in these settings?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: This work only focuses on fairly simple continuous control tasks. Scaling this to more complex imitation learning settings (e.g. from pixels or controlling a much more complex action space) as well as discrete control settings would add confidence in the utility of this approach. Relatedly, demonstrating success on these more challenging settings would be a great way to highlight why IMC is an important improvement over the Li, et al (2023) work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The paper is overall well written. ff.
We thank the reviewers for acknowledging our contribution and are committed to addressing your questions and concerns.
> The main weakness I see is that this work seems almost identical to Li, et al (2023), which proposes largely the same method.
We agree with the reviewers that the method proposed by Li, et al (2023) (ML-Cur) and our work seem very similar. However, we want to point to some of the differences:
- ML-Cur assumes Gaussian curricula. The parameters of these distributions must be updated in every iteration of the algorithm and involve expensive matrix inversions. The gating distribution has limited flexibility due to the Gaussianity assumption. In contrast, IMC uses non-parametric curricula $p(\mathcal{D}|z)$ and trains a gating network ($g_{\phi}(z|\mathcal{D})$) once after training which is more efficient and allows for highly non-linear partitioning of the observation space.
- ML-Cur does not admit mini-batch updates for the model parameters. Therefore, the method does not scale to large datasets. Conversely, IMC accommodates mini-batch updates (refer to Section 3.5), enabling effective scalability.
- ML-Cur has to update the mixture weights $p(z)$ in every iteration. IMC has implicit updates due to Proposition 3.1.
- ML-Cur uses linear conditional Gaussian distributions to represent the experts. In contrast, IMC allows for arbitrary experts where exact likelihood computation is possible. In particular, we used non-linear conditional Gaussian distributions in our experiments.
- ML-Cur is evaluated on *episodic* tasks, i.e., where the model has to learn a mapping from a context to movement primitive parameters which represent the whole trajectory. A much more common setting is one where the model has to learn a mapping from observations to actions (*step-based*). In that case, a trajectory is formed by performing multiple steps in the environment of a given task. In our work, 3 out of the 4 tasks consider the latter setup.
> Moreover, this prior work looks at a very similar set of environments as this paper.
Despite the apparent similarity between the environments used in the work of Li, et al (2023) and those in our work, there exist major differences:
- Their tasks do not require complex manipulations of objects in contrast to the Block Pushing task as well as Franka Kitchen used in our work.
- They consider input spaces with very low dimensions ($<4$) whereas our work considers up to $30$ (Franka Kitchen)
- They focus on learning from a few data points ($<5000$) while our method looks at datasets with up to $463k$ samples (Block Pushing).
- Their environments require little generalization capabilities of the models (this can be seen from the highly competitive results achieved by the k-nearest neighbor algorithm). Our environments are much more complex, see e.g. the low performance of behavior transformers (BET) and denoising diffusion models (DDPM) on Obstacle Avoidance, Block Pushing, or Franka Kitchen.
- They do not provide performance metrics for quantifying how versatile the learned policy is. In contrast, we provide such a metric for all environments except for the table tennis task.
> While the authors cite this prior work, they do not directly compare to it. ff.
We agree with the reviewers that the comparison to the work of Li, et al (ML-Cur) is missing. Therefore, we added quantitative and qualitative comparisons between IMC and ML-Cur in the 'global comment' of the rebuttal and in the accompanying PDF file. We thank the reviewers and will include these results in the final version of the paper.
> I see two main ways to show improvement over this prior work: The authors can either (1) show their method outperforms the approach in Li, et al (2023) on the tasks studied, or (2) Demonstrate clearly how IMC can scale to environments in which the method of Li, et al (2023) cannot ff.
We thank the reviewers for their suggestions. We believe that by including the comparison to ML-Cur (see last comment) in the paper we demonstrate (1) that our method significantly outperforms ML-Cur on the tasks studied and (2) show ML-Cur is not able to scale to the environments used in our work.
> Lastly, I find it odd that the authors couch their method as “curriculum learning,” ignore the field of active learning altogether. ff.
We agree that we should relate our method to active learning. To that end, we will include a section in Chapter 4, elaborating on the commonalities and differences between our work and active learning.
> This study primarily focuses on continuous control robotics simulators. Do you have a sense of whether IMC can be made to work in discrete control settings, e.g. Atari, chip design, web navigation.
Absolutely! While this study primarily focuses on continuous control robotics simulators, our algorithm is applicable to training mixture-of-experts models whenever the likelihood of the experts can be defined and optimized, including in the discrete control setting.
> Are there cases where maximizing mode coverage is disadvantageous? ff.
Yes, we believe there are cases where maximizing mode coverage can be disadvantageous. However, this is a problem of divergence minimization in general, and we believe that other methods for learning MoE could lead to even worse performance, as they aim to cover all data even if the model complexity is not sufficient (a property of the forward KL). There are dedicated fields of study, such as offline reinforcement learning [1], that address problems associated with learning from suboptimal data.
[1] Levine, Sergey, et al. "Offline reinforcement learning: Tutorial, review, and perspectives on open problems.”
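The asymmetry invoked here (forward KL forces coverage of all data, reverse KL permits mode-seeking) can be checked numerically. The following sketch is our own illustration, not the paper's code: it discretizes a bimodal target and compares both divergences for a mode-averaging versus a mode-seeking Gaussian.

```python
import numpy as np

# Compare forward and reverse KL between a bimodal target p and a
# single Gaussian q on a discretized 1-D grid. The mode-averaging q
# (broad, centered between the modes) wins under forward KL, while
# the mode-seeking q (centered on one mode) wins under reverse KL.

x = np.linspace(-6, 6, 2001)

def gauss(mu, sig):
    d = np.exp(-0.5 * ((x - mu) / sig) ** 2)
    return d / d.sum()  # normalized discrete distribution on the grid

def kl(p, q):
    return np.sum(p * np.log(p / q))

p = 0.5 * gauss(-2, 0.5) + 0.5 * gauss(2, 0.5)   # bimodal target
q_avg = gauss(0, 2.5)                            # covers both modes
q_seek = gauss(2, 0.5)                           # covers one mode

assert kl(p, q_avg) < kl(p, q_seek)   # forward KL favors mode averaging
assert kl(q_seek, p) < kl(q_avg, p)   # reverse KL favors mode seeking
```

The reverse-KL winner here ignores half the target mass entirely, which is exactly the behavior the rebuttal notes can be disadvantageous when the ignored data matters.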
We sincerely appreciate the time and effort the reviewers have invested in assessing our work. Their comments and suggestions have significantly contributed to refining and strengthening the manuscript.
---
Rebuttal 2:
Title: Nice additions
Comment: The new experimental comparison to Li et al satisfies my original critiques. I encourage the authors to emphasize the similarity and differences between their work and Li et al in their final manuscript, as the differences articulated above form the *primary contribution* of their work. I am raising my score in response. | Summary: This paper proposes a curriculum based approach for imitation learning. Overall, the imitation learning problem is posed as a conditional density estimation problem. Given the multi-modal nature of underlying data, this paper proposes to learn a curriculum based mixture of expert policy. Intuitively, each expert is only responsible for learning a subset of the training data and learns the policy for this subset using reverse KL. Experimentally, the proposed approach is verified in 4 different environments and compared against both common and recently proposed approaches.
Strengths: The paper presents an interesting and grounded approach for learning from multimodal data distribution. The overall idea of using a curriculum to weight data samples based on how well they are represented under the expert policy seems to be generally useful. The approach is shown to work in diverse settings and seems competitive with recent results.
Weaknesses: *Baseline results:* Overall, the proposed approach provides very little significant advantage over simpler baselines. For instance, in Figure 4 and Figure 5 we can see that for most tasks (Pushing, Tennis, Kitchen), a diffusion model based approach (DDPM) is highly competitive with the proposed approach (success rate difference is less than 0.05) across all tasks. Only in the Obstacle Avoidance task does the DDPM approach suffer, although CVAE still performs quite well (it is unclear why DDPM performs poorly here).
*Benchmark tasks:* While the paper tries to evaluate the proposed approach on multiple datasets, most of the tasks/datasets are not commonly used across multimodal tasks (only Franka-Kitchen is the commonly used dataset). Given the large set of recent works in this area it would be much better to evaluate on tasks which other recent works evaluate. For instance, most recent works evaluate on RoboMimic dataset [1] and the block pushing task from IBC [2]. Both of these task suites have human demonstrations available and since the underlying data distribution is highly important for these proposed approaches, reusing datasets from previously proposed approaches will allow for a much fairer comparison.
[1] Robomimic: https://github.com/ARISE-Initiative/robomimic
[2] Florence et al. Implicit Behavior Cloning
--- Post Rebuttal ---
I don't see any issues with the paper the provided code also looks reasonable and reproduces the results so I would update my recommendation to Weak Accept.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Overall the approach is broadly similar to an EM approach with a mixture of policies. As noted, the main difference is that the proposed approach uses a curriculum-weighted optimization objective to assign weights to samples instead of directly using the variational distribution. While algorithmically this difference is not large, in terms of empirical results the EM-based approach lags behind the proposed approach quite a lot (especially for the obstacle avoidance task). Do the authors have an intuition why the EM approach suffers so much in comparison?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations are discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The paper presents an interesting and grounded approach for learning from multimodal data distribution. ff.
We thank the reviewers for their feedback on our paper. We are delighted to hear that the overall idea of using a curriculum to weight data samples is well received. We would like to address the mentioned weaknesses and questions:
> Baseline results: Overall, the proposed approach provides very little significant advantage over simpler baselines. For instance [...] a diffusion model based approach (DDPM) is highly competitive to the proposed approach ff.
We appreciate the reviewers' comment and the opportunity to discuss the advantages of our approach in comparison to other baselines. Our method indeed offers distinct benefits over alternative techniques, such as DDPM.
One significant advantage lies in the inference efficiency of IMC. DDPM, due to its sequential inference procedure, suffers from longer inference times, which can be impractical for applications with real-time constraints. This limitation becomes especially pronounced when deploying models to real-world systems. In contrast, IMC's inference process requires just two forward passes: one through the inference network for sampling $z$ and another through the corresponding expert. This efficiency is particularly advantageous compared to DDPM's requirement of $N_{\text{diffusion steps}}$ forward passes. Furthermore, we observed that our method requires less than $1/10$th of the training time required to train DDPM. We thank the reviewers and will include an ablation study on inference and training times in the camera-ready version of the paper.
Regarding the term 'simpler baselines', we are uncertain about the reviewers' definition of 'simple'. We understand that baselines, including DDPM, are far from simple in terms of their design choices and intricacies. DDPM involves non-trivial decisions concerning parameters like the number of diffusion steps, variance scheduling, and time embedding, and employs additional code-level optimizations such as using an exponential moving average for the parameters. The same holds for other baselines such as normalizing flows, energy-based models and behavior transformers.
> Only in the Obstacle avoidance task, does DDPM approach suffer although CVAE still performs quite well (unclear why DDPM performs poorly here).
We conjecture that DDPM's subpar performance can be attributed to the limited amount of data available for the obstacle avoidance task. IMC, in contrast, appears to circumvent this challenge due to its composition of 'simpler' Gaussian experts, which appear to generalize better on little data.
Regarding CVAE: While CVAE has a high success rate, it suffers from low entropy values, i.e., it is not able to cover multiple modes in the data distribution. This can also be seen in Figure 3. We have empirically observed that a trade-off often exists between achieving high success rates and maintaining adequate entropy levels across most models (see Table 1). However, IMC stands out as an exception by simultaneously achieving high success rates and commendable entropy values.
> Benchmark tasks: [...] most recent works evaluate on RoboMimic dataset [1] and the block pushing task from IBC [2]. Both of these task suites have human demonstrations available ff.
While we agree that the RoboMimic datasets are excellent for evaluating a model's performance in complex manipulation tasks, we opted not to use them since a) the tasks within the RoboMimic datasets (lift, can, square, tool hang and transport) lack multimodality, except for the multimodality evident in human demonstrations, and b) they do not provide suitable metrics to evaluate a model's ability to cover different modes present in the dataset.
Regarding the block pushing task from IBC: Unfortunately, this dataset is not recorded by humans but uses a scripted policy (quote from the IBC paper, Section 4: "[...] We evaluate implicit (EBM) and explicit policies on both variants, trained from a dataset of 2,000 demonstrations using a scripted policy that readjusts its pushing direction if the block slips from the end effector. [...]"). We found that this task can be solved by simple methods such as behavior cloning (see IBC paper Table 3, MSE). Therefore, we decided to rebuild the task and use human-recorded data, which significantly increases the difficulty due to the inherent versatility of human demonstrations. Furthermore, we want to highlight that we have acknowledged the resemblance between our task and the one introduced in the IBC paper; we explain our decision to redevelop the task in Appendix C.1.2.
We carefully designed new tasks and metrics and recorded data to circumvent these shortcomings of existing benchmarks such as RoboMimic, and we intend to publish them, together with additional tasks, in a separate work. This endeavor aims to establish a benchmark for versatile imitation learning, focusing on quantifying a model's capacity to acquire diverse skills.
> Overall the approach is broadly similar to an EM [...]. Do the authors have an intuition why the EM approach suffers so much in comparison?
We believe that the primary reason for the relatively poorer performance of EM, compared to IMC, lies in the inherent requirement of EM to account for all data points during training. This constraint forces EM-trained models to handle outliers or challenging samples that might lead to suboptimal log-likelihood outcomes. In contrast, IMC's curriculum-based approach enables the model to focus on samples where experts perform well (in terms of log-likelihood), allowing them to disregard outliers or poorly performing instances, resulting in the observed performance boost.
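This difference can be made concrete with a toy example (our own simplification, not the paper's algorithm): EM responsibilities normalize over experts, so every sample, outliers included, retains total weight one; a curriculum-style weighting normalizes over samples per expert and can assign an outlier near-zero total weight.

```python
import numpy as np

# Toy comparison of EM responsibilities vs. curriculum-style weights,
# given per-expert log-likelihoods of three samples. Numbers are
# illustrative assumptions.

log_lik = np.array([
    [-0.5, -8.0],   # sample 0: well modeled by expert 0
    [-8.0, -0.5],   # sample 1: well modeled by expert 1
    [-9.0, -9.0],   # sample 2: an outlier neither expert fits
])

def softmax(a, axis):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

em_resp = softmax(log_lik, axis=1)   # normalize over experts (per sample)
curr_w = softmax(log_lik, axis=0)    # normalize over samples (per expert)

# EM: the outlier still carries full weight and must be explained.
assert np.isclose(em_resp[2].sum(), 1.0)
# Curriculum: the outlier's total weight across experts is tiny.
assert curr_w[2].sum() < 0.01
```

Under the EM-style normalization the outlier's full unit weight drags the experts' fits toward it, whereas the curriculum weighting effectively discards it, which matches the intuition given in the rebuttal.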
We extend our gratitude to the reviewers for dedicating their time and effort to evaluating our work. Their valuable comments and suggestions helped enhance and strengthen the quality of our manuscript.
In the experiment section, the skill policies share a common backbone and output a mean of an isotropic Gaussian distribution. The paper tests the method over several tasks, such as Obstacle Avoidance, Block Pushing, Franka Kitchen, or Table Tennis. The approach compares (mostly) favorably against seven baseline methods.
Strengths: The paper proposes a simple objective to train multiple policies at once, such that together they cover multiple modes in the dataset. The results show that the method (mostly) performs better than the considered baselines.
Weaknesses: The empirical part could be improved. Namely:
* Performance metrics should be defined in the main body of the paper:
* There could be a dedicated section for that purpose. This would improve the exposition and allow more discussion about the method in Sections 5.1-5.4.
* This could also make reading the results (e.g., Table 1) more accessible. For instance, the definition of entropy varies across environments, as shown in Appendix C, which makes the numbers incomparable and may lead to wrong conclusions.
* The discussion on the range of entropy values should be moved from the Appendix to the main body. It provides grounding for the numbers in Table 1.
* Additional space for this and a more in-depth discussion of the results could be gained by moving Section 2.1 and Figure 1 to the Appendix (it is a well-known property of KL). Similarly, some parts of Section 3 could be moved to the Appendix.
* The paper does not provide numbers on how the algorithm mixes between skills, e.g., the entropy of the distribution over components ($p(z)$).
* It seems that each environment required a different setup of the method. This is a limiting factor for the method. The paper, however, does not guide the potential user of the method in choosing the relevant hyperparameters. For example, estimating how many modes are in the data can be a non-trivial task, and consequently, setting the number of components ($N_z$) selected for each experiment may be non-trivial. The paper has no information regarding the actual values of $N_z$ chosen for each experiment.
* Descriptions in Sections 5.1-5.4 are mainly technical and deal with the setup. The discussion about the results of IMC is limited to one or two sentences that do not add more information than Table 1 or Figure 6. A reader would expect that most of such a section would provide real insights about the method (e.g., what skills were learned, how they were acquired during training, where all modes were discovered, how a distribution that mixes skills looks, etc.).
* Figure 6: Shouldn't we expect that the performance is an increasing function of the number of components (the more modes covered, the higher the objective)?
* There is no mention of what values $\eta$ were chosen.
* Other issues:
* Table 1: The description is not self-explanatory. What is the selected number of components? What versions of entropy are used? What is the setup of experiments? Etc.
* Sections 5.1, 5.2, and 5.4 refer to Table 13, which is not present in the paper. Most likely, it should refer to Table 1.
* Figure 6 precedes Figure 5.
The quality of the technical part of the paper could be improved and more clearly agitated. In particular,
* Section 2.2:
* There is no definition of $\mathcal O$, $\mathcal A$, $p$, $z$.
* Is $z$ a continuous or discrete random variable? From the context of the following sections, it would seem that the latter.
* Section 3.1:
* It seems more clear to write $p(o, a)$ instead of $p(\mathcal D)$.
* In equation (2), there should be $\mathcal H(p(\mathcal D))$ (or better $\mathcal H(p(o,a))$, see the item above). Additionally $\mathbb E_{p(\mathcal D)}$ could be more transparent if written as $\mathbb E_{o,a\sim p(\mathcal D)}$.
* The formulas in lines 80, 85, and 86 should be better justified. There is only a cryptic comment in line 81 about optimization in an alternating fashion, which is not justified, nor a suitable reference is given.
* The description in lines 87-93 is relatively informal and does not refer to the formulas in lines 80, 85, and 86. Additionally, the text uses colloquial terms such as "representational capacity of the policy", "capacity [..] is exhausted", or alludes to the convergence of curriculum, which was not proved.
* There is no definition of $\mathcal D_n$ (lines 85, 86, 119, 120, 124, 129, 130, etc.). Why not just write $(o_n, a_n)$?
* In the proof of Proposition 3.1 (Appendix A.1.), the formulas read $p^*(o)$ where the Proposition refers to $p^*(z)$. A similar comment refers to other proofs in Appendix A.
* Section 3.2
* There is no description of the objective $J(\psi)$ and its lower bound, showing how the individual terms promote desirable behavior.
* In equation (3), it seems that the entropy term should be equal to $\mathcal H(p(o, a, z))$; otherwise, equation (4) seems not to be valid.
* It seems that in equation (4) and in the definition of $R_z$, there should be $q(z|o,a)$ (in place of $q(z|o)$ and $q(z|\mathcal D)$, respectively). Additionally, it would make sense to define $R_z$ as a function of $(o, a)$ (instead of $\mathcal D$).
* Similar comments apply to Sections 3.3-3.5.
Edit: After the Authors' rebuttal, I have raised the score (4->6).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Consult the Weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper includes a brief limitations section. What could also be mentioned is that the method requires setting several important hyperparameters (see the review), some of which require non-trivial knowledge about data. The method also was tested on continuous tasks, so a natural question (limitation or future research) would be to ask about performance on discrete domains, such as combinatorial puzzles like chess or video games like Atari benchmark.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: To adhere to the character limit, we jointly address most of the concerns regarding the empirical part together with those regarding the technical part.
**Regarding concerns on the empirical part:**
We agree with the reviewers that moving the definition of the entropy for the different experiments to the main body of the paper increases readability and makes the results more accessible. To that end, we moved Section 2.1 (explanation of KL properties), Figure 4, and some of the algorithmic details (Section 3.5) to the Appendix. Additionally, we improved the discussion of the results and highlighted unique aspects of different environments instead of repeating similar conclusions.
Furthermore, we moved parts of the explanation of the hyperparameters of IMC (which encompass all design choices, including $\eta$ and $N_z$, for all experiments) from Appendix C.2 to Section 5. We also made Table 1 self-explanatory by elaborating on the setup and the metrics used.
> The paper does not provide numbers on how the algorithm mixes between skills ff.
First, we want to clarify potential confusion about the relationship between the components $z$ and the skills learned by the policy. For the following discussion, we assume that the reviewer refers to a skill as a sequence of actions that leads to a successful outcome. Each $z$ employs a curriculum to specialize on a subset of the data. Depending on the model complexity of the expert, this subset could encompass multiple skills or only part of a skill. Hence, there is no one-to-one correspondence between skills and $z$, and the number of skills or modes does not need to be known a priori. We used $N_z=50$ for all experiments, which empirically worked well. To clarify this further, we added a figure to the supplementary PDF that visualizes the curriculum.
We believe that reporting the entropy $\mathcal{H}(p(z))$ provides valuable insight into how many of the components are used to solve a task. We will add an ablation study showing $\mathcal{H}(p(z))$ for various $N_z$ to the appendix.
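As a rough illustration of this metric (our sketch, not the paper's code), $\mathcal{H}(p(z))$ can be estimated from the per-sample responsibilities $q(z|o_n, a_n)$ by averaging them into a marginal over components:

```python
import numpy as np

def component_entropy(responsibilities):
    """Entropy H(p(z)) of the mixture weights, estimated from per-sample
    responsibilities q(z | o_n, a_n) given as an array of shape (N, N_z)."""
    p_z = responsibilities.mean(axis=0)  # marginal p(z)
    p_z = p_z[p_z > 0]                   # ignore unused components
    return float(-(p_z * np.log(p_z)).sum())
```

A uniform usage of $N_z$ components yields the maximal value $\log N_z$, while collapsing onto a single component yields 0, which is why the metric reflects how many components are actually used.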
>Figure 6: Shouldn't we expect that the performance is an increasing function of the number of components ff.
The performance metrics presented in Figure 6, namely the success rate and entropy, differ from the optimization objective $J(\psi)$. While we do observe empirically that adding more components to the model tends to increase $J(\psi)$, the same does not necessarily hold true for the performance metrics.
The reason behind this observation lies in the fact that as we add more components to the model, the overall model complexity increases. Consequently, the model becomes more susceptible to overfitting on the training data. We conjecture that this causes the fluctuations in the success rate in Figure 6a.
**Regarding concerns on the technical part:**
We appreciate the reviewers' feedback regarding the notation for the curriculum $p(\mathcal{D})$. We understand the potential confusion that might arise from this notation. The intention behind using this notation was to emphasize that the curriculum represents a categorical distribution over data points rather than a continuous joint distribution between actions and observations $p(o,a)$.
Upon careful consideration and in light of the existing MoE literature, which commonly employs $(o,a)$ to denote responsibilities as $q(z|o_n,a_n)$, we find the suggested notation to be more aligned and clear. Therefore, we gladly adopt the notation $(o,a)$ in our work. Moreover, we defined $R_z$ as a function of $(o,a)$ and replaced $\mathcal{D}_n$ with $(o_n,a_n)$.
>- The formulas in lines 80, 85, and 86 should be better justified ff.
>- [...] or alludes to the convergence of curriculum, which was not proved
We added a sketch of the convergence proof of the objective in line 80, as well as of $J(\psi)$, to the 'global' comment of the rebuttal (please note that we used the old notation for the curriculum to make it accessible to other reviewers). We will add a detailed proof to the appendix in the final version of the paper.
We agree with the reviewers and 1) explain the equations, 2) add references, and 3) replace colloquial terms with technical statements. For 1) and 2), we refer to the global comment. Regarding 3):
- The statement 'capacity [...] is exhausted' refers to the iteration where the optimization in line 80 reaches a fixed point in $\theta$
- The term 'representational capacity' refers to the model complexity. The statement 'samples that lie within the representational capacity' refers to samples where the expert is able to achieve high log-likelihood values
>- There is no description of the objective $J(\psi)$ and its lower bound, showing how the individual terms promote desirable behavior.
- We focused on the description of the per-component objective $J_z$. We believe this explanation of the objective is the most intuitive, as $J_z$ is similar to the single-expert objective (Section 3.1) with an additional term $\log q(z| o, a)$. We will add a description explaining why $\log q(z| o, a)$ promotes the claimed behavior.
>[...] the method requires setting several important hyperparameters ff.
We argue that the only hyperparameter that needs to be tuned is the curriculum pacing (or entropy scaling) $\eta$. We suggest automatic tuning of $\eta$, similar to the approach proposed in [1], as future research.
> [...] a natural question [...] would be to ask about performance on discrete domains ff.
We will add discrete domains as a promising avenue for future research.
We thank the reviewers for noticing errors and notational shortcomings in the paper. We have addressed them and made the necessary corrections. Moreover, the valuable feedback has immensely contributed to the improvement and credibility of our research.
[1] Haarnoja, Tuomas, et al. "Soft actor-critic algorithms and applications."
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: I have read all the reviews and the Authors' answers, and I appreciate the detailed discussion and new experimental results.
I would like to know the Authors' response to the point on the description of the method in Sections 5.1-5.4, including insights about the method (e.g., what policies $p_{\theta_z}$ were learned, how they were acquired during training, where all modes were discovered, how a distribution that mixes policies $p_{\theta_z}$ looks like etc.), see my original review (but with a word 'skill' replaced by a 'policy')
Concerning the results from Figure 6, I found the Authors' response in this respect slightly confusing: (a) the metrics seem to increase with $N_z$; (b) If the Authors' hypothesis about the overfit is true, the choice of $N_z$ matters and it has to be selected either via expert knowledge or hyperparameter optimization.
---
Reply to Comment 1.1.1:
Title: Re: Thank you for the response
Comment: # Rebuttal
> I have read all the reviews and the Authors' answers, and I appreciate the detailed discussion and new experimental results.
We extend our sincere gratitude to the reviewers for their questions, allowing us the opportunity to provide further clarification.
> I would like to know the Authors' response to the point on the description of the method ff.
We would like to address each of the concerns raised by the reviewer, one by one.
> [...] what policies were learned [...]
Visualizing the policies $p_{\theta_z}$ learned by our algorithm is challenging when dealing with high-dimensional problems. Nevertheless, the Obstacle Avoidance task lends itself to informative visual representations. Take for instance Figure 2 in the supplementary rebuttal PDF, which displays the curricula $p(o,a|z)$ with distinct colors assigned to individual values of $z$. Given that each policy $p_{\theta_z}$ is trained using samples selected from $p(o,a|z)$, this visualization serves as a direct means to observe the specific policies that have been learned.
For instance, Figure 2b shows that the blue color corresponds to a policy guiding the robot's movement from the top left to the bottom right, the green color indicates a policy facilitating movement from the bottom left to the top right, and the orange color signifies a policy where the robot remains stationary.
> [...] how they were acquired during training [...]
We initialize the policy parameters randomly. Throughout the training process, the inclusion of the $\log q(z|o,a)$ term in our objective function ensures that curricula focus on distinct subsets of data. This specialization consequently results in the training of policies that align with these subsets. By employing a suitable number of expectation-maximization (EM) steps, the curricula converge, as demonstrated in the newly added proof, yielding the final policies. These converged curricula are visualized in Figure 2 of the supplementary rebuttal PDF.
We incorporate an additional figure in the final version of the paper, which will illustrate the curricula at various stages, i.e., before training, during training, and after convergence. Similar to the format of Figure 2, this will provide a clearer visualization of how the curricula and policies evolve throughout the training process.
> [...] where all modes were discovered [...]
It is difficult to assess whether all modes in a data distribution are discovered as the number of modes is typically unknown. We, therefore, chose tasks such that the number of modes is known a-priori. This allows us to quantify the mode coverage using the 'entropy' metric. In response to the reviewer's suggestion, we will provide a more detailed explanation of how entropy is calculated in the main body of the paper, providing a clearer understanding of how we measure mode discovery. Please note that knowing the number of modes is not a requirement of our method but is merely used for evaluation.
> [...] how a distribution that mixes policies looks like etc. [...]
We are unsure of the reviewer's definition of 'mixing'. In the following, we assume that the reviewer refers to selecting different policies as mixing.
The distribution that mixes policies in a MoE policy is the gating $p(z|o)$, which is approximated by $g_{\phi}(z|o)$. Given an observation $o'$, a sample $z' \sim g_{\phi}(z|o')$ is drawn to select a policy $p_{\theta_{z'}}(a|o',z')$.
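As a minimal sketch of this selection step (the `gating` and `experts` interfaces are illustrative stand-ins, not the paper's API):

```python
import numpy as np

rng = np.random.default_rng(0)

def act(o, gating, experts):
    """Draw z' ~ g_phi(z | o'), then act with the selected expert policy
    p_{theta_{z'}}(a | o', z'). `gating(o)` returns a probability vector
    over the N_z components; `experts[z](o)` returns an action."""
    probs = gating(o)
    z = rng.choice(len(probs), p=probs)
    return experts[z](o), z
```

The gating thus picks one expert per observation rather than averaging expert outputs, which is the sense of "mixing" assumed in our reply.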
> [...] I found the Authors' response in this respect slightly confusing: (a) the metrics seem to increase with $N_z$
; (b) If the Authors' hypothesis about the overfit is true, the choice of $N_z$ matters, and it has to be selected either via expert knowledge or hyperparameter optimization.
We apologize for any confusion that might have arisen from our earlier response concerning the issue of overfitting. We want to clarify that we do not consider overfitting due to a high number of components a significant concern, and it is also not visible in the evaluations of Figure 6. As can be seen, there are only small fluctuations in the performance (less than 2.5 percent for the Obstacle Avoidance task), which are within the error bars and therefore not significant.
In support of this assertion, we have conducted experiments using both the Obstacle Avoidance and Franka Kitchen tasks, employing 100 components over 5 separate seed runs. We could not observe any drop in performance and the observed success rates are within the error bars of the performance with 50 components.
Hence, the only downside of choosing a large number of components is an increased computational burden and neither expert knowledge nor hyperparameter optimization is necessary to determine a suitable value for $N_z$.
We appreciate the opportunity to clarify this matter and apologize for any confusion caused by our earlier response. We will include an ablation study where we use substantially more components in the camera-ready version of the paper. | Rebuttal 1:
Rebuttal: We express our gratitude to the reviewers for their valuable suggestions and constructive feedback. Here, we post additions that might be of interest for all reviewers:
We have included a proof sketch, outlining the convergence of the algorithm proposed in our work, providing a more thorough understanding of its theoretical foundations.
Additionally, we have incorporated a comprehensive comparison between our proposed approach and the work introduced by Li et al. (2023) [1].
Lastly, we added a supplementary PDF file which contains
1. further quantitative and qualitative comparisons to the work of Li et al.,
2. a visual demonstration of the curricula of IMC for a different number of components on the obstacle avoidance task,
3. a figure that accompanies the convergence proof and visualizes the lower bound over training iterations.
We firmly believe that these additions contribute to elevating the overall quality and depth of our work. Once again, we thank the reviewers for their insights, which have played an instrumental role in refining our manuscript. We encourage the reviewers to reach out without hesitation if they have any additional questions or concerns.
## IMC Convergence Proof (Sketch)
**Single Expert Objective**
$f(p(\mathcal{D}), \theta) = \mathbb{E}_{p(\mathcal{D})}[ \log p(a|o;\theta)] + \eta \mathcal{H}(p(\mathcal{D}))$.
We perform coordinate ascent on $f$ which is guaranteed to converge to a stationary point if updating each coordinate results in a monotonic improvement of $f$ [2]. For fixed expert parameters $\theta$ we can find the unique $p(\mathcal{D})$ that maximizes $f$ [3] (see Section 3.1) and hence we have
$f(p(\mathcal{D})^{(i)}, \theta) \geq f(p(\mathcal{D})^{(i-1)}, \theta),$
where $i$ denotes the iteration.
Under suitable assumptions ($f$ is differentiable, its gradient is $L$-Lipschitz, and the learning rate $\alpha$ is chosen such that the descent lemma [4] holds), it holds that
$f(p(\mathcal{D}), \theta^{(i)}) \geq f(p(\mathcal{D}), \theta^{(i-1)}),$
when updating $f$ using gradient ascent. Hence, we are guaranteed to converge to a stationary point of $f$.
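The alternation can be illustrated with a deliberately simple 1-D toy (our construction, not the paper's setup): a single Gaussian "expert" with mean $\theta$ and log-likelihoods $\ell_n(\theta) = -(x_n - \theta)^2$, for which the max-entropy curriculum update $p_n \propto \exp(\ell_n/\eta)$ is available in closed form, followed by a gradient-ascent step on $\theta$:

```python
import numpy as np

def coordinate_ascent(x, eta=1.0, lr=0.1, iters=200):
    """Toy coordinate ascent on f(p, theta) = sum_n p_n * ll_n(theta) + eta * H(p)
    with ll_n(theta) = -(x_n - theta)^2. Alternates the closed-form
    max-entropy curriculum update p_n ∝ exp(ll_n / eta) with a
    gradient-ascent step on theta."""
    theta = 0.0
    p = np.full(len(x), 1.0 / len(x))
    for _ in range(iters):
        ll = -(x - theta) ** 2
        w = np.exp((ll - ll.max()) / eta)
        p = w / w.sum()                              # exact curriculum update
        theta += lr * np.sum(p * 2.0 * (x - theta))  # gradient ascent on theta
    return theta, p
```

Each coordinate update improves $f$ monotonically (the curriculum update exactly, the $\theta$ step under the descent-lemma step-size condition), which is the structure the proof sketch relies on.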
**Mixture of Experts Objective** $J(\psi)$.
To show that $J(\psi)$ converges to a stationary point, that is, $\nabla_{\psi}J(\psi)=0$, we only have to show that $L(\psi^{(i)}, q) \geq L(\psi^{(i-1)}, q)$, as we tighten the lower bound in every E-step [5]. We again perform coordinate ascent on $L(\psi^{(i-1)}, q)$. To prove convergence, we show that we have monotonic improvement on $L$ in every coordinate. It can easily be seen that $L$ is a maximum entropy objective with respect to $p(z)$ and $p(\mathcal{D}|z)$.
Hence, we can find the unique $p(z)$ and $p(\mathcal{D}|z)$ that maximize $L$ and thus have monotonic improvement. Noting that $q$ is not dependent on $\theta_z$, we can show monotonic improvement by using the same argument as for the single expert objective (the objective for $\theta$ is equal to the per-component objective for $\theta_z$).
**Additional Note.** While stochastic gradient ascent does not ensure strictly monotonic improvement in $L$ for $\theta_z$, our empirical observations (see the accompanying PDF figure) reveal that $L$ indeed tends to increase monotonically in practice.
## Comparison: IMC vs. ML-Cur
We tested ML-Cur on the experiments used in the paper, with $\{10, 30, 50\}$ components. Furthermore, we performed a hyperparameter sweep over the entropy scaling $\alpha$. We report the average performance $\pm$ one standard deviation across 10 seeds:
| | Obstacle Avoidance | |
| --- | --- | --- |
| | success rate | entropy |
| IMC | $0.855 \pm 0.053$ | $0.930 \pm 0.031$ |
| ML-Cur | $0.454\pm 0.223$ | $0.035\pm 0.024$ |
| | Block Pushing | | |
| --- | --- | --- | --- |
| | success rate | entropy | distance error |
| IMC | $0.521 \pm 0.045$ | $0.654 \pm 0.041$ | $0.120 \pm 0.014$ |
| ML-Cur | $0.000 \pm 0.000$ | $0.000 \pm 0.000$ | $0.408 \pm 0.030$ |
| | Table Tennis | |
| --- | --- | --- |
| | success rate | distance error |
| IMC | $0.870 \pm0.017$ | $0.153 \pm 0.007$ |
| ML-Cur | $0.836 \pm 0.020$ | $0.181 \pm 0.011$ |
For the Franka Kitchen task, we report the average performance for 1-4 tasks solved in brackets.
| | Franka Kitchen | |
| --- | --- | --- |
| | success rate | entropy |
| IMC | $[0.996, 0.969, 0.884, 0.626]$ | $[0.619, 1.037, 1.847, 2.147]$ |
| ML-Cur | $[0.394, 0.000, 0.000, 0.000]$ | $[0.118, 0.000, 0.000, 0.000]$ |
In the accompanying PDF, we have incorporated supplementary quantitative and qualitative comparisons to further enhance the comprehensiveness of our analysis.
The results indicate that ML-Cur fails in the step-based setting (i.e., where the policy has to map from observations to actions, in contrast to the episodic tasks considered in Li et al. [1], where the policy maps from low-dimensional contexts to movement primitive parameters).
We hypothesize that linear experts and Gaussian curricula cause errors to accumulate across steps, leading to failure. This insight elucidates ML-Cur's struggles in manipulation tasks requiring precise actions.
[1] Li, et al. Curriculum-based imitation of versatile skills. arXiv preprint arXiv:2304.05171, 2023.
[2] Boyd, Stephen P., and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.
[3] S. Levine. Reinforcement learning and control as probabilistic inference: Tutorial and review. arXiv preprint arXiv:1805.00909, 2018.
[4] Avriel, Mordecai. "Nonlinear programming." Mathematical Programming for Operations Researchers and Computer Scientists. CRC Press, 2020. 271-367.
[5] Bishop, Christopher M., and Nasser M. Nasrabadi. Pattern Recognition and Machine Learning. New York: Springer, 2006.
Pdf: /pdf/d37ed65010f14962ec7f50cf2327aaeb28e88bd9.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Random-Access Infinite Context Length for Transformers | Accept (poster) | Summary: The paper presents a new architecture for long-range decoder transformers: a new "landmark token" is inserted into the input at a constant stride (that is, after every $k$ tokens). Every such "landmark token" is thus the "representative" of a block of tokens.
The attention score to this "landmark token" is treated as a multiplicative gate for attending the tokens within its block.
That is, during training: tokens in the local proximity are attended to as usual; the attention score of far away tokens is their usual token-attention probability, multiplied by their landmark's attention probability.
During inference, the model attends to the local proximity tokens (the last k tokens) and to the previous landmarks, chooses the top-k landmarks, and attends only to the local block's tokens (the most recent tokens) and to tokens within these top-k blocks.
In other words, the model always attends to the most recent tokens, but performs a kind of "hierarchical attention" to the past tokens.
This allows, after fine-tuning, extending LLaMa 7B from 2k to 10k inputs.
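The gating just summarized can be sketched numerically for a single query (our simplification: every block is gated through its landmark here, whereas the paper attends to the current block directly, and the actual GroupedSoftmax differs in detail):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def landmark_gated_attention(scores, block_ids, landmark_mask):
    """Single-query sketch: the softmax over landmark tokens gives one
    gate per block; within each block, the softmax over ordinary tokens
    is multiplied by that block's gate. Returns attention weights that
    sum to 1 over the non-landmark tokens."""
    weights = np.zeros_like(scores)
    lm_idx = np.where(landmark_mask)[0]
    gate = softmax(scores[lm_idx])                       # one gate per block
    block_gate = {block_ids[i]: g for i, g in zip(lm_idx, gate)}
    for b, g in block_gate.items():
        idx = np.where((block_ids == b) & ~landmark_mask)[0]
        weights[idx] = g * softmax(scores[idx])          # gated within-block softmax
    return weights
```

Because the per-block gates and each within-block softmax both normalize to 1, the combined weights form a valid attention distribution over all past tokens.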
Strengths: ## Strengths
* The approach allows finetuning the newest models (such as LLaMA) to extend their input length significantly
* The discussion of related work is thorough, albeit with some inaccuracies regarding Memorizing Transformers (see Weaknesses below).
Weaknesses: ## Weaknesses
- The authors claim "infinite context length", but demonstrate only on ~10k tokens. The authors write that "it is theoretically possible for the model to access any token in the entire past" - but this was not demonstrated. Even if the authors argue that this is only a matter of missing engineering, it needs to be demonstrated that the model can actually leverage the attention to the entire past in an effective way to claim "infinite context length" (although I believe that even solely the engineering is not as simple as the authors try to make it sound).
- The paper notes that a model that was trained this way can be used as a general-purpose document retriever, but this was not demonstrated as well.
- The authors make some claims about the most related work, [Memorizing Transformers (Wu et al., ICLR 2022)](https://arxiv.org/pdf/2203.08913.pdf), that are inaccurate:
> Memory Transformers [33] ... However, while these methods improve upon the memory-less variants, they do not allow for attending to *specific* tokens in the past, as the model only has access to a compressed version of this information
This is incorrect: Memorizing Transformers [33] do allow for attending to specific tokens in the past, by indexing and retrieving them in a kNN index. The main difference may be that in Memorizing Transformers, the trained gate is applied to the entire attention head, rather than to the local "landmark". Thus, I also think that the claim about having access only to a compressed version of the information is incorrect - later in the Related Work, the paper says:
> Memorizing Transformers [33] performs... However, these methods obtain the final results by interpolating between the kNN prediction and the local attention prediction using a tuned parameter as interpolation weight
This is again incorrect. Memorizing Transformers does not perform interpolation with a tuned parameter, it attends to the retrieved kNN tokens with a learned attention, as in standard transformers (the claim in the paper is correct regarding kNN-LM, but not to Memorizing Transformers)
* The explanation about positional encoding (Section 3.2.1) is very unclear. What does "allocate a segment" mean? where is this allocated and why? What does "*we index the current chunk starting after this segment*" mean? I could not parse sentences such as "*For the retrieved blocks, we map the index for any of the latest k blocks to the corresponding place within the last k blocks in the allocated prefix segment*", although I am very familiar with the long-range transformers literature and with Rotary positional embeddings.
* Evaluation
* Table 1 shows the language modeling evaluation.
* What metric is this? The table only shows numbers without mentioning the metric. is this perplexity?
* From what I am able to understand, in the longest evaluation length of 4096, Transformer-XL [7] performs better than the proposed approach? (achieving perplexity of 14.55 on PG19 which is lower than "Ours").
* In evaluation length of 2048, Transformer-XL [7] also performs better than "Ours", achieving perplexity of 14.72.
* Am I right to conclude that the proposed approach outperforms only the vanilla base transformer, but not outperforming Transformer-XL?
* The authors argue that "*In contrast with Transformer-XL, using our method, the information retrieval is interpretable. Particularly, it is possible to understand which parts of the text was recovered to generate a certain answer*" - however this interpretability was not demonstrated, so I did not take it as an advantage.
* The experiments with LLaMA-7B were demonstrated only on a synthetic task
* baselines:
* The only baseline is Transformer-XL, which came out in 2019.
* [Zhong et al., "Training Language Models with Memory Augmentation", EMNLP '2022](https://arxiv.org/pdf/2205.12674.pdf) discusses several retrieval scenarios - retrieving from an external datastore, and using kNN retrieval for long inputs. I believe that their TRIME-long is a very relevant baseline. I agree that TRIME-long is conceptually more limited because it uses a tuned interpolation hyperparameter as in kNN-LM, but the insight there of training with retrieval *from other examples in the same batch* may compensate for it.
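To make the Memorizing Transformers point above concrete: the combination there uses a per-head gate that is *learned*, not tuned. A rough sketch (shapes and names are illustrative, not the paper's code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def combine_heads(local_out, mem_out, gate_bias):
    """Mix local-attention and memory-attention outputs with a gate
    g = sigmoid(b_h), where b_h is a learned per-head parameter rather
    than a tuned hyperparameter. Shapes: local_out, mem_out (heads, dim);
    gate_bias (heads,)."""
    g = sigmoid(gate_bias)[:, None]
    return g * mem_out + (1.0 - g) * local_out
```

This is why I consider the paper's description of a "tuned parameter as interpolation weight" inaccurate for Memorizing Transformers (it is accurate for kNN-LM).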
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: ## Questions
1. What is the metric in Table 1? perplexity?
2. Am I right to conclude that the proposed approach outperforms only the vanilla base transformer, but not outperforming Transformer-XL?
3. The paper clearly discusses relevant long-range transformers in the Related Work section. Isn't any of them a relevant baseline?
4. The only baseline is Transformer-XL, which came out in 2019. Can Memorizing Transformers and TRIME$_{\mathrm{long}}$ be relevant baselines?
5. In the LLaMa experiments in Section 4.2, the paper mentions that LLaMA was fine-tuned with a context length of 512 tokens. For fairness, was the proposed model trained with a context of 512 as well?
## Additional Comments
6. Possibly related paper: [Bertsch et al., "Unlimiformer: Long-Range Transformers with Unlimited Length Input"](https://arxiv.org/pdf/2305.01625.pdf). I refer to Unlimiformer as concurrent work since it appeared on arxiv 2 weeks before the NeurIPS deadline, so a direct comparison is not required.
7. I think that the explanation of GroupedSoftmax and the entire Section 3.1 can be simplified, as it explains the operator from the implementation point of view, rather than from the conceptual idea. I think that it could have been explained more simply by just saying that "every token attends only to the tokens within the same block, and to the other landmarks of other blocks", without bothering the reader with the $\mathbf{g}$ indices. Papers such as [Longformer](https://arxiv.org/pdf/2004.05150.pdf) also implement attention with masking and summing over indices, but the explanation in the paper is much more intuitive (see Figure 2 in [Longformer](https://arxiv.org/pdf/2004.05150.pdf)).
8. Equation (2) uses $\mathbf{G}$ rather than $\mathbf{g}$, unless $\mathbf{G}$ is something else; in that case, $\mathbf{G}$ was not defined.
## Summary
I like the overall approach and I think that it can be very useful in practice. However, there are a few significant issues:
1. Overclaiming, and claiming abilities that were not demonstrated, but only "theoretically possible"
2. Lack of baselines - the only baseline is Transformer-XL, which performs better (!) than the proposed approach
I am thus voting for only a borderline acceptance at this time. I would increase my score if the above questions, issues and weaknesses would be resolved.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer tGXa,
We make the following comments:
1. Our argument on infinite context length relies on the model's capabilities. While a practical demonstration of infinite context length isn't feasible, our method's efficacy is shown in Appendix G at 32k context length. The original transformer remains crippled by the inability of positional encodings to fully generalize to arbitrary lengths. Current solutions sacrifice direct attention to early tokens to address this. Our approach, using the stingy positional mapping, eliminates this sacrifice and reduces the quadratic complexity by the block size factor. While going to larger lengths inevitably requires more resources, we show that this can be further alleviated by offloading the cache to the CPU, which is what we use in Appendix G. Our experiments reveal the model's ability at key retrieval despite the stingy positional mapping, justifying the term "infinite." We welcome any suggestions to make our argument more concise.
2. We only note the ability to use for retrieval as a future prospect. We will clarify this in the final revision.
3. Memory Transformers are different from Memorizing Transformers. By mistake, we referenced the wrong paper in Line 79. We will fix this in the final revision and cite Burtsev et al. (2020). Indeed, Memory Transformers do not allow attending to specific tokens in the past, while Memorizing Transformers do.
4. We refer to Section 3.1 of the Memorizing Transformer paper where the interpolation is discussed. The knn-augmented layer in memorizing transformers similar to kNN-LM performs an interpolation between the memory and the LM output to obtain the final output.
5. Please note that we still use standard Rotary Positional Encoding in our model. Our approach solely modifies the token indexing (not the encoding) to eliminate the need for extrapolation to indices unseen at training. Here is an alternative description: We feed the input to the model in chunks, as in the paper, so the total number of tokens in the chunk and in the retrieved blocks should be less than or equal to the training context length $l_{\text{train}}$. We introduce $l_{\text{train}}$ placeholders corresponding to indices 1 to $l_{\text{train}}$, to be filled with tokens from either the chunk or the retrieved blocks, as shown in Figure 2. By allocating a segment at the beginning of the input we mean reserving some of the early placeholders. The placeholders beyond the reserved segment are filled with the chunk tokens left to right. Note that the token-to-placeholder (index) allocation differs between finding relevant blocks and attending to them. When attending to relevant blocks, those that are among the last $k$ blocks fed to the model are right-aligned in the reserved segment, while the others are left-aligned. Given that the reserved segment spans $k + 1$ blocks, this leaves one empty block between the two groups. We refer to Figure 2 for a visual representation of this process.
6. As specified in Line 282, the numbers in Table 1 are perplexity scores. We will make it clear in the caption of the Table in the final revision.
7. We present Transformer XL as a baseline to show that landmark attention can perform comparably in utilizing long contexts. However, Transformer XL has inherent limitations as it cannot directly access earlier tokens, hindering its ability to perform certain tasks, such as retrieval. In contrast, our LLaMA experiments demonstrate that Landmark Attention successfully retrieves and attends to early tokens.
8. Please note that the information in Transformer XL has to be passed through recurrence which prevents identifying the tokens the model attends to. In this sense, our model is more interpretable since the exact tokens attended to by the model can be identified by looking at the attention scores or looking at the set of retrieved blocks, i.e. blocks with highest scoring landmarks. We will clarify this further in the final revision.
9. We want to strongly emphasize that **in our method each token can attend to any other token**. In our method, a token attends to specific tokens in other blocks as well. However, the attention scores to those tokens are gated by their block's landmark token's score (through multiplication as in Eq 4). We provide high level intuition for the method in the introduction and Fig 1 but formally define our method in Sec. 3.
10. We do not train a separate model in LLaMA experiments and only compare a fine-tuning of LLaMA using our method at length 512 with Meta's vanilla LLaMA (not fine-tuned) at length 2048, our baseline. If we have misunderstood your question, please let us know.
11. We emphasize a key distinction between Memorizing Transformers and our approach: Memorizing Transformers require training with a large context length. This increases the training cost or complicates the implementation due to the need to link the model to a FAISS data structure. Due to the increased training cost, it is challenging to train such models for large contexts. In contrast, our method allows inference at arbitrary context lengths regardless of the training context length. We tried training our model using the TRIME-Long method. Unfortunately, the training collapsed after 140k steps. At that point, Landmark Attention noticeably outperforms TRIME on the PG-19 dataset, in addition to being cheaper to train. The perplexity with 4096 tokens is 15.54 for landmarks and 17.43 for TRIME. We also note that the TRIME-Long method is inherently limited since the model cannot directly attend to the past context and intuitively can only match close repetitions.
12. $g$ is any vector and denotes one of the inputs of GroupedSoftmax in Equation 1. $G$ is the specific grouping matrix defined in Equation 2.
We hope that the above replies adequately address your concerns and that you would consider raising your score. Please also read our general reply. We remain at your disposal to answer any additional questions or comments.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you for your response.
>1. Our argument on infinite context length relies on the model's capabilities...
I still think that demonstrating empirically 10k tokens (or 32k in a synthetic dataset) and calling it "infinite context" is still a bit of a stretch.
>2. We only note the ability to use for retrieval as a future prospect. We will clarify this in the final revision.
I suggest moving this future prospect to the end of the paper. When reading such claims at the beginning of the paper, the reader expects to see empirical evidence of them.
>3. Memory Transformer is different from Memorizing Transformer. By mistake, we have referenced the wrong paper in Line 79.
OK, it's great that we caught it. So the Wu et al. 2022 Memoriz**ing** Transformers is not discussed? What are the conceptual advantages and disadvantages of this work compared to Memoriz**ing** Transformers?
>4. We refer to Section 3.1 of the Memorizing Transformer paper where the interpolation is discussed. The knn-augmented layer in memorizing transformers similar to kNN-LM performs an interpolation between the memory and the LM output to obtain the final output.
Right, but this paper says that it does that "using a **tuned parameter** as interpolation weight" which is incorrect, Memorizing Transformers **learn** this interpolation weight, so I don't see that learning this is necessarily a bad thing (and the paper mentions that as a weakness as far as I understand).
>5. Here is an alternative description: We feed the input to the model in chunks; as in the paper. So the total number of tokens in the chunk
> ...By allocating a segment in the beginning of the input we mean reserving some of the early placeholders...
> The placeholders beyond the reserved segment are filled with the chunk tokens left to right. Note that the token to placeholder (index) allocation differs between finding relevant blocks and attending to them. When attending to relevant blocks, those that are among the last blocks fed to the model are right-aligned in the reserved segment while others are left-aligned. Given that the reserved segment spans blocks, this leaves one empty block between the two groups...
I still think that this can be explained more intuitively. Maybe avoiding the words "allocation" and "segment" and focusing on the mathematical idea (rather than the implementation) would make it clearer.
>7. We present Transformer XL as a baseline to show that landmark attention can perform comparably in utilizing long contexts. However, Transformer XL has inherent limitations as it cannot directly access earlier tokens, hindering its ability to perform certain tasks, such as retrieval. In contrast, our LLaMA experiments demonstrate that Landmark Attention successfully retrieves and attends to early tokens.
To justify this claim, I would expect a non-synthetic experiment that shows this. What if access to early tokens is not necessary in any realistic task, so in practice Transformer XL should always be used instead of the proposed model?
>8. Please note that the information in Transformer XL has to be passed through recurrence which prevents identifying the tokens the model attends to. In this sense, our model is more interpretable since the exact tokens attended to by the model can be identified by looking at the attention scores or looking at the set of retrieved blocks
These claims about being "more interpretable" sound reasonable in theory, but they were not demonstrated, and I thus cannot seriously consider them as an advantage against Transformer XL. Transformers in general are "somewhat interpretable" (but largely not), so comparing their relative interpretability without any demonstration is a bit of a stretch.
>9. We want to strongly emphasize that in our method each token can attend to any other token. In our method, a token attends to specific tokens in other blocks as well.
Can't Memorizing Transformers attend to any other token as well?
>10. We do not train a separate model in LLaMA experiments and only compare a fine-tuning of LLaMA using our method at length 512 with Meta's vanilla LLaMA (not fine-tuned) at length 2048, our baseline
The LLaMA model that was fine-tuned with the proposed approach and length 512 - on which data was it fine-tuned?
>11. Memorizing transformers require training with a large context length. This increases training cost or complicates implementation due to the need for linking the model to FAISS data structure.
I agree that it seems that it's easier to train the proposed approach compared to Memorizing Transformers. However, Memorizing Transformers can theoretically be also trained with only a memory size that fits in the GPU memory, without FAISS at all. I think that the authors have conducted such experiments when the memory is limited during training.
Best,
Reviewer `tGXa`
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer tGXa,
Thank you for considering our replies and providing further comments. We hope the following replies address your questions.
> OK, it’s great that we caught it. So the Wu et al. 2022 Memorizing Transformers is not discussed? What are the conceptual advantages and disadvantages of this work compared to Memorizing Transformers?
We discuss both of these works in our paper: the Memorizing Transformer is discussed in Line 114, while the Memory Transformer is discussed in Line 79. The only typo is the index of the referenced paper in Line 79.
> Right, but this paper says that it does that “using a tuned parameter as interpolation weight” which is incorrect, Memorizing Transformers learn this interpolation weight, so I don’t see that learning this is necessarily a bad thing (and the paper mentions that as a weakness as far as I understand).
The process of learning is commonly referred to as tuning the model (e.g. fine-tuning), which is why we refer to the weight as a tuned parameter (as opposed to a hyper-parameter). The problem with this approach is that the weight does not depend on the input and is completely fixed during inference. Therefore, even if the memory does not contain any relevant information, the model will still put weight on the kNN results (and vice-versa).
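As a hedged illustration of this point (hypothetical code, not from either paper): with a gate `g` that is fixed at inference time, the memory path always receives weight `g`, whether or not the retrieved neighbours are relevant:

```python
def interpolate(local_out, knn_out, g):
    """Memorizing-Transformer-style output mixing, as discussed above.

    g is learned during training but fixed at inference: it does not
    depend on the current input, so the kNN path always receives
    weight g even when the memory holds nothing relevant.
    """
    return [g * k + (1.0 - g) * l for k, l in zip(knn_out, local_out)]
```

With `g = 0.25`, a quarter of the output mass always comes from the memory path, which is the input-independence we point out.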
> To justify this claim, I would expect a non-synthetic experiment that shows this. What if access to early tokens is not necessary in any realistic task, so in practice Transformer XL should always be used instead of the proposed model?
The need to copy words from early in the text commonly arises in text generation. For example, consider the name of a character that was referred to in a book several chapters ago. It is very important that the model generates the correct character name, but this is only needed once after possibly many other words. Therefore, observing such an improvement is not easy using metrics such as perplexity on a normal text dataset. This is why we demonstrate the capability on the synthetic dataset.
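For concreteness, here is a hypothetical sketch of a pass-key example of the kind used in such synthetic retrieval tasks (the exact prompt format in our experiments may differ):

```python
import random

def make_passkey_example(n_filler, seed=0):
    """Bury a random pass key inside filler text; the model must later
    reproduce it exactly -- a single piece of information far back in
    the context, invisible to average-case metrics like perplexity."""
    rng = random.Random(seed)
    key = rng.randint(10000, 99999)
    filler = "The grass is green. The sky is blue. " * n_filler
    prompt = (f"Remember this: the pass key is {key}. "
              f"{filler}What is the pass key?")
    return prompt, key
```

Growing `n_filler` pushes the key arbitrarily far from the question, which is what makes the task a direct probe of long-range retrieval.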
> Can’t Memorizing Transformers attend to any other token as well?
This is correct. We made the quoted comment only to avoid a misunderstanding given your previous summary (number 7 in your review) “every token attends only to the tokens within the same block, and to the other landmarks of other blocks”. We have provided a comparison with Memorizing Transformer both in our related works (line 114) and in our reply (number 11).
> The LLaMA model that was fine-tuned with the proposed approach and length 512 - on which data was it fine-tuned?
The model was fine-tuned on the RedPajama dataset, which was created with the aim of reproducing the original training data from Meta. Please note that the fine-tuning is not done for a particular task (e.g. instruction tuning, etc.).
> I agree that it seems that it’s easier to train the proposed approach compared to Memorizing Transformers. However, Memorizing Transformers can theoretically be also trained with only a memory size that fits in the GPU memory, without FAISS at all. I think that the authors have conducted such experiments when the memory is limited during training.
We could not find results in the Memorizing Transformer paper suggesting that training can be done with a small memory block while using a larger memory block at inference. Intuitively, using very small memory sizes at training and larger ones at inference can cause problems, for example with the weighting parameter (but we have not tested this). Also, please note that the bottleneck when training with a memory block without FAISS is not just RAM but also compute. For example, using a memory block equal to the context length doubles the computation cost. This is alleviated to some extent when using FAISS and very large memory blocks (as in the Memorizing Transformer paper) but still increases the step time (in some cases from 0.2s to 0.6s according to the original paper).
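To spell out the arithmetic behind the doubling claim (a rough back-of-the-envelope sketch, ignoring heads and constant factors):

```python
def attention_cost(context_len, memory_len):
    # number of (query, key) score computations per chunk: every one of
    # the context_len queries scores against all local keys plus all
    # memory keys
    return context_len * (context_len + memory_len)

base = attention_cost(512, 0)          # no memory block
with_memory = attention_cost(512, 512)  # memory equal to context length
# the memory block doubles the attention computation
```

This is the sense in which a memory block equal to the context length doubles the cost even before any retrieval infrastructure is added.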
We hope that the above replies adequately address your concerns and that you would consider raising your score. We remain at your disposal to answer any additional questions or comments. | Summary: This paper presents a novel approach that enables Transformer language models to process much longer sequences. Specifically, the authors propose to group the input sequence into multiple blocks, each of which is represented with a landmark token. The attention scores are calculated regularly among all tokens, but are normalized within each block and multiplied by the landmark scores. In this way, the attention mechanism is trained to identify and select relevant token blocks given each query. As a result, during inference, the model can dynamically select relevant context blocks for each incoming query without loading all previous contexts into memory. Combined with a careful design of positional encodings, the model is allowed to process arbitrarily long contexts and demonstrates its effectiveness in language modeling tasks across different model scales.
Strengths: - The idea is novel and interesting in that it associates each contiguous block of tokens with a pointer; each incoming query is then compared against these pointer vectors (as well as tokens within the local neighborhood) and retrieves past token blocks only when the corresponding pointer is semantically relevant. A notable advantage of this approach is that the representation of these pointer vectors is learned through the attention mechanism, thereby reducing the need for heuristic reductions and enhancing the overall simplicity of the model.
- Furthermore, the exploration of positional encodings in this paper is a valuable contribution. While it is not new to investigate the impact of positional embeddings on length extrapolation, this research offers a comprehensive analysis of this phenomenon and proposes several techniques to alleviate related challenges. The empirical findings presented here not only provide insights specific to the context of this study but may also hold relevance for the general language modeling community.
Weaknesses: - The paper is not that easy to follow. For example, the formulation of Equation (1) introduces confusion. The subscript $i$ in Equation (1) actually indexes the $i$-th element of vectors $v$ and $g$, rather than representing the $i$-th query as in Equation (2,3,4). I would suggest using a different subscript other than $i$ to enhance clarity. In addition, it seems that $(k+1)\cdot \ell_{\text{block}}$ in L234 should be $(k+1) \cdot (\ell_{\text{block}} + 1)$, according to the specification in L200 that “augmented by a landmark token after every $\ell_{\text{block}}$ tokens”.
- The proposed model is not evaluated sufficiently in the following aspects:
- 1) Some baselines are missing. In my opinion, it would be beneficial to compare against other long-range methods such as Combiner (as mentioned in the paper), which processes the distant context with heuristic pooling operations.
- 2) Ablation studies are not sufficiently extensive. For instance, this work does not examine the impact of positional encodings and block sizes in modeling long contexts;
- 3) The conducted experiments do not provide qualitative or quantitative results supporting claims such as "In contrast with Transformer-XL, using our method, the information retrieval is interpretable".
- One notable advantage of this proposed model is the ability to efficiently process long contexts by enabling random access to tokens. However, this work does not demonstrate the efficiency gains (e.g., memory usage or decoding-time speedup) achieved in dealing with long contexts. Providing evidence of these gains would further strengthen the paper's contributions in this regard.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Regarding LLAMA fine-tuning experiments, it is known that LLAMA performs poorly when it comes to length extrapolation, possibly due to positional encodings. Consequently, it remains unclear whether the improved results depicted in Figure 3(b) are solely attributed to the proposed model's enhanced ability to handle longer contexts or are primarily a consequence of the utilization of more robust positional encodings. This issue might require further investigation.
2. During inference, token blocks within the memory are frequently retrieved and replaced due to the attention landmark selection. Intuitively, such swapping-in/out processes can be computationally expensive, especially in the case of per-head and per-token selection. Does this case hold true in practical settings and are the benefits of the proposed approach only apparent in the long sequence regime?
3. How well does the model scale with longer sequences during training in terms of evaluation performance (e.g., language modeling perplexity)?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have addressed the limitations; my suggestions can be found above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer g7s5,
We make the following comments to address your questions and concerns:
1. We thank you for the valuable suggestions to increase clarity (and also for pointing out the typo) and will apply them in the final revision.
2. Please note that existing methods for handling long contexts typically either require training at the target inference length, which increases the training cost, or sacrifice the ability to directly attend to arbitrary tokens, making them unlikely to perform well at retrieving an exact pass key.
3. We want to emphasize that throughout the paper we always use the standard rotary positional encoding. The stingy positional mapping is applied solely during inference and only affects how tokens are indexed, not how they are encoded for the model. Furthermore, this mapping can only be applied in combination with a method such as landmarks that identifies relevant previous blocks; it is not a separate method that can be used directly with LLaMA or other models. Therefore, the study of the effect of different positional encodings on context length is outside the scope of our work, though we discuss the limitations of existing methods in the final paragraph of the related work section.
4. Overall, we do not expect our method to be very sensitive to the choice of block size. Instead, this value can be chosen based on common sense (keeping the block small enough that retrieving a single block does not fill the context with unimportant tokens and it can be summarized effectively, but large enough to retain the computational benefits). We note that the need to choose a fixed block size can also be mitigated by providing the model with blocks of different sizes at training. In the early stages of our work, we briefly experimented with dropping some of the landmark tokens, thus merging consecutive blocks, and did not find a noticeable effect on performance. For simplicity, we decided to avoid doing this for the final experiments.
5. Please note that the information in Transformer XL has to be passed through recurrence which prevents finding the tokens the model attends to. In contrast, in our method, the model directly attends to the tokens. In this sense, our model is more interpretable since the exact tokens attended to by the model can be identified by looking at the attention scores or looking at the set of retrieved blocks, i.e. blocks with highest scoring landmarks. We will clarify this further in the final revision.
6. While swapping out and swapping in can lead to a slowdown, we show in Appendix G that only allowing the retrieved set to change across heads and not across tokens allows us to mitigate this problem and successfully perform inference with LLaMA at 32k inference length with 98% accuracy.
7. We acknowledge that without the reduced flexibility, i.e. in per token and per head setting, the CPU-GPU communication could become the bottleneck. However, we note that due to limitations, we did not implement caching of the retrieved blocks in GPU to avoid double communication which we expect to significantly reduce the load on CPU-GPU connection. Regardless, we emphasize that our method still improves over previous methods and allows the model to operate at arbitrary inference context lengths (though slowly) despite being trained at a much smaller context length. Therefore, using landmarks, inference at larger context lengths becomes a question of speed or having additional resources at inference time instead of the need to re-train the model.
8. In terms of perplexity, we expect our method to behave the same as the standard transformer as the context length grows. While this is not included in the paper, we can confirm, for example, that LLaMA 7B can also be successfully trained at 2k context length using a more efficient implementation of our method (combined with Flash Attention). Finally, a method similar to the one used at inference can also be applied at training, allowing direct training at larger context lengths while reducing the quadratic computational cost by the block-size factor (though we have not yet implemented this).
We hope that the above replies adequately clarify our contributions and that you would consider raising your score. We would also like to ask that you read the general reply we have provided which addresses common concerns raised by the reviewers. We remain at your disposal to answer any additional questions or comments.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for providing comprehensive and insightful responses. In general, after reading through the clarification as well as the other reviews, I decided to stay at my initial rating of 5.
> Regarding Comments 1, 2, 3, 4, and 8:
Thanks for the feedback. The clarification of the experimental setup makes it clear to see the flexibility introduced by landmarks, and I acknowledged that such training flexibility is advantageous in that the model does not need to be trained on target sequence length.
> Regarding Comment 5:
- The claims such as “improved interpretability” and “better retrieval ability” still lack empirical substantiation.
- If the model can attend to all tokens, it would be prudent to compare it with more relevant baselines, like sparse transformers (e.g., routing transformers) that also have this capability.
> Regarding Comments 6 and 7:
While I appreciate the effort made by the authors to clarify the implementation details, the empirical advantages of the proposed method in longer contexts remain ambiguous. A more rigorous empirical analysis is warranted, such as
- comparing the memory and runtime efficiency between the proposed method and standard Transformer in long contexts;
- demonstrating the detailed trade-off between task performance (perplexity, retrieval accuracy, etc.) and empirical runtime, especially concerning the swapping-in and -out operations. | Summary: This work proposes a hierarchical structure that organizes previous context in blocks and represents them via novel landmark tokens at the end of each block. Additionally, a novel GroupedSoftmax mechanism is proposed to replace the original softmax, enabling the current token to attend to both local tokens and retrieved previous contexts. Also, a novel positional encoding method is proposed to avoid the positional confusion caused by retrieved previous contexts. The authors demonstrate the effectiveness of the proposed method on several language modeling datasets.
Strengths: 1. Different from the retrieval-based attention over all previously cached contexts in the Memorizing Transformer, the proposed method offers a novel way to split the previous context into blocks and attend to the well-organized blocks via GroupedSoftmax over the landmark token representation of each block. The whole design is intuitive and feasible.
2. I do appreciate the related work section of this paper. This section discusses almost all feasible solutions for extending the context length of language models.
Weaknesses: 1. My major concern with the proposed method is that the current implementation is too simplistic, leading to lower training stability and higher computational time cost. At present, the authors store everything, such as the landmark token representations and previous contextual representations, in GPU memory for simplicity of implementation, and thus for each attention layer the stored vectors take up a lot of GPU memory. The authors mention future work on storing cached vectors in CPU memory or even on disk. However, the computation for the proposed GroupedSoftmax is still deployed on the GPU, so the major efficiency bottleneck will be the loading and offloading between GPU and CPU, as this process takes several times longer than the attention computation itself. Additionally, implementing these cached indices on disk seems impossible, as the data transfer time from disk to CPU and then to GPU is unacceptable for any real-time application. Thus, analysis and statistics on the introduced additional time cost and GPU memory cost would be appreciated.
2. My second concern is that the proposed method introduces too many hyperparameters controlling the granularity of the block architecture, including the local context length, the block length, the number of blocks, the number of retrieved blocks, and the attention size. The authors should either fix a single group of hyperparameters to use in all experiments or propose a brute-force strategy to select such a group of hyperparameters.
3. You only follow the Memorizing Transformer in comparing the proposed method to Transformer-XL. However, the Memorizing Transformer itself, as the most important baseline for your method, should also be compared against. The Memorizing Transformer can be regarded as a special case of your model where the number of blocks is 1, the local context length is the evaluation length, the number of retrieved blocks is 32, the block length is 1 token, the attention size is the local context length plus the number of retrieved blocks, and the original softmax attention is applied.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: The presentation of the whole paper is excellent; I believe I have understood the details of the paper well thanks to it. The only issue is that most references lack the publication venue (journal/conference) and year.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer fLCX,
We make the following comments to address your questions and concerns:
1. We did not observe any training instability due to using landmark attention. Furthermore, as mentioned in the paper, the additional computational costs are negligible, especially when combined with Flash Attention. Importantly, our method facilitates performing inference at any length regardless of the training context length, removing the necessity of a costly training procedure for long-context inference. The alternative approach of training at the target inference length would be much more expensive and would scale quadratically with the length during both training and inference.
2. In Appendix G, we demonstrate how offloading the KV cache to the CPU, along with reduced retrieval flexibility, enables us to extend LLaMA's context length to 32K. As a result, we achieve 98% accuracy in retrieving the pass key. Without our method, performing inference at this context length would be impossible, as the KV cache cannot fit in a single A100 GPU.
3. The local context length and the attention size are directly determined by the model's training context length (which is already a hyperparameter that has to be chosen and depends on the available resources) and the number of retrieved blocks k. These values are provided in Table 1 for simplicity and better comparison. Therefore, the method only introduces two new hyperparameters: the block size and the number of retrieved blocks. We note that only the block size needs to be chosen at training time, and even this can be mitigated by providing the model with blocks of different sizes at training. In the early stages of our work, we briefly experimented with dropping some of the landmark tokens, thus merging consecutive blocks, and did not find a noticeable effect on performance. For simplicity, we decided to avoid doing this for the final experiments. Overall, we do not expect much tuning to be required for the choice of the block size; this value can be chosen based on common sense (keeping the block small enough that retrieving a single block does not fill the context with unimportant tokens and it can be summarized effectively, but large enough to retain the computational benefits). Since the optimal k can be chosen at inference, it is easy to tune. Our results demonstrate the trade-off between using a larger and smaller k. Finally, increasing k beyond a certain point has negligible effect since softmax is applied over the landmarks of the retrieved blocks. For example, tokens in the 10th-best block can only have a weight of at most 0.1.
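The bound in the last sentence follows from a general property of softmax: the r-th largest weight is at most 1/r, because at least r weights are that large and all weights sum to one. A small illustrative check (arbitrary example values):

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# landmark scores of, say, 20 retrieved blocks (arbitrary values)
ws = sorted(softmax([0.1 * i for i in range(20)]), reverse=True)
# the 10th-highest landmark weight can never exceed 1/10
assert ws[9] <= 0.1
```

This is why retrieving more than a handful of blocks yields diminishing returns: the softmax over landmarks caps the influence of low-ranked blocks.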
4. We would like to emphasize that a very important difference between Memorizing Transformers and our method is the need to train the model at the large context length, which either increases the training cost significantly or makes the implementation more complicated, since it requires connecting the model to a FAISS data structure. The increased training cost makes it challenging to train such a model for large contexts. In contrast, our method allows inference at arbitrary context lengths regardless of the training context length. As a side note, we do not expect the stingy mapping to work for extremely small blocks (such as block size 1), since in that case mapping earlier blocks all to the same position in the stage of finding relevant blocks resembles searching in a bag of words.
We hope that the above replies adequately clarify our contributions and that you would consider raising your score. We would also like to ask that you read the general reply we have provided which addresses common concerns raised by the reviewers. We remain at your disposal to answer any additional questions or comments.
---
Rebuttal Comment 1.1:
Title: Response to Author Rebuttal
Comment: Concern 1: I think the description in Lines 55-57 leads to misunderstanding by the reviewers. It is good that you present the benefit of CPU offloading in Appendix G. Therefore, your description in Lines 55-57 should be modified and the results in Appendix G should be mentioned there. Meanwhile, some detailed statistics should be presented in a table, e.g., the time cost with and without CPU offloading for context lengths of 8k and 16k.
Comment 2: Regarding your description "as the KV cache cannot fit in a single A100 GPU": from my understanding, your method replaces the softmax in each decoder layer with the designed grouped softmax function, with attention over the landmark tokens and retrieved blocks. If you keep some of the decoder layers in their original architecture, so that the KV cache of the previous context does not need to be kept in those layers, your GPU memory requirement could be largely reduced. If you are interested in such an implementation, you could verify whether it saves GPU memory while maintaining performance.
Comment 3: You did not mention anything regarding the flash attention in your paper. If you adopt such efficiency technique in your method, please give some details on how you adopt that in your implementation.
Comment 4: The authors did not convince me that the method demonstrates both efficiency and capability advantages over the Memorizing Transformer. As for training efficiency, the local context length for the Memorizing Transformer is also 512, the same as in your method. Both the Memorizing Transformer and your method introduce some extra time cost for attending to previous context: the Memorizing Transformer adopts approximate retrieval with FAISS for acceleration, which brings extra time cost, while your method requires indexing the previous blocks and reconstructing the attention matrix. It is hard to demonstrate the advantage via descriptions alone. I hold the same opinion as reviewer tGXa, and the lack of comparison with the Memorizing Transformer keeps this a borderline paper. I am OK with accepting your paper, but you should consider reproducing the Memorizing Transformer and comparing against it.
Comment 5: Regarding your explanation "As a side note, we do not expect the stingy mapping to work for extremely small blocks (such as block size 1) as in that case mapping earlier blocks all to the same position in the stage of finding relevant blocks resembles searching in a bag of words.": I believe previous work like kNN-LM has demonstrated that token-level retrieval and fusion can improve the perplexity on language modeling benchmarks. Token-level retrieval can sometimes retrieve similar token representations that are beneficial for the language modeling task, especially when the evaluation metric is token-level perplexity. You should consider adding such an ablation study to Table 2.
Therefore, I still keep the final review score as 5.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you very much for participating in the rebuttal and providing additional comments. We hope the following clarifications address your concerns further.
1. Thank you for your suggestion. We will update the manuscript as suggested to refer the reader to Appendix G.
2. When generating auto-regressively, the KV cache is needed in the standard Transformer architecture as well, i.e. even when all the layers are the original ones. In other words, the need for the KV cache **is not** a side-effect of our method; it exists to avoid recomputing all the intermediate vectors for previous tokens when the next token is fed through the transformer. In comparison to the original architecture, our method reduces the computation by a factor of the block size and alleviates memory requirements by allowing the KV cache to be off-loaded. The only increase in memory requirements when using our method comes from the additional landmark tokens, which is negligible.
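To illustrate this point, here is a toy sketch (hypothetical code, not from our implementation) of the per-layer cache any autoregressive transformer maintains; our method merely adds one landmark entry per block on top of this:

```python
class KVCache:
    """Per-layer store of past keys/values used during autoregressive
    decoding, so they need not be recomputed at every generation step."""
    def __init__(self):
        self.keys, self.values = [], []

    def append(self, k, v):
        self.keys.append(k)
        self.values.append(v)

    def __len__(self):
        return len(self.keys)

cache = KVCache()
for t in range(3):                        # three decoding steps
    cache.append([float(t)], [float(t)])  # toy 1-d key/value vectors
# memory grows linearly with the number of consumed/generated tokens,
# whatever attention variant sits on top
```

The linear growth is exactly why offloading the cache (rather than eliminating it) is the relevant lever for long contexts.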
3. None of the results included in the paper are based on Flash Attention. However, we discuss the possibility of combining our method with Flash Attention in Appendix F. Flash Attention computes the output of attention by processing the tokens in blocks. By using the same block size for both Flash Attention and landmarks, it is possible to implement a fused version of landmark attention. Since the submission, we have explored this possibility and implemented the fused version, allowing us to obtain better performance and a reduced memory footprint. However, for this paper, we use the high-level implementation.
4. We want to emphasize that the exact difference is that our method **does not** incur an additional time cost for retrieval **at training** whereas Memorizing Transformer does as you described. The mentioned retrieval and indexing process is only applied at inference.
5. During our experiments, we found that it is necessary to have the correct indices at least for the last few blocks, as we discussed in lines 242-247. We conjecture that the success of previous methods may be because they include the retrieval during training, allowing the model to adapt to some extent. However, verifying this falls outside the scope of this work.
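The KV-cache mechanics described in point 2 above can be illustrated with a toy decoding loop. This is a simplified sketch with made-up two-dimensional vectors, not the actual landmark implementation: the point is only that each decoding step appends one key/value pair to the cache instead of recomputing projections for all previous tokens.

```python
import math

def attend(q, keys, values):
    """Single-query scaled dot-product attention over the cached keys/values."""
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(len(q)) for k in keys]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(values[0])
    return [sum(weights[i] * values[i][d] for i in range(len(values))) for d in range(dim)]

# Toy autoregressive decoding: each step appends one key/value pair to the
# cache, so earlier tokens are never re-projected (the purpose of the KV cache).
k_cache, v_cache = [], []
steps = [([1.0, 0.0], [1.0, 0.0], [0.5, 0.5]),
         ([0.0, 1.0], [0.0, 1.0], [0.2, 0.8])]
for step_q, step_k, step_v in steps:
    k_cache.append(step_k)
    v_cache.append(step_v)
    out = attend(step_q, k_cache, v_cache)
```

Off-loading then simply means this growing `k_cache`/`v_cache` can live outside GPU memory, with only landmark entries kept resident.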
We hope that the above replies adequately clarify our contributions and that you would consider raising your score. We remain at your disposal to answer any additional questions or comments. | Summary: The paper proposes a new attention mechanism that uses "landmark" tokens to allow access to long contexts while retaining the flexibility of standard attention. The landmark token represents each block of the input context, and the attention mechanism is trained to use landmark tokens to select relevant blocks. This allows retrieving blocks directly through attention instead of separate retrieval mechanisms. Experiments including language modeling and fine-tuning LLaMA 7B show landmark tokens' effectiveness.
Strengths: - The landmark design is interesting, which maintains the random-access flexibility of standard attention, unlike other retrieve-based approaches. It enables the possibility of processing arbitrarily long contexts and practically extends the context length of large LMs like LLaMA by 5x.
- This method is able to reduce memory footprint since only landmark tokens need to be stored in GPU and some KV cache can be offloaded to CPU. Besides, the landmark approach needs fewer retrieved tokens per step, compared with Transformer-XL.
Weaknesses: - One of the advantages of the landmark approach in the paper is to reduce memory usage by swapping out (to CPU memory or to disk) all regular tokens’ cached key-value vectors (line 205). However, this is not tested in experiments, as stated in line 56:
>For simplicity, we focus our experiments on storing everything in GPU memory, but we note that the above techniques can be directly applied in large-scale settings.
It is better to add the related experiments about memory saving to validate the effectiveness of this approach.
- The language modeling experiment is not that persuasive. The perplexity of the landmark approach is not better than Transformer-XL's, although it uses a smaller attention size. It remains unclear whether its language modeling ability and memory consumption are superior to Transformer-XL's.
- The experiment of fine-tuning LLaMA 7B is not that persuasive: LLaMA-2k is not good at extrapolation, so this baseline is really weak. Besides: i) only retrieval tasks are tested, with no other common long-document understanding/generation tasks (e.g. the SCROLLS benchmark or many-shot in-context learning); ii) there are many other approaches targeting long-document processing (listed in Related Work), but no experiments compare against these baselines.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - In Table 2, for different levels of retrieval flexibility, the change of perplexity is small. Does this mean the choice of $k$ is not that important?
- What is the effect of doing memory-offloading for unused KV cache? To what extent it can save memory consumption?
- Can you explain and compare the choice of pre-training the landmark LM from scratch and finetuning a landmark LM based on the existing LM like LLaMA.
- The line spacing between lines 339-340 and lines 347-348 is too narrow.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have not addressed limitations explicitly in the conclusion, but they acknowledge the computational limitations in the middle of the paper and state their future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer Hd8A,
We make the following comments to address your questions and concerns:
1. In Appendix G, we demonstrate how offloading the KV cache to the CPU, along with reduced retrieval flexibility, enables us to extend LLaMA's context length to 32K. As a result, we achieve 98% accuracy in retrieving the pass key. Without our method, performing inference at such long context lengths would be impossible, as the KV cache grows and cannot fit in a single A100 80GB GPU.
2. We present Transformer XL as a baseline to show that landmark attention can perform comparably in utilizing long contexts. However, despite performing slightly better, Transformer XL has inherent limitations as it cannot directly access earlier tokens, hindering its ability to perform certain tasks, such as retrieval. In contrast, our LLaMA experiments demonstrate that Landmark Attention successfully retrieves and attends to early tokens.
3. As you mentioned the original LLaMA 7B was trained on 2k tokens and can only perform inference up to this length. We want to demonstrate that fine-tuning using landmark attention (at 512 context length) makes the model capable of handling arbitrarily long context lengths. Our experiments clearly verify this for 32k tokens. Please note that all existing methods for handling long contexts usually require training at the target inference length which increases the training cost or sacrifices the ability to directly attend to any tokens.
4. We point out that most existing methods for retrieval use a separate embedding model to perform the retrieval, which needs to be trained separately, increasing the training cost. In contrast, our method allows using the model itself to perform the retrieval. Furthermore, many of the proposed retrieval-based methods focus on retrieving whole documents rather than finding relevant parts of the input. Extending these methods to identify small relevant blocks of the input is beyond the scope of our work and requires further research. Other existing methods for augmenting transformers with memory or long-context capabilities either remove the capability to directly attend to each token, which is unlikely to perform well for retrieving an exact pass key, or involve a costly training method which is challenging to perform for a model as large as LLaMA. We have also left further evaluation of the fine-tuned model on other tasks such as summarization as future work, given the resource limitations.
5. When the set of retrieved blocks is allowed to change for each token and each head, increasing k beyond certain values can have a negligible effect since softmax is applied over the landmarks of the retrieved blocks. For example, tokens in the 10th-ranked block can only have a weight of at most 0.1. However, in a less flexible regime where the retrieval set remains the same across all tokens (but can vary across heads), k can have a more noticeable effect. For example, as shown in Table 2, increasing k from 2 to 4 in this setting improves performance from 15.48 to 15.10.
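The weight bound mentioned in point 5 follows from softmax normalization: the weights sum to 1 and at least $i$ of them are at least as large as the $i$-th largest, so that weight can be at most $1/i$. A small numeric check, with hypothetical landmark scores (the values are illustrative, not from the paper):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical landmark scores for k = 10 retrieved blocks.
scores = [3.0, 2.5, 2.2, 2.0, 1.8, 1.5, 1.2, 1.0, 0.5, 0.1]
weights = sorted(softmax(scores), reverse=True)

# The i-th largest softmax weight never exceeds 1/i: at least i weights
# are >= it, and all weights together sum to 1.
for i, w in enumerate(weights, start=1):
    assert w <= 1.0 / i
```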
We hope that the above replies adequately clarify our contributions and that you would consider raising your score. We would also like to ask that you read the general reply we have provided which addresses common concerns raised by the reviewers. We remain at your disposal to answer any additional questions or comments.
---
Rebuttal Comment 1.1:
Title: Thank you for clarification
Comment: Thank you for the clarifications, which make the paper clearer. However, some concerns remain unresolved, such as: the lack of testing on long-document understanding/generation tasks (zero/few-shot instead of fine-tuned) and the comparison between pre-training a landmark LM from scratch and fine-tuning a landmark LM based on LLaMA. | Rebuttal 1:
Rebuttal: Dear Reviewers,
We would like to address some of the common concerns raised by the reviewers with the following comments:
* We have successfully used offloading parts of KV cache to CPU to perform inference at 32k context lengths using LLaMA. We have described this result in Appendix G. Note that while offloading to CPU might make the inference a bit slower, without our method performing inference at such long context lengths becomes impossible on a single A100 80GB GPU since it is not possible to store the KV cache.
* One of the major advantages of Landmark Attention is its ability to decouple the training and inference context lengths. This means that the model can be trained at a limited context length, thus avoiding increased training costs. The trained model can then be effectively used for making inferences at any context length. We have extensively demonstrated this feature in our language modeling experiments where we train our models at context length 512 and successfully infer at much larger context lengths, such as 4096. Similarly, with LLaMA, we fine-tune at context length 512 using a model originally trained at 2k context length, and we can perform inference at a much larger 32k context length with excellent results.
* This unique characteristic of our method sets it apart from previous approaches like Memorizing Transformers, which require expensive training with large context lengths. As a baseline, we implemented Transformer-XL to showcase that we can achieve comparable performance in utilizing long contexts. Comparing against additional baselines becomes a challenging task considering the increased training costs and the complexity of implementing such methods (e.g., Memorizing Transformers rely on FAISS during training).
* We note that while our method performs comparably to (though not better than) Transformer-XL, it has the advantage of being able to directly attend to all previous tokens. This is important for utilizing the full power of attention, for example allowing our method to excel at retrieval tasks as demonstrated in our LLaMA experiment.
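The retrieve-then-attend pattern behind the offloading described in the first bullet can be sketched in a few lines. This is a pure-Python simulation with made-up vectors; `retrieve_blocks` and the dot-product scoring are illustrative stand-ins, not the paper's implementation. The idea: only the small landmark vectors stay resident, and full blocks are fetched from offloaded storage on demand.

```python
def retrieve_blocks(query, landmarks, offloaded_blocks, k):
    """Score each block's landmark vector against the query, then fetch
    only the top-k blocks from (simulated) CPU-side storage."""
    scores = [(sum(q * l for q, l in zip(query, lm)), idx)
              for idx, lm in enumerate(landmarks)]
    top = sorted(scores, reverse=True)[:k]
    return [offloaded_blocks[idx] for _, idx in top]

# Landmark vectors stay "on GPU"; block contents live "on CPU".
landmarks = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
blocks = {0: "block-0 tokens", 1: "block-1 tokens", 2: "block-2 tokens"}
fetched = retrieve_blocks([1.0, 0.2], landmarks, blocks, k=2)
```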
In addition to these general comments, we have provided individual responses to the reviewers to address their specific concerns. We hope that these explanations further clarify the contributions of our work. We are available to address any further comments or questions you may have.
Pdf: /pdf/30a30d6711dc87c056b1d1c43f06f27b3cf5f541.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors propose a new transformer architecture, the Landmark Transformer, which uses a novel approach to the self-attention mechanism to allow the model to handle longer sequences.
The Landmark Transformer introduces a new type of token called a "landmark token" which acts as a gateway to a block of tokens. The blocks are assigned to these landmark tokens in a way that allows the model to handle longer sequences while maintaining computational efficiency. The landmark tokens are distributed across the sequence and form a part of the model's input. They are processed in a particular order and are used to control the flow of attention in the transformer model. In some ways, this could also be viewed as introducing hierarchy (and or tree-structures) into the context window. They provide a higher level abstraction over local sets of tokens.
The authors propose a "Grouped Softmax" operation, which is used to calculate attention scores in the self-attention mechanism. This operation allows the transformer to regulate its attention across different blocks of tokens. The Grouped Softmax operation is applied to both the query and key vectors in the self-attention mechanism. The authors also propose an efficient method for calculating these Grouped Softmax operations.
The authors evaluate their proposed models on the PG-19 dataset and an arXiv math dataset. The results demonstrate that their models can handle longer sequences while maintaining good performance when compared to the baseline .
Strengths: The paper offers a novel and simple approach to extending the context length of transformer models.
The multi-resolution nature of this approach allows for exact retention of fine-grained details and is very amenable to model interpretability approaches.
This method can be applied post-hoc and is extensible to tools/frameworks that can better take advantage of CPU processing and system RAM.
This approach can be used to directly build document retrievers without any further training required.
The approach is also quite efficient, requiring at most 4 A100 GPUs to achieve the results presented in this work.
This approach has the potential to enable hierarchical scaling of the context window, which when combined with an external cache could enable quadratic/exponential growth of the context window.
The illustrations in the article help a lot in conveying the core ideas.
Overall, a solid, timely and well-justified piece of work.
Weaknesses: There should be more diversity in the models and datasets benchmarked against in this paper.
Sections 3.1 (presenting grouped softmax) and the middle paragraph of Section 3.2 are confusing and less clear than the rest of the article.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: There is a typo on line 185: exmaple -> example
Were these the 80GB or 40GB versions of the A100?
Could you provide a graph of how memory/compute/time scales with block size and length?
Experiments to demonstrate the memory offloading capabilities of this approach would improve this work.
It would also be good to benchmark against the Memorizing Transformer and similar competing approaches.
I will improve my score if these questions are answered and the weaknesses raised are addressed.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: There is no discussion of the social impact of this work; it should be added.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer DxW4,
We make the following comments to address your questions and concerns:
1. The language modeling experiments were done using 4 A100 40GB GPUs. LLaMA fine-tuning was done using 8 A100 80GB GPUs.
2. We have successfully used offloading parts of KV cache to CPU to perform inference at 32k context lengths using LLaMA. We have described this result in Appendix G.
3. Here is an overview of the effects of increasing block size or context length during training and inference.
* During training, the block size has almost no effect on time or memory usage since we keep the context size (including the landmarks) fixed. Since we use the standard training procedure for transformers, computation time scales quadratically with the context size (during training). We note that future work could apply a retrieval method similar to the one used at inference during training as well, reducing the training computation by the block-size factor.
* During inference, when using landmark attention, landmark tokens also need to be stored in memory, which slightly increases the memory usage and compute time by a factor of 1/(block size). Thus increasing the block size reduces memory usage while also reducing the time needed to find relevant blocks, since there are fewer blocks. However, if the number of retrieved blocks $k$ is kept constant, the chunks have to become shorter so that the longer retrieved blocks and the current chunk can fit into the model's maximum allowed context length, i.e. the training context length. Therefore, increasing the block size increases the number of chunks the input has to be broken into, which slows down the inference. This trade-off can be seen in Figure 1 of the PDF attached to our global rebuttal, showing the inference time for different block sizes and different values of $k$.
* When increasing the context length, the standard transformer's memory usage and operation time grow quadratically since it needs to compute the full attention matrix. When using landmark attention, the attention matrix is only computed for a fixed number of blocks, which means the total memory usage and operation time for computing the attention matrix increase only linearly. However, the bottleneck becomes finding the set of relevant blocks. As we mentioned, it is possible to use a kNN data structure to perform this operation efficiently as well, but we leave exploration of such an implementation for future work. With the direct (not kNN) implementation we used for our experiments, the memory usage and operation time of this step are also quadratic in context length but are reduced by the noticeable factor of the block size (e.g. a 50x reduction). The implementation still needs to be done efficiently for this improvement to be observable. Finally, we note that using flash attention, it is possible to reduce the quadratic memory usage for both our method and the standard transformer to linear. We have already implemented this version of our method in Triton and have seen its success. Please note that the reduction of operation time by the noticeable block-size factor remains even with flash attention.
4. This unique characteristic of our method sets it apart from previous approaches like Memorizing Transformers, which require expensive training with large context lengths. As a baseline, we implemented Transformer-XL to showcase that we can achieve comparable performance in utilizing long contexts. Comparing against additional baselines becomes a challenging task considering the increased training costs and the complexity of implementing such methods (e.g., Memorizing Transformers rely on FAISS during training).
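The scaling comparison in point 3 can be made concrete with a toy cost model. The functions and constants below are illustrative assumptions for a back-of-the-envelope estimate, not measurements from the paper:

```python
def full_attention_cost(n):
    """Attention-score computations for standard attention over n tokens (O(n^2))."""
    return n * n

def landmark_attention_cost(n, block_size, k):
    """Rough cost model: each token scores all n // block_size landmarks,
    then attends within its k retrieved blocks. Local-chunk attention is
    ignored here -- an illustrative simplification."""
    num_landmarks = n // block_size
    return n * (num_landmarks + k * block_size)

n, block, k = 32_768, 50, 4
full = full_attention_cost(n)
lm = landmark_attention_cost(n, block, k)
# The landmark term is still quadratic in n (via num_landmarks), but the
# quadratic part is divided by roughly the block size, as described above.
speedup = full / lm
```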
We hope that the above replies adequately clarify our contributions and that you would consider raising your score. We would also like to ask that you read the general reply we have provided which addresses common concerns raised by the reviewers. We remain at your disposal to answer any additional questions or comments. | null | null | null | null | null | null |
Certification of Distributional Individual Fairness | Accept (poster) | Summary: This paper considers the problem of certifying individual fairness (IF), which is of great importance to reliable machine learning algorithms. To this end, the authors propose a novel convex relation of IF constraints that greatly reduces the computational cost. In addition, the authors propose to certify distributional individual fairness, ensuring that the neural network has guaranteed individually fair predictions for a given empirical distribution and all distributions within a $\gamma$-Wasserstein ball.
Strengths: 1. This paper is technically sound.
2. The extensive experiments validate the effectiveness of the proposed methods.
Weaknesses: The paper studies individual fairness and distributional fairness. In my opinion, the two topics seem to be independent. However, it is possible that I misunderstand this paper. It would be better if the authors could present more relations between these topics.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: ## Miscellaneous
1. Line 106: feed forward $\to$ feedforward
2. Line 168: $d$ is indeed a vector; however, the denotation $\sqrt{d}$ should be defined more specifically.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: none
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments, including their clarifying points about our notation and praise regarding our experimental section. We will ensure that the notational points are adjusted in the final version of the paper.
### On the separation between distributional robustness and fairness
In general, distributionally robust fairness and individual fairness are separate concepts. This point is made on line 97 (last paragraph of the related works). However, in this work we study the intersection of these two: distributional individual fairness. We will attempt to further emphasize this in the Introduction.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. It addresses my concern. Since I am not an expert in this field, I will keep my score. | Summary: This paper studies formal guarantees for notions of individual fairness (IF) for predictors given by neural network models. After relaxing common definitions for IF metrics by means of $\ell_\infty$ balls (or orthotopes), they adapt methodology based on adversarial robustness to provide upper and lower bounds to the IF achieved by models on an empirical sample - and those within a $\gamma-$Wasserstein ball about it.
Strengths: - This paper studies an important problem of individual fairness
- The first half of the paper, Section 3 and 4, which cover Background, the DIF definition, and problem explanation are very clear and easy to understand.
Weaknesses: - The key observation and novelty in the approach is not clearly noted (See below)
- Several of the nice advantages of their method (e.g efficiency) are not explained (see below).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Numerous times in the paper the authors say their bounds are ”efficient” because they leverage efficient methods (e.g. those based on bound propagation). While that may be true, it would be nice for the readers if they provided a brief explanation as to why these methods are efficient instead of placing everything in the appendix.
2. It seems to me that the central novelty of this paper is to upper bound a mahalanobis metric (for $d_{fair}$) with an orthotope, which is quite simple. The remaining of the paper seems to me a direct application of results and methods in adversarial robustness. While I do appreciate the observation of being able to use those tools in the context of fairness - which also constitutes novelty - I would appreciate if the authors could be very clear about what are the main technical contributions of this work.
3. Personally, I am not sure providing a section on the impact of these methods on group fairness is necessary. I’d much rather prefer a discussion on the efficiency of the bounds.
4. Figure 1 is quite confusing. What makes the blue-star individuals likely? As presented, those blue-star points do not look likely. If I understand the figure correctly, the authors should present a more balanced empirical sample together with a larger sample representing the (unobserved) population.
5. I also have problems with the fact that the authors state their goals and present their definitions in terms of expectations (e.g. as in Def 2), but simply restrict themselves to studying empirical samples. I think the presentation is misleading, because nowhere do the authors really provide guarantees for the definition in Def 2 (that is, risk bounds). This is also an important limitation where they study the Wasserstein distance between distributions, as they simply regard their distribution as one supported on Dirac functions (on the observed samples).
6. Immediately after Eq (4), the authors write that “we can optimize this bound to be tight”. I don’t think this is correct: while they can indeed optimize the bound, there’s no guarantee that the bound will be tight, as the original problem is non-concave.
7. In Section 5.4 and after presenting $\mathcal L_{F-DIF}$, the authors mention when $\gamma=0$, one recovers a local constraint on individual fairness on $x\in X$. I don’t think this is completely accurate, because again, Def. 2 is defined in expectation of $x\sim p(x)$, not simply over the empirical sample.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors mention that they do not foresee negative societal impacts. Maximizing upper and lower bounds is great but in doing so we don’t really know what is happening to the true fairness violation. It may be that the true fairness violation is in fact increasing which is propagating unfairness. While I understand that solving for this value is not feasible and thus appreciate the results presented, I would also like the paper to acknowledge that there are potential negative effects.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable comments on the presentation of our paper and appreciate the detail of their review. Below, we address each point raised by the reviewer. We clear up any minor misconceptions and provide a clear action to improve the presentation of the final version of the paper. We note that we have addressed the authors point on key contributions and observations in the global response. We are more than happy to respond on any further concerns.
### On point 1:
On line 180-191 we state that our approach to local IF certification has computational complexity of two forward passes through the network, where one forward pass maintains the upper bound, the other maintains the lower bound. On the other hand, MILP approaches take exponentially many forward passes to converge. We will elaborate on these in the revised paper. We will also add a further comment on this in the main text that also points to Figure 2 as empirical validation of this statement.
### On point 2:
We agree with the reviewer that one central novelty of the work is to upper bound the Mahalanobis metric with an orthotope. However, we would like to underscore that the application of this result is not to any traditional notion of adversarial robustness, but to distributional individual fairness. The distributional component of fairness has not been certified in any prior work. Moreover, distributional certificates for any notion (fairness or robustness) have not been provided without assuming explicit knowledge of a Lipschitz constant. To this end, our optimization approach for this problem is novel, and Theorem E.2 (to be moved to the main text as Theorem 5.3), showing that the upper bound formulation can be efficiently solved, is another key non-trivial result of this paper which may find impactful application outside the current work. We appreciate that this novelty could have been further underscored, and in the final version we will ensure that these points are emphasized.
### On point 3:
We agree with the reviewers that Table 2 and Section 6.4 may be moved to the Appendix to make room for further analysis. Specific details are provided in our global response.
### On point 4:
In Figure 1, the blue points have feature values that are generally within the observed range of individuals, albeit slightly outside what we have already seen. On the other hand, the purple stars fall wholly outside the range of feature values that were seen during training. The exact location of the blue stars is taken for illustration purposes. The blue points are not meant to illustrate the unobserved population (which implies unobserved points from the same distribution) but represent points that may be drawn from a population with a slightly different distribution. The purple individuals on the other hand illustrate individuals drawn from a drastically different distribution with different support.
### On point 5:
On line 196 of the paper, we state that the error introduced in the finite sample regime can be bounded by means of concentration inequalities (i.e., risk bounds as the reviewer alludes to). In the final version of the work, we will elaborate on this point and provide a clear lemma statement in the main text in order to fully clarify the guarantees we give. The lemma statement will be:
Lemma 5.3: Given an upper-bound $\bar{\epsilon}$ computed according to Equation (5), the error introduced by using a finite sample can be bounded from above by $\tau$. That is, the estimate $\bar{\epsilon}$ is within $\tau$ of the true expectation of $\bar{\epsilon}$ with probability $1- \lambda$, as long as $n$ is at least $\dfrac{-1}{2\tau^2} \log(\lambda/2)$.
The proof of this Lemma is a straightforward application of Chernoff's or Hoeffding's inequality. This lemma allows us to quantify the error induced by using finite samples to approximate the expectation in Definition 2, and it works for both our upper and lower bounds.
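As a quick sanity check of the lemma's sample-size requirement (a sketch assuming the certified quantity is bounded in $[0, 1]$, as Hoeffding's inequality requires; note $\frac{-1}{2\tau^2}\log(\lambda/2) = \frac{1}{2\tau^2}\log(2/\lambda)$):

```python
import math

def required_samples(tau, lam):
    """Samples needed so the empirical mean of a [0, 1]-bounded quantity is
    within tau of its expectation with probability 1 - lam, via Hoeffding:
    n >= log(2 / lam) / (2 * tau**2)."""
    return math.ceil(math.log(2.0 / lam) / (2.0 * tau ** 2))

# e.g. tolerance tau = 0.05 at confidence 1 - lam = 0.95
n = required_samples(0.05, 0.05)  # 738 samples suffice
```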
### On point 6:
We thank the reviewer for this note, and they are correct with this point. The phrasing of this should state "we can optimize any feasible selection of $\phi_{i}$ values to be a tighter lower-bound by observing ... the function is differentiable." We stress that any feasible selection of $\phi_{i}$ yields a valid lower bound to the DIF violation and that by observing our upper bounds we can quantify how "tight" the lower bound is w.r.t. the true value.
### On Local IF constraint when $\gamma = 0$
We believe this is a minor misconception on behalf of the reviewer. We highlight the specific phrasing used: "we recover the local IF constraint on each $x^{(i)} \in X$." This claim is that when $\gamma = 0$, $x^{(i)}$ will not be perturbed and therefore the upper-bound fairness violation is taken at $x^{(i)}$ which is identical to the local IF constraint (note this is Definition 1 not Definition 2).
### On the negative societal impacts comment
Given that we compute upper and lower bounds on the DIF violation, we are guaranteed that the true DIF value falls between these bounds. Therefore, we do know what is happening to the true fairness violation (w.r.t DIF) up to the tightness of our bounds. See Figures 2 and 3 of the main text for illustration of our upper and lower bounds.
In addition, where "true fairness violation" means bias not captured by the notion of DIF, we specifically address this in Section 6.4 where we state "it is currently the case that no one fairness metric alone captures a complete picture of model bias" and further make the point that DIF has no correlation with group fairness notions (see Table 2), thus a complete analysis of model bias should take these definitions into account. In the final version of the draft we will include this sentiment in our Broader Impact statement.
---
Rebuttal Comment 1.1:
Title: Thank you for your comments
Comment: I thank the authors for their responses, which have partially alleviated my concerns.
1. Understood, thanks. However note that, as posed in this paper, the local certification amounts simply to a local Lipschitz analysis. Several approaches exist for this that do not require an exponential number of iterations (as the authors suggest). See e.g. [Muthukumar et al, "Adversarial robustness of sparse local lipschitz predictors"].
2. Understood. I think stressing this point will highlight the novelty of the paper further.
3. Thanks.
4. Thank you for the clarification. However, I really do not think that this figure is helpful as it stands: note that the figure and caption refers everywhere to "likely"; strictly speaking though, the blue stars are as unlikely as the purple ones, since neither of those fall in the (apparent) support of the data distribution. I think the authors might be confounding distributions with samples, and what they mean is that purple stars are samples from a distribution that is further from the distribution used during training. Again, this has nothing to do with samples being "likely".
5. Yes, of course, one can employ concentration and provide a finite-sample approximation result, but this requires a fresh draw of samples. This is rather trivial, or "data-wasteful", and the need for further samples should be clearly stated in their comment on this point.
6. Thanks.
7. (On the reduction when $\gamma=0$): Thanks for the answer, but I don't think I agree: if $\gamma=0$, one requires $\mathbb E_x [ \mathcal I (f,x,\delta) ]<\epsilon$. However, this is different than requiring that $\mathcal I(f,x,\delta) < \epsilon ~\forall x \in X$, as stated -- the latter is stronger. Indeed, Definition 1 is *distributional*.
8. On societal implications: I appreciate the answer, but again I don't agree. I understand that you can control the upper bound to the fairness violation (the worst case violation), as indeed noted in the figures. However, reducing the upper bound does not imply that the true fairness violation (say, its mean) will be reduced as a result. I understand that the upper bound might be tight, but this does not mean that it will be tight for the specific data distribution one encounters in the real world. Maybe the authors can enlighten me as to why my understanding might be incorrect.
9. Oh, one last comment: As I was re-reading the paper to better understand your comments, I see that immediately after the definition of ID, the authors write "...$\mathcal I (f, x,\delta | Summary: This paper studies the problem of individual fairness in supervised learning. The focus is on studying how to certify distributional individual fairness (IF) (individual fairness over a set of distributions close to the observed empirical data distribution) in neural networks. Prior work has focused largely on certifying global IF, which is more expensive and thus can only be applied to smaller neural networks than the proposed certification/debiasing technique. The contributions of the paper are in showing how to certify distributional IF in neural networks and then using these bounds in the training process as regularizers to debias NNs.
The main methodology for certifying IF is presented in Section 5. The first step is to certify local IF by over-approximating the similarity ball to find a conservative estimate of the IF violation. They can then use this bound to certify distributional IF around the empirical data distribution and apply finite sample guarantees to give an estimate of the true distributional IF.
The authors then show how to use the bounds on distributional fairness as regularizers in the training procedure as a way to debias neural networks. They then provide experimental evaluation on a few benchmark datasets that demonstrates that their proposed training method indeed improves distributional individual fairness, at relatively modest degradations in accuracy.
Strengths: The main advantage is a relatively lightweight way to certify and train NNs for IF, in a way that requires little additional computation, compared to previous methods which are not able to scale to large NNs.
The experimental evaluation seems to confirm that DIF training as proposed by the regularization method does in fact significantly improve IF at modest degradation in classification accuracy.
Weaknesses: Section 5 is a little dense and it would be helpful for the reader if there was a little more discussion of the optimization procedure, particularly in Section 5.3. Theorem statements here might also be helpful for the reader to understand what the final guarantees are.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: What is the purpose of Table 2? It is a little difficult to interpret the punchline - it just seems to indicate that DIF training does not have a consistent effect on group fairness measures, either positively or negatively.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable comments on the presentation of our paper. Below we comment on the points and questions raised by the reviewer by providing specific actions that will be taken to address these presentation points.
### On Clarity of Section 5
We thank the reviewer for this valuable comment. First, we note that the requested theorem statement already exists in the paper as Theorem E.2 in the Appendix. We agree that moving Theorem E.2 to the main text and rewriting it to emphasize the final guarantee will improve the paper; thus, we propose to restate Theorem E.2 in the main text as follows:
Theorem 5.3: A solution to Equation (5), $\bar{\epsilon}$, is a sound upper bound on the Distributional Individual Fairness violation, and therefore is a certificate that no $\gamma$-Wasserstein distribution shift can cause the individual fairness violation of the model $f^{\theta}$ to exceed $\bar{\epsilon}$.
In addition, we will provide a clear statement of Lemma 5.3 (see our comment to Reviewer Lpfq) which adds clarity to the exact guarantees offered by our framework.
### On the purpose of Table 2
The intended purpose of Table 2 is twofold. Firstly, to demonstrate empirically that optimizing a model for distributional individual fairness (DIF) does not inherently worsen nor improve other popular notions of fairness. Secondly, by showing that it has no strong correlation with other notions of fairness, we hoped to convey that while DIF is a flexible and powerful notion of fairness, other aspects of fairness should be analyzed to get a holistic picture of model bias.
To improve the presentation of the paper, we will highlight this point in our Broader Impacts section and move the table and discussion to the Appendix where we can further elaborate on the discussion of this point. | null | null | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Beyond Geometry: Comparing the Temporal Structure of Computation in Neural Circuits with Dynamical Similarity Analysis | Accept (poster) | Summary: In this paper the authors propose a similarity metric to compare dynamical systems – in particular neural networks. They use a set of recent methods to transform a general dynamical system into a basis in which the dynamics is approximately linear. They compare two dynamical systems by comparing their linear dynamics in this transformed space by seeing how well one system can be turned into the other using only an orthogonal transformation. They test their metric on a variety of settings.
Strengths: I think the idea is a good one.
I think it has been well executed.
The experiments were well chosen.
The writing was generally pretty good.
Weaknesses: I don’t understand why section 2.3 was separate from section 3. It feels like section 2.3 played two roles – advertising the set of experiments that were going to test this new metric in interesting ways, and describing the experimental setup. The first of these is done in the first paragraph, and the second should, in my opinion, be moved into section 3. The separation between setup and results meant I had to keep flipping back, and you repeated yourself for no reason. It would make much more sense to me if each experiment was introduced and its results were discussed together.
I thought some of the experimental details were also unclear, and generally you rely too much on the reader recalling in detail many of the papers you cite. In particular:
1. I did not know what the 3-bit flip flop task was before reading this, and had to go find another paper that describes it. It seems crazy not to put a short description in the paper, even if it is in the appendix if page limits are the issue.
2. When trying to understand the ring attractor I eventually decided that r and s in equation 5 were phi(g) and g in equation 25. But then why change notation? And if I was right could this be made clear? If not, it was obviously not clear enough for this dumbo to understand what was going on.
3. Further, in appendix 9 you say “in our deformation analysis we only changed phi after simulation, which does not affect the ring topology but modifies the geometry”
I had no idea what this meant. In my head simulation is running the network dynamics, so you’re saying you only changed the non-linearity after you’d simulated, in which case it would have… zero effect. I must be misunderstanding?
The claim that DSA measures topological structure of the dynamics would be fairly remarkable and does seem to be backed up by the experiments. However it doesn’t seem at all proven – as you yourself point out in the discussion. In fact, what exactly is being compared through the dynamics of the eigenspace of these lagged timeseries is a little opaque to me – and I might wager to you too. Given this lack of certainty, I have two specific places I think the writing could use a fudge word, like in the discussion where you say “DSA _may_ be capturing similarity at the level of topology”. Namely:
1. Line 251: We demonstrate DSA compares two dynamical systems at the level of their topology.
2. Line 266: This suggests that DSA identifies similarity at the level of the topology…
Could you just write '_may identify similarity_', or '_is consistent with comparing at the level of topology_', because the pure claim “We demonstrate DSA compares two dynamical systems at the level of their topology” does not seem well founded.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - It seems like the scale of the MDS dimensions is being argued to be meaningful in your plots. Beyond some vague intuition that bigger means more variance, are the dimensions meaningful? What do their scales correspond to?
- How well do the linearised dynamics capture the underlying network dynamics? In my limited knowledge of the HAVOK methodology (gathered from Steve Brunton’s excellent youtube videos) the system cannot usually be modelled as a linear system without including a forcing term. Do you find the same thing? If so do you not think it could be that key aspects of the dynamics are entering through the dynamics of the forcing term, slightly weakening the power of your method. Regardless, I think it would be good to persuade people that these approximate dynamics you are comparing are actually good approximations.
- I did not buy one claim, could you elaborate:
“By linearity of PCA, the dynamics of the original system within the PC subspace are preserved” - Line 224.
Take the limit, say the dynamics of the system is just circling round an ellipse and you perform PCA to reduce that to 1D, the major axis of the ellipse. Sure, the dynamics on that 1D are the same as the original system but surely you lose something in this projection? And wouldn't that potentially be true of your original system too?
- You claim that all networks that solve the 3-bit Flip-Flop task have the same 8 fixed point structure, line 219, do you have any evidence for this? Or is this from Maheswaranathan et al. 2019? If from Maheswaranathan, make it clear, cite them straight after you make the claim!
- Why did you not choose figure 3 to also back up your claim that DSA might be capturing similarity at a topological level? (line 317)
- Related to the topological claim, my intuition must be off. Humour me for a moment and ignore the time lagging: if you had activity rotating on some ring, then morphed the behaviour to become ellipsoidal, the topology would be the same, but the rotation matrices would be different. Further, the two matrices could not be mapped into one another by an orthogonal transformation. As such, in my intuition your method would (potentially very reasonably) say these were different dynamics even if the topology was the same. Given this, why do you think your method is going to care only about topology? Is it the time-lagging doing some magic?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors include a section of limitations that sound reasonable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are thankful for the reviewer's extensive feedback and appreciate their comments--we believe we can answer all of their questions, which correspond to valuable improvements to the paper.
> I don't understand why section 2.3 was separate from section 3...
This is a reasonable point and we agree that we can restructure. Our intention was to follow the style of previous papers that leave technical descriptions of experiments in Methods alone. However, we agree that it would be more readable (and less redundant) to combine them. Thank you for the suggestion!
> I did not know what the 3-bit flip flop task was before reading this...
We greatly appreciate this feedback - it is important to ensure the reader is clear on the task structure so that our results can be fully understood. We will introduce the task in more detail, and can add another panel (similar to Fig. 1a in Maheswaranathan et al., 2019) describing the task visually.
> When trying to understand the ring attractor I eventually decided that r and s in equation 5 were phi(g) and g in equation 25...
This is correct and we apologize for the oversight. Thank you for catching the error.
> Further, in appendix 9 you say "in our deformation analysis we only changed phi after simulation, which does not affect the ring topology but modifies the geometry”...
We apologize for the lack of clarity and will reword this. In our experiment, we applied a new nonlinearity to the synaptic activations to get a different output for the rates of the neurons. So we transformed the data on which we applied DSA, while still using the same underlying trials--you can also think of this as a modification of how we observed the data. The goal was to demonstrate invariance to smooth deformations. In Fig. 4b, the Procrustes distance increases with the magnitude of the nonlinearity, which verifies that the data is deforming. We hope this makes sense and are happy to clarify further.
> The claim that DSA measures topological structure of the dynamics would be fairly remarkable... but is not proven...
We thank the reviewer for this important point and have since identified a proof that we will include in the paper; we can also add fudge words as you suggested. The proof is simple: for topologically conjugate systems related by a diffeomorphism, their Koopman operators have the *exact same* eigenvalues, and the eigenfunctions are related by the diffeomorphism (Proposition 7, Budisic et al. 2012). This implies that the DMDs--the finite approximations of the Koopman operator--are related via a similarity transform.
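As a minimal numerical illustration of this fact in the linear special case (our sketch, not the paper's implementation; the matrices and trajectory are arbitrary), two systems related by an invertible change of coordinates yield least-squares DMD matrices that are similar and hence share eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)

# A linear system observed in two coordinate frames related by an invertible
# map P -- the linear special case of a diffeomorphism between conjugate systems.
A = np.array([[0.9, -0.2], [0.1, 0.8]])
P = rng.standard_normal((2, 2)) + 2.0 * np.eye(2)  # invertible change of basis

X = np.zeros((2, 60))
X[:, 0] = [1.0, -1.0]
for t in range(59):
    X[:, t + 1] = A @ X[:, t]
Y = P @ X  # the same trajectory after the change of coordinates

def dmd(Z):
    """Least-squares DMD fit: Z[:, 1:] ~ M @ Z[:, :-1]."""
    return Z[:, 1:] @ np.linalg.pinv(Z[:, :-1])

# The fitted matrices are similar (dmd(Y) = P @ dmd(X) @ P^{-1}),
# so their eigenvalue spectra coincide.
eigs_x = np.sort_complex(np.linalg.eigvals(dmd(X)))
eigs_y = np.sort_complex(np.linalg.eigvals(dmd(Y)))
```

The general nonlinear statement replaces $P$ with the diffeomorphism acting on Koopman eigenfunctions, but the mechanism is the same.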
> It seems like the scale of the MDS dimensions is argued to be meaningful... what does it correspond to?
As you identified, the dimensions are linear projections of the data, so a small scale implies small dissimilarity between systems. Thus in Fig 3b, being clustered around 0 means that the RDM is made up of mostly zeros. If they were scattered farther apart it would suggest that the models could be very different from one another. In revisions, we can display some statistics over the RDM or explain this logic in more detail.
> How well do the linearised dynamics capture the underlying network dynamics?...
In Appendix 8, we swept a range of DMD hyperparameters and found that the DMD fit the flip-flop system relatively well. Nevertheless, you are correct that most nonlinear systems are not linearizable and the DMD cannot perfectly fit. *However*, based on our analysis, that is not a problem for DSA, as the relevant outcome is for the system to converge to an approximation of the Koopman operator. Thus even if the dynamics are not perfectly fit, the similarity transform distance will identify conjugate systems as similar.
We should clarify that forcing is used to generate *autonomous* nonlinear dynamics, in that the system can be fed its own predictions and thus reproduce the dynamics. Time-delay observables have been used to approximate the Koopman operator without forcing in several settings (Arbabi & Mezić 2016, Brunton et al. 2016), and thus the omission of the forcing term should not (and indeed does not) impact DSA.
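To make the unforced setting concrete, here is a minimal HAVOK-style sketch (our toy example, not the authors' code): build a Hankel matrix of time delays, take its SVD, and fit a linear DMD in the eigen-time-delay coordinates with no forcing term.

```python
import numpy as np

def hankel_matrix(x, n_delays):
    """Stack n_delays time-shifted copies of the 1-D signal x."""
    T = len(x) - n_delays + 1
    return np.stack([x[i:i + T] for i in range(n_delays)])

# Toy signal: a damped oscillation standing in for one unit's activity.
t = np.linspace(0.0, 20.0, 500)
x = np.exp(-0.05 * t) * np.sin(2.0 * t)

H = hankel_matrix(x, n_delays=30)
U, s, Vt = np.linalg.svd(H, full_matrices=False)
V = Vt[:2]  # eigen-time-delay coordinates; rank 2 suffices for one oscillation

# Unforced DMD in the delay coordinates: V[:, t+1] ~ M @ V[:, t].
M = V[:, 1:] @ np.linalg.pinv(V[:, :-1])
residual = np.linalg.norm(V[:, 1:] - M @ V[:, :-1]) / np.linalg.norm(V[:, 1:])
```

For this simple signal the delay dynamics are exactly linear, so the residual is tiny; for chaotic systems the unforced fit is only approximate, which is the point of the discussion above.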
> I did not buy one claim, could you elaborate:...
We thank the reviewer for this concern and very much agree. Here's what we meant: in a high-dimensional system that has low-dimensional dynamics (for example, 99% of the variance is explained by the first 5 dimensions), we can effectively project to that 5-d subspace and the system will evolve in the same way as in the ambient space. This is because the dynamics components orthogonal to those dimensions are negligible. But in general this is not true, as in your example. We can delete this line if it would eliminate confusion, or can add a section in the appendix about the viability of PCA in DSA in certain conditions (manifold hypothesis).
> You claim that all networks that solve the 3-bit flip-flop task have the same 8 fixed point structure, line 219, do you have any evidence for that?
Yes, this was shown in Maheswaranathan et al., 2019 and Sussillo and Barak, 2013--they perform an extensive numerical analysis. We apologize for the oversight and thank you for the suggestion - we will cite directly after making this claim.
> Why did you not choose figure 3 to also back up your claim...
We apologize for this--it was a semantic distinction ('similarity' versus dissimilarity). Nevertheless, we should connect the concept of capturing topological similarity to Fig. 3.
> Related to the topological claim, my intuition must be off...
This is an interesting scenario, and is related to Fig. 4a. In DSA, we compare the two systems on the eigen-time-delay coordinates. In doing so, singular values are factored out, which "re-circularizes" your ellipse. In Williams et al. (2021), they solve a similar problem by whitening the data, and fitting HAVOK in the eigen-time-delay coordinates is equivalent to PCA whitening on the data.
---
Rebuttal Comment 1.1:
Title: Reviewer Response
Comment: I thank the authors for their thorough response. Some comments:
First, regarding the nonlinearity you applied to the ring attractor network, I think I understand now that you ran ring attractor dynamics, got a set of firing rates through time, then mapped those through some nonlinearity to create a morphed version of the same ring as shown in figure 4. If so that now makes sense to me but could definitely be more clearly explained. Further I liked another reviewer's worry that the DSA does not equal 0, and that you have worked out a way of making it equal 0 (though it does bring concerns about how hard the transformation is to optimise).
Second, I'm glad to hear you have a proof that your method extracts the topological structure, that's very cool. If it is correct then there's no need for including the fudge words! (Unless I'm misunderstanding what the fudge words are for, surely you want to make the stronger claim if you have the theoretical work to support it?)
I did not understand the authors' explanation about why the forcing term doesn't matter. I think I get that maybe you only care about the internal dynamics operator (the Koopman operator), and perhaps it is true that with or without forcing you'd expect to extract the same Koopman operator, is that what you are saying? To the statement you make about autonomous nonlinear dynamics in your response, is that not what you are doing?
I accept your argument that if all the activity is in a small set of PCs then the dynamics within that space should be the same as the whole system, and I would indeed be satisfied if you either deleted that claim or qualified it, as you suggest.
Finally, this is more for my own curiosity, but could you explain what you mean by 'singular values are factored out' in your last piece of response? My intuition is still off! It seems like the example I presented, the ellipse (and you are right, this is basically figure 4c), was wrong because the time lagging is key, so I tried thinking about the effect of time lagging. What I don't get is: let's say I create a load of time-lagged versions of the same co-ordinates and then do eigendecomposition, then I would expect the areas of space in which the activity spends more time to be over-represented, upregulating their weight in the eigendecomposition. I think this would be the furthest points of the ellipse, which are those along which most of the data variance lies anyway, so it seems like lagging only makes the situation worse?! Does this argument make sense? Do you know what's going wrong?
---
Reply to Comment 1.1.1:
Title: Response to Reviewer TLG4's Response
Comment: Thank you for your comments, and we are glad to be able to have satisfied some of your concerns. We will certainly implement your feedback in our revisions. Here are the answers to your outstanding questions:
> perhaps it is true that with or without forcing you'd expect to extract the same Koopman operator, is that what you are saying?
This is exactly correct, the same model fitting algorithm (reduced-rank regression) is used in each scenario. The forcing term is chosen after fitting the HAVOK model, by selecting the last r bases of the regression model. That is, the DMD matrix with $n$ modes and $r$ forcing terms is identified by learning a DMD matrix with $n+r$ modes. This means that fitting a HAVOK model with $n$ modes and no forcing would learn the same DMD matrix.
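A rough sketch of this split, assuming the description above (the names, dimensions, and toy data are ours, not the actual implementation): fit a regression predicting the first $n$ delay modes from all $n+r$ modes, then read off the last $r$ columns as the forcing term.

```python
import numpy as np

# Hypothetical illustration of the split described above.
rng = np.random.default_rng(2)
V = rng.standard_normal((7, 300))  # eigen-time-delay coordinates, n + r = 7 modes
n, r = 5, 2

M_full = V[:n, 1:] @ np.linalg.pinv(V[:, :-1])  # (n, n + r) regression model
A = M_full[:, :n]   # DMD matrix on the retained n modes
B = M_full[:, n:]   # forcing term: contribution of the last r modes
```

Dropping `B` leaves the same `A` one would get from an unforced fit on the first $n$ modes alone, which is the sense in which the forcing term does not change the identified DMD matrix.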
> To the statement you make about autonomous nonlinear dynamics in your response, is that not what you are doing?
In our case we are using HAVOK for its statistical properties in identifying the DMD matrix, not for its generative properties. We can illustrate what we mean by generating autonomous nonlinear dynamics with an example: Consider the HAVOK model fit to the Lorenz attractor--in the non-forced DMD setting, running this model autonomously (i.e. evolving the HAVOK model via tail-biting - generating predictions recursively based on prior predictions) cannot regenerate the full chaotic dynamics. This is because it is a linear model. Here we mean it does not regenerate the full dynamics in that the model cannot switch lobes of the attractor. The forcing term can be used to remedy this issue and allow for autonomous chaotic dynamics generated by the tail-biting procedure.
> Finally, this is more for my own curiosity, but could you explain what you mean by 'singular values are factored out' in your last piece of response?
Of course. Let's consider how this relates to PCA whitening (and we will add this to the supplementary information, as this question is very important and worth spending more time explaining). In PCA whitening, we are able to normalize the covariance matrix of the data:
$HH^T = U\Sigma^2U^{-1}$, where $H$ is the (already-centered) data matrix (Hankel Matrix), $U$ is the eigenvector matrix of the covariance matrix, and $\Sigma^2$ are the eigenvalues, which are equivalent to the squared singular values of the data. Then the PCA whitening on our data H is:
$H \leftarrow \Sigma^{-1}U^{-1}H$
Now, consider $H = U\Sigma V^T$. PCA whitening mathematically amounts to the following expression:
$H \leftarrow \Sigma^{-1}U^{-1}U\Sigma V^T = V^T$ as we cancel terms.
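A quick numerical check of this cancellation (our sketch in NumPy, under the centering convention above):

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((20, 100))
H -= H.mean(axis=1, keepdims=True)  # center the data, as assumed above

U, s, Vt = np.linalg.svd(H, full_matrices=False)

# PCA whitening: Sigma^{-1} U^{-1} H, with U^{-1} = U.T since U is orthonormal.
H_white = np.diag(1.0 / s) @ U.T @ H
# By the cancellation above, H_white equals the right singular vectors V^T.
```

The whitened data also has identity covariance, which is the 'sphering' referred to below.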
Thus PCA whitening is equivalent to simply extracting the right singular vectors from the SVD, which is exactly the transformation of the data on which we fit our DMD matrices! Thus we are able to 'sphere' our data just like in PCA whitening. Here, sphering is what is meant by "factoring out the singular values". That being said, we agree that we need to clarify what we mean by "factoring out the singular values" and will substitute this explanation instead. Thank you for your comment. | Summary: Understanding how different neural populations process a particular computation is critical for the study of brain computation, the development of brain-inspired technologies like brain-computer interfaces (BCI), and AI applications. Existing methods compare the underlying representations based on the spatial geometry of the latents. However, they fail to capture the dynamics, which are believed to drive the specific computation. To overcome this limitation, this work introduces a new analysis method that first extracts the dynamical structure of the neural network and then aligns these representations for comparison. They test the model when applied to several case studies training RNNs across different tasks and architectures. The proposed analysis can capture similarities between the computations across different networks while other models fail.
Strengths: The paper is clearly presented and technically sound. The method was tested and shown to work well when applied to artificial neural networks trained on different tasks and with different architectures, resulting in different dynamics and geometries. The analysis builds on two existing methods and combines them to compare the neural dynamics that support the computation. Understanding similarities and differences between neural networks solving similar tasks is critical for the fields of neuroscience and AI. In this work, they showed how this analysis can identify overlap between computational solutions where alternative methods fail. Moreover, the authors suggest that it could be used to understand biological network dynamics.
Weaknesses: While the authors provide compelling results supporting the relevance of this metric when applied to RNNs, they have not applied it to neural data. Since this project is largely motivated by the potential study of biological neural circuits and the application to BCIs, applying the analysis to biological data is critical to fully grasp the scope of applications of the proposed method and therefore its significance. As mentioned in the manuscript, the analysis is "[...] useful for experimental neuroscientists due to its simplicity of implementation and ease of interpretation due to its linearity." However, while its linearity provides the aforementioned benefits, it also imposes some limitations that could be interesting to explore. Lastly, when it comes to the application of alignment for BCI applications, it is important to note that the reported training time is between one hour and one day, making it impossible to use in its current state for practical applications when daily alignment is presumably needed. Along the same line of thought, it would be interesting to explore and note the data demands for the new method to work robustly.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The case studies using RNNs show the strengths of the proposed analysis in comparison to other methods. However, it is not always the case that the new analysis is superior to the alternatives. It would be interesting to explore cases when it isn't and why that is the case. As described, the modified Procrustes alignment seems to only support pairwise comparisons. If that is the case, are there limitations when comparing multiple networks? Does one have to set a reference network to compare all the others?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors thoroughly address the limitations on hyperparameter training and computational cost.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and greatly appreciate that they think the paper is clear and technically sound. We hope that our response will clarify some of the weaknesses and questions that the reviewer mentioned:
> Applications to neural data
While we haven't yet applied DSA to neural data, we note that the first example in the disentangling-learning-rules section (Fig 5a) utilized data from large-scale CNNs trained on ImageNet (for example, a deep ResNet; Nayebi et al., 2020). Although even this isn't quite as complex as biological neural data, we hope that this setting points to DSA's viability on neural data as well. We also agree that the application to neural data is one of the most exciting and important applications of our method, and we are currently developing follow-up work involving neural data.
> limitations of the linear method
While the model itself is linear, we note that the DSA gains nonlinear power via the delay embedding, which acts similar to a kernel function in an SVM. However, it is still worth thinking about DSA's limitations, and we're actively working on a deeper theoretical analysis.
> BCI limitations and training time
We apologize for the lack of clarity in our reported training time--we should note that the time to fit *one* DSA comparison is on the order of a minute or less, provided the number of optimization iterations is not absurdly large. We noted in the limitation that the total time per experiment (i.e., a figure) was between one hour and one day because we were performing tens of thousands of comparisons. For example, in Figure 2, we compared 240 neural networks pairwise, or around 29 thousand comparisons in total. Naturally, this takes a long time to compute, but each individual comparison is relatively fast, and we have GPU capability which drastically improves performance. We can clarify this in the limitations, and we hope that this explains what we meant by one hour to one day of computation.
> data demands
We agree that the question of how our method scales is a very interesting one to consider. We believe that the key limiting factor is in the DMD, which should scale with respect to the underlying dimensionality of the data manifold. We are actively studying this in a followup project--thank you for your suggestion.
> superiority of DSA vs alternatives
We agree that DSA is not always superior. We wish to emphasize that we do not intend to replace shape metrics, but rather to supplement them with a new comparison method to be done in parallel. Geometry is of course an important aspect to consider, and there exist many use cases for geometric methods. The goal of our paper was to highlight that if one cares about dynamics, there are situations in which geometric methods will not suffice to capture relevant similarities or dissimilarities at the level of the underlying dynamics.
> Pairwise comparisons in DSA
This is a similar concern for standard shape metrics--by virtue of being distance metrics, they can only compare pairwise. When comparing multiple networks (which we did in a few settings) there are a couple of clear options, as you've mentioned: (1) compare pairwise, which results in a representational dissimilarity matrix in $\mathbb{R}^{n \times n}$ (Fig 2,3,5), or (2) fix a reference network (Fig 4) for a representational dissimilarity matrix in $\mathbb{R}^{n \times 1}$. In our open source code (which we are releasing), we've made it easy to do either one, with the option of setting hyperparameters for each model individually or in aggregate.
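To make option (1) concrete, here is a minimal sketch of building the $\mathbb{R}^{n \times n}$ representational dissimilarity matrix. This is not our released implementation--the `metric` callable is a stand-in for any symmetric pairwise distance (e.g. a DSA score), and the toy usage below uses a Euclidean distance purely for illustration:

```python
import numpy as np

def dissimilarity_matrix(models, metric):
    """Full pairwise comparison of a list of models, yielding an
    n x n representational dissimilarity matrix. `metric(a, b)` is
    any symmetric distance; here it stands in for a DSA comparison."""
    n = len(models)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # Distance metrics are symmetric, so fill both entries.
            D[i, j] = D[j, i] = metric(models[i], models[j])
    return D

# Toy usage with vectors and a stand-in Euclidean metric.
models = [np.ones(3), np.zeros(3), np.full(3, 2.0)]
D = dissimilarity_matrix(models, lambda a, b: np.linalg.norm(a - b))
```

Option (2) corresponds to keeping only one row of this matrix (the row of the fixed reference network).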
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their thorough and detailed answers. I believe that there is a lot of potential for this method, especially since the computational demand is not that high. Still, in order to demonstrate the true value of this approach it would have to be tested on neural data. I am looking forward to your future work. | Summary: The authors developed a new computational tool named Dynamical Similarity Analysis (DSA) to measure the similarity between two systems, focusing on their dynamics. They constructed the method by combining and modifying Dynamic Mode Decomposition and Statistical Shape Analysis. Their method successfully identified dynamical similarities between systems with different geometries, and conversely distinguished systems with different dynamics but similar geometries. They also used their method to distinguish learning rules underlying measured neural dynamics.
Strengths: I enjoyed this paper, which I found to be both elegant and useful. This work is a novel combination of previously developed techniques. This work is significant because the standard approach for comparing recurrent neural networks merely focuses on the spatial similarities of latent states, ignoring the importance of the difference between temporal similarities. Their proposal avoids the limitations of traditional tools by accounting for neural dynamics. Overall, the authors have clearly delivered their statement, proposed method, experiment, and results. The method is well-grounded and the ideas behind the proposed technique are well-explained and justified. The writing was clear, and the figures were largely appealing (although some of the labels became small and/or blurry). They analyzed when their method is superior to traditional methods, and used simple tasks to demonstrate this superiority. The distinguishing of learning rules from data was particularly nice.
Weaknesses: The authors could better explain and motivate the conventional spatial Procrustes Analysis, as that would help some readers take the next step for the temporal version. They could clarify the interpretation of their similarity transform CAC^-1, as I felt that the explanation that C^-1 “rotates [the vectors’] positions as well” was not a great explanation.
Minor: It’s strange to talk favorably in Figure 2 about DSA giving better accuracy but having lower accuracy scores, when discriminating between identical computations. If pairs of networks are correctly identified as “the same dynamics” then NOT distinguishing the networks should have higher accuracy. You might want “discriminability” on the vertical axis of Figure 2d.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: None. Very nice work.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors adequately addressed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and are very happy that they found the paper elegant, significant, and useful! We found their suggestions for improvements quite helpful as well, and hope that we can implement them in a manner that they agree with:
Regarding their point that some of the figure labels became small and blurry, we think they are referring to Figure 1. If so, we uploaded a figure that was too large and have since compressed it--we believe that will solve this problem. We are also happy to increase label sizes as needed.
We also agree that we did not explain and motivate Procrustes well, instead skipping over it to motivate our new metric. In the main paper, we can add an intuitive explanation of Procrustes describing that it captures distance in a manner that is invariant to orthogonal transformations, which is relevant because the coordinates of the space are not important for the setting of comparing neural representations.
Regarding the interpretation of $CAC^{-1}$, we believe that the quoted interpretation of the similarity transform goes very well with our Figure 1 in the Appendix, and we can move that explanation there so that they align. We can replace the interpretation in the main paper with a more mathematically motivated one based on homeomorphisms between conjugate dynamical systems (in this case linear systems), or another alternative interpretation that is clearer.
Finally, we agree that our description of "good" and "bad" in Figure 2 does contradict standard perceptions with respect to classification accuracy, and would appreciate your feedback on the rewording. We suggest that we could change the phrase "the networks intermingle" in L231 to "the networks are indistinguishable", and then reference Fig 2d in that line. We can change the vertical axis to "discriminability" as you suggested, and add one more sentence at the end of section 1 explaining that it is desirable for our metric to have discriminability close to chance in this example. Alternatively, we could define a metric "indiscriminability" as 1-test\_acc, but this may be more confusing? | Summary: The authors propose Dynamical Similarity Analysis (DSA), a method for assessing the dynamical similarity between dynamical systems. The method combines ideas from the data-driven Koopman operator literature and a Procrustes-type distance between linear operators. By focusing on dynamics rather than geometry, the method is able to reveal the similarities between RNNs of different architectures trained to perform the same tasks and distinguish between dynamical systems with similar time-averaged states but different underlying dynamics.
Strengths: 1. The method is simple and appealing. Appealing use of time delay embeddings -- transform the temporal task into a geometric task.
2. DSA works very well on most of the experiments: Figures 1, 2, 3, and 5.
Weaknesses: 1. Should compare to baseline spectral method, for example PCs of CPSD matrix...
2. Figure 4: DSA is not zero. l262 speculates about numerical approximation error, but this seems to be fairly large. Ideally, the line for DSA would stay very close to zero. If this is not due to numerical approximation error, should this be taken as a failure case of DSA? The text mentions the low variability of the DSA distance across values of beta and C, but it seems like the magnitude is more relevant in this example.
3. The manuscript would improve with better theoretical grounding. For example, something in the vein of what is proposed in the paragraph starting at line 317 would greatly improve the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Figure 1 uses angular distance -- where is this defined?
2. There are several comments made about noise in the manuscript. For example, l58: "DSA factors noise into its estimate of neural dynamics." Also l172. I remain confused about how noise interacts with different aspects of the model. For example, how does Takens' theorem interact with noise?
3. How are lags and rank chosen for each experiment. Are the results robust to these hyperparameters? They seem to vary widely from experiment to experiment.
Suggestions/comments:
* Define what shape metrics are, or perhaps explicitly define one common one.
* Condition is not defined in the setup.
* Quantifying dissimilarity: Figure 3: MDS is a good visualization, but there should also be predictions as in Figure 2 or distances as in Figure 1. This should be uniform across figures.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: * Paragraph starting at l333: I would expect one would need to fit a model that fits task-relevant dynamics. That is, DSA would probably require some modifications before it is applied to neural data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful feedback and suggestions. We are glad that the simplicity of the method was appealing, and we hope that our responses to each of your comments will allow you to further appreciate DSA.
We appreciate the suggestion to add a spectral method of comparison, and agree that it would be valuable to compare to additional baseline metrics. After some investigation, though, we have to admit we are unclear on what the process would be for the baseline method you suggested. Would it be possible for you to elaborate on your suggestion by pointing us to a specific reference? This would help us incorporate your feedback better.
> Figure 4: DSA is not zero.
We agree that this is a problem, and we are happy to share that we have solved it since submitting the paper. We will update the figure accordingly--now, DSA reports ~0.06 with a standard deviation of 0.01 across the deformation values (Fig 4). The error was in fact numerical approximation error due to optimization, and we solved it by including a multilayer perceptron in our pipeline, which makes the loss landscape more amenable to gradient descent. In our open source code (to be released accompanying the paper), we implemented a unit test asserting that a matrix $A$ and its similarity transform $CAC^{-1}$ would be identified as similar up to a threshold, which now passes.
> The manuscript would improve with better theoretical grounding.
We agree with the reviewer on this weakness and are happy to share that we have identified theoretical grounding. The proof is quite straightforward and we can describe it in words here, but we will certainly include the formal proof in the paper: For two topologically conjugate systems $f(x), g(y)$ related by a diffeomorphism $y = \phi(x)$, their Koopman operators $K_x, K_y$ have the *exact same* eigenvalues, and the eigenfunctions are related by the same diffeomorphism $\phi$ (Proposition 7, Budisic et al. 2012, "Applied Koopmanism"). This implies that the DMDs--the finite approximations of the Koopman operators--are related via a similarity transform, as similar matrices have identical eigenvalues. Thus, we can identify topological conjugacy of two nonlinear dynamical systems with DSA.
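The matrix-level fact this argument rests on--similar matrices share a spectrum--can be checked numerically. This is only an illustrative sketch with arbitrary random matrices, not our DMD estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# A: an arbitrary matrix standing in for a DMD estimate;
# C: an invertible change of basis standing in for the conjugacy.
A = rng.standard_normal((4, 4))
C = rng.standard_normal((4, 4))  # generic, hence invertible

B = C @ A @ np.linalg.inv(C)  # the similarity transform C A C^{-1}

# Similar matrices have identical eigenvalues (up to numerical error).
eig_A = np.sort_complex(np.linalg.eigvals(A))
eig_B = np.sort_complex(np.linalg.eigvals(B))
assert np.allclose(eig_A, eig_B, atol=1e-6)
```

Our unit test for the optimization pipeline checks the converse direction: that the fitted DSA distance between $A$ and $CAC^{-1}$ is (near) zero.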
> Figure 1 uses angular distances -- where is this defined?
We only defined this in the appendix, section 3 (eq. 15). We agree that this may be confusing and are happy to add it into the methods section.
> comments made about noise in the manuscript
We agree that our description here was confusing, and are happy to clarify both here and in the paper.
Noise (in particular, small noise perturbations) is important for estimating the DMD appropriately by reducing degeneracy in the dynamics matrix estimation. As a motivating example, consider estimating the dynamics of a system observed close to a fixed point. Without noise, the regression would be degenerate as the system is barely changing. But with small noise perturbations, there is enough variance in the observed data to enable the construction of an appropriate dynamics matrix. A more complex argument for the importance of noise is also described by Galgali et al., Nat. Neuro (2023), from which we identified the examples in Fig. 3. Essentially, without small noise perturbations, our method would not be able to distinguish the examples in Fig 3, as the condition-averaged dynamics are exactly equivalent.
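As a toy illustration of this point (a sketch of the degeneracy argument, not our actual pipeline): fitting a one-step least-squares dynamics model to a trajectory sitting exactly at a fixed point yields a degenerate all-zero estimate, while small noise perturbations recover the true dynamics matrix. The matrix `A_true` and noise level here are arbitrary choices for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(2)
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])  # stable linear dynamics, fixed point at 0

def simulate(noise_std, T=500):
    # Trajectory initialized exactly at the fixed point.
    X = np.zeros((T, 2))
    for t in range(T - 1):
        X[t + 1] = X[t] @ A_true.T + noise_std * rng.standard_normal(2)
    return X

def fit_dmd(X):
    # Plain least-squares one-step model: X[1:] ~= X[:-1] @ A_hat.T
    M, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
    return M.T

A_noiseless = fit_dmd(simulate(noise_std=0.0))  # degenerate: all zeros
A_noisy = fit_dmd(simulate(noise_std=0.1))      # close to A_true
```

Without noise the regression has no variance to exploit, so the minimum-norm solution is the zero matrix; with small noise the estimate approaches `A_true`.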
The reviewer also brings up an interesting question about how noise interacts with the theoretical backing for our method, such as Takens' theorem. In the specific case of Takens' theorem, it seems as though there have been attempts to rigorously characterize the quality of the state space reconstruction at varying signal-to-noise ratios (see Casdagli et al., 1991). However, again noting that our description was confusing, we wish to clarify that we meant to claim not that our method is robust to large amounts of noise, but rather that the method is able to harness small noise perturbations in order to obtain appropriate models of the dynamics.
> How are lags and rank chosen for experiment
Like most DMD papers, we chose them heuristically, based on our knowledge of the dimensionality of the system (both in terms of intrinsic manifold and number of neurons), and based on how well the DMD model predicted the next-step data (3 metrics here: MSE, R^2, and correlation). However, the results are robust to these hyperparameters--in Appendix section 11 we showed that the predictivity metrics are optimized across a wide range of ranks and lags, which are reasonable proxies for how well DSA will do. We are aware of algorithms that optimize these hyperparameters based on predictivity metrics (Ahamed et al., Nature Physics 2020) but chose not to use them here, as it was surprisingly easy to get our method to work by picking hyperparameters by hand.
> Suggestions
Regarding your suggestions, we agree with all of them. We defined Procrustes Analysis (a particular shape metric; Williams et al., 2021) in equation 3, but realize that we did not explain its function very well, so we will elaborate upon it in our revisions. Specifically, we plan to add an intuitive explanation of Procrustes describing that it captures distance in a manner that is invariant to orthogonal transformations, which is relevant because the coordinates of the space are not important for the setting of comparing neural representations. We also agree that we need to define what we mean by condition, and will do this in the revision--a condition in systems neuroscience experiments is the context that the experimenter varies (input stimuli and desired output mapping, e.g. the variable input drive in Fig 3--one condition has input $u$ with positive magnitude, the other condition has negative input). Finally, we agree that we should quantify the discriminability achieved by DSA in a panel in Fig. 3 and will add that into the paper.
---
Rebuttal Comment 1.1:
Comment: **Comparing to a spectral method:** I apologize to the authors for how vague my suggestion was in the original review. My concern was that the proposed method uses time information via DMD while the competing approach (Williams et al. 2022) does not use time information. I recognize that part of the contribution of this work is to extend the general sort of analysis in Williams et al. (2022) to the time domain and agree that this is a valuable contribution. However, apart from this aspect of the experiment, the comparisons presented in Figures 2b, 3b, and others are possibly misleading/unfair because the competing methods do not have access to any time information. This leads me to ask: could there be a stronger baseline which i) does have access to time information and ii) uses existing approaches? In my opinion, this would provide more relevant context for interpreting the experimental results.
Here is one concrete proposal that fits these two criteria: apply the method of Williams et al. (2022) to a spectral representation of the data. In more detail, given a single timeseries $X \in \mathbb{R}^{n \times t}$, estimate the cross power spectral density (for example, using ``scipy.signal.csd`` and taking the absolute values, discarding phase information) to create $\tilde{X} \in \mathbb{R}^{n \times n \times f}$ where $f$ denotes the number of frequency bins. Then simply treat these as features to input to the Procrustes metric (Eq. 4).
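A sketch of the feature-extraction step of this proposal (the function name, `fs`, and `nperseg` are illustrative choices of mine, and the final Procrustes step from Williams et al. (2022) is omitted):

```python
import numpy as np
from scipy.signal import csd

def cpsd_features(X, fs=1.0, nperseg=64):
    """Magnitude cross-power-spectral-density features for a
    multichannel time series X of shape (n_channels, n_timesteps).
    Returns an array of shape (n, n, n_freqs); phase information
    is discarded by taking absolute values, as proposed."""
    n = X.shape[0]
    # One call just to get the frequency grid size.
    f, _ = csd(X[0], X[0], fs=fs, nperseg=nperseg)
    feats = np.empty((n, n, f.size))
    for i in range(n):
        for j in range(n):
            _, Pij = csd(X[i], X[j], fs=fs, nperseg=nperseg)
            feats[i, j] = np.abs(Pij)
    return feats

# Hypothetical usage: one network's features, to be flattened and fed
# into a rotation-invariant shape metric (Eq. 4) as the baseline.
X = np.random.default_rng(1).standard_normal((5, 512))
F = cpsd_features(X)
```

The resulting tensor per network would then play the role of the spatial features in the existing Procrustes comparison.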
**Theoretical grounding:** I am glad that the authors have found a more explicit theoretical grounding for their method, which will substantially improve the paper.
**Figure 4:** I am glad that the issue in Figure 4 has been fixed.
**Noise:** I thank the authors for pointing out Casdagli et al. (1991). Is it fair to summarize the noise situation, meaning nondeterministic dynamics, as follows? i) The theory appealed to in motivating the method (Takens' theorem) was developed for deterministic settings. ii) There have been attempts to systematically study the effects of noise (e.g. Casdagli et al., 1991), and it has been found that noise somewhat complicates the task of state space reconstruction. iii) Small amounts of noise are helpful in practice for estimating the dynamics matrices and distinguishing different dynamics (as in Figure 3). If this is a fair characterization, I suggest mentioning all three points explicitly in the manuscript.
**Matrix similarity:**
The orthogonal group of matrices consists of two connected components, but the Cayley transform used (Eq. 5 in the supplementary material) produces only matrices in the special orthogonal group. Due to this, it seems that references to $O(n)$ in the text should be replaced with $SO(n)$.
A minor suggestion: it may be helpful for clarity to point out in the text around Eq. 4 that $d=0$ implies the two $A$ matrices are (unitarily) similar. It would also be helpful to mention in the main text that the converse is not true in general, due to the restriction of $C$ to $SO(n)$.
I'm curious if the authors tried optimizing the generic matrix similarity metric (Eq. 4 in the supplement) using gradient descent. Does the nonconvexity pose a practical problem?
On a related note, it may be safe to assume that both $A$ matrices contain distinct eigenvalues, given that they are estimated using noisy experimental data. If this is the case, it may be possible to define an analogous metric to DSA (call it DSA') by computing a metric on the eigenvalues themselves, for example a Wasserstein distance between the two sets of eigenvalues. This would have the advantage of the following implication: if the eigenvalues of the A matrices are distinct and DSA' is 0, then the A matrices are similar. Note that this is slightly more general than what can be said about the existing DSA. Of course DSA' also depends practically on the numerical stability of the eigenvalue estimation.
**One more minor comment:** It seems like DSA is not invariant to changes in timescales. For example, if we have two identical systems with one evolving a few percent faster than the other, the DSA score will be nonzero. This could be worth mentioning as a limitation, given that we may not expect neural dynamics to operate at identical speeds on different trials or across different animals.
Thank you to the authors for your detailed response. Most of my concerns have been adequately addressed, and so I will raise my score. However, I still think the manuscript would be substantially improved with some comparison to a spectral method like the one described above. Apologies for my slow response.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their extensive comments, and greatly appreciate their raising of the score. We are glad to have addressed your concerns, and believe that the points you have raised in this response are quite valuable.
**Comparing to a spectral method**: We believe that we understand the proposed spectral method, and agree that it serves as another relevant baseline to cover both the purely spatial and the purely temporal view of dynamics.
**Noise**: Yes, your points are well-described and we will add them to the manuscript.
**Matrix similarity**: You are correct that the Cayley Map restricts to $SO(n)$, and we will change the text accordingly. Likewise, you are correct that the metrized version of the task only identifies similarity up to unitary transforms, and we need to mention that as well. We also agree that testing the general metric ($C \in GL(n)$) is highly relevant and would capture general topological equivalence, even if it is not a proper metric, as far as we know. We will mention this and describe the relevant cases for using each group in DSA.
We previously considered an idea similar to your DSA', but found that just comparing the eigenvalues will not work in cases of non-normality. Consider this simple example, in which the pure eigenvalue comparison fails: https://math.stackexchange.com/questions/1955796/find-two-2-times2-matrices-which-have-the-same-eigenvalues-but-are-not-simila
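Written out, the counterexample from that link is the $2\times 2$ nilpotent Jordan block versus the zero matrix: they share the eigenvalues $\{0, 0\}$ yet are not similar, since $CBC^{-1} = 0$ for every invertible $C$ when $B = 0$:

```python
import numpy as np

# Same eigenvalues, not similar: no change of basis maps B onto A.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # nilpotent Jordan block
B = np.zeros((2, 2))         # zero matrix

assert np.allclose(np.linalg.eigvals(A), np.linalg.eigvals(B))

# Yet their one-step dynamics differ: x -> A x moves states, B does not.
x0 = np.array([0.0, 1.0])
assert not np.allclose(A @ x0, B @ x0)
```

So a pure eigenvalue metric (e.g. a Wasserstein distance on spectra) would report distance zero between these two qualitatively different systems.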
Thank you again for your insightful and thorough comments. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Bias in Evaluation Processes: An Optimization-Based Model | Accept (poster) | Summary: This paper studies the issue of biases present in evaluation processes like hiring and school admissions. The authors propose a model that estimates the distribution of utility that incorporates two main features: resource constraints for information, and risk-averseness of the decision-maker. They formulate an optimization problem to estimate the utility distribution with two parameters that represent the aforementioned two features. They study the effect of these two parameters on the solution of the optimization problem, and they conduct a numerical study to study the effect of interventions in a downstream selection task.
Strengths: The problem of bias in evaluation processes is a significant problem, and I like the approach of understanding how bias can emerge in this process via a stylized model. The authors model two important phenomena, resource constraints and risk aversion, that have been shown to arise in many settings and can contribute to bias. The authors use real-world datasets to validate their study.
Weaknesses: The exposition of the paper was poor, making it challenging to comprehend the main message of the work. I understood the paper as positing a stylized model of an evaluation process that incorporates resource-information and risk averseness, and the main question that is studied is to understand how these two features contribute to the emergence of bias. However, I did not get a satisfactory understanding of this question from reading this work.
1. The main point of confusion was the lack of a formal model of bias, and an interpretation of what the main framework, OptProg, represents in reality. OptProg outputs a density of scores, $f_{\epsilon}$, where $f_{D}$ is a “true” distribution of scores. I interpret this via the following example: the distribution of SAT scores across all students ($f_{\epsilon}$) is not equal to the distribution of “true ability” of same students ($f_{D}$). However, only looking at the distribution seems insufficient for understanding “bias”, since it does not specify how _each_ true score gets mapped to each “biased” score. This is the source of the issue in some of my next comments.
2. Section 3.2 studies how tau and alpha influence the solution to OptProg, which seems to be the main contribution of this work. However, this section relies on specific distribution examples and numerical investigations, with long discussions and a lot of notation that was difficult to follow. Moreover, it fails to explain why mean and variance are the relevant statistics of interest and how they capture the notion of "bias". Providing explicit theorem statements would have greatly improved this section.
3. The numerical study had several parts that I found confusing:
- in Section 4.1, fitting the OptProg model to real-world datasets and evaluating the TV distance does not demonstrate the model's ability to capture "Ability to capture biases in data" (section title). Why does a small TV distance imply bias?
- The model fits the best tau and alpha, but wouldn’t tau = -\infty always be the best tau? Also, is it the case that alpha > 1 is a better fit than alpha <= 1?
- The interventions aimed at changing tau and alpha from the best fit would increase the error in estimation (increase TV). Why would these interventions improve utility? The confusion arises from the lack of a formal model of utility and bias.
- There was no clear conclusion from the numerical results: each intervention was the best in some regime. What should the reader take away from this, and how should a DM use these results?
- To strengthen the study, I suggest including numerical analyses that (a) validate the model and (b) rigorously estimate alpha and tau for different groups to demonstrate the existence of this type of bias.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please let me know if I have any misinterpretations in my review.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. We are glad that you like our approach. There do seem to be some misinterpretations that we have clarified. We hope that you will increase your support for the paper.
**"...lack of a formal model...what...OptProg...represents"** OptProg is an abstract model of how an input distribution of true utility of a group of individuals gets transformed by an evaluation process to an output distribution in the presence of risk-averseness and information constraint. Since the corresponding parameters $\alpha$ and $\tau$ depend on socio-economic factors, we chose to focus on the input/output of OptProg at a population level. However, the intuition for OptProg does come from understanding how an individual's true ability $u$ gets mapped to a biased score. Indeed, we can recover this behavior from OptProg by plugging in $f_\mathcal{D}$ to be the Dirac delta distribution around $u$ and setting $\alpha$ and $\tau$ appropriately. We will clarify this.
**"...mean and variance..."** Our work is rooted in several prior works that have argued that mean and variance of population level distributions are relevant statistics in understanding bias. E.g., [81] points out that in the distributions of lifetime citations of men and women scientists, the average number of citations is lower for women than men. As for variance, quoting from [57] -- *different groups of candidates may exhibit different variability when their quality is estimated through a given test*.
Several theorems on mean and variance appear in the Supplementary Material (Theorems D.1, D.5, D.7, E.1, E.2). These theorems discuss the effect of changing $\tau$ and $\alpha$ on the mean/variance of the output distribution. Based on your suggestion, we will include informal versions of them in the main body.
**"...Why does a small TV distance imply bias?"** Sorry, this is a typo. We intended to say that a small TV distance between the best-first distribution generated by our model and the distribution of utilities in a dataset (which is already known to be biased; see the note below) implies that our model can generate distributions that capture biases arising in real-world datasets. We will fix this.
Note: All of the real-world datasets that we consider are already known to have biases in utilities across the protected groups (defined by gender and/or birth category for the JEE-2009 dataset and defined by gender for the semantic scholar dataset); see, for instance, [41] and [57].
**"...wouldn’t $\tau = -\infty$ always be the best tau?"** Setting $\tau=-\infty$ just leads to the output density having the least amount of uncertainty. E.g., when $f_{\cal D} = N(\mu,\sigma^2)$, $\tau=-\infty$, $\alpha=1$, $\ell(v,x)=(v-x)^2$, the output of OptProg is the Dirac-delta function at $\mu$. This is not the best-fit solution. Whereas if we pick $\tau$ to be equal to the entropy of $f_{\cal D}$, the output is the same as $f_{\cal D}$, which, by definition, is the best-fit distribution.
We report the best-fit $\tau$ in Figure 6 of the Supplementary Material.
**..is it the case that $\alpha > 1$ is a better fit than $\alpha \leq 1$?"** Not in general. For the Semantic Scholar dataset and the synthetic network data, $\alpha>1$ is a better fit than $\alpha\leq 1$. However, for the JEE-2009 dataset, the opposite holds: $\alpha\leq 1$ is a better fit than $\alpha>1$. We report the best-fit $\alpha$ in Figure 6 of the Supplementary Material. Concretely, Table 1 shows that the TV-distances with $\alpha=1$ are poorer than with the best-fit $\alpha$ (0.12, 0.11, 0.08, and 0.02 compared to 0.08, 0.07, 0.03, and 0.01 respectively).
**"...Why would these interventions improve utility?"** Changing $\tau$ and $\alpha$ from their best-fit values (for the biased distribution values) does increase the error in estimation from the biased distribution of values, but it may decrease the error in estimation from the true distribution of values. Since utility is measured as the sum of true (and not biased) values of the selected individuals (Lines 352 – 359), changing $\tau$ and $\alpha$ from their best-fit values can improve utility. We will emphasize this further in the final version.
**"...each intervention was the best in some regime...how should a DM use these results?"** That is correct. Given the regime that is of interest to a policymaker, the policymaker can use our model to study the effects in order to systematically identify the best intervention and, based on its assessed efficacy, decide whether to enforce it (Lines 395-397). As mentioned in the abstract (and the introduction), our work provides a tool for the policymaker to guide the deployment of interventions to mitigate biases. Empirical sections provide use cases that may help policymakers to see how they can use our model. Prescribing which intervention to use in which context is outside the scope of this work.
**"...I suggest including numerical analyses that (a) validate the model and (b) rigorously estimate alpha and tau for different groups..."** Perhaps there is some confusion: we do validate our model. We estimate $\alpha$ and $\tau$ for utilities of disadvantaged groups in three real-world datasets (Lines 12 - 13). We discuss implementation details of the estimation in Lines 303-310, report the resulting TV-distances between the best-fit distribution in Table 1, and report the best-fit values and distributions in Figure 6 in the Supplementary Material. We observe that the resulting distributions output by our framework are close in TV distance (they have TV-distances $\leq 0.08$; Table 1) to the distributions of biased utilities in the real-world data. Moreover, our model has a better fit than the implicit variance model [57] and the multiplicative bias model [81] on datasets where utilities have skew (Rows 1, 3, and 4 in Table 1) (Lines 115-116).
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their response.
I don't quite understand the authors' first response, which addressed my main concern that there was no stated mechanism that maps "true ability" to "biased ability". Could the authors spell this out further?
Next, if there is such a mapping, then the goal of these interventions should be to _recover_ the density of true abilities, from the biased abilities (i.e. "correct" the bias). But it doesn’t seem like the interventions are aiming to measure "recovery". For example, increasing tau increases both the mean and variance (Section 3.2), but what does that mean in terms of correcting bias? Let me know if this is not the goal of the interventions being studied.
---
Reply to Comment 1.1.1:
Title: A concrete example of how "true ability" gets mapped to "biased ability"
Comment: Thanks for reading our rebuttal and responding. We answer your question below and will be happy to provide further clarifications.
**1)"...mechanism that maps true ability to biased ability. Could the authors spell this out further?"**
Certainly. Consider the setting where a single individual with "true ability" $u$ is being evaluated. One concrete way in which our framework can model this is by setting the input density $f_{\mathcal{D}}$ to be concentrated on $u$ and letting the loss function be the $\ell_2^2$-loss (see Lines 204-205 for a definition) over the domain $\mathbb{R}$. For fixed parameters $\tau$ and $\alpha$, the output distribution $f^\star$ (over the "biased ability") can be shown to have mean $u - \sqrt{\frac{\gamma^\star}{2}}\cdot \frac{\sqrt{\alpha}-1}{\sqrt{\alpha}+1}$. Here $\gamma^\star > 0$ is as in Theorem 3.1 and is a function of $\tau$ and $\alpha$.
Thus, our framework allows one to derive a mapping from "true ability" to the (mean of the) "biased ability".
This mapping can be used to understand how the parameters $\tau$ and $\alpha$ in the evaluation process transform the true ability. For instance, using the fact that $\gamma^\star$ increases as $\tau$ increases, and that $\frac{\sqrt{\alpha}-1}{\sqrt{\alpha}+1}$ is positive for $\alpha > 1$, it follows that decreasing $\tau$ pushes the expected biased ability of the individual up towards $u$, their true ability.
Similar mappings can be derived for other loss functions as well and, if one is interested in the distribution of biased abilities, Theorem 3.1 gives a characterization.
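To illustrate, here is a minimal numerical sketch of this mapping. It treats $\gamma^\star$ as a given input, since $\gamma^\star$ is defined implicitly through Theorem 3.1 rather than in closed form.

```python
import math

def biased_mean(u, gamma_star, alpha):
    # mean of the output ("biased ability") distribution for a single
    # individual with true ability u, per the expression above;
    # gamma_star > 0 is the optimal dual variable from Theorem 3.1
    return u - math.sqrt(gamma_star / 2) * (math.sqrt(alpha) - 1) / (math.sqrt(alpha) + 1)
```

For $\alpha = 1$ the mean equals $u$; for $\alpha > 1$ it is shifted below $u$, and moving $\alpha$ toward $1$ moves it back up toward $u$.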
**2)"...then the goal of these interventions should be to recover the density of true abilities, from the biased abilities (i.e. "correct" the bias)."**
Yes, that could be one goal; however, it may be neither necessary nor directly achievable. Obtaining estimates of abilities is not an end in itself: they are used in downstream tasks such as selection. The interventions (in our paper and in prior works) try to ensure that the outcomes of the downstream tasks with biased abilities and interventions are (approximately) the same as the outcomes with true abilities.
**3)"For example, increasing tau increases both the mean and variance (Section 3.2), but what does that mean in terms of correcting bias?"**
Concretely, as discussed in the example in point (1) above, decreasing $\tau$ moves the (mean of the) biased ability of the individual up toward their true ability. Moreover, as can be seen from the expression for the mean of the biased ability in point (1) (i.e., $u - \sqrt{\frac{\gamma^\star}{2}}\cdot \frac{\sqrt{\alpha}-1}{\sqrt{\alpha}+1}$), moving $\alpha$ towards $1$ also ensures that the mean of the biased ability of the individual approaches their true ability.
Specifically, the paper models the density of evaluations f_{\mathcal{E}} as the output of an optimization problem, in which the evaluator seeks to produce an evaluation as close as possible to the true value v of the individual, which is drawn from a density f_{D}. Closeness is measured by a loss function \ell(x,v) between evaluation x and quality v. This optimization problem is subject to constraints and adjustments, well-founded in the literature, which introduce bias. The first is information constraints. Evaluations are noisy in practice because it is difficult and costly to obtain a clear evaluation signal, and this cost can vary across different groups (e.g. groups that speak different languages). This is modeled as a constraint requiring that the entropy of f_{\mathcal{E}} be lower-bounded by some \tau. The second source of bias is risk-aversion: the evaluator's loss may not be symmetric between over- and under-estimation of the true value, and may penalize over-estimation more. This is modeled in the loss function between the evaluation and the true value v, in which loss aversion (parameterized by \alpha\geq0) causes the evaluator to penalize over-estimation of v more than under-estimation by a factor of \alpha. They derive an exponential functional form for the solution of this optimization problem, which is standard by the maximum-entropy principle. They also perform sensitivity analysis of how the solution changes with \tau and \alpha.
This modeling framework generalizes the implicit variance and multiplicative bias models. The only inputs to the model are the information constraint \tau, the risk aversion parameter \alpha, and the loss function \ell. For empirical validation, they consider two real-life datasets and one synthetic example of scores in different contexts. They fix a group G_{1} of individuals to be a baseline group and use the distribution of scores from that group to represent the true distribution of values. They consider another group G_{2} whose scores are potentially biased (and thus arising from f_{\mathcal{E}}), and they compare how well different models of evaluation bias reproduce the distribution of G_{2} using scores from G_{1} as the true values. They find that their more general model improves the fit to the score distribution of G_{2} across all datasets. They then use their model to assess various interventions for subset selection tasks, based on whether they reduce bias and allow the evaluator to pick a subset of individuals with higher expected value. These interventions include requiring proportional or equal representation, increasing \tau, and lowering \alpha, all of which are related to interventions considered in practice. They find that the optimal intervention depends case by case on different parameters of the subset selection task (e.g. how many individuals to hire, etc.), and their model can guide policy-makers to understand when certain interventions will be more effective than others.
Strengths: The greatest strength of the paper is in its conception. Invoking the maximum entropy principle obtains a natural generalization of previous models in the literature, without involving an excess of extra parameters. Including risk aversion makes sense too, given that it is a well-studied source of bias and better enables the framework to model skewed distributions. The method is computationally feasible since evaluation scores are 1-D and ultimately discrete, and produces realistic evaluation distributions. The optimization-based formulation retains much of the interpretability of simpler models, while crucially allowing for greater modeling capacity through computation. In contrast to previous papers, which are primarily concerned with producing simple models that illustrate a particular source of bias, this paper provides a computational framework designed to be applied to data (I believe this aspect should be emphasized more in the paper).
The paper is clearly written and the empirical benchmarking is solid with compelling examples (JEE-2009 and Semantic Scholar). The evaluation of different interventions is also a nice illustration of the usefulness and interpretability of the model.
Weaknesses: Overall the paper could benefit from more exposition. While the paper references key sources in the literature, it does not clearly explain the mechanism by which information constraints and risk aversion lead to bias. This could be clarified by 1 or 2 specific examples. Without this, it is difficult to understand why these are the particular sources of bias incorporated in the framework and why others are not. It would also be better to spend more time discussing concrete interventions, e.g. moving some of the material from Supplemental Material H to the main text. To aid in this, would it be possible to evaluate an intervention like 'change score from out of 100 to out of 10' or 'truncate the range of scores'? If so, this could help in demonstrating the applicability of the model for helping with realistic policy decisions.
The theoretical results on the evaluator's optimization problem seem to be fairly standard and straightforward implications of the maximum entropy principle. This is not a bad thing, but at least this section could be trimmed down and streamlined. The proof of Theorem 1 seems fairly involved, and it would be useful to explain how this setting departs from standard maximum entropy settings. The results on sensitivity analysis with respect to \tau and \alpha are also not surprising, and this section could be shortened and crystallized to better capture how \tau and \alpha affect the output density (with greater focus on how the mean changes). Figure 1 is difficult to interpret and could either be removed entirely or replaced.
I think the biggest need for improvement is in showing the strength of the contribution compared to previous models. I think a strong case can be made using supplemental material, but from the current draft alone the strength of the contribution is not very clear. First, it would help to move some supplemental material (e.g. Figure 6 from Supplemental Material H) into the paper to show that a wider class of models is actually necessary for modeling real life score distributions (comparing the best fits from other models). One could also look at a synthetic example with exponential or laplacian tails, but this is not necessary.
And since the framework is a generalization of previous models, it is perhaps unsurprising that it achieves a greater fit to data. More than fitting the data, ultimately what matters are the implications for intervention evaluation/sensitivity analysis for policy-making. If the conclusions are identical to what one obtains from less sophisticated but simpler models, it's unclear whether a more powerful modeling class actually helps. What would have been more compelling is if there are instances where the current model gives different answers for what interventions work well or not compared to simpler models, and justifying why the current model's conclusions are more sensible. In that light, it would be useful to evaluate how well the framework assesses interventions, which can be done through synthetic examples (e.g. intervening on the synthetic example and seeing how well the model predicts the effect of the intervention).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: When fitting \alpha and \tau to minimize total variation distance with the G_{2} distribution, is there any train-test splitting, or are they fit on all the data? Otherwise there might be a concern about overfitting.
In the empirical evaluation, the underlying assumption is the distribution of true values for the G_{2} individuals is exactly the same as the distribution for G_{1}. While this assumption makes sense for convenience since we don't observe the true values of G_{2}, if this assumption were false in practice, how could this affect the validity when assessing different interventions?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper primarily mentions the limitation of the work as being that it only concerns scalar values, and mentions that the framework can be extended to multivariate values. The paper also mentions that there may be other models out there that achieve the same fit as the maximum entropy one. This is all reasonable. It does not include the limitation that the distribution of true values for G_{2} is assumed to be identical to that of G_{1}, in which case one cannot disentangle evaluation bias from natural differences in these distributions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. We are glad that you think that our model can guide policy-makers to understand when certain interventions will be more effective than others. Thanks also for appreciating the invocation of the maximum entropy principle to generalize previous models.
**"proof of Theorem 1...”** Please see our response [here](https://openreview.net/forum?id=7b4oobeB4w&noteId=2eKb6fBuEX).
**"…specific examples…"** While we do explain how changing $\alpha$ and $\tau$ gives rise to biases, we present a couple of concrete settings showing how $\alpha$ and $\tau$ may arise and produce biases in an evaluation process [here](https://openreview.net/forum?id=7b4oobeB4w&noteId=2eKb6fBuEX).
**“…concrete interventions…e.g. moving material from Supplemental Material H.”** Thanks. We will move the discussion on interventions from Supplementary Material H (Lines 1362-1381) to the main body.
Such interventions (e.g. conducting the exam in multiple languages) can in principle be assessed by our framework, but to evaluate them one needs a model of how the intervention affects $\tau$ and $\alpha$, and designing such models is beyond the scope of this work.
That said, for interventions that directly manipulate the values, such as truncating the range of the values as suggested by you, one can evaluate their effect on $\alpha$ and $\tau$ by computing the best-fit distributions to the truncated value distribution and, hence, use our framework to assess the efficacy of these interventions.
**"...could be shortened...to better capture how \tau and \alpha affect the output..."** Thanks for the suggestions. We will trim the discussion of the sensitivity analysis with respect to $\tau$ and $\alpha$, which focuses on their effect on the variance and the mean of the output density. Even though the sensitivity analysis with respect to $\tau$ and $\alpha$ shows that the variance and the mean of the output density change as expected, proving these results is non-trivial. The key reason is that we do not have a closed-form expression for the optimal density in terms of $\alpha$ and $\tau$. We can only express the optimal density in terms of the optimal dual variable $\gamma^\star$ (see Theorem 3.1), and understanding the sensitivity properties of $\gamma^\star$ turns out to be non-trivial. We draw intuition from the analogy with the Gibbs equation (see equation (6) and Section C in the supplementary material), which helps us predict the properties of $\gamma^\star$. Sections D, E, F, and G in the Supplementary Material are dedicated to the proofs of these results.
**"Figure 1...could be...replaced."** Thanks, we simplified Figure 1: instead of the three-dimensional plots, we present two-dimensional plots that show the effect of $\alpha$ (respectively $\tau$) on the mean of the distributions for a fixed value of $\tau$ (respectively $\alpha$). The updated figure is in the one-page PDF attached with our rebuttal.
**“...contribution compared to previous models…would help to use...Figure 6…”** Thanks for your suggestion, we will add a version of Figure 6 that compares the best fits of our model and the best fits of previous models with the distribution of biased utilities in the final version. As expected from the TV-distance values in Table 1, the best fit achieved by our framework is at least as good as the best fit of the implicit variance model [57] and the multiplicative bias model [81], and is a better fit than these models [57, 81] on datasets in which utilities have skew (Rows 1, 3, and 4 in Table 1).
**“...instances where the current model gives different answers...compared to simpler models…”** Yes, indeed, for certain interventions that affect both $\tau$ and $\alpha$, our model may lead to different assessments than simpler models. For instance, in the context of standardized testing, consider the intervention that requires conducting the exam in multiple languages. This intervention can relax the information constraint faced by the disadvantaged group (by reducing the cognitive load on non-native speakers during the examination) and, at the same time, may also reduce the risk-aversion parameter $\alpha$ (by eliminating the need for non-native speakers to enroll in additional training for the examination language). For such interventions, simpler models, which only consider the effect of the intervention along one dimension, may underestimate its positive impact, while our framework could give a more accurate assessment. We will include a discussion on this in the final version.
**"...train-test splitting"** Thank you for your suggestion. We did not use a train-test split when fitting $\alpha$ and $\tau$. We have repeated the simulation with an 80-20 train-test split and will include the results in the final version.
The results with the train-test split are similar to the results in the paper: across all datasets, the densities output by our model are close to the density of biased utilities in the datasets (TV distance $\leq 0.09$), and our model has a better fit than the implicit variance model [57] and the multiplicative bias model [81] on datasets where utilities have skew (Rows 1, 3, and 4 below).
Concretely, we report the TV-distances with a train-test split in the one-page PDF attached with our rebuttal.
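Schematically, the held-out evaluation proceeds as follows. This is a hypothetical sketch: the fitting step is omitted, and a fixed standard normal stands in for the best-fit density returned by our framework.

```python
import numpy as np

def tv_to_histogram(model_density, sample, grid):
    # TV distance between a model density and the histogram of a
    # sample, both discretized on the same grid
    dx = grid[1] - grid[0]
    edges = np.append(grid - dx / 2, grid[-1] + dx / 2)
    hist, _ = np.histogram(sample, bins=edges, density=True)
    return 0.5 * np.sum(np.abs(model_density - hist)) * dx

rng = np.random.default_rng(0)
sample = rng.normal(size=5000)          # stand-in for observed biased values
idx = rng.permutation(len(sample))
cut = int(0.8 * len(sample))            # 80-20 split
train, test = sample[idx[:cut]], sample[idx[cut:]]

# fit (tau, alpha) on `train` (fitting routine omitted), then report the
# TV distance of the fitted density on the held-out `test` split
grid = np.linspace(-5.0, 5.0, 201)
fitted = np.exp(-0.5 * grid ** 2) / np.sqrt(2.0 * np.pi)  # stand-in best fit
held_out_tv = tv_to_histogram(fitted, test, grid)
```

A small held-out TV distance indicates that the fitted parameters generalize rather than overfit the training split.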
**"..G_{2}...is ...same as...G_{1}....if this assumption were false..."** This assumption builds on the premise that there are no differences (at a population level) between $G_{1}$ and $G_{2}$; see, e.g., [81,57,41]. If this premise is false, then the effectiveness of interventions can be either underestimated or overestimated, which may lead a policymaker to select a suboptimal intervention. That said, if all the considered interventions reduce risk aversion and/or resource constraints, then the chosen intervention should still have a positive impact on the disadvantaged group. We will add a brief discussion of this in the limitations section.
---
Rebuttal Comment 1.1:
Comment: Thank you for the thoughtful response. I appreciate the explanation for the proof of Theorem 1. I also really appreciate the explanations for why \alpha and \tau would affect bias, they help to illustrate why these parameters are invoked in the model and why other parameters are not. All of my concerns and questions have been addressed by the above response.
I do still wish there were a more extensive empirical evaluation that could clearly show that this more computationally powerful method gives _qualitatively_ better insights into which policies would work better than others. I also wonder if there is any way to concretely interpret the fitted values of \tau and \alpha. I think this is mostly a matter of rewriting the paper so that its usefulness to potential policymakers (or even possibly empirical social scientists) is more apparent.
I will stick to my rating since I believe it is appropriate. Once again, I like the core idea of the paper, my comments are only on the presentation. | Summary: This work presents a theoretical model to quantify bias in the task of evaluating candidates (ie minimizing loss while subject to an information constraint). It presents a formula/representation of the problem, parametrized by roughly "real-world" factors of 1) resource-information tradeoffs; and 2) risk-aversion. After presenting some properties of this model, the work loosely applies it to quantify types of bias in real-world datasets (eg standardized testing by class/gender; citations; etc). Finally, it explores how different real-world-inspired interventions (eg Rooney Rule for representational constraint, structured interviews for standardization) could impact the state of bias in the parametrized models.
edit: I have read the rebuttal and feel better about the derivations. I remain borderline accept.
Strengths: - I quite like the design decision to choose model parameters and intervention types that are inspired by plausible tradeoffs and concerns in evaluation bias.
- This paper would not be as strong if it were just the formulas/models without trying to measure any grounding in real-world datasets.
Weaknesses: - Although engaging with real-world datasets empirically is commendable, the analysis conducted was rather light. It didn't offer particularly novel insights or alternative ways of thinking about bias in the data (eg could try to demonstrate how the model can help policymakers with actionable interventions; I don't think a policymaker would be able to gain such insight with the current presentation of results/discussion)
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Overall this paper seems very interesting, though I couldn't follow all of the derivations/formulas. The empirical part, alone, doesn't have enough insight, but I do like applying the theoretical parametrized model to the real-world datasets to discuss how to interpret them. I'm inclined to accept, though I would feel more comfortable about this paper if one of the other reviewers were able to speak to the derivations' validity and contribution.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: - Some of the modeling choices were noticeably reductive. For instance, on page 8, Section 4.2 uses college admissions as an example of a subset selection task. However, it suggests modeling the task as the sum of each accepted applicant's individual values, rather than considering network effects (eg Scott Page's "The Diversity Bonus", where a team of different people can contribute different kinds of knowledge to a solution, resulting in a better result than if the top-2 individual values had high overlap & therefore didn't offer complementary strengths). Of course, that is just nitpicking one example; however, my general concern is that this work may or may not ultimately prove useful enough for policymakers. It might end up being the case that any tasks that are tractable enough to model mathematically are poor fits for social science dynamics in practice.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for appreciating the model, the empirical work, and the presentation of the paper. We have addressed your questions and concerns below and hope that you will consider supporting our paper further.
**"I'm inclined to accept .. "** Thanks. Please take a look at the review of reviewer gzsA concerning the validity and contributions of the derivations.
**" ... how the model can help policymakers with actionable interventions ..."** Most of the prior works on interventions in selection settings have focused on adding representational constraints for the disadvantaged groups. Such constraints, often framed as a form of affirmative action, could be beneficial but may not be possible to implement in certain contexts. E.g., in a landmark 2023 ruling, the US Supreme Court effectively prohibited the use of race-based affirmative action in college admissions. Our model posits that the reason for the emergence of differences in the distributions of abilities of different groups could be differences in the values of the information-resource parameter $\tau$ and/or the risk-averseness parameter $\alpha$ of the group. The theoretical results establish the impact of these parameters on the mean/variance of the output distributions. Thus, in all, the model allows a policymaker to evaluate interventions beyond affirmative action that focus on procedural fairness; this allows working towards diversity and equity goals without placing affirmative-action-like constraints.
In this framing, we can consider decreasing either $\alpha$ or $\tau$. Improving either would work towards equity, but which one to target, to what extent, and via which method would be context dependent and vary in cost. A decrease in $\alpha$ can be achieved by reducing risk-averseness in the evaluation process, e.g. by investment in better facilities for disadvantaged groups, or by making the evaluation process blind to group membership. A reduction in $\tau$ may follow from allocating additional resources to the evaluation process, e.g., by letting a candidate choose an evaluation in their native language. Our framework allows a policymaker to study these trade-offs, and we discuss specific examples in Supplementary Material H.
**"a team of different people can contribute different kinds of knowledge to solution resulting in a better result"** Sorry for the confusion: summing up each accepted applicant's individual utility in the subset selection example is not a modeling choice made in this paper. It is borrowed from prior works [81,57,41].
The focus of this paper is to give a model of how population level differences in utility distributions can arise in evaluation processes. In the empirics, the main goal was to show that the model can explain biases in real-world data sets. The goal of showing the subset selection application was to guide policymakers on the deployment of interventions to mitigate biases.
That said, we appreciate your suggestion. Indeed, works in sociology (such as the one you suggest) have shown that one of the benefits of diverse teams is that complementary skill sets increase the overall production -- the sum is more than the parts. This corresponds to having a supermodular set aggregation function in the selection task. The output of the model can be used by policymakers to assess the impact of interventions in this supermodular setting as well. We will mention this in the final version. | Summary: The paper studies how to examine the group distributional difference using loss minimization. The authors propose a loss with a max-entropy constraint.
Strengths: The paper studies an important problem of how to examine the evaluation bias in many applications such as hiring and school admissions. The authors nicely motivate the problem and have a detailed related work on how such biases arise in practice.
Weaknesses: The paper became quite hard to read after the related work. I had a hard time understanding what the formulated problem is, and the reasons for many design choices are not clear to me. I do not follow the loss formulation, since it seems that the authors are trying to do a density estimation task with certain constraints on the density function. Why not minimize classic metrics like f-divergences or IPMs, or use methods like GMMs or kernel density estimation? There is also little explanation of why we should use max-entropy as a constraint.
Evaluation: It seems the experiments are a density estimation task for two groups, why usual density estimation methods cannot be applied here?
I am also not convinced the proposed method can be used to examine the effect of interventions, since they are dynamic and more information is needed to see the effect of interventions in the long run.
Typo: “risk averseness:”, incomplete sentence L121-122
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: See weaknesses.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: 1. The paper only considers binary sensitive groups, while in practice, sensitive groups are often overlapping with multiple attributes.
2. How would the estimation error and model misspecification lead to negative impact?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for appreciating the paper. We address your questions and concerns below and hope you will strengthen your support for the paper.
**"..since it seems that the authors are trying to do a density estimation task with certain constraints on the density function. Why not minimize classic metrics like f-divergence, IPM, or use methods like GMM or kernel density estimation? There is also little explanations on why we should use max-entropy as constraint.."**
Thanks for this question and sorry for the confusion. Our goal is not to do *density estimation*, but to *model* how an input density representing the utility of the population gets *transformed* by an evaluation process into an output density in the presence of risk-averseness and an information constraint. In Section 2, our model modifies the classical maximum entropy framework (Equation 2) to incorporate the resource-information parameter $\tau$ (Equation 3) and the risk-averseness parameter $\alpha$ (Equation 4).
We do use an $f$-divergence (differential entropy) in our formulation, albeit in the constraint as it allows us to incorporate the resource-information parameter $\tau$. However, methods like GMMs or kernel density estimation are not relevant to what we are trying to achieve.
In Section 2, we will add further on the use of max-entropy as a constraint and also contrast our approach with density estimation.
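To contrast with density estimation, the sketch below evaluates an exponential (Gibbs-form) output density for a point-mass input under an $\alpha$-weighted squared loss. It is illustrative only: the constants and the link between $\gamma$ and $\tau$ are simplified relative to Theorem 3.1, where $\gamma$ is determined by $\tau$.

```python
import numpy as np

def gibbs_output_density(u, gamma, alpha, grid):
    # alpha-weighted squared loss: over-estimates (x > u) are penalized
    # alpha times more heavily than under-estimates
    loss = np.where(grid > u, alpha * (grid - u) ** 2, (grid - u) ** 2)
    w = np.exp(-loss / gamma)
    return w / (w.sum() * (grid[1] - grid[0]))  # normalize on the grid

grid = np.linspace(-5.0, 5.0, 2001)
dx = grid[1] - grid[0]
f = gibbs_output_density(0.0, 1.0, 4.0, grid)
mean = np.sum(grid * f) * dx  # shifted below the true value u = 0
```

Raising $\alpha$ skews mass below the true value, while raising $\gamma$ (which grows with the entropy floor $\tau$) spreads the density out; no estimation from data is involved.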
**"It seems the experiments are a density estimation task for two groups, why usual density estimation methods cannot be applied here?"** The empirical results aim to find the values of the parameters $\alpha$ and $\tau$ in our model that best fit the density of the biased values, rather than to estimate the density itself, so the usual methods for density estimation do not seem useful.
**"I am also not convinced the proposed method can be used to examine effect of interventions, since they are dynamic and more information is needed to see the effect of interventions in the long run."** As in many prior works [81,57,41], the proposed model can be used to understand the effect of interventions in one round. Indeed, to understand the effect of interventions in the long term, additional work would be required, perhaps evaluated as in [40], and would be an important direction for future work. We will add this as a limitation in the final version.
**"The paper only considers binary sensitive groups, while in practice, sensitive groups are often overlapping with multiple attributes."** Indeed, we limit our discussion and simulations to binary sensitive groups. However, our model easily extends to multiple sensitive groups by considering a group-specific risk-aversion parameter and a group-specific information constraint. Prior models of biased densities [81, 57] also limit their discussion to binary sensitive groups, and similar extensions (with group-specific parameters) have also been proposed for them. For example, if two groups $G_1,G_2$ overlap, then we can consider three disjoint subgroups $G_1\cap G_2,$ $G_1\backslash G_2$ and $G_2 \backslash G_1$. We will add a remark in the final version.
**"How would the estimation error and model misspecification lead to negative impact?"** Estimation errors and model misspecification can lead to errors in the assessments of interventions, which, in turn, can lead to the deployment of suboptimal interventions. These errors are not specific to our evaluation method, and one can try to assess and reduce their negative impact via validation methods such as train-test splits, control trials, and ablation studies. We will include a discussion about such errors in the final version.
---
Rebuttal Comment 1.1:
Comment: Thank authors for the responses. I will keep my score. | Rebuttal 1:
Rebuttal: We thank the area chair for their time and effort in engaging with the reviewers and considering our rebuttal.
We thank all the reviewers for their excellent suggestions which will help improve the paper and for considering our rebuttal. We take the feedback of reviewers seriously and have addressed their specific questions and concerns in individual responses.
Based on suggestions by Reviewer gzsA, we give two responses below that may be of general interest. The first is an overview of the proof of Theorem 3.1 and the second is some specific examples of mechanisms by which information constraints and risk aversion lead to bias.
We also attach a pdf file that contains a new figure that we reference in response to Reviewer gzsA.
We will include all these changes in the final version.
**1. Overview of the proof of Theorem 3.1.**
(i) We first consider the dual of OptProg and show that strong duality holds (see Sections B.2 and B.3). The proof of strong duality (Theorem B.3) uses Slater's condition. This step allows us to derive the form of the optimal density to OptProg and is standard (e.g., in formulations that maximize entropy subject to additional constraints).
(ii) The next step is to show that the optimal solution (density) of OptProg exists and is unique. This requires proving that the dual variable $\gamma^\star$ (corresponding to the entropy constraint) is *positive* -- while this variable is always nonnegative, the challenge is to show that it is *nonzero*. There are instances of OptProg where $\gamma^\star$ is zero and an optimal solution does not exist (or an optimal solution exists, but it is not unique). Proving that $\gamma^\star$ is nonzero under general conditions is the main technical difficulty.
(iii) The next step is reducing the proof of $\gamma^\star \neq 0$ to understanding the properties of the following integral that captures the expected loss when the estimated utility is $x$ (see lines 1026--1027): $I(x) = \int_{\Omega} \ell_\alpha(x,v) f_{\cal D}(v) dv.$ The objective function of OptProg is equal to the expectation of $I(x)$ over the output density $f^\star(x)$.
(iv) In Section B.4, we show that $I(x)$ can be expressed as a sum of two monotone functions (see Theorem B.10). This decomposition allows us to show that the optimal value of OptProg is finite and $I(x)$ has a global minimizer $x^\star$ (Claim B.12).
(v) In Section B.5, we show that the optimal value of OptProg is strictly larger than $I(x^\star)$ (Lemmas B.13 and B.14). This requires us to understand the interplay between the growth rate of the expected loss function and the entropy of a density as we place probability mass away from $x^\star$.
(vi) Finally, in Theorem B.15, we show that $\gamma^\star$ is nonzero. This follows from the fact that if $\gamma^\star=0$, then the optimal value of OptProg is equal to $I(x^\star)$, which contradicts the claim in (v) above. Once we show $\gamma^\star > 0$, the expression for the (unique) optimal solution follows from equation (13) in line 1128.
To summarize, the technical difficulty in proving Theorem 3.1 lies in showing $\gamma^\star \neq 0$ and the conceptual difficulty lies in formulating general conditions under which this property holds. Proving that these conditions hold for a general class of loss functions is non-trivial. We carry out these steps for Gaussian, Pareto, Exponential, and Laplace densities in Sections F, G, I, and J respectively.
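To make step (iii) above concrete, here is a small numerical sketch (our own illustration, not from the paper): it evaluates $I(x)$ by quadrature for an assumed squared loss $\ell(x,v)=(x-v)^2$ and a standard normal input density $f_{\cal D}$, for which $I(x) = (x-\mu)^2 + \sigma^2$ has the global minimizer $x^\star = \mu$, consistent with Claim B.12's existence of a global minimizer.

```python
import numpy as np

def I_hat(x, mu=0.0, sigma=1.0):
    # Quadrature for I(x) = integral of ell(x, v) f_D(v) dv, assuming a
    # squared loss and a normal density; both choices are illustrative
    # assumptions, not the paper's exact loss family.
    v = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 20001)
    dv = v[1] - v[0]
    f = np.exp(-((v - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    return np.sum((x - v) ** 2 * f) * dv

xs = np.linspace(-3.0, 3.0, 601)
vals = np.array([I_hat(x) for x in xs])
x_star = xs[np.argmin(vals)]
# For this choice, I(x) = (x - mu)^2 + sigma^2, so the global minimizer
# sits at x* = mu = 0 with I(x*) = sigma^2 = 1.
```

The objective of OptProg is then the average of these $I(x)$ values under the output density, which is why the gap between the optimal value and $I(x^\star)$ in step (v) controls whether $\gamma^\star$ can vanish.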
**2. Specific examples of mechanisms by which information constraints and risk aversion lead to bias.**
One context is college admissions. It is well known that SAT scores are implicitly correlated with income (and, hence, test preparation) in addition to student ability ("Is Income Implicit in Measures of Student Ability?", Budget Model, Penn Wharton). While the true ability may be $v$, the score is skewed depending on the amount and quality of test preparation, which depends on socio-economic status. The parameter $\alpha$ in our model can be used to encode this. As for $\tau$, while an evaluator may know what a GPA means at certain universities well known to them, they may not understand what a GPA means for students from schools less known to them. This lack of knowledge can be overcome, but doing so takes effort and time, and without that effort the status quo is entrenched.
Another example is the evaluation of candidates using a standardized test. In time-constrained settings, a high value of the resource-information parameter $\tau$ for the disadvantaged group indicates that such candidates may not be able to comprehend a question as well as someone from an advantaged group. This could be due to various factors, including less familiarity with the language used in the test or the pattern of questions, as opposed to someone who had the resources to invest in a training program for the test. Similarly, a high value of the risk-averseness parameter captures that an evaluator, when faced with a choice between awarding low or high marks to an answer given by a candidate from the disadvantaged group, is less likely to give high marks. More concretely, suppose there are several questions in a test, where each question is graded either 0 or 1. Assume that a candidate has true utility $v \in [0,1]$, and hence would have received an expected score of $v$ on each question if one were allowed to award grades in the continuous range $[0,1]$. However, the fact that the true scores have to be rounded to either 0 or 1 can create a bias for the disadvantaged group. Indeed, the probability that an evaluator rounds such an answer to 1 may be less than $v$ -- the risk-averseness parameter measures the extent to which this probability gets scaled down.
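The rounding mechanism described above can be simulated directly. The parameterization below is a hypothetical one chosen purely for illustration: grade 1 is awarded with probability $v$ for the advantaged group but only $v/\alpha$ for the disadvantaged group, with $\alpha \geq 1$ the risk-averseness parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
v, alpha, n_questions = 0.7, 1.5, 100_000

# Hypothetical scaling for illustration: an evaluator awards grade 1 with
# probability v for an advantaged candidate, but only v / alpha for a
# disadvantaged candidate of identical true utility.
mean_adv = (rng.random(n_questions) < v).mean()
mean_dis = (rng.random(n_questions) < v / alpha).mean()
# mean_adv is close to 0.7 while mean_dis is close to 0.7 / 1.5, i.e.,
# the same true utility yields a systematically lower expected score.
```

Even this toy version shows how a per-question rounding asymmetry accumulates into a population-level gap in observed scores.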
Pdf: /pdf/e0cd4b2adb9e2d645f20d7718df3da1833dc08a7.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: In this paper, the authors model evaluation processes that estimate the density of value for an individual (on a task) as a loss minimization problem subject to constraints. The authors proceed to derive various properties of the output densities of their model and evaluate it on two real world datasets.
Strengths: - This is a good solution that seems to provide clarity to a difficult and important problem
- Strong (though limited) empirical section, and ablation studies on the effects of $\tau$ and $\alpha$
Weaknesses: - I found it really hard to read this paper as someone not too familiar with the field, in particular I thought the intro could use some more clarity. There's a bit of measure theory too that I wonder is necessary.
- I'd like to see a theorem about the performance of this model relative to others; it seems fairly straightforward to compare it to some of the Related Work mentioned, though this might be fixed by adding more clarifications.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - So how would a policy maker use this model? It isn't immediately clear to me as someone unfamiliar with the literature
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors note some limitations to their work, though I didn't see much discussion of the misuse of their model and its negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your feedback. We are glad that you find our work as a good solution to an important and difficult problem. We address your specific questions below.
**"hard to read this paper as someone not too familiar with the field"** We apologize that you found it hard to read parts of the paper. Given the multitude of disciplines touched by this paper, it was difficult to write. We chose a style similar to some of the most related works [81,57,41]. We take your feedback seriously and will try to simplify some of the expressions while preparing the final version.
**"...theorem about the performance of this model relative to others..."** Thanks for the suggestion. The two most-related prior models are the implicit variance model of [57] and the beta-bias model of [81]. In the implicit variance model, the observed utility of the advantaged group is a Gaussian random variable with mean $\mu$ and variance $\sigma_0^2$, and the observed utility of the disadvantaged group is a Gaussian random variable with mean $\mu$ and variance $\sigma_0^2 + \sigma^2$. This model is a special case of our framework as captured by the following theorem (this is a direct corollary of Lemma F.2 and we will include it in the final version):
**Theorem:** Consider an instance of the implicit variance model given by parameters $\mu, \sigma, \sigma_0$. Consider instances $I_1=(\Omega, f_{\mathcal{G}}, \ell, \alpha, \tau_1)$ and $I_2=(\Omega, f_{\mathcal{G}}, \ell, \alpha, \tau_2)$ of OptProg, where $\Omega=\mathbb{R}$, $f_{\mathcal{G}}$ is the normal density $N(\mu, \sigma_0^2)$, $\alpha=1$, $\ell(v,x) = (x-v)^2$, $\tau_1=\frac{1}{2}(1+\ln (2 \pi \sigma_0^2))$ and $\tau_2 = \frac{1}{2}(1 + \ln(2 \pi (\sigma_0^2+ \sigma^2)))$. Then the output density of OptProg on $I_1$ is $N(\mu, \sigma_0^2)$ and the output density of OptProg on $I_2$ is $N(\mu, \sigma_0^2 + \sigma^2)$.
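As a numerical sanity check of this theorem's parameters (our own illustration, not from the paper): the values $\tau_1$ and $\tau_2$ are exactly the differential entropies of the two Gaussian output densities, since $h(N(\mu, s^2)) = \frac{1}{2}(1 + \ln(2\pi s^2))$, which quadrature confirms.

```python
import numpy as np

def gaussian_entropy(sigma, mu=0.0):
    # Differential entropy -integral of f log f for N(mu, sigma^2),
    # computed by quadrature on a wide grid.
    x = np.linspace(mu - 12 * sigma, mu + 12 * sigma, 200001)
    dx = x[1] - x[0]
    f = np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    return -np.sum(f * np.log(f)) * dx

sigma0, sigma = 1.0, 0.5
tau1 = 0.5 * (1 + np.log(2 * np.pi * sigma0 ** 2))
tau2 = 0.5 * (1 + np.log(2 * np.pi * (sigma0 ** 2 + sigma ** 2)))
# gaussian_entropy(sigma0) matches tau1, and
# gaussian_entropy(sqrt(sigma0^2 + sigma^2)) matches tau2.
```

This reading of $\tau$ as the entropy of the output density is consistent with its role as the information constraint in OptProg.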
In the $\beta$-bias model, the true utility of both groups is drawn from a Pareto distribution and the output utility for the disadvantaged group is obtained by scaling down the true utility by a factor $\beta > 1$. This changes the domain of the distribution to $[\frac{1}{\beta},\infty)$ from $[1,\infty)$ and, hence, does not fit exactly in our model which does not change the domain. Nevertheless, we can show that, for any fixed $\tau$, increasing $\alpha$ reduces the *mean* of the output density, effectively capturing the $\beta$-bias model at a population level. Formally, the following theorem follows from the calculations in Section G; we will include it in the final version. Recall that the Pareto density with parameter $\gamma$ over $[1, \infty)$ is defined as $f_\gamma(x) = \frac{\gamma}{x^{\gamma+1}}, x \in [1, \infty).$
**Theorem:** Consider an instance $\cal I$ of the $\beta$-bias model specified by the Pareto distribution $f_\gamma$ with parameter $\gamma$ over $[1, \infty)$ and a parameter $\beta > 1$. Consider the instance $I_1 = (\Omega=[1, \infty), f_\gamma, \ell, \alpha_1 = 1, \tau)$ of OptProg, where $\tau = 1 + \frac{1}{\gamma} - \ln \gamma$ and $\ell(x) = \ln (x/v).$ Then the optimal solution to $I_1$ is given by $f_\gamma$. Further, there exist values $\alpha_2 \geq 1$ and $\beta_\gamma > 1$, which is a function of $\gamma$ only, such that if $\beta \leq \beta_\gamma$, then the output density of OptProg on the instance $I_2 = (\Omega, f_\gamma, \ell, \alpha_2, \tau)$ has expectation $1/\beta$ times the expectation of $f_{\gamma}$.
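Similarly, the value $\tau = 1 + \frac{1}{\gamma} - \ln \gamma$ in this theorem is the differential entropy of the Pareto density $f_\gamma$; a short Monte Carlo sketch (our own illustration, not from the paper) confirms this.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 2.0
u = 1.0 - rng.random(1_000_000)       # uniform on (0, 1]
x = u ** (-1.0 / gamma)               # inverse-CDF sampling from f_gamma
# -log f_gamma(X) = -log(gamma) + (gamma + 1) * log(X); its mean is the
# differential entropy of the Pareto density.
entropy_mc = np.mean(-np.log(gamma) + (gamma + 1) * np.log(x))
tau = 1 + 1 / gamma - np.log(gamma)
# entropy_mc agrees with tau up to Monte Carlo error.
```

So in both theorems, the instance of OptProg whose output reproduces the true density is the one whose $\tau$ equals the entropy of that density, which is the natural calibration.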
**"how would a policymaker use this model?"** Most of the prior works on interventions in selection settings have focused on adding representational constraints for disadvantaged groups. Such constraints, often framed as a form of affirmative action, could be beneficial but may not be possible to implement in certain contexts. E.g., in a landmark 2023 ruling, the US Supreme Court effectively prohibited the use of race-based affirmative action in college admissions. Our work, via a more refined model of how bias might arise at a population level in evaluation processes, allows for evaluating additional interventions that focus on procedural fairness; this allows working towards diversity and equity goals without placing affirmative-action-like constraints.
In this framing, we can consider decreasing either $\alpha$ or $\tau$. Improving either would work towards equity, but which ones to target or to what extent and via which method would be context dependent and vary in cost. A decrease in $\alpha$ can be achieved by reducing risk-averseness in the evaluation process; e.g. by investment in better facilities for disadvantaged groups, or making the evaluation process blind to group membership. A reduction in $\tau$ may follow by allocating additional resources to the evaluation process, e.g., by letting a candidate choose an evaluation in their native language. Our framework allows a policymaker to study these trade-offs, and we discuss specific examples in Supplementary Material H.
**"I didn't see much discussion of the misuse of their model and its negative societal impact."** In any work that focuses on debiasing, the ideas and models could be used adversarially to achieve the opposite goal. We need third-party evaluators, legal protections, and available recourse for affected parties; these are crucial components of any system, though they are beyond the scope of this work. We will add a brief discussion to acknowledge these important points in the final version.
Generalized equivalences between subsampling and ridge regularization | Accept (poster) | Summary: The authors investigate the problem of ridge regression, and prove equivalence results between ridge regularisation and ensembling of weak learners trained on subsamples of the original dataset.
The equivalences hold under very mild assumptions, and notably there is no requirement on the data model.
Two kinds of equivalence are proven: i) equivalence at the level of a quite generic class of risks; ii) equivalence at the level of the ridge estimator itself.
The equivalences basically say that one can trade a bit of subsampling for a bit of ridge regression without altering the performance of the estimator.
The equivalences hold on paths in the plane defined by the ridge regulariser and the subsampling ratio.
The authors provide both a "theoretical" characterisation of such paths, which requires knowledge of the population covariance of the features, and a "data-driven" characterisation, which requires only access to the sample covariance of the features.
Finally, the authors discuss possible extensions of their results to real-data scenarios and random features regression.
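A rough finite-sample sketch of the subsampling-ridge trade described above (our own illustration; the isotropic Gaussian data model, sizes, and penalty grid are arbitrary choices, not the paper's construction): each subsample has fewer observations than features, so least squares returns the minimum-norm (ridgeless) interpolator, and the ensemble of such fits sits closest, in coefficient space, to a full-data ridge fit with a strictly positive penalty.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k, M = 400, 100, 50, 200  # toy sizes; k < p makes subsample fits ridgeless
beta = rng.standard_normal(p) / np.sqrt(p)
X = rng.standard_normal((n, p))
y = X @ beta + rng.standard_normal(n)

# Average of M minimum-norm least-squares fits on subsamples of size k.
ens = np.zeros(p)
for _ in range(M):
    idx = rng.choice(n, size=k, replace=False)
    ens += np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
ens /= M

# Find the full-data ridge penalty whose coefficients are closest.
lams = np.linspace(0.0, 5.0, 101)
dists = [np.linalg.norm(np.linalg.solve(X.T @ X / n + lam * np.eye(p),
                                        X.T @ y / n) - ens)
         for lam in lams]
best_lam = float(lams[np.argmin(dists)])
# best_lam > 0: subsampling acts like a positive amount of ridge shrinkage.
```

The asymptotic statements in the paper make this matching exact along paths in the (penalty, subsample-ratio) plane; the sketch only illustrates the direction of the effect.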
Strengths: The work seems sound, relevant (answering open questions in the literature), well-motivated and well-presented.
Code is available for reproducibility.
I would like to highlight that the authors need basically no structural assumptions on the data model to prove the equivalence.
Weaknesses: I did not identify any substantial weakness in the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I do not have any questions for the authors.
I only suggest that the authors improve Figure 3: the caption could additionally describe the difference in x-axis between the left and right panels.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors do not discuss explicitly limitations. I find that this is not strongly necessary, as the assumptions under which the results hold are clearly presented.
The only improvement I could suggest is for the authors to discuss whether they expect the equivalences they prove to break in some specified setting.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for the nice summary, positive feedback, and the thought-provoking comment about plausible variations of the equivalence paths or even non-equivalences! We are glad that you found the paper interesting and appreciate the kind words.
Below we address the question and limitation raised.
- **[Response to question]** _(Clarification of Figure 3 caption)_: Thanks for the suggestions!
We have now updated the caption of Figure 3 with additional clarifications on the linear functionals in the left panel and the risk functionals in the right panel and added pointers to the corresponding theorems.
- **[Response to limitation]** _(Plausibility and conditions for different equivalence paths or non-equivalences)_: This is a very interesting and thought-provoking comment! With i.i.d. samples, we expect the results to hold more generally, even beyond RMT features; see the discussions in Section 6 of the submitted paper.
However, we acknowledge that the equivalences could potentially break, particularly if the observations are not sampled in an i.i.d. manner.
For instance, if the observations follow a specific dependence structure, such as in a time-series setting, then the equivalence paths could be influenced by this dependence structure.
Furthermore, we expect different variations of the equivalence paths under different sampling strategies, such as sampling with (or without) replacement within each subsampled dataset (and across all subsampled datasets).
A precise characterization of these variants is an interesting direction for future work.
Thanks again for instigating this thought!
---
Rebuttal Comment 1.1:
Comment: The authors addressed the only minor point I raised.
I confirm my initial review and grading, and thank the authors for replying to my curiosity on the possible breaking of the equivalences they prove. | Summary: This paper shows an asymptotic equivalence between an ensembled+subsampled (E+S) version of ridge regression and the standard version, in the proportional asymptotic regime. The equivalence result shows that there exists a linear path in the space (ridge-parameter, aspect-ratio) along which all estimators yield essentially the same solution.
This equivalence also resolves an important open question regarding the behavior of optimally tuned ridge regularization. Specifically, the generalization error monotonically decreases with overparameterization (assuming the same level of SNR).
Strengths: This paper was delightful to read.
The problem considered is of very broad interest. The contribution is fundamental and is potentially path-breaking. At the very least, it yields a very satisfactory understanding of the effect of ridge regularization under both important settings.
- optimal tuning
- interpolation (lambda = 0)
Weaknesses: The paper can do with a round of proof-reading. There are some minor issues.
Perhaps the title should read "ensembled subsampling" instead of just "subsampling".
A sketch of the proof is missing. It would be good to highlight in a couple of paragraphs the core ideas behind the proof of the main result.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: In Section 2, does M have to grow at a certain rate wrt n ?
It appears that the Conjecture regarding Kernel Ridge Regression may already be within the reach of this paper, following the equivalence result from Sahraee-Ardakan et al. https://arxiv.org/pdf/2201.08082.pdf
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: No negative impact envisioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 10: Award quality: Technically flawless paper with groundbreaking impact, with exceptionally strong evaluation, reproducibility, and resources, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for the excellent concise summary, encouraging comments, and kind words!
We are delighted to hear that you found our paper enjoyable to read.
Paper aside, we also find the structural and risk equivalences quite neat in understanding the effects of ridge regularization, particularly in the context of optimal ridge tuning and the corresponding risk monotonicity.
Below we comment on the general weaknesses raised (abbreviated by **[W]**).
- **[Response to W1]** _(Paper proofreading)_: Thanks, we are revising the manuscript, taking into account typos from other reviewers as well.
- **[Response to W2]** _(Paper title)_: That is a good suggestion!
The reason why we opted for simply subsampling is that the structural equivalence result (Theorem 3) holds for any ensemble size $M$, including $M=1$ when there is no ensemble.
The generalized risk equivalence (Theorem 1) holds for the "infinite" ensemble $M \to \infty$.
This is why we simply use "subsampling," though we fully agree with the sentiment of the comment.
- **[Response to W3]** _(Proof sketches)_: Thanks for the suggestion!
We agree that proof sketches (at least for Theorems 1 and 3) would be beneficial to the readers.
However, due to space constraints, we were not able to include more in the main text of the submitted paper.
We will try to squeeze in key ideas in the additional space for the paper revision.
The core ideas behind the proof involve: 1) using asymptotic equivalences and calculus (instead of computing the actual risks), which lets us bypass the assumptions on features; and 2) generalizing certain linear and non-linear concentration results using rank-2 perturbations of the ridge resolvent, which lets us handle the uncorrelated non-linear component without assuming independence.
Below we address the two questions raised (abbreviated by **[Q]**).
- **[Response to Q1]** _(Growth rate of ensemble size)_: Good question!
We note that we do not actually need an "infinite" ensemble when defining the estimator in Section 2.
It suffices to consider an ensemble over all $\binom{n}{k}$ subsets of size $k$ of a dataset of $n$ observations, which we call a "full" ensemble. One can show that, for any fixed dataset $\mathcal{D}_n$, the ridge estimators fitted on the "full" ensemble match those of the "infinite" ensemble almost surely conditioned on $\mathcal{D}_n$, as mentioned in the paragraph after Equation (2) of the submitted paper.
For this result, we do not need asymptotics in $n$ or $p$ when defining the full-ensemble estimators in Section 2.
Now when we consider the equivalence results in Sections 3 and 4, they hold under proportional asymptotics.
The result for structural equivalence (in Section 4) holds for any ensemble size $M\in\mathbb{N}$.
Thus for this result, one does not need to consider varying $M$.
The result for risk equivalence (in Section 3) holds for the infinite ensemble as $M \to \infty$ or equivalently for a full ensemble.
Thus for this result, $M$ changes with $n$ as $M(n) = \binom{n}{k}$.
When $n=k$, this does not grow to infinity.
In other words, one may not have a growing number of distinct subsets of size $k$.
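A toy sketch of the "full ensemble" just described (our own illustration with arbitrary small dimensions, not from the paper): enumerate all $\binom{n}{k}$ subsets of size $k$ and average the least-squares fits; with $k = n$ there is exactly one subset, so the full ensemble reduces to ordinary least squares on the whole dataset.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, p, k = 8, 2, 6  # toy sizes so that all C(n, k) subsets can be enumerated
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, -0.5]) + 0.1 * rng.standard_normal(n)

def full_ensemble(m):
    # Average the least-squares estimator over every subset of size m.
    fits = [np.linalg.lstsq(X[list(s)], y[list(s)], rcond=None)[0]
            for s in combinations(range(n), m)]
    return np.mean(fits, axis=0)

ols = np.linalg.lstsq(X, y, rcond=None)[0]
# full_ensemble(n) averages over the single subset of size n, so it
# coincides with ols; full_ensemble(k) averages over all C(8, 6) = 28 fits.
```

This makes concrete why no asymptotics in $M$ are needed: the "full" ensemble is a finite, deterministic object once the dataset is fixed.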
- **[Response to Q2]** _(Subsampling equivalences for kernel ridge regression)_: Thanks for the reference!
It indeed looks very relevant; we are working on the kernel extension and will add the reference [SAEPRF22] to the list.
Because, under proportional asymptotics, the behavior of the kernel ridge resolvent is similar to that of the "linearized" kernel, the conjecture indeed looks within reach, as you suggested. We are also looking into a similar conjecture (Conjecture I.1 in the submitted paper) for random features regression using universality ideas (e.g., from [HL22], among others).
**References**
[SAEPRF22] Sahraee-Ardakan, Mojtaba, Melikasadat Emami, Parthe Pandit, Sundeep Rangan, and Alyson K. Fletcher. Kernel methods and multi-layer perceptrons learn linear models in high dimensions. arXiv preprint arXiv:2201.08082, 2022.
[HL22] Hu, Hong, and Yue M. Lu. Universality laws for high-dimensional learning with random features. IEEE Transactions on Information Theory, 2022.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: I am happy with my initial assessment of the paper. Please make the necessary revisions to improve the quality of presentation. | Summary: The authors study the relationship between ridgeless ensembles constructed from subsampled data and a ridge estimator in a setting with mild assumptions on the joint distribution $(Y,X)$. They establish equivalences for a generalized class of risk functionals, which include quantities related to coefficient estimation and both in and out of sample errors. These equivalences are proven in the case where the ensemble includes estimators trained on all possible subsamples of a given size; the authors extend these results to equivalences between two finite ensembles. The authors use these equivalence results to settle a conjecture in a previous paper about risk monotonicity of ridge regression as a function of $p/n$.
Strengths: - The authors substantially relax the distributional assumptions considered by previous work in the literature. This is conceptually important since it was not known how critical the linearity assumption is for these types of equivalences to hold.
- The theoretical analysis is both novel and technical, invoking various concepts from random matrix theory.
- Overall, the paper is well-written and well-organized.
Weaknesses: While equivalences between ridge regression and subsampling are conceptually interesting, at this stage it appears that consequences for data analysis are a bit limited. However, the authors generously discuss several potential extensions for which some of the tools developed in the paper may be helpful.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - In Theorem 3, is the ensemble size $M$ fixed?
- While the paper is well-written overall, some additional exposition/clarification in certain parts may be helpful to readers. For example, the authors could elaborate more on what they mean by first-order and second-order. In addition, the coefficient confidence interval case can be explained more. Also, before theorem 3, it is stated that "We can go a step further and ask if there exist any equivalences for the finite ensemble and if there are any equivalences at the estimator coordinate level between the estimators." Do you mean that the conditions in this theorem have been previously shown to imply equivalences at the coordinate level and these conditions also imply equivalences with finite ensembles or do you mean something else?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors are quite forthcoming about various limitations of their work and possible future extensions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for the encouraging comments and feedback!
We are glad to hear that you liked the relaxing of distributional assumptions, the novelty of theoretical analysis, and the clarity of presentation. In the sequel, we will first comment on the weaknesses raised and then address the questions.
Comments on weaknesses follow:
- **[Response to W1]** _(Consequences for practical data analysis)_: Thank you for the comment!
We acknowledge that the immediate practical application of our results may not be readily apparent, but the insights gained can inform practical data analysis.
Even though our paper primarily considers ridge regression, which may seem limited for real-world data analysis, it is good to note the close connections between ridge regularization and other forms of regularization methods, such as dropout regularization, noisy training, random sketched regression, among others, as mentioned in the related work.
Understanding the relationship between subsampling and ridge may provide valuable insights into the behavior of subsampling and other implicit/explicit regularization methods.
Moreover, our data-dependent characterization of this effect, as presented in Proposition 4, could potentially guide model selection involving other types of regularization for ensemble learning. This understanding could also potentially guide the selection of appropriate subsample sizes in practice.
While the equivalence results are derived under certain assumptions on the features (RMT, in particular), we expect them to hold more generally.
For example, we really only need the concentration of the ridge resolvents.
Beyond RMT features, for the Marchenko-Pastur law to hold, which forms the backbone of our work, it has recently been shown (see [L22], [CM22], for example) that convex concentration suffices.
Thus we expect the equivalences we establish in the paper to hold for a range of real data distributions.
For instance, as illustrated in Figure 3 of the paper, our equivalences seem to hold very well for real image datasets.
Formalizing precisely the extent to which these equivalences hold is indeed on our list of future work!
- **[Response to Q1]** _(Ensemble size for structural equivalences)_: Yes, Theorem 3 on structural equivalence works for any ensemble size $M$.
We will stress this in the revised version.
- **[Response to Q2]** _(Clarifications on the terminology)_: Thanks for the suggestions!
We appreciate the comment and agree that the paper will benefit from additional clarifications on the terminology.
When we refer to first-order and second-order, we are referring to the order of the functional of the estimators.
First-order refers to the equivalence of linear functionals of the estimators.
Second-order, on the other hand, refers to the equivalence of quadratic functionals of the estimators.
Regarding your question about the commentary before Theorem 3, we will clarify it better.
When we mention equivalences at the estimator "coordinate" level, we mean that each coordinate (of the $p$-dimensional estimator vectors) is asymptotically equal across the two estimators.
Such a type of equivalence on finite ensembles has not been studied in the previous works.
As summarized in Table 1, the previous related papers only show certain risk equivalences under restricted assumptions.
**References**
[L22] Cosme Louart. Sharp bounds for the concentration of the resolvent in convex concentration settings. arXiv:2201.00284, 2022.
[CM22] Chen Cheng and Andrea Montanari. Dimension free ridge regression. arXiv:2210.08571, 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for the interesting comments and clarifications in the rebuttal. I am maintaining my score. Good luck! | Summary: This submission establishes equivalences between ridge regression (i.e. $\ell_2$-penalized
linear regression) and ensembles of linear models trained on sub-sampled datasets.
In particular, the authors prove that for a fixed feature/sub-sample-size ratio $d/k$, there
exists a ridge-regression model with risk asymptotically equivalent to the full
ensemble, i.e. the average of all models trained on sub-samples of size $k$.
Moreover, this equivalence holds for all convex combinations of the
ridge-regression and full sub-sampled model.
Note that the asymptotics assume $d$, $k$, $n$ approach infinity such that
the ratios $d/k$ and $d/n$ are held constant.
This equivalence result is then extended to other metrics, such as training error
and the weight estimation error, and to "structural" results which
show asymptotic equivalence of the weights of the models.
The authors leverage these equivalences to show that the prediction
risk of the best ridge regression model is monotone increasing in $d/n$.
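To make the summarized equivalence concrete, here is a rough simulation sketch; the Gaussian data model, the problem sizes, and the use of min-norm least squares for the sub-sampled fits are my own illustrative assumptions, not details taken from the paper. The test risk of the sub-sample ensemble should be closely matched by some ridge penalty on the full data.

```python
import numpy as np

# Illustrative sizes and data model (my own assumptions).
rng = np.random.default_rng(0)
n, d, k, M = 600, 300, 150, 100
beta = rng.normal(size=d) / np.sqrt(d)            # ground-truth signal
X = rng.normal(size=(n, d))
y = X @ beta + 0.5 * rng.normal(size=n)
Xt = rng.normal(size=(2000, d))
yt = Xt @ beta + 0.5 * rng.normal(size=2000)

# Ensemble: average of min-norm least-squares fits on random sub-samples of size k.
fits = []
for _ in range(M):
    idx = rng.choice(n, size=k, replace=False)
    fits.append(np.linalg.pinv(X[idx]) @ y[idx])  # min-norm solution since k < d
beta_ens = np.mean(fits, axis=0)
risk_ens = np.mean((Xt @ beta_ens - yt) ** 2)

# Ridge on the full data: sweep the penalty and record the closest test risk.
ridge_risks = [
    np.mean((Xt @ np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ y) - yt) ** 2)
    for lam in np.logspace(-3, 2, 40)
]
gap = min(abs(r - risk_ens) for r in ridge_risks)
print(f"ensemble risk: {risk_ens:.3f}, closest ridge risk gap: {gap:.3f}")
```

In runs of this kind the smallest gap across the penalty sweep should be a small fraction of the ensemble risk, consistent with the claimed equivalence.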
Strengths: The main strength of this work is the theoretical contributions. In
particular:
- The authors extend existing results on equivalent risk of ridge regression
models and sub-sampled ensembles to new metrics and to the model weights
themselves ("structural results").
- The theorems are proved under relaxed conditions compared to previous work.
In particular, general data distributions are allowed provided a fairly
weak assumption on the moments holds.
- The authors answer an open problem on the behavior of risk for ridge regression
models with the optimal regularization constant.
The paper is also well written, with very few typos. I congratulate the authors
on their polished manuscript.
Related work appears to be correctly cited, although this is not my research
area so it is hard for me to check.
Note that I did not check the theoretical derivations in the appendix so I
cannot comment on their correctness.
Weaknesses: The greatest weakness of this paper is that the theoretical results are
somewhat incremental and unlikely to have an impact outside of learning theory.
In particular,
- The connection between ridge regression and sub-sampled ensembles was
previously established, so that the main contributions of this work
are weakened conditions and new types of equivalences.
- It's not clear how interesting it is to answer the conjecture from
Nakkiran et al. While the authors prove that the risk for the ridge
regression model with optimal regularization constant is monotone
increasing in the ratio $d/n$, this only applies to the asymptotic
regime and so its practical importance may be limited.
- The authors motivate their work by highlighting connections between
ridge regularization and dropout, noisy training, data augmentation,
and early stopping. However, these connections are not developed any further
and I am skeptical the asymptotic equivalences in this submission will
impact those areas.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Line 93: Is this supposed to mean that Assumption 2 defines RMT features?
It's not obvious because the acronym RMT is not defined anywhere and
not used in Assumption 2.
Line 176: I suggest changing the name of Assumption 2 from "Feature Vector Distribution"
to "RMT" features as well as including a definition for the initialism/acronym, since
it isn't stated anywhere.
Line 156 and Definition E.1: Does $C_p$ need to be bounded away from zero?
Otherwise the asymptotic
equivalence definition will be meaningless when $C_p = 0$ almost surely
for every $p$. Or perhaps when you say "any sequence $C_p$" you mean
"every sequence"?
Line 183: Shouldn't this solution be $(\\lambda, \\bar{\\psi})$?
Figures 1/2: The paths between equivalent models don't appear to be linear
in these figures, although Equation 5 seems to indicate that they are always
linear combinations. Is this because the figures are in log-log scale?
Theorem 3: I don't understand what is "structural" or "first-order" in this
theorem compared to Theorem 1. Is this because the estimators themselves
are equivalent, rather than a risk functional of the estimators?
Why is equivalence of risk functionals "second order"?
Theorem 3: "this implies that the predicted values (or even any continuous function
applied to them due to the continuous mapping theorem) of any test point will eventually be the same,
almost surely, with respect to the training data."
The asymptotic equivalence of parameters in Theorem 3 and this
statement seem to imply that Theorem 3 covers both Theorem 1 and Proposition 2
by taking $M = \\infty$ --- is this correct?
If yes, what is the novelty of these two previous results given Theorem 3?
Proposition 4: Is Equation 8 always guaranteed to admit a solution?
Moreover, what is the utility of this result when the equivalence only
holds asymptotically? That is, when $n \rightarrow \infty$ and the
root-finding problem defined by Equation (8) is impractical to solve.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: As mentioned in the "Weaknesses" section, I think the main limitation of this
submission is the impracticality of the main theoretical results.
Since they extend and generalize a previous result showing asymptotic equivalence
of the risk, I do not see the theoretical derivations having an impact on practice.
Furthermore, the connections to high-interest topics in ML like early stopping
and dropout seem tenuous at best.
Since I am not actively involved in learning theory research, I cannot
comment on the importance of this work for other members of this community.
It would be nice if the authors could provide additional context for their
work, including some comments on the novelty of their proof techniques and so
on. That way I can better understand the impact on this specific research community.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for the detailed constructive feedback! We appreciate the careful reading and questions.
Below we comment on the weaknesses within the allowed space.
- **[W1]** While it is true that the connection between ridge regression and sub-sampled ensembles has been previously established, our work significantly extends this connection. Apart from generalizing results to hold for general functionals, a key aspect of our results compared to prior work is that we do not assume any model for the response for any of these equivalences. We also establish structural equivalences, which were not studied in the prior works. This allows our theoretical results to be applicable to practical data analysis. Furthermore, our data-dependent method for determining equivalent paths is another novel contribution, which we believe has practical implications as it allows one to determine the level of ``induced'' ridge regularization based on the available data. Please also see our response to **[Q1]** of reviewer **DNS5** for technical novelties and to **[W1]** of reviewer **nskG** for practical impact.
- **[W2a]** Many common methods, such as ridgeless or lassoless regression, have been recently shown to exhibit non-monotonic behavior in the sample size or the limiting aspect ratio. In this regard, the conjecture from Nakkiran et al. is interesting because non-monotonic risk behavior implies that more data can hurt performance, and it is important to investigate whether optimal ridge regression also suffers from this issue.
- **[W2b]** The conjecture on the monotonicity of optimal ridge regression is of interest, even in the asymptotic regime. This is because the monotonicity/non-monotonicity is largely governed by the structure and relationship between the bias and variance at various regularization levels. Understanding the bias-variance tradeoff at the optimal regularization is not as affected by the finite-sample effects. Furthermore, as we have empirically verified, even for $n$ and $p$ on the order of 100, one starts to observe the asymptotic behavior. The asymptotic approach simplifies the proofs and allows us to focus on the essential characteristics of the problem, under minimal assumptions. The extension of our results to the finite sample regime is possible but requires additional assumptions and depends on the specific nature of the distribution of features and response. See also response to **[W2]** of reviewer **DNS5** for more details.
- **[W3]** Our mention of connections to dropout, noisy training, data augmentation, and early stopping was to highlight the broad relevance of our focus on ridge regularization and subsampling. While the direct application of our asymptotic equivalences to these areas is not immediately clear, we believe our work provides insights that could indirectly inform these areas and inspire future work. For instance, understanding the trade-offs between subsampling and ridge regularization could potentially inform more effective strategies for dropout or data augmentation.
Below we address the questions.
- **[Q1]** We define these precisely in the discussion after Assumption 2. We will clarify this in the revision.
- **[Q2]** We will make the names of the assumptions "Moment-bounded response" and "RMT features". We will also mention the acronyms explicitly when they first appear.
- **[Q3]** The definition requires the convergence "for every sequence $C_p$ that is uniformly bounded".
- **[Q4]** For (4), we fix $\bar{\psi}$ and try to determine the values of $\bar{\lambda}$ and $v$.
Here the tuple $(\bar{\lambda},v)$ is a solution to (4) when $\bar{\psi}$ is fixed.
- **[Q5]** Yes, the figures are in the log-log scale, only for better illustration purposes. We mention this in both captions of the submitted paper.
- **[Q6]** Yes, "structural equivalence" means that estimators themselves are equivalent, rather than a risk functional of the estimators. Theorem 3 states that ${\mathbf c}^{\top}(\hat{\mathbf{\beta}}\_1 - \hat{\mathbf\beta}\_2) \xrightarrow{a.s.} 0$ for every constant vector ${\mathbf c}$ with bounded norm. This is a linear functional of the difference, so we view this equivalence as a "first-order" result. On the contrary, Theorem 1 and Proposition 2 compare the distance between two estimators to the ground truth ${\mathbf\beta_0}$, i.e., $\|\|\hat{\mathbf\beta}\_1 - {\mathbf\beta_0}\|\|\_{\mathbf A}^2 - \|\|\hat{\mathbf\beta}\_2 - {\mathbf\beta_0}\|\|\_{\mathbf A}^2 \xrightarrow{a.s.} 0$ where $\mathbf A$ is a weight matrix. Hence the latter is in the "second-order" sense.
- **[Q7]** Theorem 3 does not imply Theorem 1 and Proposition 2. When ${\mathbf A} = {\mathbf c}{\mathbf c}^{\top}$, the latter reduces to $({\mathbf c}^{\top}\hat{\mathbf\beta}_1)^2 - ({\mathbf c}^{\top}\hat{\mathbf\beta}_2)^2 \xrightarrow{a.s.} 0$. But for general $\mathbf A$, there is no direct relationship between the two. For instance, when $\mathbf A=\mathbf I_p$, the fact that each coordinate of $\hat{\mathbf\beta}_1 - \hat{\mathbf\beta}_2$ converges to zero almost surely does not imply that $\|\|\hat{\mathbf\beta}_1 - {\mathbf\beta_0}\|\|_2^2 - \|\|\hat{\mathbf\beta}_2 - {\mathbf\beta_0}\|\|_2^2 = \|\|\hat{\mathbf\beta}_1\|\|_2^2 - \|\|\hat{\mathbf\beta}_2\|\|_2^2 - 2{\mathbf\beta_0}^{\top}(\hat{\mathbf\beta}_1 - \hat{\mathbf\beta}_2)$ also converges to zero almost surely.
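A toy numerical illustration (my own, not from the rebuttal) of why coordinate-wise convergence does not control quadratic functionals: take the estimator difference to be $p^{-1/2}\mathbf{1}$, so every coordinate vanishes as $p$ grows while the squared norm stays at 1.

```python
import numpy as np

for p in (10, 1_000, 100_000):
    diff = np.full(p, p ** -0.5)       # stand-in for the estimator difference
    max_coord = np.abs(diff).max()     # -> 0 as p grows (first-order view)
    sq_norm = float(diff @ diff)       # stays at 1 up to float rounding (second-order view)
    print(f"p={p}: max coordinate {max_coord:.5f}, squared norm {sq_norm:.5f}")
```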
- **[Q8]** Yes, (8) always has at least one solution. This is because the RHS of (8) is monotonically decreasing in $\bar{\lambda}_n$, and the LHS always lies within the range of the RHS. One can solve (8) for a given set of $n$ and $p$. Since the equivalence holds for sufficiently large $n$ and $p$, we can solve (8) in finite samples. While it is true that the theorem statement at the moment does not say anything non-asymptotic, we expect that one can derive a high-probability statement that explicitly bounds the difference between the two asymptotically equivalent quantities.
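Since the RHS of (8) is monotonically decreasing in $\bar{\lambda}_n$ and the LHS lies within its range, a solution can be located by simple bisection. The sketch below uses a hypothetical decreasing function as a stand-in for the RHS, since the actual equation (8) is not reproduced here.

```python
def solve_monotone_decreasing(f, target, lo, hi, tol=1e-10):
    """Bisection for f(lam) = target when f is monotonically decreasing on [lo, hi]."""
    assert f(lo) >= target >= f(hi), "target must lie in the range of f"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) >= target:
            lo = mid   # f is decreasing, so the root lies to the right
        else:
            hi = mid   # the root lies to the left
    return 0.5 * (lo + hi)

# Hypothetical stand-in for the RHS of (8); any decreasing function works the same way.
f = lambda lam: 1.0 / (1.0 + lam)
lam_star = solve_monotone_decreasing(f, target=0.25, lo=0.0, hi=100.0)
print(round(lam_star, 6))  # f(3) = 0.25, so this prints 3.0
```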
---
Rebuttal Comment 1.1:
Comment: Many thanks for responding to my review and answering my questions.
**[W2b]** Right, I agree that this can be interesting. However, it is a bit
strong to say "more data can hurt performance" when the results only apply to
the limiting ratio of $d/n$. It seems more accurate to say "more data relative
to features". I also think it is appropriate to qualify the claim of having
resolved this open question as per Reviewer DNS5's comment that this applies to
the proportional limit regime only.
**[Q6]** and **[Q7]** Thanks for clarifying these issues. I now follow why
these are first-order or "structural" results. I think it would be useful to
remind unfamiliar readers of the type of convergence proved after Theorem 3,
i.e. that it is not almost sure convergence of the estimator difference
but of linear functional of the difference.
**[Q8]** Great. I think this is worth commenting on in the paper.
Overall, I think this is a nice submission. The subject area is niche, but the
paper is well-executed and the author response has been helpful. I will
consider raising my score after the discussion with the other reviewers. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work compares the ridgeless ensemble and the ridge estimators in the proportional limit setting (i.e., $d/n\to \phi$). Prior works [11,12,13] show that these two estimators achieve the same out-of-sample risk. The main contribution of this paper has been to (1) weaken the assumptions and (2) broaden the equivalence from out-of-sample risk to other risks such as empirical risk, in-sample risk, and transfer learning risk. Based on this theory, this work also shows that the out-of-sample risk achieved by the optimal ridge estimator monotonically increases as a function of the data aspect ratio (i.e., $\phi$).
Strengths: + Excellent presentation. I have enjoyed going through the paper.
+ Compared to [11,12,13], this work has shown equivalence between ridgeless ensemble and ridge estimators in a broader sense and under weaker assumptions. Notably, in this work, the data is allowed to be misspecified and the limiting covariance spectrum needs not to exist, and the proved equivalence holds for transfer learning risk, in-sample risk, empirical risk beside out-of-sample risk.
+ Section 5 is a neat addition to the main results, showing that the optimal ridge induces an out-of-sample risk increasing with respect to the data aspect ratio.
Weaknesses: - The Conjecture 1 in [1] is stated for every finite $n$ and $p$. Section 5 in this work only applies to the proportional limit setting, where $p/n\to \phi, n,p\to\infty$ for a finite $\phi$. While I think Section 5 is very interesting, I am not sure it is proper to claim that "This resolves a recent open problem raised by Nakkiran et al. [1] under general data distributions and mild regularity conditions."
- The theory is limited to the proportional limit regime. To what extent the theory holds in the finite sample regime is unclear.
- The technical novelty could be further clarified.
- I tried to read the proof and believe they are mostly correct. However, there are a number of typos that confuse me (and prevent me from spending more time checking the proof). For an incomplete list:
1. The equation after Line 514. Missing a factor of 2 in the cross-term.
2. Line 520. $B = AX - I$, missing $-I$.
3. Line 534. $v(- \lambda_1; \psi_1) = v(- \lambda_2; \psi_2)$. $\lambda_2$ instead of $\lambda_1$.
4. The first inequality after Line 622. Left hand side of the inequality, $R(0; \phi, \psi)$ should be $R(0; \phi_1, \psi)$?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: From my quick glance, it seems the proof methods are largely built upon existing works such as [12] and [13]. I understand that this work has derived broader equivalence results under weaker assumptions compared to [11,12,13]. Would you please clarify what are the new ingredients in this work that allow this?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Sincere thanks for the positive feedback, insightful comments, and the list of typos.
We are happy that you enjoyed reading our paper and appreciated the presentation (that we paid special attention to while writing the paper).
To echo your sentiment, results in Section 5 are also our favorite!
Below we will first provide our responses to the weaknesses raised (abbreviated by **[W]**).
- **[Response to W1]** _(Finite-sample monotonicity of optimal ridge)_:
Thanks for the comment.
We agree that our claim regarding the resolution of the open problem (Conjecture 1) raised by Nakkiran et al. in [NVKM21] is perhaps overly broad.
Our results in Section 5 indeed apply in the proportional limit setting, where $p/n \rightarrow \phi$ for $\phi \in (0, \infty)$ as $n, p \rightarrow \infty$.
We acknowledge that this does not cover every finite $n$ and $p$.
We will add the qualifier that the results in Section 5 resolve the conjecture in the proportional asymptotic sense.
However, we expect that all the asymptotic statements in the paper can be converted to non-asymptotic statements with distributional assumptions on the feature and response distributions.
Please see our response to **[W2]** for more details.
In that sense, we expect that our Theorem 6 largely covers the meat of the general monotonicity claim, though of course the finite-sample analogues are also important.
We will make sure to point it out.
- **[Response to W2]** _(Non-asymptotic analogues of equivalences)_:
Thanks for the feedback.
Indeed, our current theoretical framework is primarily developed under the proportional limit regime.
This approach simplifies the proofs and allows us to focus on the essential characteristics of the problem.
Our empirical results, as shown in Figure 3 of the paper, suggest that our results hold even for finite $n$ and $p$.
These empirical findings provide practical validation of our theoretical results beyond the proportional limit regime.
The extension of our results to the finite sample regime is possible, but requires additional assumptions.
The precise error bounds in the finite sample regime would depend on the specific nature of the distribution of features and response.
We expect techniques of [KY17], [CM22], [WHS22], among others, to provide non-asymptotic versions of the main statements in our paper.
- **[Response to W3]** _(Clarification of novelties)_: We will briefly recall our novelties below, compared to previous related works:
- Generalized risk framework:
We consider a generalized risk framework that allows us to evaluate the performance of ensemble ridge estimators relative to the oracle parameter.
This is a significant departure from previous works that only focus on specific risk functionals.
- Structural equivalences:
We show structural equivalences in the form of linear functionals of the ensemble ridge estimators. This aspect is not explored in any of the previous papers.
- Data-dependent method for equivalent paths:
We provide a data-dependent method to determine the equivalent paths of $(\lambda, \phi)$.
We believe this is an important contribution as it allows us to apply our theoretical findings in practice.
- Technical novelties: The new technical tools we develop allow us to relax the assumptions on features and responses.
See also response to **[Q1]** below for more details on technical novelties.
- **[Response to W4]** _(Typos)_
Thanks so much for the list.
We have corrected these in our revision.
Next we address the question (abbreviated by **[Q]**).
- **[Response to Q1]** _(Clarification of technical novelties)_: While it is true that our work builds upon the foundations laid by previous studies, including [12] and [13], it is important to note that our contributions are not merely extensions of these works and require several technical novelties. We elaborate below.
To derive the equivalence result under minimal assumption on the feature distribution and feature covariance, we use the notion of asymptotic equivalence to compare sequences of matrices of arbitrary dimensions. Using asymptotic equivalences and associated calculus (instead of computing the actual risks) lets us bypass the assumptions on features.
To accommodate any nonlinear dependence structure between $y$ and $\mathbf{x}$, we generalize the linear and quadratic concentration results to allow for arbitrary models. In contrast, in all previous works, a well-specified linear model $y=\mathbf{x}^{\top}\beta_0 + \epsilon$ is assumed. At a high level, the proof of these results requires rank-2 perturbations of the ridge resolvents to separate out the dependent and independent parts, and exploits the uncorrelatedness between the features and the nonlinear residuals (after projecting out the linear part). Please see Appendix D of the submitted paper for more details.
**References**
[NVKM21] Preetum Nakkiran, Prayaag Venkat, Sham M. Kakade, and Tengyu Ma. Optimal regularization can mitigate double descent. In International Conference on Learning Representations, 2021.
[KY17] Antti Knowles and Jun Yin. Anisotropic local laws for random matrices. Probability Theory and Related Fields, 2017.
[CM22] Chen Cheng and Andrea Montanari. Dimension free ridge regression. arXiv:2210.08571, 2022.
[WHS22] Alexander Wei, Wei Hu, and Jacob Steinhardt. More than a toy: random matrix models predict how real-world neural representations generalize. In International Conference on Machine Learning, 2022.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: Thank you for your response.
This paper makes several extensions to prior works on the equivalence between ridgeless ensemble and ridge. However, the theoretical results in this work are still limited to the proportional regime as in prior works. So the contribution of this work is not super high given prior works from my perspective. Also, it is worth noting that this work partially resolves Conjecture 1 in [1] in the proportional limit regime (rather than in the finite-sample/finite-dimensional regime).
Therefore, I'd like to maintain my initial review and rating. | null | null | null | null | null | null |
Information Theoretic Lower Bounds for Information Theoretic Upper Bounds | Accept (poster) | Summary: This paper provides a lower bound on the mutual information between the output weights and the input training data in the context of stochastic convex optimization to examine the tightness of the mutual information-based generalization bound. It is shown in the paper that mutual information grows with the dimension of the parameter, which implies that existing information-theoretic generalization bounds fall short in capturing the generalization capabilities of SGD and regularized ERM, which have dimension-independent sample complexity.
Strengths: It is important to consider the lower bound (converse) result to understand the fundamental limits of information-theoretic analysis in characterizing the generalization of learning algorithms. To my understanding, pessimistic analysis can be more insightful than some ad-hoc method for tightening existing bounds.
Weaknesses: 1. It is hard for me to understand why Theorem 1 holds for any learning algorithm. Let us take the Gibbs algorithm considered in Section 4.3 of (Xu and Raginsky [2017]) and further investigated in the following paper as an example.
Aminian, Gholamali, Yuheng Bu, Laura Toni, Miguel Rodrigues, and Gregory Wornell. "An exact characterization of the generalization error for the Gibbs algorithm." Advances in Neural Information Processing Systems 34 (2021): 8106-8118.
The above reference provides an exact characterization of generalization error using information measures. It has been shown in their proof of Theorem 2 that the mutual information $I(S;W_S)$ can be upper bounded by $O(1/n)$, which is independent of $d$. Thus, it is hard for me to digest the result presented in Theorem 1.
2. This paper mainly criticizes the tightness of the MI bound in (Xu and Raginsky [2017]) and the CMI bound in (Steinke and Zakynthinou [2020]). However, these issues are well-known in the literature. For example, MI can be infinite for deterministic learning algorithms. There are results that tighten these bounds by considering the MI (or CMI) between W and each individual sample $Z_i$ (see below). Is there a way to show that these bounds also suffer from the same issue described in the paper?
Bu, Yuheng, Shaofeng Zou, and Venugopal V. Veeravalli. "Tightening mutual information-based bounds on generalization error." IEEE Journal on Selected Areas in Information Theory 1, no. 1 (2020): 121-130.
Zhou, Ruida, Chao Tian, and Tie Liu. "Individually conditional individual mutual information bound on generalization error." IEEE Transactions on Information Theory 68, no. 5 (2022): 3304-3316.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Line 212, to my understanding it should be d^{1/7}/m^{6/7}. Please double-check.
2. Line 280, it is said that CMI_m(A)=O(m). Can you provide a reference or insights into why it is the case?
3. Line 20 “inorder” should be “in order”.
4. The term suboptimality is often referred to as excess risk in the literature.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: No potential negative societal impact was observed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your detailed review. It seems the main weakness pointed out is an alleged contradiction between the result and existing upper bounds. There is no such contradiction, and hopefully this will be clarified below; please do ask for further clarification if this is not the case or if other issues remain unresolved.
> It is hard for me to understand why Theorem 1 holds for any learning algorithm. Let us take the Gibbs algorithm considered in Section 4.3 of (Xu and Raginsky [2017]) and further investigated in the following paper as an example.
There is no contradiction between the result in this paper and the analysis of the Gibbs algorithm. Notice that the bound on the information scales with the temperature hyper-parameter (\alpha). In more detail, if the temperature is very low then the Gibbs algorithm behaves like an ERM algorithm and has no generalization guarantees. At the other extreme, if the temperature is very high the algorithm behaves close to purely random, and one can achieve a non-trivial generalization error/MI bound, but in this extreme there will be a trivial train error - and, as a result, a trivial test error.
Notice that Theorem 1 applies only to algorithms with non-trivial **test error**. So there is no contradiction, and the conclusion is that, to obtain a non-trivial train error, the temperature must scale with the dimension. Notice also that it is trivial to obtain small MI bounds for algorithms with trivial test error (e.g. random guessing), so the assumption of small test error is in fact necessary here.
> However, these issues are well-known in the literature. For example, MI can be infinite for deterministic learning algorithms. There are results tightening these results by considering the MI (or CMI) between W and each individual sample Z_i (see below). Is there a way to show that these bounds also suffered from the same issue described in the paper?
Thanks for the reference. First, the issue shown in this paper is not that there are algorithms that carry information, but that it is necessary to carry information for learning. The paper does not rule out the possibility of exploiting MI-type bounds in general, but simply observes that some excess information is necessary.
Regarding the specific variation proposed in Bu et al.: the argument depicted here can be easily applied to their setup. As this was asked by several reviewers, I will provide a proof sketch in a separate official comment (a full derivation can also be provided upon request).
> It is said that CMI_m(A)=O(m). Can you provide a reference or insights into why it is the case?
Given Z, the entropy of S is merely that of the chosen examples (i.e. z_i^1 or z_i^0); this is a r.v. with 2^m possible configurations, hence entropy at most m.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: I thank the authors for providing a detailed response. The rebuttal addresses all my concerns, and I would like to increase my score to 6.
Although the review process does not allow the reviewer to enforce any required changes, I would like to emphasize that the authors should include the discussion on the Gibbs algorithm and the result of individual samples bound into the final version of the main paper. Also, it would be insightful to add a concluding section to discuss the interpretation of this lower-bound result and possible future directions. I feel that this kind of converse analysis can be very helpful in improving the existing understanding of generalization, and I
echo the author: "The paper does not rule out the possibility of exploiting MI-type bounds in general, but to simply observe that some excess information is necessary for some distributions."
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: Thank you very much,
A discussion on the Gibbs algorithm will be further expanded, as will the result on individual samples. Other advice suggested by the reviewer (as well as the rest of the reviewers) will be followed. | Summary: This paper challenges the tightness of some information-theoretic upper bounds on the generalization error in the stochastic convex optimization (SCO) setting (i.e. convex, Lipschitz, bounded loss on a bounded domain). More precisely, it challenges the bound from Xu and Raginsky that (vaguely) states that the generalization error is bounded from above by a function of the mutual information between the dataset and the output of the learning algorithm. With this purpose, the authors derive lower bounds on the mutual information that grow with the dimension of the input space. This, in turn, shows that there is a distribution over an input space such that the mutual information is arbitrarily large. Therefore the mutual information-based bounds are vacuous for that distribution.
These lower bounds are obtained by means of three essential tricks:
1) Noting that one may restrict attention to discrete algorithms and discrete distributions over the inputs. Considering discrete input distributions comes naturally from the fact that they are looking for lower bounds. Restricting to discrete algorithms is possible since (i) the loss is Lipschitz, so the generalization error of a discretization of the parameters incurs at most a constant multiplicative penalty, and (ii) the mutual information can only become smaller, by the data processing inequality.
2) Employing a fingerprinting lemma from Kamath et al. (2019) for random variables $(X\_i)\_{i=1}^m$ taking values in $\lbrace -1, 1 \rbrace$ with $\mathbb{E}[X\_i]=p$. This lemma essentially states that the MSE of an estimator of $p$ decreases the more correlated the error of the estimator is with the error of the empirical mean. That is, a good estimator of $p$ is highly correlated with the empirical mean.
3) Noting that if two random variables $X$ and $Y$ are correlated with $\mathbb{E}[XY] = \beta$, then the mutual information $I(X;Y)$ grows as $\Omega(\beta^4)$.
Simplifying greatly, they construct in this way an example where the loss function is $f(w,z) = \lVert w - z \rVert^2$. Then, by the fingerprinting lemma, any algorithm that attains a good error is necessarily correlated with the empirical mean. Therefore, the mutual information between the output of the algorithm and the empirical mean is bounded from below. Finally, by the data processing inequality, the mutual information between the output of the algorithm and the dataset is also bounded from below. The use of the chain rule ensures that this scales with the dimension of the input.
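Trick 3 can be sanity-checked numerically in the simplest binary case. This toy computation is mine, not from the paper: for a pair of symmetric $\pm 1$ variables with correlation $\beta$, the exact mutual information behaves like $\beta^2/2$ for small $\beta$, which in particular dominates a constant multiple of $\beta^4$.

```python
import math

def mi_correlated_signs(beta: float) -> float:
    """Exact mutual information (in nats) between symmetric +/-1 variables
    X, Y with uniform marginals and correlation E[XY] = beta.
    Joint cells: P(X=Y=+1) = P(X=Y=-1) = (1+beta)/4, and each of the two
    disagreement cells has probability (1-beta)/4."""
    cells = [(1 + beta) / 4] * 2 + [(1 - beta) / 4] * 2
    return sum(p * math.log(p / 0.25) for p in cells if p > 0)

# For small beta the binary MI behaves like beta^2 / 2, so in particular
# it dominates beta^4 / 2, consistent with an Omega(beta^4) lower bound.
for beta in [0.1, 0.3, 0.5, 0.9]:
    assert mi_correlated_signs(beta) >= beta ** 4 / 2
```

This only illustrates the direction of the claim (more correlation forces more mutual information); the paper's Lemma applies to general, not necessarily binary, variables.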
Strengths: The problem studied is interesting. While Bassily et al. (2019) presented some lower bounds of the mutual information for learning thresholds and Haghifam et al. (2022) showed lower bounds of the mutual information, conditional mutual information, and its variants for gradient descent (GD) in the SCO setting, studying general lower bounds for any algorithms in this setting is important.
As mentioned in the summary, the realization of being able to only consider discrete algorithms and the usage of the fingerprinting lemma to find lower bounds on the mutual information of algorithms in the SCO setting is original and interesting. Moreover, formalizing the expected result that correlated random variables have large mutual information is valuable in its own right.
Weaknesses: - The authors sometimes fail to position the related literature correctly.
- They only gloss over Haghifam et al. (2022) in Section 1.1, saying that the paper finds lower bounds on the mutual information (MI) and the conditional mutual information (CMI) of GD and of a variant of GD with a perturbation on the final iterate. They also find lower bounds for the individual-sample versions of the MI and CMI, as well as the evaluated CMI, for the GD algorithm. Moreover, they mention that the bounds have a logarithmic dependence on the dimension. This is not accurate: while the dependence is logarithmic for the perturbed version of GD, for vanilla GD the bounds grow with the square root of the dimension.
- In line 257 it seems they forgot to cite Bassily et al. (2018). In the same sub-subsection (CMI-bounds) it seems that they also forgot to mention the existing lower bounds from Haghifam et al. (2022).
- Also in the CMI-bounds sub-subsection they describe the CMI incorrectly. The CMI is not $I(w\_S;S|Z)$; it is $I(w\_S; U|Z)$, where $U$ is a sequence of Bernoulli random variables whose realization is used to select between $z\_i^0$ and $z\_i^1$ to generate the dataset $S$.
- The paper is not very clear.
- Most of the citations in the paper are not in the correct format (that is, they are not within brackets in the \citep style of natbib, but directly written in the \citet style of natbib), hindering readability.
- Sometimes the sentences are a little overstated: e.g. in line 174 they say "Theorem 1 shows that any algorithm with non-trivial learning guarantees must carry a dimension-dependent amount of information on the sample or require a large sample". This is not completely accurate. To be precise, for every algorithm $A$, there is a distribution $D$ over a space $\mathcal{Z}$ and a convex, Lipschitz function $f$ such that if the algorithm achieves a non-trivial learning guarantee, then it must carry a dimension-dependent amount of information on the sample or require a large sample.
- The sub-subsection CMI-bounds seems a little disconnected. The whole paper focuses on finding lower bounds on the mutual information, showing limitations of these bounds. Haghifam et al. (2022) show that at least for GD one can have a population risk of $O(1/\sqrt{m})$ while having a CMI of $\Omega(m)$, which would go in the direction of obtaining lower bounds. Moreover, it is not clear why $O(1/\sqrt{m})$ is chosen for the population risk and $o(\sqrt{m})$ is chosen for the CMI. The subsampling statement above seems to try to clarify this, but why would one not choose to subsample $O(m^{1/4})$ or $O(\log m)$ examples? There seems to be no justification.
- Lemma 3 is used without any proof in the Appendix of the supplementary material, which is in the template of the submission to another conference whose main text differs in some places from the main text submitted.
- The final part of the paper (from line 333 on) seems disconnected from the rest of the text.
Small mistakes:
- In lines 237-238 it states that "for any algorithm such that $\Delta\_S(w\_S) = O(1/m)$ we will have that the bound in Eq. (2) is order of $\mathbb{E}[\Delta\_D (w\_S)] = \tilde{O}(\sqrt{d/m})$". This is not correct: the bound is of that order, but not $\mathbb{E}[\Delta\_D (w\_S)]$.
- In Appendix A, it should be $\lbrace -1 / \sqrt{d}, 1 / \sqrt{d} \rbrace$, right?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Why is there a vector $p$ such that (22) holds for $d / (10^6 m \epsilon)$ of the coordinates? Why is it not possible that fewer coordinates satisfy it, with the value being much larger than $1/10^8$?
- Why can you employ the data processing inequality after (24)? $X\_p(t)$ depends on both $Z\_i$ and $W\_S$, and $Y\_p(t)$ depends on $W\_S$?
- Could the techniques from this paper be employed to also find lower bounds on stronger information-theoretic bounds on the generalization such as the individual MI from Bu, Zhou, and Veeravalli (2020)?
**References**
Y. Bu, S. Zou, and V. V. Veeravalli. "Tightening Mutual Information-Based Bounds on Generalization Error". IEEE Journal on Selected Areas in Information Theory, 2020.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Some limitations are touched on in the comparison of the results with uniform convergence bounds in lines 239-256. However, there is not a large discussion of the limitations of the paper. For instance, what happens if an algorithm chooses another loss function? or, could one just consider a stronger information-theoretic notion such as the individual MI and then achieve bounds with a good rate?
Maybe the space from lines 333 onwards could be used to reflect more on these issues in the Discussion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thanks for the review. Hopefully, the rebuttal will convince the reviewer that there is no reason for such a low score, especially given that the reviewer identifies strengths (an important problem and interesting techniques) and that the weaknesses can be easily addressed, requiring only some small formatting changes, further elaboration, and a few citations.
Below we address what seem to be the most pressing issues; further elaboration can be provided if needed.
> They only gloss over Haghifam et al. (2022) in Section 1.1
The paragraph will be changed to clarify that they also obtain bounds for other variants. The final sentence will be clarified to state that the logarithmic bound applies to the perturbed version.
> In line 257 it seems they forgot to cite Bassiliy et al. (2018) …. it seems that they also forgot to mention the existing lower bounds from Haghifam et al. (2022).
A discussion of the work of Bassily et al., in the context of CMI, will be added here, and the work of Haghifam et al. will also be reiterated.
> Also in the CMI-bounds sub-subsection they describe wrong the CMI.
Thanks for the clarification; this detail can be fixed. This is an inaccuracy in the discussion part; it is self-contained and has little to no effect on the rest of the paper (which does not deal with the CMI bound).
> Sometimes the sentences are a little overstated: e.g. in line 174 they say "Theorem 1 shows that any algorithm with non-trivial learning guarantees must carry a dimension-dependent amount of information on the sample or require a large sample".
This sentence appears in the discussion section, after stating the exact statement, and after establishing the paper's distribution-independent setup (a standard arrangement) in previous sections. In this context, which is distribution independent, there's no overstatement. The sentence, though, can be rephrased as: "any algorithm with distribution-independent nontrivial guarantees..."
>Lemma 3 is used without any proof in the Appendix of the supplementary material, which is in the template of the submission to another conference whose main text differs in some places from the main text submitted.
Thanks for this input; there was indeed a mistake in the main-text version that was attached to the appendix, apologies for the inconvenience and confusion. This will be fixed, of course, and the proof of Lemma 3 will be added to the appendix. For now, the proof does appear in the current reviewed manuscript, directly under the statement of Lemma 3 in the supplementary material.
Questions:
>Why is there a vector $p$ such that (22) holds for $d/(10^6 m\epsilon^2)$ of the coordinates? Why is it not possible that fewer coordinates satisfy it, with the value being much larger than $1/10^8$?
Notice that in (22) we take expectation only over the sample, i.e. one should think of (22) as an event that depends on the realization of $(p,t)$. Thus, if for every $p$, (22) holds for fewer than $d/(10^6 m\epsilon^2)$ of the coordinates, then for every $p$, the probability of (22) holding, conditioned on $p$ and w.r.t. $t$, is less than $1/(10^6 m\epsilon^2)$. Then the probability (w.r.t. $p$ and $t$) of (22) holding is less than $1/(10^6 m\epsilon^2)$, which is a contradiction.
>Why can you employ the data processing inequality after (24)?
Notice that $X_p(t)$ doesn’t depend on $w_S$ as you write. (It depends on the variance of $w_S$, but that is not a random variable; it is a fixed number determined by $p$ and $t$.)
>Could the techniques from this paper be employed to also find lower bounds on stronger information-theoretic bounds
Yes, thank you very much for this question and for pointing out the bound of Bu et al. Because several reviewers inquired about this, I will add an additional official comment that discusses it (and a full derivation can be added upon request).
---
Rebuttal Comment 1.1:
Title: Answer to rebuttal
Comment: Thank you for your rebuttal.
I agree with the authors that the score given was lower than needed. My main motivation for the score was the lack of a proof of Lemma 3, which in turn was needed to prove Lemma 2, which was needed to prove Theorem 1 (the main result of the paper). Admittedly, having the supplementary material in a different format, with a main text very similar to (but in places distinct from) the submitted main text, confused me. It made me look only at the Appendix of the pdf given as the supplementary material. Now I realize that the proof of Lemma 3 **is given** in the main text of the supplementary material, something I missed in the first round. The proof seems correct. This essentially dissipates my main concern. I apologize for missing that and I will take it into account for the final evaluation.
Most of my minor issues and questions are also addressed.
Some smaller things:
* My question and doubts about the CMI subsection remain. Could you discuss them a little further?
* I still believe the statement in 174 is overstated. The statement "any algorithm with non-trivial learning guarantees must carry a dimension-dependent amount of information on the sample or require a large sample" seems to imply that this always happens. But it is only shown that there is at least a situation where it happens. To me something of the style "for any algorithm with non-trivial learning guarantees, there is at least one scenario where it must carry a dimension-dependent amount of information on the sample or require a large sample" would represent better the presented result. Do you agree?
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: Thanks, and again apologies for the confusions in the uploaded manuscript.
> My question and doubts about the CMI subsection remain. Could you discuss them a little further?
A negative answer to the open problem will show that any sample-efficient learning algorithm must carry at least $\Omega(\sqrt{m})$ CMI, and in turn that the CMI generalization error bound is no tighter than $O(1/\sqrt[4]{m})$. An $\Omega(\sqrt{m})$ lower bound would then be analogous to the $\Omega(d)$ limitation of the MI bound.
The discussion above the open problem shows that, unlike for MI, $\Omega(\sqrt{m})$ is not necessary for non-trivial population loss (by subsampling, for example, $\log m$ examples): but that is at the expense of the training error -- which is also a bound on the population error. The author conjectures that the training error plus the CMI bound will always be larger than $\Omega(1/\sqrt[4]{m})$. A weaker conjecture (which is the focus of the open problem and captures the important cases) is that just for the optimal learning algorithms, the above bound (training error + CMI) will be larger than $\Omega(1/\sqrt[4]{m})$.
Granted, there are other open problems to be resolved here. For example, as suggested by the reviewer, to investigate the feasible tuples (training error, CMI bounds), and whether subsampling provides the pareto-optimal frontier.
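To make the conjectured trade-off concrete, here is a stylized numerical sketch (my own illustrative model, not the authors' analysis): assume subsampling $k$ of the $m$ examples yields a training-error term of order $1/\sqrt{k}$ and a CMI generalization term of order $\sqrt{k/m}$. The sum is then minimized at $k \approx \sqrt{m}$, giving the $\Theta(1/\sqrt[4]{m})$ rate behind the conjecture.

```python
import math

def combined_bound(k: int, m: int) -> float:
    # Stylized model: training error ~ 1/sqrt(k) plus a CMI-type
    # generalization term ~ sqrt(k/m) after subsampling k examples.
    return 1.0 / math.sqrt(k) + math.sqrt(k / m)

m = 10_000
best_k = min(range(1, m + 1), key=lambda k: combined_bound(k, m))

# The minimizer is k = sqrt(m) = 100, with value 2 * m**(-1/4) = 0.2,
# i.e. neither very aggressive (log m) nor very mild subsampling is optimal.
assert best_k == 100
assert abs(combined_bound(best_k, m) - 2 * m ** -0.25) < 1e-9
```

Under this toy model, subsampling $O(\log m)$ examples keeps the CMI term tiny but inflates the training-error term, which is exactly the tension described above.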
> "for any algorithm with non-trivial learning guarantees, there is at least one scenario where it must carry a dimension-dependent amount of information on the sample or require a large sample"
The sentence can be rephrased as such. | Summary: This work considers the setting of stochastic convex optimization in $\mathbb{R}^d$ with learning algorithms that achieve less than $\epsilon$ expected excess risk when at least $m(\epsilon)$ examples are given. The main result of this work (Theorem 1) states that such algorithms have $\tilde{\Omega}\left(\frac{d}{\epsilon^5 m(\epsilon)^5}\right)$ amount of Shannon mutual information between the input and the output (i.e., $I(w_S; S)$ with $w_S$ being the output on input sample $S$). The significance of this lower bound is that there is a linear dependence on dimension $d$, which renders the bound of Xu and Raginsky [2017] dimension-dependent in case $m(\epsilon)=o(d^{1/5})$. For example, if one considers minimax optimal learning algorithms (that have $\frac{1}{\epsilon}$ or $\frac{1}{\epsilon^2}$ sample complexity depending on assumptions about the loss function), the generalization gap bounds with input-output mutual information become dimension dependent and can be vacuous when $d \gg m$.
Strengths: **Strength #1: Significance.** Recently information-theoretic generalization bounds have gained a lot of attention, partly because they are algorithm and distribution-dependent and some variants of them are nonvacuous in practical setting for deep learning. Understanding limitations of information-theoretic generalization bounds is thus important and relevant for the NeurIPS community. The lower bound derived in this work is a good contribution to the list of already known limitations.
**Strength #2: Originality.** To my best knowledge the derived lower bound is novel. The related work is cited adequately.
Weaknesses: **Weakness #1: Soundness.** I couldn't verify some parts of the proof of Theorem 1. Some clarifications are needed.
- In the line after equation (17), "Next, for fixed $p$ we define two random variables": Is $p$ a vector here or a constant? Should it be $\frac{1-9p(t)^2}{9-9p(t)^2}$ instead?
- In the line after equation (18), "Applying Lemma 1, ...": It was hard for me to follow the proof. I suggest to clearly write what $f$ is, verify that $f$ has the signature $f: \\{-1,1\\}^m \rightarrow [-1/3, +1/3]$ as required by Lemma 1, and clearly write what $X_i$ of Lemma 1 are in your construction. As I understand $X_i$ of Lemma 1 correspond to $\sqrt{d} z_i(t)$, but in that case $\hat{p}(t)$ is not a function of $\\{\sqrt{d}z_i(t)\\}_{i=1}^m$.
- In the line after equation (20), "Write $Z = \mathbb{E}_{S\sim D^m(p)}\left[ X_p(t) Y_p(t)\right]$": I suggest to use another letter instead of $Z$, which is already used to denote the input. Additionally, in order to apply Paley-Zygmund inequality, $Z$ needs to be positive. I think this should be verified. As I understand, the algorithm can be such that it fails to be positively correlated with the mean for some choices of $p$. For those choices of $p$, $Z$ can be zero or negative.
- I could verify the proof after equation (21).
**Weakness #2: Clarity & Presentation [minor].** The paper would benefit a lot from improvements in terms of clarity and presentation. It was hard for me to read and understand the details, as there were many small mistakes, some parts of derivations were not detailed enough, and some parts of the main text made sense only after reading the later parts of the paper. Please see more detailed comments and suggestions below.
**Weakness #3: Limitations [minor].** Deriving a lower-bound for the input-output mutual information is does not have strong implications about the line of information-theoretic generalization bounds.
It is well-known that information-theoretic generalization bounds that depend on input-output mutual information have many limitations. For example, they can be infinite for well-generalizing continuous learning algorithms. For this reason many techniques of improving such bounds were proposed.
1. Bu et al. [1] derived an improved expected generalization gap bound that depends on $\sum_i I(w_S; Z_i)$ rather than $I(w_S; S)$ with $S=(Z_1,\ldots,Z_m)$ being the sample.
2. As discussed in the paper, there are conditional mutual information bounds that are dimension independent. These CMI bounds of Steinke and Zakynthinou have been improved a lot subsequently: (a) deriving sample wise bounds $\sum_i I(w_S; J_i | \tilde{Z})$ (where $J$ is the train-test split variable and $\tilde{Z} \in \mathcal{Z}^{n\times 2}$ is the supersample) [2], (b) conditioning on individual pair of supersample examples: $\sum_i I(w_S; J_i | \tilde{Z}_i)$ [3]; (c) measuring information in function space [4]; and (d) leave-one-out bounds [5].
To my understanding, the result of this paper does not extend to these stronger bounds.
Final note: the main determinant of the score assigned below is the weakness #1. I am willing to increase the score if the concerns are properly addressed.
**References**
[1] Y. Bu, S. Zou, and V. V. Veeravalli. Tightening mutual information-based bounds on generalization error. IEEE Journal on Selected Areas in Information Theory, 2020.
[2] M. Haghifam, J. Negrea, A. Khisti, D. M. Roy, and G. K. Dziugaite. Sharpened generalization bounds based on conditional mutual information and an application to noisy, iterative algorithms. NeurIPS 2020.
[3] H. Harutyunyan, M. Raginsky, GV. Steeg, and A. Galstyan. Information-theoretic generalization bounds for black-box learning algorithms. NeurIPS 2021.
[4] B. Rodríguez-Gálvez, G. Bassi, R. Thobaben, and M. Skoglund. On random subset generalization error bounds and the stochastic gradient Langevin dynamics algorithm. IEEE ITW 2021.
[5] M. Haghifam, S. Moran, DM. Roy, and GK. Dziugiate. Understanding generalization via leave-one-out conditional mutual information. IEEE ISIT 2022.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - Please use `\citep` or `\citet` (depending on whether authors are a part of the sentence or not) rather than `\cite`.
- Lines 74-78: Need to be rewritten to improve clarity.
- Line 116: Calling $\Delta_S(w)$ "empirical risk" is confusing. It is better to call it "excess empirical risk".
- Line 116: "With the optimality" -> "with the suboptimality".
- Line 117: Typo in "Leranbility".
- Lines 117-118: As I understand from the expectation in between lines 121-122, only deterministic algorithms are considered. Does the main result of Theorem 1 extend to stochastic algorithms?
- Just below line 187: it should be $\le$ in the first equation line.
- Appendix A. Proof of Theorem 1, first paragraph: It should be $z \in \\{-1/\sqrt{d}, 1/\sqrt{d}\\}$. Also, in "each coordinate z(t) is chosen uniformly" the word "uniformly" should be removed or replaced by "independently".
- Lines 200-201: It is clear what is the main message here, but the information-theoretic bound applies to the generalization gap and should not be directly compared with the suboptimality rate.
- In equation between lines 212 and 213, it should be $d^{1/7}$.
- The CMI bound in eq. (7) instead of $I(w_S; S | Z)$ it should be $I(w_S; J | Z)$ where $J$ denotes the selection variables. Note that $I(w_S; J | Z) = I(w_S; S | Z) + I(w_S; J | Z, S)$ and the last term is not always zero (e.g., when $\mathcal{Z}$ is a finite domain with small size).
- Line 282: It should be $O(\sqrt{f(m)/m})$.
- Lines 280-286: Subsampling will result in dimension-independent *generalization gap* bounds, but one might not be able to convert that into an excess risk bound.
- In a few places throughout the paper we see "true risk"; does it refer to the excess risk (suboptimality) or the expected loss?
- In the equation below line 315, there should be parentheses for the logarithm to indicate that the $\frac{c^2}{\beta^2}$ part is inside the logarithm.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: See weakness #3.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your feedback and comments. The proofs are correct, but certain typos have been identified by the reviewer, and indeed a certain assertion was neglected that could clarify things (see below). If further clarifications are in order, they can be provided.
> In the line after equation (17), "Next, for fixed p we define two random variables": Is a vector here or a constant?
Thanks! You are right; it is a typo. It should be $p(t)$ instead of $p$.
> In the line after equation (18), "Applying Lemma 1, ...": It was hard for me to follow the proof. I suggest to clearly write what
Thanks for pointing out the need for further elaboration here. What is missing is the statement that without loss of generality we can assume that $|w(t)| \le 1/(3\sqrt{d})$. Indeed, truncating $w(t)$ will only diminish the loss and will not add information. This will also mean that $f$ is bounded as required.
Next, as you correctly point out, to apply fingerprinting, $f$ is $\hat{p}(t)$. $f$ is a function of the $z_i(t)$'s since $w_S$ is a function of $z_i(t)$, and all other r.v.s are independent of $p(t)$, hence the fingerprinting lemma can be readily applied. This part can indeed benefit from a more detailed derivation, but it is well justified.
> in order to apply Paley-Zygmund inequality, Z needs to be positive. I think this should be verified.
Thanks for this remark. Paley-Zygmund is indeed generally formulated so that $Z$ has to be positive. However, it is true that one can apply the inequality to any random variable with positive expectation (i.e., $E[Z]>0$), which follows from Eq. (19).
The proof is straightforward enough to verify here. Indeed, the standard derivation begins with the equality
$Z = Z\cdot 1_{Z\le \theta E[Z]} + Z\cdot 1_{Z> \theta E[Z]}$, hence $E[Z] \le \theta E[Z] + E[ Z\cdot 1_{Z> \theta E[Z]}]$.
This step of course doesn’t require positivity. Next, we apply Cauchy–Schwarz, which also doesn’t require positivity:
$E[ Z\cdot 1_{Z> \theta E[Z]}] \le \sqrt{E[Z^2] P(Z> \theta E[Z])}.$
Rearranging, we obtain
$$(1-\theta) E[Z] \le \sqrt{E[Z^2] P(Z> \theta E[Z])}.$$
Now, taking the square of both sides (which does require $(1-\theta) E[Z]\ge 0$) we obtain the inequality.
This remark will be added to the manuscript, to make it clear for everyone that Paley-Zygmund can be applied here.
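As a quick sanity check of this generalized form (my own toy example, not from the paper), one can verify the conclusion $P(Z > \theta E[Z]) \ge (1-\theta)^2 E[Z]^2 / E[Z^2]$ on a variable that takes negative values but has positive mean:

```python
# Z takes values -1 and 3 with probability 1/2 each:
# E[Z] = 1 > 0 even though Z can be negative, and E[Z^2] = 5.
values, probs = [-1.0, 3.0], [0.5, 0.5]
ez = sum(p * v for p, v in zip(probs, values))       # E[Z] = 1.0
ez2 = sum(p * v * v for p, v in zip(probs, values))  # E[Z^2] = 5.0
theta = 0.5

prob_exceeds = sum(p for p, v in zip(probs, values) if v > theta * ez)  # P(Z > 0.5) = 0.5
pz_bound = (1 - theta) ** 2 * ez ** 2 / ez2          # (1/4) * 1 / 5 = 0.05

# The Paley-Zygmund conclusion holds despite Z not being nonnegative.
assert prob_exceeds >= pz_bound
```

This only checks one instance, of course, but it matches the derivation above: positivity of $Z$ itself is never used, only $E[Z] > 0$.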
> I could verify the proof after equation (21).
While there are a few tedious lines after Eq. (21), they are all justified. Perhaps the reviewer can point out the difficulty here, and it will gladly be clarified.
> It is well-known that information-theoretic generalization bounds that depend on input-output mutual information have many limitations. For example, they can be infinite for well-generalizing continuous learning algorithms.
This is true, but this deficiency is not due to the input-output mutual information, one can artificially encode any information in the model. For example, think of a perfectly well-generalizing algorithm where the model includes storage of all the data that was used in training, such an algorithm will hold information even on the individual samples. The point in this paper is to develop a technique that shows that **every** algorithm must hold information.
> Bu et al. [1] derived an improved expected generalization gap bound
First, thanks for the reference! The bound in Bu et al. indeed requires discussion, and that will be added. Similarly, a dimension-dependent lower bound can be derived for the form of the bound in Bu et al. Since this was asked by several reviewers, a proof sketch will be added in a separate official comment. A full derivation can also be provided upon request.
Regarding further questions, the review does identify certain typos and places where clarification is in order; these will be addressed in the paper, and thanks for pointing them out. If further clarifications are needed, they will gladly be provided.
---
Rebuttal Comment 1.1:
Title: Reviewer response
Comment: Thank you for the clarifications. I recommend to improve the proof of Theorem 1 in future revisions, especially where Lemma 1 is applied. It should be mentioned that $f$ is a function of not only $z_i(t)$, but also of $z_i(t')$ where $t'\neq t$. Therefore, one has to first fix these other variables, verify that Lemma 1 applies, apply Lemma 1, and then take expectation over $z_i(t')$ and $t$.
With these clarifications and corrections in place, this submission is improved and I will adjust my score accordingly.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: Thank you very much.
A revised version will include further elaborations and clarifications following the reviewer's advice. | Summary: The paper shows that there exist stochastic convex optimization problems, which are easy to learn but for which the mutual information based generalization bound of [Xu-Raginsky, 2017] scales at least linearly in the dimension of the parameter space.
Strengths: The paper presents the limitation of mutual information based generalization bound of [Xu-Raginsky 2017] in capturing dimension dependency. The work improves on the previous results of [Haghifam et al 2022] in this respect, in which the lower bound scales logarithmically with dimension and is restricted to gradient descent.
I did not carefully check all proofs, but they look fine. The results of this paper are original. This work is among the very few that point to the limitations of information-theoretic upper bounds for generalization errors of learning algorithms. Such results are important for the learning theory community to develop improved information-theoretic tools that overcome such limitations.
The paper is also very well written. The presentation is clear to this reviewer. The presented results are very well positioned in the related literature.
Weaknesses: NONE
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: NONE
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: NONE
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much! | Rebuttal 1:
Rebuttal: Thank you very much for the thoughtful reviews.
Several reviewers pointed to the work of Bu et al. and asked if the techniques can also be applied to this individual-sample bound; here I provide a proof sketch explaining why the technique easily applies. I will only show the dependence on the dimension and not on the accuracy (similar to the technical overview in the main text) to avoid complications. Of course, a full derivation as done in the paper can also be provided. But the argument here already shows that measuring the mutual information with the individual samples will not lead to dimension-free bounds (and that the proof is almost identical).
The main observation is that applying Lemma 2 we have that:
$$ \sum_{i=1}^m \sqrt{I(w_S(t),z_i(t))} = \sum_{i=1}^m \sqrt{I(\sqrt{d} w_S(t)-P(t),\sqrt{d} z_i(t)-P(t))} \ge \sum_{i=1}^m \Omega (E[(\sqrt{d} w_S(t)-P(t)) \cdot (\sqrt{d} z_i(t)-P(t))]^2)$$
and by convexity:
$$ \sum_{i=1}^m \Omega (E[(\sqrt{d} w_S(t)-P(t)) \cdot (\sqrt{d} z_i(t)-P(t))]^2) \ge m \Omega (E[(\sqrt{d} w_S(t)-P(t)) \cdot (\frac{1}{m}\sum_{i=1}^m \sqrt{d} z_i(t)-P(t))]^2)$$
From here the derivation is the same where we use the fingerprinting Lemma to show that
$$ E[(\sqrt{d} w_S(t)-P(t)) \cdot (\frac{1}{m}\sum_{i=1}^m \sqrt{d} z_i(t)-P(t))] = \Omega (1) $$ and we conclude that
$$ \frac{1}{m}\sum_{i=1}^m \sqrt{I(w_S(t),z_i(t))} = \Omega (1) $$
for each single coordinate. By a proper derivation, we can show that the information accumulated over all coordinates scales with the dimension.
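The convexity step in the sketch is just Jensen's inequality, $\frac{1}{m}\sum_i a_i^2 \ge \left(\frac{1}{m}\sum_i a_i\right)^2$, applied to the per-sample correlations. A quick numerical check (illustrative only; the $a_i$ below are arbitrary stand-ins for the per-sample correlation terms, not quantities from the paper):

```python
import random

random.seed(0)
m = 1000
# a_i stand in for the per-sample correlations
# E[(sqrt(d) w_S(t) - P(t)) * (sqrt(d) z_i(t) - P(t))].
a = [random.uniform(-1.0, 1.0) for _ in range(m)]

mean_of_squares = sum(x * x for x in a) / m
square_of_mean = (sum(a) / m) ** 2

# Jensen / Cauchy-Schwarz: sum_i a_i^2 >= m * (average of a_i)^2,
# which is the step passing from individual-sample terms to the mean.
assert mean_of_squares >= square_of_mean
```

Equality holds exactly when all the $a_i$ coincide, which is why bounding the average correlation suffices to lower-bound the sum of individual-sample terms.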
Bounding the Invertibility of Privacy-preserving Instance Encoding using Fisher Information | Accept (poster) | Summary: In this paper the authors propose (diagonal) Fisher information leakage (dFIL) as a theoretically-grounded and practical framework for assessing the privacy guarantees of instance encoding. At its core, dFIL quantifies the potential invertibility of the encoding mapping of an instance encoding scheme. The authors establish, under mild regularity conditions, that the reciprocal of dFIL, plus the information theorist's Fisher information, serves as a lower bound for the mean squared error (MSE) of any input reconstruction attack (Corollary 1). This lower bound is derived from the so-called van Trees' inequality. Furthermore, the authors present extensive numerical experiments that highlight the key features of dFIL as a privacy metric for instance encoding.
Strengths: 1. dFIL is a very intuitive privacy metric under unbiased reconstruction attacks. In such cases, the Cramér-Rao bound establishes that the reciprocal of dFIL is a lower bound for the MSE of an adversary attempting to reconstruct the raw data based on the encoding.
2. The limitations of dFIL are discussed thoroughly.
3. The paper is well-written.
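The Cramér–Rao relationship behind Strength 1 can be checked numerically with a toy linear encoder (a hypothetical sketch; the encoder, dimensions, and noise level below are illustrative and not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, sigma = 8, 16, 0.5

# Toy encoder Enc(x) = W x + N(0, sigma^2 I). Its Jacobian is W, so the
# Fisher information matrix of the encoding is I(x) = W^T W / sigma^2.
W = rng.normal(size=(k, d))
fim = W.T @ W / sigma**2
dfil = np.trace(fim) / d          # diagonal FIL: (1/d) tr I(x)

# Unbiased reconstruction attack: least squares, which attains the
# Cramér-Rao bound for this Gaussian linear model.
x = rng.normal(size=d)
mses = []
for _ in range(2000):
    y = W @ x + sigma * rng.normal(size=k)
    x_hat = np.linalg.lstsq(W, y, rcond=None)[0]
    mses.append(np.mean((x_hat - x) ** 2))
mse = np.mean(mses)

# Cramér-Rao: per-feature MSE >= tr(I^{-1})/d >= d/tr(I) = 1/dFIL,
# so the empirical attack MSE cannot fall below the reciprocal of dFIL.
assert mse >= 1.0 / dfil
```

The second inequality in the final comment is AM–HM over the eigenvalues of the FIM, which is why 1/dFIL is a valid (if conservative) lower bound on the reconstruction MSE of any unbiased attack.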
Weaknesses: Although dFIL serves as an intuitive privacy metric under unbiased reconstruction attacks, these attacks are uncommon in ML applications. In the case of biased attacks, which are more representative of practical scenarios, the main result (Corollary 1) highlights the need to consider not only dFIL but also the information theorist's Fisher information. Consequently, certain practical and theoretical concerns are shifted to the latter quantity, potentially diminishing the significance of dFIL itself. Furthermore, it is worth mentioning that the paper's technical contribution is minor, since the main result is a direct consequence of van Trees' inequality. While this is not a problem in itself, it does not help the paper stand out.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: One of the key contributions of this paper is the introduction of dFIL as an intuitive privacy metric for instance encoding. However, it is important to note that, for (general) biased attacks, the information theorist's Fisher information plays an equally, if not more significant role than dFIL. Regrettably, the paper lacks a thorough discussion of the empirical and theoretical properties of the latter quantity, apart from its estimation using score matching. Consequently, it becomes challenging to evaluate the relative importance of dFIL compared to the information theorist's Fisher information. Clarifying these aspects would enhance the overall assessment of dFIL's significance.
Addendum: In their rebuttal, the authors clarified the relative importance of dFIL and the information theorist's Fisher information.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The authors addressed the limitations of the proposed method very well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful review and feedback. We provide our answers to the questions and comments below. For the new evaluation data and their brief explanation, please check the global response.
**Relative importance of dFIL compared to the information theorist's Fisher information**. While it is true that the information theorist’s Fisher information is important in deriving the bound, it is a quantity only related to the dataset, not the encoder design. It can be calculated once and shared with the community for each dataset, and also it can be refined over time within the community; each encoder designer does not have to calculate the information theorist’s Fisher information individually.
Moreover, when comparing several encoder designs’ privacy under the same dataset, the information theorist’s Fisher information is just a constant, so calculating dFIL is more important in comparing each encoder’s relative privacy. Our work focuses on calculating dFIL, as our goal was to develop a metric that can be used to compare and evaluate the privacy of different encoders.
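As a toy illustration of this point (a hypothetical Gaussian prior chosen so the answer is known in closed form; real datasets require the score-matching estimation discussed in the paper), the information theorist's Fisher information $J(\pi) = E[\|\nabla \log \pi(x)\|^2]$ depends only on the data distribution and can be computed once per dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma_prior, n = 4, 2.0, 200_000

# Toy prior: x ~ N(0, sigma_prior^2 I). Its score function is
# grad log pi(x) = -x / sigma_prior^2, so the information theorist's
# Fisher information is J(pi) = E[||score||^2] = d / sigma_prior^2.
x = sigma_prior * rng.normal(size=(n, d))
score = -x / sigma_prior**2
j_hat = np.mean(np.sum(score**2, axis=1))

# The Monte Carlo estimate matches the closed form d / sigma_prior^2.
assert abs(j_hat - d / sigma_prior**2) < 0.05
```

For a real dataset the score is unknown and would be replaced by a learned score model; the point of the sketch is only that $J(\pi)$ is a per-dataset constant, independent of the encoder.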
**Minor technical contribution**. While our Corollary is directly based on the van Trees inequality, we are the first to connect the van Trees inequality to score matching in generative AI and use it to provide a bound for a real-world data distribution. Deriving any inferential privacy guarantee (bounding the posterior success rate of arbitrary adversaries) is known to be very challenging for real-world datasets with complex priors, and most existing works (on similar fields) assume simple data priors, such as data following Gaussian or uniform distribution [a, b]. We believe our insight in connecting the concept of score matching in generative AI is novel, and can be extended to other fields on inferential privacy.
We also want to highlight that the theoretical result of our work, albeit simple, will have a high impact within the field. Currently, the field of instance encoding lacks **any** measure of privacy that is theoretically justified and empirically useful, despite the field’s high popularity. Our work is the **first** to bring theory to instance encoding privacy, which will inspire future researchers in the field to design encoders based on theoretical principles and deter bogus, empirical claims.
[a] Arpita Ghosh et al., "Inferential privacy guarantees for differentially private mechanisms." Arxiv 2016.
[b] Borja Balle et al., "Reconstructing training data with informed adversaries." S&P 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for your response; now it is easier to understand the relative importance of dFIL and the information theorist's Fisher information. I suggest you add a comment on this matter in the paper. I updated my score. | Summary: This paper discusses the privacy concerns surrounding data encoding methods employed in machine learning (ML) operations. Its primary objective is to present a theoretical framework that quantifies the extent of privacy leakage in an encoding scheme and facilitates the calculation of its invertibility. The authors propose a novel technique that leverages Fisher Information Leakage to compute the lower bound of Mean Squared Error (MSE) for an encoding scheme, considering both unbiased and biased attackers. Finally, the authors evaluate their technique against some simple reconstruction attacks for both training and inference tasks.
Strengths: 1. The paper takes a theoretical approach to explain reconstruction attacks. The problem is interesting and the presentation and flow of the paper are good. It was easy to follow and understand.
2. The authors' approach of initially focusing on unbiased attacks and subsequently extending their scheme to encompass attackers with prior knowledge is a good approach and beneficial in comprehending their contributions. The paper also provides evaluation of their technique considering both unbiased and biased attackers.
3. The paper is upfront about many of its limitations. This is commendable and helps understand the solution better and paves the way for identifying future research avenues.
Weaknesses: 1. The core contribution of the paper is adopting Fisher Information Leakage and van Trees inequality to come up with a lower bound for MSE. While this is a new idea, it would benefit from more comprehensive construction. Furthermore, the reconstruction attacks examined by the authors to showcase the efficacy of their technique are relatively simplistic. It would be advantageous for the authors to consider incorporating relevant literature from the privacy and security domain and evaluate their approach against more sophisticated and state-of-the-art reconstruction attacks.
2. The paper should provide a more rigorous analysis of the privacy and utility trade-off. The paper readily discards Differential Privacy on utility grounds, but later adopts noise addition to introduce privacy. Regardless, the privacy guarantee provided by noise addition and the corresponding utility trade-offs should be analytically explained. While the paper includes an empirical demonstration of the utility trade-off, it falls short of its intended goal of providing a theoretical explanation.
3. The authors should clarify how this lower bound contributes to preventing attacks, developing privacy-preserving encoding techniques, or addressing other important aspects related to data privacy.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See weakness.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful review and feedback. We provide our answers to the questions and comments below. For the new evaluation data and their brief explanation, please check the global response.
**Reconstruction attacks are simplistic**. We want to first highlight that the attacks we studied are based on *state-of-the-art works from top-tier conferences*. We tried both well-cited classics (e.g., CVPR ‘15 [34], CVPR ‘18 [35], ACSAC ‘19 [33]) and recent works (e.g., CCS ‘21 [36], CVPR ‘22 [20]), and chose the best-performing attacks. Especially, our attack-b in Figure 3--4 is directly adopted from CVPR ‘22 [20], which worked the best among the attacks we studied. The field does not have many sophisticated attacks because, unlike model inversion or gradient inversion which is much harder and still requires a lot of research, inverting instance encoding is easy, and existing simple attacks already work well in the absence of a defense like ours. Also, prior instance encoding works did not have theoretical justifications (unlike our work) and were considered not strong in terms of privacy [2, 23], so researchers had less motivation to design sophisticated attacks against them.
Nonetheless, we agree with the reviewer that the work can benefit from a more sophisticated attack. Thus, we additionally designed an attack based on a DDPM diffusion model pretrained by Google [73]. Please refer to Appendix: Section 7.6 for more details. Appendix: Figure 7 shows the reconstructed images with different dFIL. Similar to Figure 3--4, it can be seen that 1/dFIL >= 10 is relatively safe: pixel-to-pixel reconstruction becomes hard for 1/dFIL >= 6.42, and even the high-level semantic information becomes hard to reconstruct when 1/dFIL >= 49.5. At the same time, the figure shows that sometimes the high-level information of the image can be reconstructed (e.g., the fact that there is a white car facing left), even when our dFIL bound disallows exact pixel-by-pixel reconstruction. The result gives us an insight into what kind of information can still leak, even when a pixel-by-pixel reconstruction is prohibited by Corollary 1.
We additionally plot what is equivalent to Figure 3 (left) for the diffusion model attacker in **Supplementary PDF: Figure 1**. The figure shows that the bound again works well even for the diffusion model attacker. Here, we used inputs with randomized smoothing noise as in Figure 3 to calculate the bound, and because the pretrained model is not trained with the randomized smoothing noise, we trained the diffusion model for 5000 epochs with the noisy inputs, following the hyperparameters from [73].
**Why discard DP, but later adopt noise addition?** We want to clarify that adding noise is a privacy mechanism that is more general than DP, and our approach, although it adds noise, is not differentially-private. Prior work [2] theoretically proved that indistinguishability, which differential privacy aims to provide, is *fundamentally incompatible* with utility in instance encoding. As the privacy notion of indistinguishability is impossible to achieve, our approach is designed to achieve an alternative (weaker) privacy notion, non-invertibility. While it is also possible to achieve non-invertibility with DP methods, as DP was originally designed for indistinguishability, the privacy-utility trade-off becomes much worse. We show the empirical comparison with DP in Appendix: Section 7.5.
**Lack of more rigorous theoretical analysis of the privacy and utility trade-off**. The main contribution of this paper is to (1) theoretically show that dFIL is a useful privacy metric that can bound the invertibility of instance encoding, and (2) empirically show that an instance encoding with a low dFIL can still have reasonable utility. Providing a theoretical analysis of the privacy-utility trade-off was not part of the scope of this paper, and we leave it as a future work.
We believe our theoretical contribution is still meaningful, as we are the **first** work to provide a meaningful privacy metric that can bound the invertibility of an instance encoding. The result will help future researchers to design instance encoders using theoretical principles, rather than relying on empirical claims. Also, we do so by connecting the van Trees inequality with score matching in generative AI, a novel connection first made by our work. It allows bounding the posterior success rate of arbitrary adversaries for real-world data (e.g., CIFAR10) with intractable priors, which is known to be challenging in general. Prior works on similar fields usually assume data with simple priors like Gaussian or uniform [a, b]. We believe our idea of connecting score matching to capture real-world data priors can be extended to similar fields.
**How does the lower bound contribute to developing privacy-preserving encoding?** dFIL allows us to quantitatively measure the invertibility of an instance encoding for the first time. System/model designers can now measure and compare their encoding techniques’ invertibility and select encodings that are harder to invert. This is exactly what we did in Section 4--5, where we tried out several empirically-proposed techniques from prior works [17, 20, 22, 44, 58, 59] and chose techniques that provided the lowest dFIL (better privacy) with the same accuracy (Line 262--264). Before our work, there were no known ways to theoretically measure how private or not private each technique is, other than fully empirical methods.
**The idea would benefit from more comprehensive construction**. We could not understand what this sentence meant exactly. Can you please elaborate, so that we can respond and/or add more results if necessary?
[a] Arpita Ghosh et al., "Inferential privacy guarantees for differentially private mechanisms." Arxiv 2016.
[b] Borja Balle et al., "Reconstructing training data with informed adversaries." S&P 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed responses.
The concerns that I have about this paper are well addressed.
I will raise my score. | Summary: Instance encoding (and some closely related lines of research) aim at finding ways to encode data examples (in a training set) as $E=Enc(e)$, in such a way that one can train models on the encoded examples $E_1,...E_n$, while $E_i$ does not leak much about $e_i$ to an adversary who inspects it.
Previous efforts for proposing "private" instance encodings have failed due to (1) subsequent attacks, and (2) lack of formal definitions of what privacy means here. In fact, it is known that such mechanisms cannot be DP in a strong sense, while they are also useful for training.
This paper aims to put forward a new way of arguing about the privacy of instance encoding, based on Fisher Information (FI). FI is a useful measure of the information a random variable reveals about a related quantity. In particular, knowing FI allows one to lower bound the Mean Squared Error (MSE) of the best reconstruction attack, assuming that the adversary has no prior knowledge about the distribution of the original instance.
More specifically, using FI, the paper shows how to give a *hardness of inversion* type of privacy guarantee. The paper shows that both for the (more elementary) case of unbiased attacks (which assume no prior on the original instances) and for more powerful attackers with some prior information (e.g., who know the distribution of the instances), one can lower bound the MSE of the best possible (information-theoretic) attack, based on two parameters: the FI of the encoding and a second parameter that depends on how flat/concentrated the distribution of the instance x is. Note that the second parameter must inherently show up, since one can already approximate x well if its distribution is too concentrated.
The paper also discusses how to compute the two parameters of interest, or at least approximate them, in practical settings.
In particular, when the encoder is a Neural Net, the paper discusses how to use tools from previous work to approximate them using smooth functions that will be suitable for computing their FI.
The final product is an "average case" notion of privacy that is tailored for inputs coming from (flat enough) distributions.
The paper also presents experiments to validate its theory.
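For reference, a scalar form of the van Trees inequality that drives this two-parameter bound is (standard textbook notation; the paper's multivariate statement and constants may differ):

$$ \mathbb{E}\big[(\hat{x}-x)^2\big] \;\ge\; \frac{1}{\mathbb{E}_{\pi}[\mathcal{I}(x)] + J(\pi)}, \qquad J(\pi) = \int \frac{\pi'(x)^2}{\pi(x)}\,dx, $$

where $\mathcal{I}(x)$ is the Fisher information of the encoding (the dFIL side) and $J(\pi)$ captures how flat or concentrated the instance distribution $\pi$ is (the second parameter above).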
Strengths: Differential privacy is the golden standard for privacy at the moment. But it is also a good idea to look for new notions privacy that could be meaningful on their own and useful for certain settings. Hence, I find the overall goal of the paper positive and useful for putting new perspectives on privacy on the table.
I also liked the fact that the paper made an effort to algorithmically compute their notion of privacy by approximating the two parameters of interest using tools from previous work.
Weaknesses: (also a limitation) Bounding MSE sometimes might not say anything about privacy, when all an adversary wants to do is to find a specific sensitive feature (and not to approximate the instance in $\ell_2$ norm). So, more (context-dependent) work is needed to determine whether this notion is actually a good notion of privacy.
(also a question) The figures are not clear. For example, in Figure 2, what are the column numbers (Ref, 25, 10, 1) next to the MNIST images?
(also a question and limitation) The paper does experiments in which the MSE of certain encoders are evaluated. However, it is not clear to me how *useful* these encodings are *for training models* over the encoded instances. Note that without knowing how useful the encodings are, there is always a trivial fully private encoding: output $\bot$ all the time.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see the questions from the previous box.
Is there any hope that your approach can say something about the expected $\ell_\infty$ distance between the adversary's guess and the true instance (rather than the MSE)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See the "weakness" section for some comments related to limitations as well.
On the positive side, the paper itself has a good "limitations" section at the end.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful review and feedback. We provide our answers to the questions and comments below. For the new evaluation data and their brief explanation, please check the global response.
**dFIL against sensitive attribute inference**. The initial motivation for dFIL was to protect against input reconstruction, which significantly hampers user privacy directly if allowed [28]. Still, dFIL can be adopted to protect sensitive attributes in some cases, if the attribute is contained within a subset of the input features. Currently, dFIL is an aggregate value across all features (e.g., pixels). Alternatively, we can calculate a per-feature FIL by looking at each entry in the diagonal of FIM instead of calculating the trace (Eq. 3) [27]. The per-feature FIL then gives us the reconstructability of each feature (pixel) individually. **Supplementary PDF: Figure 2** shows an example of this per-feature FIL for different images, for the split-middle architecture in Section 4. The figure shows that the key points of an image (e.g., contours of an object, tire of a car, tag of a cat, etc.) leak more through the encodings and can be easily reconstructed. If the sensitive attribute is contained in a subset of the image (e.g., eye color, background, …), we can measure its reconstructability directly using this per-feature FIL and understand if the attribute can be reconstructed by an adversary. If the sensitive feature is not contained within a few pixels (e.g., ethnicity), it would be harder to measure its leakage. A potential future direction is to disentangle the sensitive feature into a certain dimension in a latent space [a, b] and use dFIL in the latent space.
**Figures are not clear (what are the column numbers in Fig 2?)**. The values indicate the 1/dFIL values (Line 231--232), which means more private if the value is higher. We will refine the figures in the final draft to make them more clear.
**Utility of the encoders in Sec 3**. We perform two separate evaluations. First, in Section 3, we evaluate whether our MSE bound holds well against attacks. Second, in Section 4--5, we evaluate whether an encoder with a reasonably-low dFIL (high MSE bound) can still have good utility. We separate out the evaluations because we wanted to evaluate our bound in Section 3 against the most adversarial setup to see if it always holds.
Encoders used in Section 3 were deliberately chosen to be simple and relatively easy to invert (e.g., single-layer convolution with a large output channel), and as they are designed solely for the ease of attack and not for utility, it is almost certain that they will not be very useful in training models. The goal of encoders in Section 3 was to show that the attackers cannot achieve an MSE below our bound *even for the encoders that are deliberately designed solely to make the attack easy (and do not care about utility)*. The results from Section 3 show that our MSE bound is reliable even in these adversarial cases.
The second set of evaluations in Section 4--5 studies whether an encoder with a reasonably-low dFIL can still be useful in training/inference. For these encodings, we did not try inverting and comparing their MSE with the bound, as these practical encoders are already empirically harder to invert than the encodings that are deliberately made easy to invert (from Section 3), and we already showed that even the easier ones cannot be inverted to break the bound.
**Can we bound L-inf?** While we cannot directly bound the expected L-inf, we can adapt dFIL to bound something similar. Instead of calculating dFIL, which is an average over all features, we can bound the expected squared L2 distance for each feature (via the per-feature FIL discussed in the first bullet). If we take the max of these instead of the average, we get a bound on the expected "squared" L-inf, which is similar to (but not the same as) the L-inf distance.
[a] Ricky TQ Chen, et al. "Isolating sources of disentanglement in variational autoencoders." NeurIPS ‘18.
[b] Zheng Ding, et al. "Guided variational autoencoder for disentanglement learning." CVPR ‘20.
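The per-feature FIL described above can be sketched for a toy linear encoder (a hypothetical illustration; the encoder, names, and dimensions are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, sigma = 6, 12, 0.3

# Toy differentiable encoder Enc(x) = W x + N(0, sigma^2 I); its
# Fisher information matrix (FIM) is W^T W / sigma^2.
W = rng.normal(size=(k, d))
fim = W.T @ W / sigma**2

# Aggregate dFIL: trace of the FIM averaged over the d features.
dfil = np.trace(fim) / d

# Per-feature FIL: the diagonal of the FIM, one leakage value per
# input feature (e.g., per pixel) instead of a single aggregate.
per_feature_fil = np.diag(fim)
assert np.isclose(dfil, per_feature_fil.mean())

# A crude "squared L-inf"-style quantity in the spirit of this rebuttal:
# 1 / FIL_i loosely lower-bounds the squared error on feature i, so the
# least-leaky feature yields the largest per-feature bound.
linf_sq_bound = 1.0 / per_feature_fil.min()
```

By construction the aggregate dFIL is just the mean of the per-feature values, which is why the diagonal gives a strictly finer-grained view of which features (pixels) leak most through the encoding.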
---
Rebuttal Comment 1.1:
Title: thanks
Comment: thanks for the response. | Summary: This paper introduces a new theoretical measure, the "diagonal Fisher Information Leakage" (dFIL), for quantifying the privacy leakage in instance encoding mechanisms in machine learning models. The authors construct a framework that balances the trade-off between data privacy and utility in instance encoding.
The authors start by developing the mathematical foundation of dFIL, leveraging the Cramér-Rao Lower Bound (CRLB). They present a theoretical analysis which demonstrates that the reciprocal of dFIL serves as a lower bound on the Mean Squared Error (MSE) of any estimator trying to reconstruct the original input from the encoded instances.
The paper presents two case studies to showcase the practical application of dFIL. The first one focuses on private inference, where instance encoding is applied to an existing pretrained model. The second case study explores the possibility of training a model on instance-encoded data. The results from these studies confirm that models guided by dFIL can achieve reasonable performance while ensuring a high degree of privacy.
Despite these advancements, the authors acknowledge the limitations of dFIL, such as the MSE bound not always correlating well with the semantic quality of the reconstruction, the average case bound provided by dFIL, and the difficulties of interpreting dFIL values in certain data types where MSE might not be directly meaningful or accurate. Recognizing these limitations, the paper calls for further research to explore and optimize the use of dFIL in designing and analyzing systems that use instance encoding for private inference and training.
Strengths: Strengths:
1. **Originality**: The concept of Diagonal Fisher Information Leakage (dFIL) is novel, offering a fresh perspective on privacy leakage quantification in instance encoding for machine learning models. The authors have done a commendable job in providing a new metric that has potential broad applications in designing and analyzing private machine learning systems.
2. **Quality**: The theoretical foundation of dFIL is well-crafted and robust, using the Cramér-Rao Lower Bound to derive a lower bound on the reconstruction error. This gives dFIL a solid grounding in statistical theory. The authors carefully reason about the limitations of their approach and provide a roadmap for further exploration.
3. **Clarity**: The paper is well-written and structured, making the complex concept of dFIL comprehensible. The authors articulate the theoretical underpinnings and practical implications of dFIL with clarity and precision. The use of case studies adds to the understanding of how dFIL can be applied in practical scenarios.
4. **Significance**: The presented work is of significant value as it paves a way for protecting data privacy in instance encoding mechanisms, a critical issue in today's data-centric machine learning landscape. The demonstrated applications in private inference and training suggest the potential of dFIL to guide the design of machine learning systems that respect privacy constraints while maintaining utility.
Weaknesses: 1. **Limited empirical validation:** Although the theoretical development of dFIL is thorough and robust, the empirical validation seems somewhat limited. The authors conduct case studies on split inference and training with dFIL, which are commendable. However, additional experiments, especially on more diverse datasets and models, could strengthen the validity and generalizability of the dFIL measure.
2. **Absence of comparative analysis:** While the paper introduces dFIL as a novel concept, it doesn't provide a comparative analysis with other similar measures, if any exist. This makes it difficult for readers to evaluate the practical advantages or disadvantages of dFIL in contrast to other measures or approaches.
3. **Limited exploration of potential use-cases:** The authors present two case studies, however, the range of potential applications for dFIL could be broader. Additional use-cases illustrating how dFIL can guide the design of private machine learning systems would provide a more comprehensive view of the practical utility of dFIL.
4. **Interpretability of dFIL:** As the authors acknowledge in the Limitations section, for data types where MSE is not directly meaningful or the bound is inaccurate, interpreting the privacy of an encoding given its dFIL may not be straightforward. More investigation is needed to establish a clear and intuitive interpretation of dFIL values, especially in the context of real-world applications.
5. **Assumptions and constraints:** The assumptions underlying the derivation of dFIL and its applications need further exploration. For example, how sensitive is dFIL to the assumptions of model linearity, Gaussian noise, and others? The impact of violating these assumptions, and how they can be mitigated, would be valuable areas to explore.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. The paper uses mean squared error (MSE) as a metric to quantify the quality of reconstructions. However, as the authors acknowledge, this may not always correlate with the semantic quality of reconstructions. Could you please elaborate more on the choice of this metric and how it might affect the overall performance of the system? And the paper mentions that for certain data types where MSE is not directly meaningful, interpreting the privacy of an encoding given its dFIL might be challenging. Could you provide examples of such data types and explain how dFIL might be adapted to handle them?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have done a commendable job addressing the potential limitations of their work. They detail four specific limitations in section 6:
1. dFIL only bounds the mean squared error (MSE), which may not always correspond well with the semantic quality of reconstructions.
2. The derived bounds are average-case, meaning that individual data instances could experience lower reconstruction MSE than the bound.
3. For data types where MSE is not directly meaningful or the bound is inaccurate, interpreting the privacy of an encoding given its dFIL might be difficult.
4. Systems with the same dFIL may have different invertibility, as the bound from dFIL could be conservative.
These limitations are articulated clearly, and the authors also suggest possible directions to address these limitations, such as exploring metrics other than MSE and calculating dFIL dynamically for each sample.
In terms of the broader societal impact, the paper does not directly address this aspect. However, it's understandable, given the theoretical nature of the work. Future research could consider potential misuse of this technology, as well as possible regulatory or ethical issues that might arise from its application, particularly in scenarios where private or sensitive data are involved. Nonetheless, the paper's focus on improving privacy in machine learning is itself an important societal contribution, as it addresses a growing need in an increasingly data-driven world.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful review and feedback. We provide our answers to the questions and comments below. For the new evaluation data and their brief explanation, please check the global response.
**Comparison with similar measures**. Despite the popularity, there is *very little work on theoretical privacy analysis for instance encoding*, leaving us with no good alternative measures to compare. We discuss the lack of related works under the “Privacy metrics for instance encoding” paragraph in p. 2. The lack of theoretical measures in this popular field makes our work unique and important. We summarize the limitations of some of the related works below:
- **Metrics without theory**. Some works proposed *empirical* measures, such as mutual information (MI) [45] or distance correlation (dCorr) [21, 22]. However, these works failed to show theoretically how these measures are related to any form of privacy, if related at all (i.e., they lack what is often called an *operational interpretation*, which is considered essential in the privacy literature [38]). As we explain in Lines 95--98, prior work [38] warned against using such seemingly-valid but unjustified measures, because they can give a false sense of privacy. For example, [38] showed that MI sometimes mischaracterizes the severity of the information leakage (a system with a lower MI can actually be less private) and is not a good measure of privacy. As these measures (MI, dCorr) are not theoretically justified and can be misleading, we do not quantitatively compare them with dFIL. We will enrich Section 2 with the above qualitative discussion in the final draft.
- **Metrics with theory**. There are metrics with operational interpretations; however, dFIL is much more practical than the others. Metric-DP [24] is not applicable to the encoders used in Sections 4 and 5 (Line 89). Differential privacy [30] degrades the utility too much (Line 82, Appendix: Section 7.5). A recent concurrent work proposed PAC privacy [a], which can bound the adversary’s success rate. However, the bound from [a] becomes very loose on the instance encoding setups we studied. For 1/dFIL = 10, which was empirically hard to invert (Figure 5), the adversary's attack success rate upper bound from [a] becomes 1 --- a trivial bound that does not give any practical information. We will add the discussion regarding [a] in the final draft.
**Assumptions and constraints need further exploration**. dFIL is generally applicable to any differentiable and randomized method, using the general Equation 1. The noise does not have to be Gaussian, and using different sources of randomization can lead to different privacy-accuracy tradeoffs. We leave such an exploration to future work. The differentiability assumption (e.g., absence of ReLU or MaxPool) is necessary for the van Trees inequality and the Cramer-Rao bound to hold. Although the bound may empirically work similarly without the differentiability assumption, we confined the scope of the paper to encoders for which our Corollary is mathematically precise. Please let us know of any other assumptions/constraints that the reviewer wants to discuss, if there are any.
**Interpretability of dFIL**. We agree with the reviewer that interpreting dFIL might not be straightforward in some applications. As mentioned in Limitation 3 (p. 9), in such cases, acceptable values of dFIL should be determined for each application through further research. The well-adopted DP has similar issues: in many real-world scenarios, what $\epsilon$ and $\delta$ mean for a particular setting is not straightforward, and the acceptable values are often chosen empirically [69].
**MSE as a metric**. For some data types (e.g., user’s geographical location), MSE can directly provide an intuitive notion of privacy. For other data types such as image or word embeddings, MSE might not exactly capture the semantic similarity of the reconstruction. In this work, we use MSE as a proxy to similarity as in prior works [47, 48], but the approach has limitations: two images with relatively high MSE may look semantically similar to humans. For example, see the images containing a white car on a red background in Appendix: Figure 7. Two images that are very different pixel-wise may still convey similar high-level information (e.g., the fact that there is a white car facing left in front of a red background), and MSE cannot capture the fact that such high-level information is still leaking, while it disallows an exact pixel-by-pixel reconstruction. The users of dFIL must understand these implications of dFIL and use it only when the privacy it provides makes sense. We believe our extensive discussions and evaluations will help future researchers to adopt dFIL in a meaningful way.
As discussed in Limitation 1, van Trees inequality can be extended to potentially support metrics other than MSE (see Appendix: Theorem 2), which may be useful in some cases. We leave such an extension to future work.
**Limited empirical validation**. We want to emphasize that our case study spans *three distinct applications (image classification, recommendation, and sentiment analysis)* that use *three different input data types (image, user behavioral features, and natural language)*, showing the generality of dFIL. Following the reviewers’ suggestion, we additionally add the CIFAR-100 dataset to the split inference evaluation in **Supplementary PDF: Table 1**. The trend is similar to Table 1 in the original paper.
[a] Hanshen Xiao and Srinivas Devadas. "PAC Security: Automatic Privacy Measurement and Control of Data Processing." Crypto 2023 (to appear this August).
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
Thank you for your detailed rebuttal in response to my review. I have received and read your explanations concerning the weaknesses, questions, and limitations raised. I will carefully consider your responses as I evaluate the revised version of the manuscript. | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful reviews and feedback. We respond to the concerns and questions individually in a separate rebuttal for each review. Here, we upload a supplementary PDF containing additional results that were requested by the reviewers. We also give a brief explanation:
**Table 1** holds an additional evaluation result for split inference, using the CIFAR-100 dataset. The result directly extends Table 1 in the paper, and all the model hyperparameters are the same except for the last fully-connected layer. The result shows that our optimizations again improve the accuracy significantly at a reasonably low dFIL (e.g., 1.98% -> 59.51%).
**Figure 1** holds a 1/dFIL vs. reconstruction MSE plot similar to Figure 3 (left) in the paper, for a new diffusion model-based attacker that we designed. The details of the diffusion model attacker are in Appendix: Section 7.6. To calculate the bound, we need to add randomized smoothing noise to the input, so we retrained the model for 5000 epochs with this noise, using the hyperparameters from Google [73]. The figure shows that the bound holds reliably for the diffusion model attacker as well.
**Figure 2** plots the per-feature FIL that can bound the reconstruction MSE of individual features (pixels), instead of the average over the entire input. Per-feature FIL can be calculated by directly using the diagonal entries of the Fisher information matrix (FIM), instead of calculating the trace (Eq. 3) [27]. Plotting the per-feature FIL helps us understand which pixels, or the attributes occupying those pixels, can be more easily reconstructed. Figure 2 shows that identifying features, such as the contour of a frog (8th column), the tire of a car (7th column), or the necklace of a cat (9th column), leak more information and can be reconstructed more easily.
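The per-feature idea above can be sketched for a toy Gaussian-noise encoder. This is an illustrative reconstruction rather than the authors' implementation: for a mechanism y = f(x) + N(0, σ²I), the Fisher information matrix about x is the standard JᵀJ/σ² with J the Jacobian of f, its trace averaged over features gives a dFIL-style quantity, and the diagonal entries give per-feature values (the function name and toy encoder are our assumptions).

```python
import numpy as np

def fisher_information_matrix(f, x, sigma, eps=1e-5):
    """FIM about the input x of the mechanism y = f(x) + N(0, sigma^2 I).

    For a Gaussian mechanism the FIM reduces to J^T J / sigma^2, where J is
    the Jacobian of the encoder f at x (estimated here by central differences).
    """
    d = x.size
    J = np.zeros((f(x).size, d))
    for i in range(d):
        e = np.zeros(d)
        e[i] = eps
        J[:, i] = (f(x + e) - f(x - e)) / (2 * eps)
    return J.T @ J / sigma**2

# Toy differentiable encoder: a fixed random linear map followed by tanh.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))
f = lambda x: np.tanh(W @ x)

x = rng.standard_normal(4)
I = fisher_information_matrix(f, x, sigma=0.5)

dfil = np.trace(I) / x.size     # trace averaged over input features
per_feature = np.diag(I)        # per-feature FIL from the FIM diagonal
mse_bounds = 1.0 / per_feature  # Cramer-Rao-style per-feature MSE lower bounds
print(dfil, mse_bounds)
```

The per-feature bound uses the fact that for an unbiased estimator Var(x̂ᵢ) ≥ (I⁻¹)ᵢᵢ ≥ 1/Iᵢᵢ, so features with large diagonal Fisher information admit lower reconstruction error.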
Pdf: /pdf/48f68239c14a1f69a618ffca34957642ccd10266.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Joint processing of linguistic properties in brains and language models | Accept (poster) | Summary: While several papers have recently shown that large language model embeddings are aligned with / predictive of fMRI reading data, this paper pushes these analyses forward by asking: what properties of the representations are responsible for this alignment? In particular, they compare LLM/fMRI alignment on reading data using BERT, but then compare this to BERT's representations _when certain linguistic properties have been removed_ (using both a novel and an existing removal method). The results show a significant decrease in alignment (crucially, not found when removing "random" information or when using random vectors) across all linguistic properties, suggesting that it really is BERT's encoding of linguistic information that's responsible for the alignment with fMRI reading data. This represents a finer-grained level of analysis than existing alignment works and should be of interest to many in the field.
Strengths: - Analyzes LLM representations' ability to predict fMRI data before and after "removing" linguistic information from said representations.
- Conducts analyses at both macro (whole-model, and whole-brain) as well as micro (layer-wise, ROI and sub-ROI) levels.
Weaknesses: - The Pearson correlations in Figure 3 are quite low, on average less than 0.1. What do the authors make of this? The ROI-level analyses seem relevant, but are not done with the same metric, so direct comparison is hard.
- Linguistic annotations for the text in the reading data come from an off-the-shelf NLP tool (stanza), so these are "silver" rather than "gold" annotations.
- More baselines would have been useful.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Table 1: what are baselines for each task? What would a most-frequent-class baseline score on each? This would be a helpful comparison for the post-removal numbers.
- Word length probing: I'm a bit confused here. Probing for word length should be hard for BERT since it does not have character-level information but instead uses sub-word tokenization. I noticed in the Appendix that the task is referred to as "Sentence Length". Is the task length-of-individual-word (that's what "Word Length" sounds like to me) or is it number-of-words-in-sentence ("Sentence Length")? Consistency with naming, and clarity on the exact task, would be welcome here.
- Why the choice of BERT as the model for comparison? In particular, I am curious about the choice of a "bidirectional" masked language model as opposed to a "left-to-right" language model like the GPT series. Some would argue that the latter more closely resemble the reading process, and so might also better predict data from reading.
### Typographic comments:
- Line 125 needs a closing period.
- Lines 185 and 186: "z-scored" -> "$z$-scored". Same for "t-test" and "p-value" throughout the paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - Use of stanza corenlp for linguistic annotations; these are "silver", not "gold".
- Only looked at one model, with relatively weak overall correlations with brain activity.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *We thank the reviewer for their strong positive, insightful and valuable comments and suggestions which are crucial for further strengthening our manuscript.*
**1. The Pearson correlations in Figure 3 are quite low, on average less than 0.1. What do the authors make of this? The ROI-level analyses seem relevant, but are not done with the same metric, so direct comparison is hard.**
* The Pearson correlation results reported in our work are in similar ranges as reported in previous studies.
* The correlations are low but significant.
* We performed a two-tailed hypothesis test for all the tasks and applied the Benjamini--Hochberg False Discovery Rate (FDR) correction for multiple comparisons.
* The low correlation is partly due to noise in the brain recordings. If one estimates a noise ceiling, these numbers correspond to about 70% of the variance explainable by a model representation.
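The voxelwise significance testing described above can be sketched on synthetic data. This is a minimal illustration, not the authors' pipeline: per-voxel two-tailed Pearson p-values followed by a hand-rolled Benjamini--Hochberg step (the data shapes and names are made up).

```python
import numpy as np
from scipy import stats

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of hypotheses rejected under Benjamini-Hochberg FDR control."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()   # largest rank passing the BH threshold
        reject[order[:k + 1]] = True
    return reject

rng = np.random.default_rng(0)
n_trs, n_voxels = 200, 50
signal = rng.standard_normal(n_trs)
# Synthetic "brain": the first 25 voxels carry the signal plus noise, the rest are pure noise.
brain = np.where(np.arange(n_voxels) < 25,
                 signal[:, None] + rng.standard_normal((n_trs, n_voxels)),
                 rng.standard_normal((n_trs, n_voxels)))
pred = signal  # the encoding model's prediction, shared across voxels for brevity

# pearsonr returns a two-tailed p-value by default.
pvals = np.array([stats.pearsonr(pred, brain[:, v])[1] for v in range(n_voxels)])
significant = benjamini_hochberg(pvals, alpha=0.05)
print(significant.sum(), "voxels significant after FDR correction")
```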
**2. Linguistic annotations for the text in the reading data come from an off-the-shelf NLP tool (stanza), so these are "silver" rather than "gold" annotations.**
* We agree that the annotations are silver but since current NLP tools show high accuracies, we can assume them to be fairly accurate as done in many other related papers [1], [2].
[1] Ganesh Jawahar, Benoît Sagot, Djamé Seddah. What Does BERT Learn about the Structure of Language? ACL 2019.
[2] Hosein Mohebbi, Ali Modarressi, Mohammad Taher Pilehvar. Exploring the Role of BERT Token Representations to Explain Sentence Probing Results. EMNLP 2021.
**3. Table 1: what are baselines for each task? What would a most-frequent-class baseline score on each? This would be a helpful comparison for the post-removal numbers**
* Unfortunately, there is no previous work to compare with. We reported the chance performance of each probing task in Table 4 in Appendix.
* It shows that after removal of a linguistic property our decoding task accuracy is close to chance performance showing that our property removal method works as expected. Also, Figure 5 displays the distribution of class labels in each of the probing tasks.
**4. Word length vs Sent Length**
* It is sentence length, thanks for pointing out the discrepancy. We will correct this.
**5. Why the choice of BERT as the model for comparison over GPT2**
* Please check “Common responses”, and Table 1 and Fig 1 in rebuttal PDF.
* The primary rationale behind our choice to incorporate the BERT model comes from the extensive body of research that aims to delve into the extent to which diverse English language structures are embedded within Transformer-based encoders, a field often referred to as "BERTology" [3] (as highlighted by Jawahar et al., 2019, and Mohebbi et al., 2021). We will clarify this.
[3] Rogers A, Kovaleva O, Rumshisky A. A primer in BERTology: What we know about how BERT works. Transactions of the Association for Computational Linguistics. 2021 Jan 1;8:842-66.
**6. Typos**
* We will address these typos.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed reply here and to everyone. I think the inclusion of GPT2 results is very welcome, as is the clarification on the baseline (I would like to see that in the body, not an appendix, in the full version). For now, I am happy to keep my score at 7 for a solid accept, but also would actively advocate for acceptance if there is disagreement with others.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's feedback and are confident that it has enhanced the paper's quality. | Summary: This paper aims to investigate the correspondence between HIPSs and models based on neural networks such as Transformers. The key innovation of this paper is to study this correspondence by eliminating specific linguistic properties from BERT and observing how this intervention affects the alignment with fMRI brain recordings.
Strengths:
- This is really an innovative standpoint
Weaknesses: - There is a major innovation in this paper that is the cornerstone of the theory, yet it is not listed as a major result: the removal of linguistic properties from Transformers. Indeed, I was expecting citations to other papers, as this is not listed in the main results of the paper. Yet there are no citations, and it seems to be proposed in this paper.
- Since the removal of linguistic properties from Transformers is a major cornerstone, all the theory could crumble if it is not investigated in a deep and correct way.
- Experiments to demonstrate that the removal of linguistic properties from Transformers works are inconclusive. Indeed, Table 1 is not sufficient. It only says that after the adaptation BERT's weights are so messed up that the classification task cannot be performed. There should be a comparative part, or some sort of proof that the model is not compromised in the other parts. For example, there is a missing column: BERT with random parameters. Indeed, if the "after" models behave like BERT with random initialization, the results are not relevant.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: see weaknesses
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 1 poor
Limitations: not applicable
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable comments and suggestions which are crucial for further strengthening our manuscript.
**1. There is a major innovation of this paper that is the cornerstone of the theory. Yet, this is not listed as a major result: the removal of linguistic properties from Transformers. Indeed, I was expecting citations to other papers as this is not listed in the main results of the paper. Yet, there are not citations and it seems to be proposed in this paper.**
* For the removal of linguistic property information from the model representations, we use two previously published approaches: residual method [1] and INLP method [2].
* These approaches have been peer reviewed in good venues (Nature Computational Science and AAAI). We will clarify this.
* We also conducted validation experiments to ensure that these removal techniques work in our setting (Please see the Appendix Tables 5-10).
[1] Mariya Toneva, Tom M. Mitchell, Leila Wehbe. Combining computational controls with natural text reveals aspects of meaning composition, Nature Computational Science 2022.
[2] Zhang X, Wang S, Lin N, Zhang J, Zong C. Probing word syntactic representations in the brain by a feature elimination method. In AAAI 2022.
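The residual method of [1] can be sketched roughly as follows. This is our illustrative reading of the idea (regress the representations on the property features and keep the residuals), not the authors' code; all names and the toy data are made up.

```python
import numpy as np

def residual_removal(X, Z):
    """Remove the component of representations X that is linearly predictable
    from property features Z: fit W = argmin ||X - Z W||^2, keep X - Z W."""
    W, *_ = np.linalg.lstsq(Z, X, rcond=None)
    return X - Z @ W

rng = np.random.default_rng(0)
n, d = 500, 16
labels = rng.integers(0, 3, size=n)                # a 3-class "linguistic property"
Z = np.eye(3)[labels]                              # one-hot property features
signal_dirs = rng.standard_normal((3, d))
X = Z @ signal_dirs + rng.standard_normal((n, d))  # property signal + other content

resid = residual_removal(X, Z)

# With one-hot features, the least-squares fit subtracts the class means, so
# the residual class means coincide (no linearly decodable property left)
# while the remaining variance is preserved.
class_means = np.stack([resid[labels == c].mean(axis=0) for c in range(3)])
print(np.abs(class_means).max(), resid.std())
```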
**2. Since the removal of linguistic properties from Transformers is a major cornerstone, all the theory could crumble if it is not investigated in a deep and correct way.**
* To investigate whether a linguistic property was successfully removed from the pretrained representations, we further test whether we can decode the task labels from the residuals for each probing task of the 21st year stimuli.
* To ensure that removal of a specific linguistic property works well, we show results in Table 1 (of the paper) by decoding from BERT before and after removal, decoding of linguistic property B when removing linguistic property A.
* We observed that most properties are not substantially affected by removing other task properties.
* Further, we observe that the brain alignment of a random vector is significantly lower than the residual brain alignment after removal of each linguistic property.
**3. Proof that the model is not compromised in the other parts. For example, there is a missing column: BERT with random parameters.**
* Fig 3 (in the paper) shows that random embeddings perform far worse than any of the “after” models. Table 1 (in the paper) and Appendix Tables 5-10 show that removing property A does not destroy the decoding performance from BERT on property B, so the embedding still contains important information.
* Based on the reviewers’ suggestion, we now perform brain encoding experiments with random weights of BERT. Please check the *Fig 3 in rebuttal PDF* at “Common responses”.
* Fig 3 (*in the rebuttal PDF*) shows brain predictivity performance using BERT with random weights is significantly worse compared to any of the “after” removal of linguistic property from pretrained BERT model. This shows that residual representations (i.e. removing a linguistic property from pretrained BERT) are informative and meaningful compared to random weights of BERT. | Summary: The paper investigates the relationship between specific linguistic properties and brain alignment across layers of a language model. The authors provide evidence for the importance of a linguistic property to the brain alignment. The focus of the study is to understand the degree to which various linguistic properties impact the performance of language models in predicting brain recordings.
Strengths: **Originality:** The paper offers an interesting perspective on the link between language properties and brain alignment in language models. It builds on existing work but adds a new dimension by directly showing the importance of specific linguistic properties.
**Quality:** The paper uses a large, public fMRI dataset and word features from the BERT model to get word-level representations. The results are interesting.
**Clarity:** The paper is well-structured and easy to understand. It explains the methods and results clearly, making it accessible to a wide range of readers.
**Significance:** The approach paves a path towards a way to understand how linguistic properties and brain alignment relate. This could help improve language models and give new insights in neuroscience and language processing.
Weaknesses: **Choice of Language Model:** The authors have chosen to focus on BERT, an encoder-only language model, for their study. Given that previous research has shown a deeper connection between autoregressive language models and the brain, the choice of BERT might limit the generalizability of the findings. The authors could consider including autoregressive language models in their analysis to provide a more comprehensive understanding of the relationship between linguistic properties and brain alignment.
**Correlation vs. Causation:** The authors aim to identify the effect of linguistic properties on brain alignment, but they rely on correlational metrics, which do not provide evidence of causation. To address this, the authors could consider methods from the causality+NLP literature to produce causal model explanations. This would strengthen their findings and provide more definitive evidence of the impact of linguistic properties on brain alignment. As it currently stands, it’s unclear which conclusions can be drawn from this study.
**Projection Method:** The authors use a projection method to identify the effect of linguistic information, which has been shown to be problematic and potentially misleading (Ravfogel et al., 2022). The authors could consider alternative methods for identifying the effect of linguistic information to ensure the accuracy and reliability of their findings. Some potential approaches include causal abstractions, which aim to provide a low-dimensional causal representation of LLMs.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. **Choice of Language Model:** Could you elaborate on why you chose to focus only on BERT, an encoder-only language model, for your study? Given that previous research has shown a deeper connection between autoregressive language models and the brain, have you considered including autoregressive language models in your analysis? How do you think the inclusion of such models might affect your findings?
2. **Correlation vs. Causation:** Your study aims to identify the effect of linguistic properties on brain alignment, but it relies on correlational metrics. Could you consider methods from the Causality+NLP literature to produce causal model explanations? How do you think a causal analysis might change your findings or conclusions?
3. **Projection Method:** You use a projection method to identify the effect of linguistic information, which has been shown to be problematic and potentially misleading (Ravfogel et al., 2022). Could you consider alternative methods for identifying the effect of linguistic information? How do you think a different method might affect your findings?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors do recognize some important limitations:
1. **Removal Approach:** They note that their method might unintentionally remove information correlated with the linguistic property they're studying. They suggest using larger datasets to mitigate this.
2. **Unaccounted Linguistic Properties:** They admit that there's still significant brain alignment even after removing all studied linguistic properties, indicating that there might be other linguistic properties at play that they haven't considered.
3. **Single Model Use:** They acknowledge that they've only tested their findings on one language model (BERT) and suggest that future work should test other models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *We thank the reviewer for their positive, insightful and valuable comments and suggestions which are crucial for further strengthening our manuscript.*
**1. Choice of Language Model: BERT vs autoregressive models (GPT2)**
* Please check “Common responses”, and Table 1 and Fig 1 in the rebuttal PDF.
* The primary rationale behind our choice to incorporate the BERT model comes from the extensive body of research that aims to delve into the extent to which diverse English language structures are embedded within Transformer-based encoders, a field often referred to as "BERTology" [1] (as highlighted by Jawahar et al., 2019, and Mohebbi et al., 2021). We will clarify this.
* We also additionally reported six probing tasks results for GPT2 before and after removal in the rebuttal PDF.
[1] Rogers A, Kovaleva O, Rumshisky A. A primer in BERTology: What we know about how BERT works. Transactions of the Association for Computational Linguistics. 2021 Jan 1;8:842-66.
**2. Correlation vs. Causation**
* Thanks for the suggestion. Exploring the distinction between correlation and causation is an interesting avenue for future research work, and we will add it to the discussion.
* We still believe that our work is a substantial advancement in this area as it allows us to perturb linguistic representations from models and observe the outcome of this perturbation on brain alignment.
**3. Use of Projection Method to identify the effect of linguistic information**
* We have used two methods (residual method and INLP method) to remove information related to linguistic properties from the model representations, and we obtained similar results with both techniques.
* We use a linear projection method because the brain alignment function is also a linear function.
* We understand the concerns raised in previous work that more powerful decoding functions may be able to reconstruct some information about the linguistic property, but what we care about is the linear information, since that is what is used to predict the brain response.
* Additionally, our work is most closely related to that of [2], who employ a similar residual approach to study the supra-word meaning of language by removing the contribution of individual words to brain alignment.
[2] Mariya Toneva, Tom M. Mitchell, Leila Wehbe. Combining computational controls with natural text reveals aspects of meaning composition, Nature Computational Science 2022.
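A single step of the INLP-style linear projection discussed above might look roughly like this. This is an assumption-laden sketch, not the paper's implementation: INLP iterates this step with retrained classifiers, and the least-squares "classifier" here is a stand-in for illustration.

```python
import numpy as np

def nullspace_projection(W):
    """Orthogonal projection onto the nullspace of the rows of W."""
    _, s, Vt = np.linalg.svd(np.atleast_2d(W), full_matrices=False)
    B = Vt[s > 1e-10]                    # orthonormal basis of the row space
    return np.eye(Vt.shape[1]) - B.T @ B

rng = np.random.default_rng(1)
n, d = 400, 12
labels = rng.integers(0, 2, size=n)
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d)) + np.outer(2.0 * labels - 1, w_true)  # property along w_true

# Stand-in linear "classifier": least-squares weights for the +/-1 label.
y = 2.0 * labels - 1
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

P = nullspace_projection(w_hat)
X_clean = X @ P   # representations with the classifier's direction projected out
print(np.abs(X_clean @ w_hat).max())
```

After the projection, the learned direction carries no signal, which is why INLP retrains a fresh classifier and repeats until the property is no longer linearly decodable.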
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal, I appreciate the time and effort, especially in adding the auto-regressive model experiments. While this paper has some flaws, I also think that is insightful and interesting. I increased my score.
---
Reply to Comment 1.1.1:
Title: Thank you for the reviewer's response
Comment: We appreciate the reviewer's feedback and are confident that it has enhanced the paper's quality. | Summary: The authors provide an analysis of correlations between BERT representations and fMRI data when abstracting away different linguistic properties from the BERT representations. They provide an in-depth analysis of the ways in which elimination of word length, tree depth, top constituents, tense, subject number and object number have on the overall and layer-wise alignment of BERT with fMRI data from 18 people. The authors find that removing any of these linguistic properties from the BERT representations reduces fMRI data alignment across all layers in the BERT model. Furthermore, syntactic properties are most relevant for the BERT-brain alignment globally, but semantic properties are highly locally aligned with brain regions generally associated with semantic processing.
Strengths: The overall question and method is relevant to the community. This paper contributes to the cognitive debate on the overall alignment that today's language models have with people's cognitive systems, here concretely their linguistic neural processing of stories. By abstracting away a number of linguistic properties from the BERT representations in a controlled way, the results speak for the type of information encoded in BERT that lead to the reasonably high alignment with fMRI data. The work therefore contributes to a better understanding of BERT representations and the usefulness of investigating them as cognitive systems.
The authors integrate valuable controls into their study, such as random baselines and correlational analyses of tasks that nicely contextualize the findings.
The paper is very well-written.
Weaknesses: To me, section 5.3 (Decoding task performance vs. brain alignment) has a number of interesting observations but I'm missing the overall motivation for this section. The authors write "While the previous analyses revealed the importance of linguistic properties for brain alignment at individual layers, we would also like to understand the contribution of each linguistic property to the trend of brain alignment across layers." (line 265ff). There seems to be a close connection between this question and Figure 3a from the previous section and it would be helpful to emphasize what the previous section lacks. To be clear, I think section 5.3 adds a lot, especially when it comes to the alignment of local instead of global brain regions but I currently can't find a motivation for whom this analysis is most relevant -- does this tell us something about BERT or are there any potential insights for neuroscience? The semantic alignment results seem to be backed up by prior findings in neuroscience. Is the same true for the rest, or if not, what does that mean?
BERT's layer-wise analysis of the correlations with fMRI data is central in the overall framing and centered in the main paper. I think that's rightfully so but I would like to encourage the authors to consider a strategy of substantiating claims that "BERT embeds a rich hierarchy [with] surface information at the initial and middle layers, syntactic information in the middle to top layers [...]" (lines 218ff). From inspecting the corresponding results table (Table 1), surface information seems maybe slightly more represented in the first and least in the second layer, but all other layers seem to be fairly evenly distributed. For the syntactic information, it seems uniformly high, and only subject number and object number seem to have a fairly monotonically increasing and decreasing structure that might suggest that it can't simply be due to random variance. Is there any way to estimate the variance of these terms or find a definition for when something is robustly represented more in some than other layers?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: I find Figure 3 very intriguing. Removing any linguistic property results in a correlation drop from 0.1 to about 0.075. The authors also previously added a correlational analysis indicating that most linguistic properties aren't correlated with each other. Consequently, it would make sense to me that when one removes all linguistic properties, the correlation needs to drop further, since different sources of variance have been removed which don't correlate with each other but clearly correlate with the fMRI data. However, even this correlation is at about 0.075. Do you have an explanation why this might be?
Picking up on a notion in the "Weaknesses" section: Why doesn't figure 3 already provide us with an answer to the central motivation of section 5.3 ("we would also like to understand the contribution of each linguistic property to the trend of brain alignment across layers.")? Or did you mean to motivate section 5.3 with "across brain regions"?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *We thank the reviewer for their strongly positive, insightful, and valuable comments and suggestions, which are crucial for further strengthening our manuscript.*
**1. Overall motivation for section 5.3 (Decoding task performance vs brain alignment)**
* While the previous analyses revealed the importance of linguistic properties for brain alignment at individual layers, the main motivation for Section 5.3 is to understand the contribution of each linguistic property to the trend of brain alignment across layers.
* This section provides more quantitative evidence and qualitative analyses by measuring the correlation across layers between the differences in decoding task performance from pretrained and residual BERT and the differences in brain alignment of pretrained BERT and residual BERT.
* We also look at the correlation between the drops in performance in decoding and encoding that are due to the removal of a specific linguistic property. Therefore this evidence is more direct.
* Overall, Section 5.3 helps us understand the contribution of each linguistic property to the trend of brain alignment across layers. It provides multiple neuroscience insights at the whole brain level, ROI level and sub-ROI level of language network.
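As a rough illustrative sketch of this kind of setup (the linear least-squares removal step, array shapes, and variable names below are simplifying assumptions for illustration, not our exact removal procedure), constructing a "residual" representation with a property removed can look like:

```python
import numpy as np

def remove_property(reps, prop):
    """Return residual representations with the linearly decodable
    part of `prop` regressed out of `reps` via least squares."""
    design = np.column_stack([prop, np.ones(len(prop))])  # property + bias term
    coef, *_ = np.linalg.lstsq(design, reps, rcond=None)
    return reps - design @ coef  # subtract the predicted component

rng = np.random.default_rng(0)
prop = rng.normal(size=(200, 1))              # a toy linguistic property code
noise = rng.normal(size=(200, 7))
reps = np.hstack([2.0 * prop, noise])         # first dimension driven by the property
residual = remove_property(reps, prop)
# In-sample, the residual is orthogonal to the property that was removed.
```

Brain alignment computed from `residual` instead of `reps` then isolates the contribution of the removed property.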
**2. The layer-wise decoding results are not so precise in terms of what linguistic property maps to what layer. Is there any way to estimate the variance of these terms or find a definition for when something is robustly represented more in some than other layers? Can we do more to understand this?**
* As the reviewer points out, we observe that a linguistic property doesn’t precisely map to any particular layer (Fig 7 and Fig 13 in the Appendix).
* Instead, we focus on the brain alignment trend across all layers and specifically investigate the effect of the linguistic property on this trend, by measuring the correlation between the actual drop in performance resulting from the removal of a linguistic property and the subsequent reduction in brain alignment.
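As a minimal sketch of this correlation-of-drops computation (the layer count and all score values below are arbitrary placeholders, not our data):

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two 1-D score vectors."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

rng = np.random.default_rng(0)
n_layers = 12
decode_pre = rng.uniform(0.7, 0.9, n_layers)                 # probing accuracy, pretrained model
decode_res = decode_pre - rng.uniform(0.05, 0.15, n_layers)  # after property removal
align_pre = rng.uniform(0.2, 0.4, n_layers)                  # brain alignment, pretrained model
align_res = align_pre - rng.uniform(0.02, 0.08, n_layers)    # after property removal

# Correlate the layer-wise drop in decoding with the drop in brain alignment.
r = pearson(decode_pre - decode_res, align_pre - align_res)
```

A high `r` would indicate that layers losing more of the linguistic property also lose more brain alignment.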
**3. Removing any linguistic property results in a correlation drop from 0.1 to about 0.75. However, even this correlation is at about 0.75 when one removes all linguistic properties. Do you have an explanation why this might be?**
* Fig 3a in the paper reports the average Pearson correlation across all layers of pretrained BERT and all voxels (i.e. whole brain analysis).
* It is possible that we don't see much difference on average because different linguistic properties may be related to different parts of the network and different brain regions.
* To investigate this possibility, we created a new brain plot (*Fig 2 in the rebuttal PDF*) by measuring the correlation across layers between voxelwise brain alignments before and after removing all linguistic properties. This shows that removing all properties shows very different region-level brain maps compared to removal of individual linguistic properties (Fig 4 in paper), and not much substantial brain alignment is left in the key language regions after removing all linguistic properties.
* Please check the *rebuttal PDF* at “Common responses”.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the thoughtful response. I especially appreciate the added experiment and the explanation behind the motivation of section 5.3. To me, the intended audience of the paper and the implications of the findings are still not quite addressed -- can the insights inform AI engineering, computational modeling in neuroscience, or is it another method for making models in general more explainable? From the introduction and abstract, the last of the three seems most likely to me but for that it's generally very detached from other model explainability work.
If the authors can provide additional clarification on this point, I'll revise my overall recommendation.
---
Reply to Comment 1.1.1:
Comment: Thank you for stressing this point. We agree that including a more in-depth explanation of the immediate and long-term intended impact of the work will strengthen the paper even more. The insights gained from our work could have implications for AI engineering, neuroscience, and the interpretability of models; some in the short term, others in the long term.
**AI engineering:** Our work most immediately fits in with the neuro-AI research direction that specifically investigates the relationship between representations in the brain and representations learned by powerful neural network models. This direction has gained recent traction especially in the domain of language, thanks to advancements in language models (Schrimpf et al. 2021 PNAS, Goldstein et al. 2022 Nature Neuroscience). Our work most immediately contributes to this line of research by understanding the reasons for the observed similarity in more depth. Specifically, our work can guide linguistic feature selection, can facilitate improved transfer learning and help in development of cognitively plausible AI architectures.
**Computational Modeling in Neuroscience:** Our work enables cognitive neuroscientists to have more control over using language models as model organisms of language processing.
**Model Explainability:** In the longer-term, we hope that our approach can contribute to another line of work that uses brain signals to interpret the information contained by neural network models (Toneva and Wehbe, 2019 NeurIPS; Aw & Toneva, 2023 ICLR). We believe that the addition of linguistic features by our approach can further increase the model interpretability enabled by this line of work.
*We will add this discussion to the revised manuscript.* | Rebuttal 1:
Rebuttal: *We thank the reviewers for their strongly positive, insightful, and valuable comments and suggestions, which are crucial for further strengthening our manuscript.*
**Why choose BERT over other models? (reviewers YHJW and p9mD)**
* The primary rationale behind our choice of the BERT model is the extensive body of research investigating the extent to which diverse English language structures are embedded within Transformer-based encoders, a field often referred to as "BERTology" [1] (as highlighted by Jawahar et al., 2019, and Mohebbi et al., 2021). We will clarify this.
[1] Rogers A, Kovaleva O, Rumshisky A. A primer in BERTology: What we know about how BERT works. Transactions of the Association for Computational Linguistics. 2021 Jan 1;8:842-66.
**Results for a GPT2 model (reviewer YHJW)**
* Based on the reviewers’ suggestion, we now perform experiments with a new model - GPT2. We find the results to be similar to BERT (i.e. a rich hierarchy of linguistic signals: initial to middle layers encode surface information, middle layers encode syntax, middle to top layers encode semantics.)
* Table 1 in rebuttal PDF reports the result for each probing task, before and after removal of the linguistic property from pretrained GPT2. We verify that the removal of each linguistic property from GPT2 leads to reduced task performance across all layers, as expected.
* We also report the layer-wise performance for pretrained GPT2 before and after the removal of one representative linguistic property–TopConstituents in Fig 1 of rebuttal PDF. We observe that the brain alignment is reduced significantly across all layers after the removal of the linguistic property (indicated with red cross in the figure). We are working on completing the results for other linguistic properties, but don’t expect them to differ too much from those based on BERT, given that the linguistic properties map onto similar layers (Table 1 in rebuttal PDF).
**Evidence that representations are not unexpectedly compromised by the removal method: comparison of brain encoding results using BERT with random parameters. (reviewer 9Zaj)**
* Based on the reviewers’ suggestion, we now perform brain encoding experiments with random weights of BERT.
* Fig 3 (in the rebuttal PDF) shows that brain predictivity using BERT with random weights is significantly worse than that of any model "after" removal of a linguistic property from pretrained BERT. This shows that the residual representations (i.e., pretrained BERT with a linguistic property removed) are informative and meaningful compared to BERT with random weights.
Pdf: /pdf/0a3762f476e80d77ef0a5b249c9548ab6977da58.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Masked Image Residual Learning for Scaling Deeper Vision Transformers | Accept (poster) | Summary: This paper finds that the deep layers of ViT fail to benefit from MIM pre-training. The authors replace deeper layers of MAE pre-trained ViTs with random initialization and demonstrate that this modified model achieves better performances than the original MAE pre-trained model. To this end, the authors conclude that MIM pre-training is only effective for shallow layers. To address this, Masked Image Residual Learning (MIRL) is proposed. Experiments under various benchmarks demonstrate the efficacy of the proposed MIRL method.
Strengths: 1. This paper is well-written and easy to follow.
2. The authors have systematically analyzed whether MIM pre-training is effective for deeper layers of ViTs.
3. The proposed method and the motivation are highly connected.
4. Experiments are sufficient and the code is provided in the supplementary material.
Weaknesses: **Major Concerns:**
1. **Unfair comparisons against other methods.** I have carefully checked the fine-tuning configuration of both MIRL (Table 2 in the supplementary material) and MAE [21], and find that *MIRL incorporates an extra exponential moving average (EMA)* while others do not, which appears to be an unfair comparison. It is better to fine-tune all models pre-trained by MIRL *without* EMA. Also, several hyper-parameters have been carefully tuned compared with MAE [21], such as base learning rate, batch size, and warmup epochs. Therefore, it is better to compare with MAE which is fine-tuned under the exact same configuration.
2. **Lack of intuitive explanation of residual learning.** I recognize the efficacy of learning residual in MIM, but the underlying motivation remains unclear. In other words, given the observation that deep layers of ViT fail to benefit from MIM pre-training, why learning residual can alleviate this problem?
3. **Observation I and II cannot be reflected in Figure 2 by using a single plot named "truncated MAE".** If my understanding is correct, models of these two observations are pre-trained in different ways. Specifically, the model of Observation I should be pre-trained by vanilla MAE while the model of Observation II should be pre-trained by truncated MAE. Therefore, these two observations cannot be reflected in Figure 2. It is better to plot three curves in Figure 2, including vanilla MAE, vanilla MAE with truncated fine-tuning (Observation I), and truncated MAE (Observation II).
**Minor Questions/Suggestions:**
4. Typo in Eq. (6). It should be $||\mathbf{x}^i - \hat{\mathbf{x}}_g^i - \hat{\xi}_g^i||_2^2$.
5. It is better to combine Table 1 in the main submission with Table 4 in the supplementary material to better compare the configuration (including #params and FLOPs) of different backbones.
6. It is encouraged to plot vanilla MAE with ViT-B and ViT-L in Figure 6 to demonstrate the scaling behavior of the proposed method.
7. It is better to compare deep ViTs pre-trained with vanilla MAE.
8. The configuration of Table 2a is not clear. Is residual learning performed on both pixel and feature?
9. The setting of Table 2d is confusing. Its caption says "multi-encoder" while in lines 203-212 is "multi-decoder".
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: I do not have any questions. Please refer to the weaknesses section. I am willing to raise my rating if the authors succeed in demonstrating that their improvements come from the proposed method and not from carefully tuned hyperparameters during fine-tuning.
**Post Rebuttal Comments**
Thanks for the rebuttal and additional comments. My concerns are well addressed. Since using EMA in fine-tuning will not affect the performance significantly, the reviewer thinks it might be a fair comparison. Also, the explanation of why residual learning works is intuitive. Therefore, I will raise my rating to 6 and the authors are supposed to revise their paper according to our dialog, including (1) clarification about the EMA, (2) an intuitive explanation about why it works, and (3) comparison with CVPR'23 works.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments. We answer the questions as follows.
**Q1. Unfair comparisons against other methods. MIRL incorporates an extra exponential moving average (EMA) while others do not, which appears to be an unfair comparison ... .**
**A1.** We address this concern from three aspects:
1. In our experiments, EMA does not significantly impact the fine-tuning accuracy on ImageNet. When fine-tuning ViTs without EMA, using the same hyperparameters as in [1], we observed accuracies similar to those obtained with EMA. For instance, when pre-trained for 300/800 epochs, ViT-S-54 achieved accuracies of 84.41%/84.79% with EMA and 84.34%/84.86% without EMA. Incorporating EMA requires adaptive adjustments to the learning rate and warm-up epochs to avoid overfitting.
2. The reason we employ EMA is to enhance tuning performance on small datasets consisting of only a few hundred training samples, such as private industrial datasets. When tuning on these limited-scale datasets, we gravitate towards loading weights from the ImageNet fine-tuned model instead of the MIM pre-trained model because the MIM lacks semantic features. In this context, using the EMA-fine-tuned models yields better tuning accuracy on tiny datasets, especially when picking a non-final checkpoint, compared to counterparts without EMA.
3. The reason EMA doesn't significantly impact performance could be attributed to the sufficiently long training period (e.g., spanning 800 pre-training epochs + 200/100 fine-tuning epochs). This allows models with or without EMA to likely converge to the same optimum. Nonetheless, EMA remains crucial for training models from scratch, as indicated in [1] (e.g., ViT-B achieves 82.3% accuracy with EMA and 82.1% without EMA).
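For reference, the EMA of fine-tuned weights amounts to the standard update below (a generic sketch with toy scalar "parameters"; the decay value, step count, and names are illustrative, not our exact configuration):

```python
import copy

def ema_update(ema_params, model_params, decay=0.999):
    """Standard EMA: ema <- decay * ema + (1 - decay) * current weights."""
    for name, value in model_params.items():
        ema_params[name] = decay * ema_params[name] + (1.0 - decay) * value
    return ema_params

model = {"w": 1.0}               # toy stand-in for real parameter tensors
ema = copy.deepcopy(model)
for _ in range(100):
    model["w"] += 0.01           # stand-in for an optimizer step
    ema_update(ema, model, decay=0.99)
# The EMA copy lags the raw weights, smoothing checkpoint-to-checkpoint noise,
# which is what helps when picking a non-final checkpoint for small datasets.
```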
**Q2. Lack of intuitive explanation of residual learning. Why learning residual can alleviate this problem?**
**A2.** We explain the intuition behind our method as follows:
Upon observing that deeper layers pre-trained by MIM underperformed compared to those with random initialization, we inferred that the weight parameters of these deeper layers had indeed been updated during MIM pre-training, but in an unfavorable direction.
In contrast, the shallower layers demonstrated improved performance after MIM pre-training. This led us to speculate that the depth of the layers could be the root cause of the degradation problem. By introducing shortcut connections between the shallower and deeper layers, we allowed the back-propagation to affect both the deeper and shallower layers simultaneously, cooperating to restore the masked content. This MIRL design tightly couples the deeper layers with the shallower ones, implying that MIRL should either guide both the deeper and shallower layers towards a beneficial direction or lead both astray. The results suggest that the shallower layers have, in essence, steered the deeper layers towards a more favorable direction. We will include this explanation in the revised version of the manuscript.
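The coupling can be sketched numerically: the shallow segment's head predicts the main component, the deep segment's head predicts the residual, and the loss penalizes the combined reconstruction. The stand-in prediction "errors" below are synthetic and deterministic, purely to illustrate the objective rather than reproduce our model:

```python
import numpy as np

d = 16
x = np.linspace(-1.0, 1.0, d)            # toy masked patch to reconstruct

# Deterministic stand-ins for the two decoding heads' errors.
shallow_err = 0.3 * np.ones(d)           # shallow head misses by 0.3 per pixel
deep_err = 0.05 * np.ones(d)             # deep head's residual prediction is closer

x_hat = x + shallow_err                  # shallow head: main component (imperfect)
xi_hat = (x - x_hat) + deep_err          # deep head: targets the residual x - x_hat

# Reconstruction losses in the spirit of Eq. (6).
loss_main_only = float(np.sum((x - x_hat) ** 2))
loss_with_residual = float(np.sum((x - x_hat - xi_hat) ** 2))
# Adding the residual prediction shrinks the reconstruction error, so useful
# gradients flow into the deep segment as well as the shallow one.
```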
**Q3. Observation I and II cannot be reflected in Figure 2 by using a single plot named "truncated MAE". If my understanding is correct, models of these two observations are pre-trained in different ways. Specifically, the model of Observation I should be pre-trained by vanilla MAE while the model of Observation II should be pre-trained by truncated MAE. Therefore, these two observations cannot be reflected in Figure 2. It is better to plot three curves in Figure 2 ... .**
**A3.** We apologize for the confusion caused. In Figure 2, we included the results for vanilla MAE, vanilla MAE with truncated fine-tuning, and truncated MAE. Within the curve for vanilla MAE in Figure 2, the initial point indicates the result of an MAE model that has been pre-trained and subsequently fine-tuned without any random initialization; this point aligns with the fine-tuning result of vanilla MAE. Meanwhile, the remaining points in the vanilla MAE curve indicate the results of vanilla MAE with truncated fine-tuning. We will provide further clarity on this point in the revised version.
**Q4. Typo in Eq. (6). It should be $\|\mathbf{x}^i - \hat{\mathbf{x}}^i_g - \hat{\xi}_g^i\|_2^2$. The setting of Table 2d is confusing. Its caption says "multi-encoder" while in lines 203-212 is "multi-decoder"**
**A4.** We appreciate the reviewer for pointing out the typos. We correct them accordingly. Specifically, in the caption of Table 2d, "multi-encoders" will be corrected to "multi-decoders".
**Q5. It is better to combine Table 1 in the main submission with Table 4 in the supplementary material to better compare the configuration ... .**
**A5.** Thanks for the suggestion from the reviewer. We will combine Table 1 from the main submission with Table 4 in the supplementary material in the revised version.
**Q6. It is encouraged to plot vanilla MAE with ViT-B and ViT-L in Figure 6 to demonstrate the scaling behavior of the proposed method.**
**A6.** We agree with the reviewer's suggestion that including curves for MAE models can highlight the scaling capabilities of our method. In the revised version, we will add the curves of vanilla MAE with ViT-B and ViT-L in Figure 6.
**Q7. It is better to compare deep ViTs pre-trained with vanilla MAE.**
**A7.** Reviewer Pj1d raised a similar concern. Kindly refer to our response to Reviewer Pj1d's Question 5, and the table with results, for clarification. Across various model scales, MIRL consistently outperforms MAE.
**Q8. The configuration of Table 2a is not clear. Is residual learning performed on both pixel and feature?**
**A8.** The feature-level loss is only calculated at the end of the encoder, which means that "residual learning" is only performed at the pixel level, as mentioned in Appendix B.1 of the supplementary material.
[1] He, Kaiming, et al. "Masked autoencoders are scalable vision learners." Proc. of the IEEE/CVF CVPR, 2022.
---
Rebuttal 2:
Title: Post Rebuttal Comments from Reviewer DfMD
Comment: I appreciate the rebuttal and most of my concerns are addressed.
However, I found that the authors did not compare the proposed method with some CVPR'23 works, e.g., [A] and [B]. Moreover, [B] seems to be very similar to the proposed method. The main difference lies in the reconstruction targets: the proposed method reconstructs residuals while [B] reconstructs raw RGB pixels. It is suggested to compare against some new state-of-the-art methods.
[A] Haochen Wang et al. Hard patches mining for masked image modeling. In CVPR 2023.
[B] Haoqing Wang et al. Masked Image Modeling with Local Multi-Scale Reconstruction. In CVPR 2023.
---
Rebuttal Comment 2.1:
Comment: Thanks for the reviewer's response.
We are truly pleased to hear that our rebuttal has helpfully addressed some concerns of the reviewer.
We noticed the two CVPR'23 papers[1,2] mentioned by the reviewer. Given that the CVPR 2023 conference took place after our initial submission, we did not have a chance to cite them earlier. However, we are happy to cite these papers in our revised version.
Having read paper [2], it is evident to us that LocalMIM [2] is fundamentally different from our own work.
We would like to share a couple of key differences:
1. Our motivation is distinct from LocalMIM's. We initiated our research by uncovering a negative optimization issue in the deeper layers of networks during MIM pre-training. To the best of our knowledge, our work is the first to expose this problem, underscoring the novelty of our paper — a sentiment echoed by Reviewer 6uey.
Following this, our primary focus has been on tackling this degradation challenge.
Conversely, LocalMIM is designed for local multi-scale reconstruction, where the lower and upper layers reconstruct fine-scale and coarse-scale supervision respectively, in order to accelerate training convergence. It is crafted with hierarchical architectures like Swin [3] in mind, which inherently contain less low-level information in the higher layers. However, the effectiveness of LocalMIM on plain Transformer architectures appears to be limited. In essence, the issue that LocalMIM tackles may not even be present in a plain Transformer architecture, as ViTs contain no downsampling operations, ensuring that low-level information is preserved in the deeper layers.
To shed light on this, our ablation study titled "MIRL vs. simple multi-decoders" (see Lines 203-212) provides an insightful perspective. The ablation results summarized in the table below indicate that LocalMIM (represented as multi-decoders) does not improve accuracy when used with a plain encoder architecture.
| model | multi-decoders | vanilla MAE | MIRL |
| -------- | :-------: | :-------: | :-------: |
| ViT-B-48 | 84.5 | 84.5 | 85.3 |
Additionally, based on the results provided in [2], which are listed in the table below, LocalMIM significantly improves the Swin encoder, but yields only a 0.1% accuracy increase for the ViT encoder when compared to the baseline MAE.
| method | ViT-B | Swin-B |
| -------- | :-------: | :-------: |
| MAE / GreenMIM [4] | 82.9 | 83.2 |
| LocalMIM-Pixels [2] | 83.0 | 83.7 |
| LocalMIM-HOG [2] | 83.3 | 83.8 |
The core feature of our framework, distinct from LocalMIM, is the image residual learning we propose. This methodology alleviates the degradation problem. The idea is straightforward yet has proven to be a highly practical solution to the degradation problem without bells and whistles.
2. The other distinction is that we focus on scaling deeper Vision Transformers. By leveraging MIRL, we unleashed the potential of slender ViT variants after addressing the degradation problem. As highlighted by Reviewer igwC, this is an under-explored area of research, one that LocalMIM [2] has not ventured into.
[1] Wang, Haochen, et al. "Hard patches mining for masked image modeling." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[2] Wang, Haoqing, et al. "Masked Image Modeling with Local Multi-Scale Reconstruction." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[3] Liu, Ze, et al. "Swin transformer: Hierarchical vision transformer using shifted windows." Proceedings of the IEEE/CVF international conference on computer vision. 2021.
[4] Huang, Lang, et al. "Green hierarchical vision transformer for masked image modeling." Advances in Neural Information Processing Systems 35 (2022): 19997-20010. | Summary: In this paper, the authors propose a new mask image modeling method, which is helpful for deep ViT model pretraining.
Strengths: Please refer to Questions
Weaknesses: Please refer to Questions
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: ### strength
1. The paper is well-written and easy to follow
2. The proposed method starts from an interesting observation and is novel.
3. The experiment results seem good.
### Weakness
1. The experiment lacks some necessary results. For example, ViT-L is deeper than ViT-B and has 24 layers by default, so it may benefit from the proposed method. This would be a fair comparison and would further prove the effectiveness of the proposed method.
2. Following 1, there is no baseline for the deeper models except Table 2(b). So when comparing with MAE, the gain of the proposed method is not clear.
3. I'm confused about the design of ViT-S-54: why use 54 layers? If it used 48 layers, its params and FLOPs would be more similar to ViT-B's and the comparison would be fairer.
4. I also have a question about the method design: why are the segments divided into two groups to predict content and residual? Take Fig. 4 as an example: denote the input as x; if H1 predicts x0, H2 predicts x1 = x - x0, H3 further predicts x2 = x - x0 - x1, and so on. Such a sequential design seems more consistent with the model architecture; will it perform better?
5. A question about a detail: MAE uses the pixel-normed patch as the prediction target, which is critical for its performance. Is this also used in the proposed method and the reported MAE baselines (300/800)?
6. Some baseline methods are missing. For example, MaskFeat [1] for ViT-B; Data2Vec [2], PeCo [3], BootMAE [4], and CAE [5] for ViT-B/L.
7. A small typo in Table 2(d): should it be multi-decoders?
[1] Masked Feature Prediction for Self-Supervised Visual Pre-Training
[2] data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language
[3] PeCo: Perceptual Codebook for BERT Pre-training of Vision Transformers
[4] Bootstrapped Masked Autoencoders for Vision BERT Pretraining
[5] Context Autoencoder for Self-Supervised Representation Learning
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to Questions
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for finding our observations interesting and method novel. In the following, we address the concerns of the reviewer.
**Q1. The experiment lacks some necessary results. For example, ViT-L is deeper than ViT-B and has 24 layers by default, it may benefit from the proposed method. This would be a fair comparison and further prove the effectiveness of the proposed method.**
**A1.**
As stated in Lines 153-157 from the paper, training ViT-L is very unstable due to the adoption of a large hidden dimension size.
While performing MAE pre-training with ViT-L, we encountered "NaN" loss values. Each time a NaN occurred, we had to restart training from the first epoch, as resuming from checkpoints could not solve the problem. This instability has also been reported in [1]. These training failures wasted substantial computational resources: because of the instability, the actual training cost was 2 to 3 times the ideal cost, as we only completed ViT-L pre-training with MAE after three failed attempts.
Nonetheless, we assure the reviewer that the experimental setup, theoretical derivations, and conclusions presented in this paper are rigorous and reproducible. We believe that the performance trends observed for ViT-B and ViT-S extend to ViT-L despite the encountered instability.
**Q2. Following 1, there is no baseline for the deeper models except Table.2(b). So when comparing with MAE, the gain of the proposed method is not clear.**
**A2.** The baselines for deeper models can be found in our response to Reviewer Pj1d's Question 5, including a table with results. According to these results, across various model scales, MIRL consistently outperforms MAE.
**Q3. I'm confused about the design of ViT-S-54, why use 54 layers? if it uses 48 layers, its params and FLOPs will be more similar to ViT-B and the comparison would be more fair.**
**A3.** The 54-layer ViT model features a slender structure, generally deemed challenging to train. Experimenting with such a slender model further demonstrates our approach's ability to facilitate the training of deeper ViT models, aligning with the paper's theme of scaling deeper vision transformers.
We have presented a fair comparison between MAE and MIRL in Table 3 from the paper, where both employ the same encoder, specifically ViT-B.
**Q4. I also have a question about the method design, why the segments are divided into two groups to predict content and residual? Take Fig4 as an example, denote input as x, if H1 predicts x0, H2 predicts x1 = x-x0, H3 further predicts x2 = x-x0-x1 and so on, such a sequential design seem more consistent with the model architecture, will it perform better?**
**A4.** For insights into the intuition behind our method's design, kindly refer to our response to Reviewer DfMD's Question 2.
The reviewer's suggestion seems to parallel a design concept where one learns the residual of another residual. Although we haven't experimented with this specific design, we harbour reservations about such a "residual-of-the-residual learning" scenario. The residual of the residual may approach zero, potentially stalling weight updates.
**Q5. A question about detail, MAE uses the pixel-normed patch as the prediction target, which is critical for its performance. Is this also used in the proposed method and the reported MAE baselines(300/800)?**
**A5.** Yes. Our models as well as the reported MAE baselines used the normalized pixel values.
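As a small sketch of the normalized-pixel target (per-patch normalization as described in the MAE paper; the patch size and epsilon below are illustrative placeholders):

```python
import numpy as np

def normalized_pixel_target(patch, eps=1e-6):
    """MAE-style prediction target: normalize each patch's pixels
    by that patch's own mean and standard deviation."""
    mu = patch.mean()
    sigma = patch.std()
    return (patch - mu) / (sigma + eps)

rng = np.random.default_rng(0)
patch = rng.uniform(0.0, 255.0, size=(16, 16, 3))   # one toy image patch
target = normalized_pixel_target(patch)
# Each patch's target has (approximately) zero mean and unit variance,
# so the loss focuses on local structure rather than absolute intensity.
```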
**Q6. some baseline methods are missed. For example, MaskFeat for ViT-B, Data2Vec, PeCO, BootMAE and CAE for ViT-B/L.**
**A6.** The result for MaskFeat[2] is included in Table 3, from page 7 of the paper. However, due to space constraints in the submission, the results of the other mentioned methods are not presented in Table 3. We will include comparisons with these methods in the revised version.
**Q7. A small typo in Table.2(d), should it be multi-decoders?**
**A7.** Thanks for pointing out the typo. Yes, it should be multi-decoders.
[1] He, Kaiming, et al. "Masked autoencoders are scalable vision learners." Proc. of the IEEE/CVF CVPR, 2022.
[2] Wei, Chen, et al. "Masked feature prediction for self-supervised visual pre-training." Proc. of the IEEE/CVF CVPR, 2022.
---
Rebuttal Comment 1.1:
Comment: The rebuttal partly resolved my concerns, so I keep my score as Borderline accept.
---
Reply to Comment 1.1.1:
Comment: Thanks for the response. We are glad to have resolved some of the concerns raised. | Summary: This paper delves into the degradation issue encountered in the deeper layers of Vision Transformer (ViT) and proposes a self-supervised learning framework called Masked Image Residual Learning (MIRL). MIRL reformulates the learning objective to recover the residual of the masked image and makes scaling ViT along depth a promising direction for enhancing performance. Experimental results demonstrate promising improvements on various downstream tasks with increased depth, highlighting the effectiveness of the MIRL method.
Strengths: 1.Tackling the degradation issue in training deeper Vision Transformers (ViT) in the context of MIM pre-training is a practical problem and is also an under-explored area of research.
2.This paper presents several interesting observations and shows that MIM introduces negative optimization in the deeper layers of ViT, which hinders further pre-training performance. The proposed method is simple and effective for solving this problem.
3.The authors conduct extensive and comprehensive experiments that show the effectiveness and efficiency of the proposed method.
Weaknesses: 1.While the proposed method has shown promising results, it still lacks an in-depth explanation, e.g., choosing the training objective for encouraging deep layers to recover the image residual. The rationale behind dividing different segments and providing the proposed reconstruction terms could benefit from offering additional theoretical justifications, such as utilizing tools like Mutual Information.
2.It would be beneficial to provide further clarity on the mechanism to ensure precise reconstruction of the main component and residual of the image in the shallower and deeper parts, respectively. Given that they share the same training objective, disentangling them poses a nontrivial challenge.
3.An ablation study could be added to investigate whether assigning different regularization weights to different segment loss parts is beneficial, and whether the MIRL approach incurs additional training time or other costs.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Training the vision-Transformers with varying scales often requires distinct hyper-parameters. For instance, the optimal hyper-parameters for training ViT-Large and ViT-Small typically differ. It would be interesting to investigate if the proposed MIRL approach can alleviate this phenomenon when training vision Transformers with varying scales.
Overall, this paper addresses an important problem and provides some insights through observations and the proposed solutions. While it leans more towards empirical findings, it would be beneficial to provide additional theoretical evidence to support the claims.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See the above comments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their appreciation of our work and for considering the problem addressed in our paper an important, under-explored area of research. In the following we address the concerns of the reviewer.
**Q1. While the proposed method has shown promising results, it still lacks an in-depth explanation, e.g., choosing the training objective for encouraging deep layers to recover the image residual. The rationale behind dividing different segments and providing the proposed reconstruction terms could benefit from offering additional theoretical justifications, such as utilizing tools like Mutual Information.** ...
**Overall, this paper addresses an important problem and provides some insights through observations and the proposed solutions. While it leans more towards empirical findings, it would be beneficial to provide additional theoretical evidence to support the claims.**
**A1.** Indeed, as the reviewer mentioned and as we ourselves noted in Section 4.6, an in-depth theoretical explanation for our method remains elusive.
An intuitive explanation for our method can be found in our response to Reviewer DfMD's Question 2.
Additionally, we have offered some insights in Lines 259-264 from Section 4.6, entitled "Limitation and discussion," from the paper, and have included a gradient analysis in Appendix C from the supplementary materials associated with the paper. This analysis illustrates that our method could stabilize gradient flows and enhance the learning dynamics for deeper layers in ViTs.
We plan to provide a thorough theoretical justification in a subsequent work.
We appreciate the suggestions of the reviewer. We will consider analyzing the mutual information to provide more insights into MIRL.
**Q2. It would be beneficial to provide further clarity on the mechanism to ensure precise reconstruction of the main component and residual of the image in the shallower and deeper parts, respectively. Given that they share the same training objective, disentangling them poses a nontrivial challenge.**
**A2.** In Line 213, the provided ablation study compares MIRL with the "coarse and fine" separation, examining whether a precise reconstruction of both the main component and the image residual can enhance performance. The "coarse and fine" separation resembles the main component and residual separation but with two distinct training objectives. Such a precise reconstruction is not as effective as our default design.
In Appendix B.2 of the supplementary material submitted with the paper, we also investigated whether explicitly imposing a precise reconstruction can be beneficial by modifying the loss definition as follows:
$ \mathcal{L}_g^\dag = \frac{1}{|\mathcal{M}| }\sum _{i\in \mathcal{M}} \frac{1}{2P^2C} \big(\| \xi_g^i \|^2_2 + \| \xi_g^i - \hat{\xi}_g^i \|^2_2 \big). $
This design also falls short compared to our default design. When scaling the term $\| \xi_g^i \|^2_2$ by a small value such as 0.1, we achieve the same outcome as the default design.
**Q3. An ablation study could be added to investigate whether assigning different regularization weights to different segment loss parts is beneficial, whether the MIRL approach incurs additional training time and other costs.**
**A3.** Following the suggestion of the reviewer, we introduce weight coefficient $\lambda_g$ into our loss definition for each segment, shown in the equation below,
$ \mathcal{L}_g = \frac{1}{|\mathcal{M}| } \lambda_g \sum _{i\in \mathcal{M}} \frac{1}{P^2C} \| \xi_g^i - \hat{\xi}_g^i \|^2_2 .$
We will add an ablation study for $\lambda_g$ in the revised version.
MIRL does not lead to an increase in training time. Instead, throughout the paper, we emphasize the high efficiency of our method. Not only does it deliver superior results, but it also requires shorter training times than the baselines. For detailed information on training time, please refer to our response to Reviewer Pj1d's Question 2, where we also provide a table with results.
**Q4. Training the vision-Transformers with varying scales often requires distinct hyper-parameters. For instance, the optimal hyper-parameters for training ViT-Large and ViT-Small typically differ. It would be interesting to investigate if the proposed MIRL approach can alleviate this phenomenon when training vision Transformers with varying scales.**
**A4.** We appreciate the insightful suggestions. Across all ViT variants, we've maintained consistent hyper-parameter settings, which encompass learning rate, batch size, weight decay, among others, during the pre-training phase. Tweaking these hyper-parameters has a minimal impact on the performance, according to our experiments.
During the fine-tuning phase, we found our models to be insensitive to the change of droppath configuration, as mentioned in Appendix A.1 from the supplemental material submitted with the paper.
However, as we have found empirically, using a larger layer-wise learning rate decay for deeper ViTs is currently unavoidable.
We will incorporate more details in the supplementary material of the revised version.
---
Rebuttal Comment 1.1:
Comment: I appreciate the author's efforts in providing responses, which have addressed most of my concerns. Overall, I find this work to be of good quality and interesting, and I am willing to increase my rating.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: We appreciate the recognition of our work. Thanks again for the constructive feedback. | Summary: This paper clarifies that MIM pretraining can induce negative optimization in deeper layers of ViT through comparison between vanilla MAE and truncated MAE. Based on this observation, this paper proposes a MAE-based framework named Masked Image Residual Learning (MIRL) for alleviating the degradation problem and training deeper ViT. In MIRL, The objective of MIM is decoupled to recovering the main component of images from the features of shallower layers, and recovering the residual from the features of deeper layers. Ablation experiments shows that MIRL outperforms MAE and MIM baselines with multiple decoders, and help deeper ViTs (e.g., ViT-S-54 and ViT-B-48) achieve better performance compared to previous methods on ImageNet.
Strengths: 1. In this paper, the degradation problem in the MIM pretraining of ViT is illustrated by well-designed experiments.
2. Training deeper ViT with MIM has not been widely studied in previous works.
3. The idea of recovering the residual of images is interesting, and ablation experiments show the effectiveness of MIRL.
Weaknesses: 1. Although this paper has cited some previous works that provide supervision to different layers of ViT (like deepMIM[1]), there is no comparison between MIRL and these previous methods in this paper.
2. Using multiple decoders in MIRL will significantly increase training time compared with MAE. A comparison of training time between MIRL and MAE should be included.
3. On the dense prediction tasks, MIRL does not show significant improvement (even weaker performance) compared to MAE with a similar number of parameters.
4. This paper says that MIRL alleviates the degradation problem in deeper layers of ViT. However, this paper does not provide evidence on whether the phenomenon in Observation II still exists in the proposed method.
5. The baseline results of pretraining deeper ViTs (like ViT-S-54 and ViT-B-48) with MAE are not provided in this paper.
[1] Sucheng Ren, Fangyun Wei, Samuel Albanie, Zheng Zhang, and Han Hu. Deepmim: Deep supervision for masked image modeling. arXiv preprint arXiv:2303.08817, 2023.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The results in Table 2(d) show the comparison between MIRL and MIM baselines with multiple decoders. Did the baseline method here use DID? If not, can "residual learning + DID" significantly outperform "multi-decoders + DID"?
2. A comparison of training time between MIRL and MAE should be included.
Other questions have been mentioned in weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed the limitations of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for considering our idea interesting. We address the reviewer's concerns as follows.
**Q1. Although this paper has cited some previous works that provide supervision to different layers of ViT (like deepMIM[1]), there is no comparison between MIRL and these previous methods in this paper.**
**A1.** Given that DeepMIM [1] utilizes pre-trained MAE models, drawing a direct comparison with our approach may not be meaningful. We were unable to utilize the code available on DeepMIM's GitHub page. However, the DeepMIM-MAE model bears resemblance to the simple "multi-decoder model" discussed in our ablation study, the results of which are presented in Table 2(d) on page 6 of the paper. In Lines 203-212 of the paper, we discuss the results and argue that multi-decoders are not as effective as MIRL.
**Q2. Using multiple decoders in MIRL will significantly increase training time compared with MAE. A comparison of training time between MIRL and MAE should be included.**
**A2.** In MIRL, each decoder is quite shallow, ensuring that there isn't a significant increase in the training time. We assessed the training time per epoch on a single V100 GPU, and the total pre-training (pt) time was determined by multiplying the epoch time by the number of training epochs. As illustrated in the table below, when achieving comparable fine-tuning (ft) accuracy, MIRL requires roughly 6 times less training time than MAE. This highlights the better efficiency of the proposed methodology.
| method | backbone | min/ep | #pt epochs | total pt time (hours) | ft acc. (%) |
| -------- | :-------: | :-------: | :-------: | :-------: | :-------: |
| MAE | ViT-B | 64 | 1600 | 1706 | 83.6 |
| MIRL | ViT-B | 57 | 300 | 285 | 83.5 |
**Q3. On the dense prediction tasks, MIRL does not show significant improvement (even weaker performance) compared to MAE with a similar number of parameters.**
**A3.** For the semantic segmentation task on ADE20K, MIRL yields an mIoU that is 0.4 lower than MAE's. However, our pre-training schedule is only half the duration of MAE's (i.e., 800 epochs vs. 1600 epochs). Given these closely matched results, the greater efficiency of MIRL deserves emphasis.
Moreover, when pre-training for a longer schedule, we expect our model to deliver even better performance, as analyzed in Figure 6 from the paper. Additionally, even with fewer pre-training epochs, our method outperforms other approaches in both image classification and object detection tasks.
**Q4. This paper says that MIRL alleviates the degradation problem in deeper layers of ViT. However, This paper does not provide evidence on whether the phenomenon in observation II still exists in the proposed method.**
**A4.** To address this concern, we devise an additional model named "truncated MIRL". The concept is akin to the truncated MAE depicted in Figure 1(b) from the paper. It involves pre-training the early encoding blocks using MIRL, while the subsequent blocks are randomly initialized. As detailed in the table below, MIRL outperforms truncated MIRL (in which 3 blocks are not involved in pre-training, and the 5th block solely focuses on recovering the masked content) by 0.3%. This demonstrates that MIRL effectively pre-trains the deeper layers, outperforming random initialization. This also suggests that the phenomenon observed in Observation II does not exist in our method. Due to the space limitation for the paper, an additional diagram to depict truncated MIRL will be provided in the supplementary material of the revised version.
| encoder | method | ft acc. (%) |
| -------- |-------- | :-------: |
| ViT-S | MIRL | 82.3 |
| ViT-S | truncated MIRL | 82.0 |
| ViT-S | MAE | 81.0 |
| ViT-S | truncated MAE | 81.7 |
**Q5. The baseline results of pretraining deeper ViTs (like ViT-S-54 and ViT-B-48) with MAE are not provided in this paper.**
**A5.** We have performed the relevant experiments and in the table below we provide comparisons between MAE and MIRL when using deeper ViTs.
| encoder | method | pre-training epochs| ft acc. (%) |
| -------- |-------- | :-------: | :-------: |
| ViT-S-54 | MAE | 300 | 82.7 |
| ViT-S-54 | MIRL | 300 | 84.4 |
| ViT-B-48 | MAE | 300 | 84.5 |
| ViT-B-48 | MIRL | 300 | 85.3 |
By tackling the degradation problem, MIRL consistently outperforms MAE and offers superior model scaling capabilities.
**Q6. The results in Table 2(d) show the comparison between MIRL and MIM baselines with multiple decoders. Did the baseline method here use DID? If not, can "residual learning + DID" significantly outperform "multi-decoders + DID"?**
**A6.** To address this concern, we decompose our method as MIRL = multi-decoders + DID + "residual learning".
After accounting for each of these components, as well as their various combinations, we provide the following results:
| model | multi-decoders | DID | residual learning |Top-1 (%) |
| -------- | :-------: | :-------: | :-------: | :-------: |
| ViT-B-48 | ✗ | ✗ | ✗ | 84.3 |
| ViT-B-48 | ✔ | ✗ | ✗ | 84.5 |
| ViT-B-48 | ✔ | ✔ | ✗ | 84.5 |
| ViT-B-48 | ✔ | ✗ | ✔ | 85.0 |
| ViT-B-48 | ✔ | ✔ | ✔ | 85.3 |
We clarify that in Table 2(d) from the paper, the multi-decoder implementation utilizes DID. However, as indicated in rows #2 and #3 of the above table, DID does not enhance the model's performance when "residual learning" is excluded. As we mentioned in Lines 116-120 from the paper, "residual learning" represents the core of the proposed framework. Still, DID boosts the "multi-decoders" accuracy by ~0.1% when the encoder is ViT-B-24.
[1] Ren, Sucheng, et al. "DeepMIM: Deep Supervision for Masked Image Modeling." arXiv preprint arXiv:2303.08817 (2023).
---
Rebuttal Comment 1.1:
Comment: Thanks for authors’ effort in the rebuttal. The responses have addressed most of my concerns. I will keep my rate and recommend accepting this paper.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: We are glad to have resolved some of the concerns raised. Thanks for the recognition of our work. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: * This paper proposes a Masked Image Modeling pre-training framework, which boosts the pre-training performance in deeper layers of the ViT architecture.
* Deeper vision transformers are hard to train, so the authors introduce a new pre-training objective for MIM, which can alleviate the degradation problem in deeper ViTs.
* MIRL segments the encoding blocks into several segments (e.g., 4) and uses different decoders to reconstruct the targets.
* Various experiments are conducted; with less pre-training time, MIRL can achieve competitive performance compared to other MIM approaches, especially for deeper ViTs.
Strengths: * Well motivated and well written; training and pre-training deeper ViT architectures is important. Experiments show that pre-training deeper ViTs can achieve results similar to ViT-Base.
* The downstream task performance (in Table 4) is superior to the MAE framework: +2.5 bbox mAP, +1.8 mask AP.
Weaknesses: * Since MIRL with deeper layers differs from the standard ViT architecture (deeper than the original ViT), the fully supervised performance of models with the same depth should be reported; the comparison in Table 3 is limited.
* I am interested in whether this framework would also work in ConvNet masked image modeling frameworks (such as SimMIM, SparK)?
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: please refer to the weakness
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Question 1 (Q1). Since MIRL with deeper layers differs from the standard ViT architecture (deeper than the original ViT), the fully supervised performance of models with the same depth should be reported; the comparison in Table 3 is limited.**
**Answer 1 (A1).** We provide an additional comparison between ViT-B and ViT-S-54, shown in the table below, illustrating that a deeper ViT does not exceed a standard ViT when neither undergoes masked unsupervised pre-training. We opted not to present these results in our original submission because we wanted readers to focus on the degradation problem that occurs in the masked pre-training process.
We use the same hyperparameters as provided in [1].
| Encoder | Method | ImageNet (%) |
| -------- | -------- | :-------: |
| ViT-B | supervised | 82.3|
| ViT-S-54 | supervised | 82.2|
**Q2. Would this framework also work in ConvNet masked image modeling frameworks (such as SimMIM, SparK)?**
**A2.** Since our code was originally implemented for the ViT architecture, we have not tested our MIRL framework with pure ConvNets. However, we experimented with the hybrid architecture ConvViT [2, 3], which consists of both convolutional layers and transformer layers. Using ConvViT-B as the encoder with a 300-epoch pre-training scheme, MIRL achieves 85.1% fine-tuning accuracy, higher than the baseline (84.3%).
[1] He, Kaiming, et al. "Masked autoencoders are scalable vision learners." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
[2] Gao, Peng, et al. "Convmae: Masked convolution meets masked autoencoders." arXiv preprint arXiv:2205.03892 (2022).
[3] Dosovitskiy, Alexey, et al. "An image is worth 16x16 words: Transformers for image recognition at scale." arXiv preprint arXiv:2010.11929 (2020).
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your detailed explanation of my concerns. I keep my initial score.
---
Reply to Comment 1.1.1:
Comment: Thanks for the response and recognition of our work. | null | null | null | null | null | null |
Generative Modeling through the Semi-dual Formulation of Unbalanced Optimal Transport | Accept (poster) | Summary: The authors propose to use semi-dual formulation of unbalanced optimal transport for generative modelling. Via the experiments on image datasets, the authors illustrate the advantages of the proposed method, namely outlier robustness, stability and fast convergence, and extensively compare with other methods.
Strengths: The authors have spent significant effort on the experiments, including extensive comparisons with other competing methods and an ablation study.
The literature review is also good. To this extent, I'm convinced that UOTM is indeed an efficient method for generative modelling under the presence of outliers.
Weaknesses: However, this work is a direct implementation/application of [67] with no additional improvement or modification. The training procedure is also standard for GANs. For this reason, I would say the overall contribution is quite incremental, and this puts me on the borderline, torn between the novelty/originality of the work and the effort that the authors spent on the experiments.
- While the authors compare UOTM with various methods, I don't see the comparison with [76]. Is there any reason that [76] is not compared or considered in the experiments?
- Given the fact that unbalancedness implies robustness, it is neither new nor surprising to see UOTM outperform OTM in the presence of noise. It is probably worth discussing the comparison with other unbalanced-based methods (e.g., [8] or [17]) in more depth, rather than with balanced-based methods.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - I'm not very clear about the order of magnitude of $\tau$ in the experiment. Is it $0.5, 1, 2$, or $0.5 \times 10^{-3}, 1 \times 10^{-3}, 2 \times 10^{-3}$? The smaller it is, the more we are forced to respect the marginal constraint, i.e., approaching the balanced OT problem.
- What is the definition of Softplus?
- Regarding the theoretical justification of the outlier robustness of UOT, the authors may consider adding reference [17]. I see that it is already cited, but not for that purpose.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors do not discuss the limitations, but do discuss the potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer for acknowledging that "UOTM is indeed an efficient method for generative modeling under the presence of outliers". Also, we agree with the reviewer that, in terms of method, our model is a direct parametrization of the semi-dual form of UOT [67]. Nevertheless, we would like to emphasize the contribution of our work.
- Our work is the first attempt to introduce the semi-dual form of UOT into the generative modeling task. **The semi-dual form enables 2 network parametrizations of UOTM, while the other UOT-based approaches [1, 2] require 3 networks (Line 188-196).** We believe this factor simplifies the training process, contributing to the achievement of competitive results.
- As the reviewer said, we demonstrated UOTM is an efficient method for generative modeling under the presence of outliers (Sec 5.1).
- Moreover, we showed UOTM can be a competitive model for clean datasets as well. The soft constraint on distribution matching of UOT induces a concern about its performance on clean datasets. **To address this concern, we provided an upper-bound of the divergences between the marginals in UOT (Theorem 3.3). We evaluated UOTM on the image benchmark datasets, such as CIFAR-10 and CelebA-HQ (Sec 5.2).**
- Furthermore, we provided **explanations for the observed experimental results where the UOT-based model (UOTM) outperforms the OT-based model (OTM) in terms of distribution matching, despite the soft constraint of UOT**. These explanations are twofold:
- First, we suggested a theoretical explanation based on a theorem from [19]. The UOT objective minimizes not only the gradient error $\\| \nabla (v^{c}-v^{\star c}) \\|$, but also the function errors $\\| v^{c}-v^{\star c}\\|, \\| v-v^{\star}\\|$ (Line 148-157). In contrast, the OT objective only minimizes the gradient error $\\| \nabla (v^{c}-v^{\star c})\\|$. We hypothesize that this property makes neural network training easier for UOTM, leading to better distribution matching than OTM.
- Second, we provided some intuitive explanations from the engineering point of view in Appendix C.1. Compared to OTM, UOTM enables adaptive updates for each sample, with weaker updates on well-performing data and stronger updates on poor-performing data (Line 649-650).
$ $
------
> **W1.** While the authors compare UOTM with various methods, I don't see the comparison with [76]. Is there any reason that [76] is not compared or considered in the experiments?
**A1.** We conducted experiments of [76] on CIFAR-10. Despite extensive efforts in exploring various hyperparameters, [76] provided non-competitive results (with architecture "Small"), with FID scores around 80. Note that UOTM achieves a FID score of 12.86 on CIFAR-10 with the same Small backbone network.
$ $
------
> **W2.** Given the fact that unbalancedness implies robustness, it's not new nor surprising to see UOTM outperforms OTM under the presence of noise. Probably it's worth discussing more about the comparison with other unbalanced-based methods (e.g. [8] or [17]), rather than balanced-based methods.
**A2.** We appreciate the thoughtful advice. In the Outlier Robustness experiment (Sec 5.1), **our goal was to assess the robustness of the UOT-based model (UOTM) by comparing it with its OT-based counterpart (OTM).** Note that both UOTM and OTM are derived from the semi-dual form of each problem. Moreover, we aimed to provide a **better understanding of how the UOT-based model works** compared to the OT-based model.
Instead, to demonstrate UOTM's superior performance among UOT-based models, **we compared UOTM with another UOT-based model (Robust-OT [8]) on image benchmarks such as CIFAR-10 (Table 1)**.
$ $
------
> **Q1.**
I'm not very clear about the order of magnitude of $\tau$ in the experiment. Is it $0.5, 1, 2$ or $0.5 \times 10^{-3}, 1 \times 10^{-3}, 2 \times 10^{-3}$. The smaller it is, the more we are forced to respect the marginal constraint, i.e. approaching the balanced OT problem.
**A3.** We apologize for the confusion. In the ablation study on $\tau$, $\tau$ is chosen among $\\{0.1 \times 10^{-3}, 0.5 \times 10^{-3}, 1 \times 10^{-3}, 2 \times 10^{-3}, 5 \times 10^{-3}\\}$. For clarity, we revised the description of the experimental setup in the manuscript as follows:
- In Fig 7, our model maintains a decent performance of FID($\leq 5$) for $\tau = \\{0.5, 1, 2\\} \times 10^{-3}$.
$ $
------
> **Q2.**
What is the definition of Softplus?
**A4.** Thank you for the comment. To ensure self-containedness, we added the definition of Softplus in the appendix. The definition of Softplus is as follows:
$$
\operatorname{Softplus}(x) = \log ( 1+ \exp (x) )
$$
$ $
------
> **Q3.**
Regarding the theoretical justification of the outlier robustness of UOT, the authors may consider adding a reference [17]. I see that it is already cited, but not for that purpose.
**A5.** Thank you for the valuable advice. We added a reference [17] to support the outlier robustness of UOT as follows:
- Line 87-88: Second, UOT can address the sensitivity to outliers, which is a major limitation of OT [17].
- Line 212: One of the main features of UOT is its robustness against outliers [17].
$ $
**References**
[8] Yogesh Balaji, Rama Chellappa, and Soheil Feizi. Robust optimal transport with applications in generative modeling and domain adaptation. NeurIPS, 2020.
[19] Thomas Gallouët, Roberta Ghezzi, and François-Xavier Vialard. Regularity theory and geometry of unbalanced optimal transport. arXiv preprint arXiv:2112.11056, 2021.
[76] Karren D. Yang and Caroline Uhler. Scalable unbalanced optimal transport using generative adversarial networks. ICLR, 2019.
---
Rebuttal Comment 1.1:
Comment: We thank the reviewer for the efforts in reviewing our paper. We would appreciate it if the reviewer let us know whether our response was helpful in addressing the reviewer's concerns. If there are additional concerns or questions, please let us know. | Summary: The paper derives a semi-dual formulation for the unbalanced optimal transport problem (using the known dual reformulation) and solves it with neural networks. The most notable contribution of the paper is that this formulation is applied to generative modeling and achieved nearly SOTA results on several standard datasets. It is also claimed that proposed method is more robust, stable and converges faster than existing OT methods.
Despite the inspiring practical performance, it seems to me that the central theoretical result of the paper (which justifies the entire method) is incorrect. Overall, the paper lacks mathematical rigor and most derivations are done rather sparingly, raising the question whether they are actually correct and making it hard to check them. Plus there are some concerns regarding the experimental evaluations which I think should be more explicitly discussed.
Strengths: - Novel extension of dual (semi-dual) optimal transport methods to the unbalanced setup.
- Good image generation results, code is available.
- Faster/more stable convergence than prior OT methods; various studies present on the effects of different parameters (phi, psi, tau, etc.).
Weaknesses: - Incorrect main theoretical results, lack of mathematical rigor in the presentation.
- Insufficient clarity, vague experimental comparisons, limited details of main baselines.
- One may argue that the algorithmic/theoretical part of the paper is incremental, as c-transform based approaches are rather widespread nowadays.
I discuss some weaknesses in detail below.
*Wrong results.* Overall, the paper lacks mathematical rigor. This is a very big issue as it seems to me that informal statements of theorems (such as “under the suitable assumptions”, etc.) have led to incorrect theoretical results included in the paper. To be precise, I think the main theoretical result which forms the basis for the proposed algorithm is incorrect. I mean Theorem 3.3. One of the results claimed there is that $T_{v^*}$ is the optimal map between the rebalanced distributions $\tilde{\mu}$ and $\tilde{\nu}$.
First, I do not understand here why the maximizer $v^*$ of the dual form actually exists (there is “sup” in all the dual forms, not “max”). This is not a trivial fact and should be proved if used. Second, I do believe that it is not true that $T_{v^*}$ is necessarily an optimal map even when $v^*$ exists. Let me try to provide a counterexample. Let $\mu=\delta_0$, $\nu=\delta_1$, let $\phi_1=\phi_2$ for symmetry, and consider the cost $c(x,y)=|x-y|$. Due to symmetry, it is clear that $\tilde{\mu}=m\cdot\delta_0$ and $\tilde{\nu}=m\cdot\delta_1$ for some positive mass $m>0$. Hence the optimal $v$ should be the optimal potential for the balanced transport problem between $\delta_0$ and $\delta_1$ (multiplied by the mass $m$); here I follow eq. (28). The cost here is $1-0=1$ ($\times m$), and I suppose that $v(x)=x$ is an optimal potential: $v^c=-v$ and $v^c(0)+v(1)=0+1=1=$cost ($\times m$). At the same time, consider $T_v(x)=x+10000$. Then $|x-T_v(x)|-T_v(x)=-x=-v(x)=v^c(x)$, i.e., it is a minimizer in eq. (7). Clearly, $T_v$ maps $0$ to $10000$, so it is not an optimal transport map. Thus, I think the main theoretical result provided in the paper is not correct or may require additional assumptions.
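For concreteness, the arithmetic behind this counterexample can be checked numerically. Below is a minimal sketch (my own illustration, not code from the paper); a dense grid over $y$ stands in for the continuous infimum in $v^c(x)=\inf_y [c(x,y)-v(y)]$:

```python
import numpy as np

# Counterexample setup: cost c(x, y) = |x - y|, candidate potential v(y) = y.
# For y >= x the objective |x - y| - v(y) is constant (= -x), so the argmin
# set is the entire ray [x, +inf): it contains both the "true" OT target
# y = x + 1 and the far-away point y = x + 10000.
def objective(x, y):
    return np.abs(x - y) - y  # c(x, y) - v(y)

x = 0.0
ys = np.linspace(-5.0, 20000.0, 2_000_001)
vals = objective(x, ys)

# The infimum equals v^c(x) = -v(x) = -x = 0 here ...
assert np.isclose(vals.min(), -x)
# ... and is attained at y = 1 and y = 10000 alike.
assert np.isclose(objective(x, 1.0), vals.min())
assert np.isclose(objective(x, 10000.0), vals.min())
print("both y = 1 and y = 10000 minimize c(x, y) - v(y) at x = 0")
```

This is exactly the non-uniqueness issue: the minimizing set of $c(x,\cdot)-v(\cdot)$ is not a single point here.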
I tried to check the proof to see where the mistake might be, but encountered some difficulties. The authors rather sparingly use the calculus of variations on a space of functions without any formal definition of what these “derivatives” are or whether they actually exist, and they skip some related questions. I believe this lack of detail (and, to some extent, informality) may lead to even more mistakes or gaps in their results; maybe this part should be checked by an expert in this particular type of mathematical apparatus. Even assuming this part is true, I am sufficiently sure that there is a mistake in lines (29)-(31). I do not understand where $T_{v^*}$ comes from there (this is not explained); the explanations in lines 568-569 seem to be wrong.
Overall, the mistake which I pointed out seems to be very crucial for the paper: it turns out to be simply incorrect that the proposed method learns an OT map between the distributions $\tilde{\mu}$ and $\tilde{\nu}$. Hence, it remains questionable what the method actually learns and whether there are any guarantees.
One additional minor comment here: the function T is introduced in section 3.2 but there is no evidence, explanation or motivation explaining what this function is and how it is related to OT map (the answer only appears in the next section and seems not correct). The clarity of the exposition could be improved.
Taking into account my comment about the incorrectness of the theoretical results, I also do not understand how Theorem 3.4 of [19] explains better stability or performance. To be precise, the theorem is entirely about the dual problem (c-transform-based, function $v$ only), while the authors solve the saddle-point problem with $v$ and $T$. Obtaining a better value for the dual necessarily means a better dual variable. Yet it is not clear to me how this relates to stability, as the actual problem is min-max and involves two functions ($v$ and $T$, and we are finally interested in $T$, yes?). While I think there is some meaningful idea in the discussion in lines 148-157, it still seems a little bit speculative and should be made more convincing.
*Limited details of comparisons.* The section with the comparisons is rather dense but still leaves some essential details to the Appendix or even skips them. First, it was not evident to me how the authors applied their method to generative modeling (which transport cost has been used between the noise and the real data). This (rather important) information seems to appear only in Appendix B.1 and, as far as I understand, the authors used a strategy similar to the OTM* paper by Rout et al. This could have been explained earlier. Second, it remains unexplained how the numbers in Table 1 are obtained (whether the authors reproduced all these methods or just took the metrics from the respective papers). Third, the comparison with OT methods is somewhat vague. For example, the authors write that they compared with the OTM paper by Fan et al. I have checked this paper and did not even find any experiments on image generation. Hence, I wonder how this comparison is done. Due to this, I cannot judge how fair the comparisons are. I suggest the authors provide more details in the main text about all this.
*Related work.* The semi-dual neural OT methods (a.k.a. c-transform-based) already existed for several years. The authors cite only rather recent papers in the field and do not mention the early papers where these approaches appeared, e.g., [1,2,3].
[1] Henry-Labordere, P. (2019). (Martingale) Optimal Transport And Anomaly Detection With Neural Networks: A Primal-dual Algorithm. arXiv preprint arXiv:1904.04546.
[2] Nhan Dam, Q. H., Le, T., Nguyen, T. D., Bui, H., & Phung, D. (2019). Three-player wasserstein gan via amortised duality. In Proc. of the 28th Int. Joint Conf. on Artificial Intelligence (IJCAI) (pp. 1-11).
[3] Korotin, A., Li, L., Genevay, A., Solomon, J. M., Filippov, A., & Burnaev, E. (2021). Do neural optimal transport solvers work? a continuous wasserstein-2 benchmark. Advances in Neural Information Processing Systems, 34, 14593-14605.
*Misprints.*
- Line 99. Should be $R^d$ instead of $R$ (twice)?
- Line 556. Should be $C(X)$, $C(Y)$ I guess?
- Eq. (26). It seems like one has to assume that $\Psi^*$ and $\Phi^*$ are differentiable everywhere?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - What is the point of solving generative modeling with optimal transport maps?
- Why is there no testing of U-OTM on image generation with a fixed first marginal (noise)? Can such an experiment be conducted? It would be interesting to see whether there is an effect of relaxing the first marginal in your method, or whether it suffices to relax the second one only (for simplicity).
- How did the authors perform comparison with OTM (see weaknesses section)?
- It is known that the quality of the learned generative model significantly depends on the hyperparameters used. From Remark 3.1, I see that the method OTM/OTM* is a particular case of the proposed unbalanced method with $\phi^\star(x)=\psi^\star(x)=x$. How did the authors perform the comparison with this method? Did the authors use the same code (e.g., your own) with the same hyperparameters (except for $\phi, \psi$)?
- Is T_v in lines 116-117 a measurable function?
- How do the authors measure KL divergence by using the samples (table 3)?
- Could you please somehow (empirically) show that your method indeed learns the unbalanced OT map?
- In lines 183-196, there is a discussion of some robust OT methods. Why is there no comparison with them? As far as I understand, this seems to be relevant (not absolutely sure).
- In the proof of Theorem A.1 I do not understand why the strong duality holds (max dual=min primal)? Could you please detail this?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
**Update:** *reject* -> *borderline accept*
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer providing valuable advice. We think answering the reviewer's comments has significantly improved our work. We hope our replies are helpful in addressing the reviewer's concerns.
### **1. Theoretical concerns**
$ $
### **1.1 Concerns already discussed in the manuscript**
---
> **W1.** The paper lacks mathematical rigor. There are informal statements of theorems such as “under the suitable assumptions”.
**A1.** As mentioned in Lines 51-52, **we presented the detailed assumptions on the probability spaces and measures in Appendix A (Lines 523-525).** Specifically, $\mathcal{X}$ and $\mathcal{Y}$ are assumed to be compact complete metric spaces which are subsets of $\mathbb{R}^{d}$, and $\mu,\nu$ are assumed to be positive Radon measures of mass 1 on $\mathcal{X}$ and $\mathcal{Y}$, respectively.
$ $
---
> **Limited details 1.** First, it was not evident to me how the authors applied their method to generative modeling (which transport cost has been used between the noise and real data).
**A2.** As mentioned in Line 59, we adopt the **quadratic cost** $c(x,y)=\frac{\tau}{2}\\|x-y\\|_{2}^{2}$ throughout this paper. To enhance clarity of implementation, we included the **training algorithm of UOTM** in Line 125-126 of the manuscript. Moreover, we provided a detailed explanation of the connection between OTM* (Rout et al.) and our UOTM in Line 178-179.
$ $
---
> **M3.** Eq (26), $\Psi^*$ should be differentiable everywhere.
**A3.** As mentioned in Lines 106 and 524, we assumed that the $\Psi^*$'s are differentiable.
$ $
### **1.2 Corrections to Theoretical Components**
---
> **W2.** There is no discussion on the existence of optimal $v$ and $T$. If exists, is $T_{v^*}$ an optimal map between the rebalanced distributions? The main theoretical result provided in the paper is not correct or may require additional assumptions.
**A4.** We sincerely appreciate the reviewer providing valuable feedback. We believe this feedback has been a considerable help in revising our manuscript to provide mathematically more rigorous statements. We added additional assumptions on the measures $\mu, \nu$ and on $\Psi^{*}$ to ensure the existence of solutions, and corrected Theorem 3.3 as follows (we removed Eq (29)-(31) accordingly):
$ $
> **Theorem 3.3** Suppose that $\mu$ and $\nu$ are probability densities defined on $\mathcal{X}$ and $\mathcal{Y}$. Given the assumptions in Appendix A, suppose that $\mu, \nu$ are absolutely continuous with respect to the Lebesgue measure and $\Psi^{\*}$ is continuously differentiable. Assuming that an optimal potential $v^{\star}\in\arg\inf\_{v\in\mathcal{C}}J(v)$ exists, $v^\star$ is a solution of the following objective
$$
\tilde{J}(v)= \int_{\mathcal{X}}{-v^c(x)d\tilde{\mu}(x)}+\int_{\mathcal{Y}}{-v(y)d\tilde{\nu}(y)},
$$
> where $\tilde{\mu}(x)={\Psi_1^*}'(-{v^\star}^c(x))\mu(x)$ and $\tilde{\nu}(y)={\Psi_2^*}'(-{v^\star}(y))\nu(y)$. Note that the assumptions guarantee the existence of optimal transport map $T^{\star}$ between $\tilde{\mu}$ and $\tilde{\nu}$. Furthermore, $T^\star$ satisfies
$$
T^{\star}\in\mathcal{T}:=\\{T_{v^\star}:\mathcal{X}\rightarrow\mathcal{Y}\mid T_{v^\star}(x)\in\arg\min_{y\in\mathcal{Y}}\left[c(x,y) - v^{\star}(y)\right]\textit{ for measurable }T_{v^\star}\\}.\hspace{20pt} (a)
$$
> In particular, $D_{\Psi_1}(\tilde{\mu}|\mu)+D_{\Psi_2}(\tilde{\nu}|\nu)\leq\tau\mathcal{W}_2^2(\mu,\nu)$ where $\mathcal{W}_2(\mu, \nu)$ is a Wasserstein-2 distance between $\mu$ and $\nu$.
$ $
We assume $\mu,\nu$ are absolutely continuous and $\Psi^*$ is continuously differentiable. **These additional assumptions ensure the existence of the optimal potential $v^\star$ and the optimal transport map $T^\star$ for the OT problem between $\tilde{\mu}$ and $\tilde{\nu}$.** The continuity of ${\Psi^*}'$ gives that $\tilde{\mu},\tilde{\nu}$ are absolutely continuous with respect to the Lebesgue measure and have a finite second moment (note that $\mathcal{X},\mathcal{Y}$ are compact; hence ${\Psi^*}'$ is bounded on $\mathcal{X},\mathcal{Y}$, and $\mathcal{X},\mathcal{Y}$ are bounded). These properties of $\tilde{\mu},\tilde{\nu}$ give the existence of $v^\star$ and $T^\star$ [79].
Moreover, thanks to the reviewer, through a rigorous investigation, we realized that $T_{v^\star}$ might not serve as a transport map. Specifically, **the optimal transport map $T^\star$ between $\tilde{\mu}$ and $\tilde{\nu}$ is contained in the set of introduced parametrizations $\mathcal{T}$ in Eq (a) ([71], Remark 5.13)**. However, we cannot ensure uniqueness within this parametrization set. The suggested counterexample $T_{v}(x)=x+10000$ is precisely an instance of this non-uniqueness: both the optimal transport map $T^\star (x)=x+1$ and $T_v (x)$ are included in the parametrization set, i.e., $T_{v},T^{\star}\in\mathcal{T}$. **Nevertheless, note that the introduced assumption of absolute continuity excludes the suggested counterexample of $\mu=\delta_0,\nu=\delta_1$. Moreover, the cost function assumed there, $c(x,y)=\|x-y\|$, is different from ours.**
$ $
---
> **W3.** It remains questionable if the proposed method learns OT map between $\tilde{\mu}$ and $\tilde{\nu}$.
**A5.** In this part, we would like to advocate for the $c$-transform-based parametrization (Eq (a)). **To the best of our knowledge, in the $c$-transform-based literature, various works [8, 37, 56, 76] employed this same parametrization of the OT map $T$.** However, as discussed in [3, 4] and A4, the above parametrization does not provide a formal guarantee of convergence towards $T$.
Nevertheless, as reported in [56], the suggested parametrization converges to $T$ in practice. We agree with the reviewer that this part remains a gap in the literature, and further investigation is required. However, our contribution is not limited to the theoretical parts. We kindly ask the reviewer to take into account the experimental contributions of our study as well.
$ $
---
**Other concerns will be addressed in additional Official Comments.**
---
Rebuttal Comment 1.1:
Title: Additional Response (1/2)
Comment: ### **1.3 Other Concerns**
---
> **W4.** Taking into account the comment about the incorrectness of the theoretical results, how does Theorem 3.4 of [19] explain better stability or performance? The paper actually solves the min-max problem of $v$ and $T$.
**A6.** **The main objective of this paper is to propose the UOT-based generative model and to analyze its various properties as a generative model through experiments. We referenced Theorem 3.4 of [19] to explain the better target distribution matching of UOTM compared to OTM.** Considering that UOTM relaxes the hard constraint of OTM, we thought the better distribution matching of UOTM is an interesting phenomenon. Therefore, in this work, we suggested some possible hypotheses (explanations) that may contribute to the higher performance of UOTM:
- The higher performance could potentially be attributed to the additional stability of the potential $v$, as implied by Theorem 3.4. We carefully "hypothesized" (rather than "asserted") that this stability of $v$ "may" provide favorable convergence in the saddle-point problem of UOTM by stabilizing one of its two factors. Accordingly, we empirically validated that UOTM indeed shows better and faster convergence (Tab 1, 2, 3, and Fig 5).
- We posit that real-world datasets could contain outliers, leading to UOTM exhibiting a more robust convergence (Fig 2,3)
- In Appendix C, we also discussed that UOTM might have some engineering advantages.
Although the suggested explanations are hypotheses rather than solid theoretical justifications, we believe that our work is meaningful in offering insights into UOT-based generative modeling. Moreover, we believe these insights have the potential to greatly benefit future researchers in the field. We kindly request the reviewer to contemplate our contribution from this perspective.
$ $
### **2. Questions and Minor Corrections**
$ $
### **2.1 Concerns already discussed in the manuscript**
---
> **M1.** Line 99, should it be $R^d$ instead of $R$?
**A7.** No. $R$ is correct. Note that the convex conjugate is taken for the entropy function $f:[0,\infty)\rightarrow [0,\infty]$, and the domain of the entropy function is a subset of $R$.
$ $
---
> **Q2.** Why is there no testing of UOTM in the image generation with fixed first marginal?
**A8.** In fact, we did test UOTM on image generation with a fixed first marginal on CIFAR-10. The result is presented in Tab 3 under the label Fixed-$\mu$.
$ $
---
> **Q6.** How do authors measure KL divergence by using the samples Table 3?
**A9.** As cited in Line 245 and discussed in Line 626-628, we adopted the k-Nearest-Neighbor based estimation [9].
$ $
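To make the estimator concrete, the following is a minimal numpy sketch of the $k$-Nearest-Neighbor KL estimator of [72] (the function name and the toy Gaussian data are our own illustration, not the paper's code):

```python
import numpy as np

def knn_kl_estimate(x, y, k=1):
    """k-NN estimator of KL(p||q) following Wang, Kulkarni & Verdu (2009).

    x: (n, d) samples from p; y: (m, d) samples from q.
    """
    n, d = x.shape
    m = y.shape[0]
    # rho: distance from each x_i to its k-th nearest neighbor among the
    # other x's; nu: distance from x_i to its k-th nearest neighbor in y.
    dxx = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(dxx, np.inf)  # exclude x_i itself
    dxy = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    rho = np.sort(dxx, axis=1)[:, k - 1]
    nu = np.sort(dxy, axis=1)[:, k - 1]
    return d / n * np.sum(np.log(nu / rho)) + np.log(m / (n - 1))

rng = np.random.default_rng(0)
p = rng.normal(0.0, 1.0, size=(2000, 1))
q = rng.normal(3.0, 1.0, size=(2000, 1))
print(knn_kl_estimate(p, q))  # true KL(N(0,1) || N(3,1)) = 4.5
```

With two independent samples from the same distribution, the estimate should be close to zero; with the shifted Gaussian above it should be close to the true value 4.5.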
---
> **Q8.** In lines 183-196, there is a discussion of some robust OT methods. Why is there no comparison between them?
**A16.** In fact, **we did make a comparison with Robust OT [8] on CIFAR-10.** The result is presented in Tab 1. We also conducted experiments with [76] on CIFAR-10. Despite extensive efforts in exploring various hyperparameters, [76] provided non-competitive results, with FID scores around 80. Note that UOTM achieves an FID score of 12.86 on CIFAR-10 with the same Small backbone network. The other UOT-based models [8, 10, 17, 44, 53] are discrete UOT algorithms. These discrete algorithms find transport plans between existing samples; hence, they are not appropriate for generative modeling.
$ $
---
Rebuttal Comment 1.2:
Title: Additional Response (2/2)
Comment: ### **2.2 Other Concerns**
---
> **W5/Q3/Q4.** How did the authors perform a comparison with OTM (see weaknesses section)? What parameter did it use? I wonder how this comparison is done. Due to this, I can not judge how fair the comparisons are.
**A10.** We thank the reviewer for the detailed comment. **All the scores of the other models on CIFAR-10 and CelebA-HQ (Tab 1, 2) are taken from their original papers, except for Fan et al. [76].** As the reviewer commented, Fan et al. [76] did not report image generation results. Hence, we implemented the algorithm of Fan et al. on our own, using the Large backbone model of our work. (Note that the Large backbone model is the same as in ScoreSDE and DDGAN.) The training hyperparameters for Fan et al. were finetuned based on the hyperparameters of UOTM. **To ensure fairness of comparison, we would like to emphasize that UOTM (Small) employs the same architecture as Robust OT [8] and OTM [56], and UOTM outperforms both methods.** We added $\dagger$ to indicate the results obtained by our own implementation in the tables, and included the following description in the captions:
- $\dagger$ indicates results obtained by our own implementation.
$ $
---
> **W6.** Missing related works related to $c$-transform.
**A11.** Thank you for the careful comment. We added the suggested related works into the manuscript as follows in Line 116:
- We introduce $T_v$ to approximate $v^{c}$ following previous works [16, 56, 80, 81, 82].
$ $
---
> **M2** Line 556. $\mathcal{C}(X)$, $\mathcal{C}(Y)$.
**A12.** Thank you for the corrections. We would incorporate corrections into the manuscript.
$ $
---
> **Q1** What is the point of solving generative modeling with optimal transport maps?
**A13.** As discussed in Lines 14-31, OT theory has been widely exploited in the field of generative modeling. In the beginning, WGAN introduced OT-based Wasserstein distance as a loss function. Recently, several works proposed employing the optimal transport maps between source and target distributions as a generative model.
A thorough understanding of the benefits of OT map-based generative model remains elusive. Nevertheless, we think that these models have some strengths over GAN, which shares a similar adversarial training. In particular, we discovered OT-based models offer additional advantages in terms of stable convergence and mitigation of mode collapse in GANs.
As discussed in Line 315-316 and Appendix C, **we believe the cost function encourages the generated images $T(x)$ to disperse by aligning each image to the corresponding noise $x$, thereby preventing the mode collapse problem.** We consider that investigating the precise benefits of OT maps would be promising future work.
$ $
---
> **Q5** Is $T_v$ in lines 116-117 a measurable function?
**A14.** Yes, there exists such measurable $T_v$ by Prop 7.33 in [83].
We revised our manuscript in line 117.
- Note that $T_{v}$ is measurable ([83], Prop 7.33).
$ $
---
> **Q7** Could you please somehow empirically show that your method indeed learns the unbalanced OT map?
**A15.** **To the best of our knowledge, there is no explicit solution for the UOT objective (even for the 1D case). Consequently, we evaluated the validity of the transport plan through (i) the monotonicity of the transport plan (Fig 2 and 4), (ii) KL divergence (Tab 3), and (iii) FID score.** For instance, the monotonicity of the transport plan (Fig 4) together with the low KL divergence results (for low $\tau$, Tab 3) shows that our transport plan is nearly optimal. Note that $T^\star$ of the OT problem should be monotone increasing in Fig 2\&4 by Theorem 2.9 of [84]. Furthermore, given that $\Psi^*(x)=e^x$ (in Figures 2\&4), ${\rm{supp}}(\mu)={\rm{supp}}(\tilde{\mu})$ and ${\rm{supp}}(\nu)={\rm{supp}}(\tilde{\nu})$ (Theorem 3.3). Thus, $T^\star$ of the UOT problem should also be monotone.
$ $
---
> **Q9** In the proof of Theorem A.1, I do not understand why the strong duality holds.
**A17.** With the assumptions in Appendix A (such as convexity, monotonicity, and differentiability of $\Psi^*$) and assuming the existence of $v^\star$, the dual form of UOT satisfies the assumptions of the Fenchel-Rockafellar theorem. The theorem implies that strong duality holds. We added additional explanations to the proof of Theorem A.1 as follows (after Eq 23):
- In other words,
$$
\sup_{(u,v)\in \mathcal{C}(\mathcal{X})\times \mathcal{C}(\mathcal{Y})} \left[\int_{\mathcal{X}} -\Psi_1^*(-u(x)) d \mu(x) + \int_{\mathcal{Y}} -\Psi_2^* (-v(y)) d \nu(y) - \imath (u+v \leq c) \right],
$$
where $\imath$ is a convex indicator function. Note that we assume $\Psi_1^*$ and $\Psi_2^*$ are convex, non-decreasing, and differentiable. Since $c\geq 0$, by letting $u \equiv -1$ and $v \equiv -1$, we can easily see that all three terms in the above equation are finite. Thus, we can apply the Fenchel-Rockafellar theorem, which implies that strong duality holds.
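For reference, the form of the Fenchel-Rockafellar theorem invoked here can be summarized as follows (a standard statement, see, e.g., [79]; the functionals $\Theta, \Xi$ are generic notation, not from the manuscript): if $\Theta, \Xi: E \rightarrow \mathbb{R}\cup\\{+\infty\\}$ are convex on a normed vector space $E$ and there exists a point where both are finite and one of them is continuous, then
$$
\inf_{z \in E} \left[ \Theta(z) + \Xi(z) \right] = \sup_{f \in E^*} \left[ -\Theta^*(-f) - \Xi^*(f) \right].
$$
In the setting above, the $\Psi^*$-terms and the indicator $\imath(u+v\leq c)$ play the roles of the two convex functionals, and the finiteness check at $u \equiv v \equiv -1$ is part of verifying the qualification condition.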
---
Rebuttal Comment 1.3:
Title: Response Reference
Comment: **References**
[56] Rout, Litu, Alexander Korotin, and Evgeny Burnaev. Generative modeling with optimal transport maps. ICLR, 2022.
[71] Villani, Cédric. Optimal transport: old and new. Vol. 338. Berlin: springer, 2009.
[72] Wang, Qing, Sanjeev R. Kulkarni, and Sergio Verdú. "Divergence estimation for multidimensional densities via k-Nearest-Neighbor distances." IEEE Transactions on Information Theory 55.5 (2009): 2392-2405.
[76] KD Yang and C Uhler. Scalable unbalanced optimal transport using generative adversarial networks. ICLR, 2019.
[78] Vacher, Adrien, and François-Xavier Vialard. "Semi-Dual Unbalanced Quadratic Optimal Transport: fast statistical rates and convergent algorithm." ICML, 2023.
[79] Villani, Cédric. Topics in optimal transportation. Vol. 58. American Mathematical Soc., 2021.
[80] Henry-Labordere, Pierre. "(Martingale) Optimal Transport And Anomaly Detection With Neural Networks: A Primal-dual Algorithm." arXiv preprint arXiv:1904.04546 (2019).
[81] Nhan Dam, Quan Hoang, et al. "Three-player wasserstein gan via amortised duality." IJCAI, 2019.
[82] Korotin, Alexander, et al. "Do neural optimal transport solvers work? a continuous wasserstein-2 benchmark." NeurIPS, 2021.
[83] Bertsekas, Dimitri, and Steven E. Shreve. Stochastic optimal control: the discrete-time case. Vol. 5. Athena Scientific, 1996.
[84] Santambrogio, Filippo. Optimal Transport for Applied Mathematicians. Birkhäuser, 2015.
Strengths: 1. The proposed UOTM model has a solid theoretical background and appears original and significant.
2. The experimental setup is appropriate, and the obtained results prove the superiority of the proposed solution concerning the state-of-the-art.
3. UOTM is probably the first OT-based generative model that achieves state-of-the-art performance on real-world large-scale image datasets.
Weaknesses: 1. The presentation needs some improvement.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Minor comments:
The captions in Tab. 1, 2, 3, 4 and Fig. 5, 6, 7 could be more informative.
Some figures (such as Fig. 5) are not referenced in the text.
The order of tables/figures needs to be changed (e.g., in the text, Table 1 is referenced after Table 3).
l. 52: "satisfies" --> "satisfy".
l. 67: "couplings" --> "coupling".
Eq. (4): ',' --> '.'.
Eq. (5): "inf" --> "sup" (already corrected in the supplement).
l. 107: ".)" --> "):".
l. 115: "sem-dual" --> "semi-dual".
l. 298: '.' --> ':'.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are deeply grateful to the reviewer for reading our paper and offering thoughtful feedback.
$ $
---
> **Minor comments**
**A.** Thank you for the valuable advice regarding the presentation of our work. Following the advice, we would revise our manuscript as follows:
- The order and captions of tables/figures are revised.
- Table 1: Target Distribution Matching Test. UOTM achieves a better approximation of target distribution $\nu$, i.e., $ T\_{\\#} \mu \approx \nu$.
- Table 2: Image Generation on CIFAR-10.
- Table 3: Image Generation on CelebA-HQ.
- Table 4: Ablation Study on Csiszar Divergence $D_{\Psi_{i}}(\cdot | \cdot)$.
- Fig 5: FID Scores during Training on CIFAR-10.
- Fig 6: Ablation Study on Regularizer Intensity $\lambda$.
- Fig 7: Ablation Study on $\tau$ in $c(x,y)=\frac{\tau}{2} \\|x-y\\|_2^{2}$.
- We included a reference to Fig 5 in Section 5.3 (Line 283).
- In CIFAR-10, UOTM converges in 600 epochs, whereas OTM takes about 1000 epochs (Fig 5).
$ $
---
> **Typos**
**A.** We appreciate the reviewer's thoughtful comment. We would incorporate the recommended corrections into the manuscript.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: I'm satisfied with the authors' rebuttal. I keep my score unchanged.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for the response. We thank the reviewer for the comments and suggestions. We are glad to hear that the reviewer is satisfied with our rebuttal. | Summary: Standard optimal transport (OT) problem aims at comparing two probability distributions by finding an optimal coupling that achieves a minimum geometrical cost. A major bottleneck of standard OT is the equality of total transported mass between the underlying distributions. This restraints based-OT data analysis on real machine learning pipelines among many other domains. To alleviate this problem, this constraint is relaxed and gives unbalanced optimal transport (UOT). This paper investigates based-UOT frameworks on generative modeling through semi-dual form of UOT.
Strengths: - The paper is well-written and the approach is mostly well-presented. Addressing generative modeling through the UOT toolbox is novel to the best of my knowledge. Numerical experiments in this paper show competitive performance of UOTM (UOT + generative modeling). A noted advantage of UOTM compared to OTM is its robustness to outliers and faster convergence.
- Semi dual formulation of UOT is given with a theoretical guarantee of its stability compared to other OT objectives (see Theorem 3.4).
- The code is attached in the supplementary and results are reproducible.
Weaknesses: ### Weakness/Questions
- In the primal formulation of UOT (see Eq 4), I noticed that the divergence terms $D_{\Psi_1}$ and $D_{\Psi_2}$ are not penalized by positive tuning parameters, as is commonly done in many UOT problems; I mean that one can expect the following formulation:
$C_{ub}(\mu, \nu) = ... + \lambda_1 D_{\Psi_1} + \lambda_2 D_{\Psi_2}$, where $\lambda_1, \lambda_2$ control how much mass variations are penalized.
- I guess the $\tau$ hyperparameter has great importance in the given formulation of UOT via the cost $c(x, y) = \frac{\tau}{2} \\|x - y\\|_2^2$:
- (i) In the experiments, $\tau$ should be as small as possible to get good performance. Is the robustness of UOT inherited from the penalization of the cost or from the properties of the divergence penalizations?
- (ii) Let's consider the same cost function $c(x, y) = \frac{\tau}{2} \\|x - y\\|_2^2$ with the dual formulation rather than the semi-dual. Can we expect that UOT with the dual has performance comparable to semi-dual UOT?
- In L227 the mapping $T^\star$ should be monotone increasing (as shown in Figure 2, b). Could you please explain this fact?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: ### Minor typos/suggestions
- L51: X and Y are compact complete spaces
- L78: Csiszar divergences (add a reference)
- L115: sem-dual --> semi-dual
- L122: in Eq (9) $v$ should be indexed by $\phi$
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer spending time reading our manuscript carefully and providing thoughtful feedback. We hope our replies are helpful in addressing the reviewer's concerns.
$ $
----
> **Q1.** Introducing tuning parameters $\lambda$ to the divergence terms, i.e., $\lambda_{1} D_{\Psi_{1}} + \lambda_{2} D_{\Psi_{2}}$.
**A.** From the primal form of UOT below, **adjusting parameters $\lambda$ of the divergence terms is equivalent to adjusting the parameter $\tau$ in the cost function $c(x, y) = \frac{\tau}{2} \\| x-y \\|_{2}^{2}$.** Specifically, multiplying a constant $k>0$ to $\tau$ is equivalent to dividing both $\lambda_{1}$ and $\lambda_{2}$ by $k$. Instead of introducing tuning parameters to the divergence terms $D_{\Psi_1}, D_{\Psi_2}$, we controlled $c(x, y) = \frac{\tau}{2} \\| x-y \\|\_{2}^{2}$ by the parameter $\tau$. **The ablation study on $\tau$ is presented in Fig 7**.
$$
C\_{ub}(\mu, \nu) := \inf\_{\pi \in \mathcal{M}\_{+}(\mathcal{X}\times\mathcal{Y})} \left[ \int_{\mathcal{X}\times \mathcal{Y}} c(x,y) d\pi(x,y) + D_{\Psi_1}(\pi_0|\mu) + D_{\Psi_2}(\pi_1|\nu) \right].
$$
$ $
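To spell out the equivalence (our shorthand: $c_{\tau}(x,y) := \frac{\tau}{2}\\|x-y\\|_{2}^{2}$, so that $c_{k\tau} = k\, c_{\tau}$ for any $k>0$):
$$
\inf_{\pi} \left[ \int_{\mathcal{X}\times\mathcal{Y}} c_{k\tau}\, d\pi + D_{\Psi_1}(\pi_0|\mu) + D_{\Psi_2}(\pi_1|\nu) \right] = k \inf_{\pi} \left[ \int_{\mathcal{X}\times\mathcal{Y}} c_{\tau}\, d\pi + \frac{1}{k} D_{\Psi_1}(\pi_0|\mu) + \frac{1}{k} D_{\Psi_2}(\pi_1|\nu) \right],
$$
so the two problems share the same minimizer, and scaling $\tau \mapsto k\tau$ corresponds exactly to the choice $\lambda_1 = \lambda_2 = 1/k$.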
----
> **Q2(1).** In the experiments, $\tau$ should be as small as possible to get a good performance. Is the robustness of UOT inherited from cost or divergence penalization?
**A.** **We consider that the robustness of UOT is inherited from the properties of divergence penalizations $D_{\Psi_1}, D_{\Psi_2}$.**
In Sec 5.1, we evaluated the outlier robustness of UOT by comparing it with OT-based Models (OTM).
Here, note that **OTM has the cost penalization term only, not the divergence terms.**
Hence, if the robustness were due to the cost penalization, OTM would also exhibit robustness.
However, the experimental results (Fig 2,3) contradicted this assumption by showing that OTM did not exhibit outlier robustness.
$ $
----
> **Q2(2).**
Let's consider the same cost function with the dual formulation rather than the semi-dual. Can we expect UOT with the dual form to achieve performance comparable to semi-dual UOT?
**A.** We appreciate the insightful comment. In our opinion, **the dual form of UOT would be a more challenging problem in terms of neural network training** (as a reminder, we include the equation below). If we adopt the dual form and parametrize the two potentials $u, v$ by two neural networks, we do not obtain a direct parametrization of the transport map $T$ (by neural networks). Moreover, the training process would need to handle the inequality constraint $u(x)+v(y) \leq c(x,y)$, which appears to be challenging:
$$
C\_{ub}(\mu, \nu) = \sup\_{u(x)+v(y) \leq c(x,y)} \left[ \int\_{\mathcal{X}} -\Psi^{\*}\_{1} (-u(x)) d \mu (x) + \int\_{\mathcal{Y}} -\Psi^{\*}\_{2} (-v(y)) d \nu (y) \right].
$$
$ $
----
> **Q3.**
In L227 the mapping $T^{\star}$ should be monotone increasing (as shown in Figure 2, b). Could you please explain this fact?
**A.** Thank you for the comment. This is because, **for the 1D distribution case, the optimal transport map $T^{\star}$ takes an explicit form** for the quadratic cost function $c(x,y) = \frac{\tau}{2} \\| x-y \\|\_{2}^{2}$. Intuitively, **$T^{\star}$ maps the $i$-th largest source sample to the $i$-th largest target sample.** Formally speaking, if we denote the CDFs (cumulative distribution functions) of $\mu$ and $\nu$ as $F_{\mu}$ and $F_{\nu}$, then $T^{\star} = F_{\nu}^{-1} \circ F_{\mu}$ ([78], Theorem 2.9). Therefore, $T^{\star}$ should be monotone increasing. We agree with the reviewer that this statement requires further explanation. We will revise our manuscript accordingly.
- Note that the optimal $T^{\star}$ should be a monotone-increasing function. For the 1D distributions $\mu$ and $\nu$, the optimal $T^{\star}$ is a map that transports the $k$-percentile source sample to $k$-percentile target sample ($0 \leq k \leq 1$) for $c(x,y) = \frac{\tau}{2} \\| x-y \\|_2^{2}$ ([78], Theorem 2.9).
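As an illustrative sketch (our own code, not from the paper; names are ours), the empirical version of this sorted pairing can be written as:

```python
# Illustrative sketch: for 1D samples and quadratic cost, the optimal map
# pairs the i-th smallest source sample with the i-th smallest target sample,
# the empirical analogue of T* = F_nu^{-1} o F_mu.
def optimal_1d_map(source, target):
    """Return the sorted pairing {source sample: target sample}."""
    assert len(source) == len(target)
    return dict(zip(sorted(source), sorted(target)))

source = [0.3, -1.2, 2.5, 0.9]
target = [5.0, 1.0, 3.0, 2.0]
T = optimal_1d_map(source, target)

# Monotonicity: larger source samples map to larger target samples.
xs = sorted(T)
assert all(T[a] <= T[b] for a, b in zip(xs, xs[1:]))
```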
$ $
----
> **Minor typos/ Suggestions**
**A.** Thank you for the careful advice. We will correct the manuscript accordingly.
**References**
[1] Santambrogio, Filippo. "Optimal transport for applied mathematicians." Birkhäuser, NY 55.58-63 (2015): 94.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: I thank the authors for their efforts in the rebuttal.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for the response. We appreciate the reviewer for reviewing our paper. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
A State Representation for Diminishing Rewards | Accept (poster) | Summary: The authors study the problem of DMU in RL, e.g., where visiting the same state repeatedly results in a smaller reward. They introduce the $\lambda$R representation, which considers a particular form of diminishing rewards, and provide convergence results. They extend it to continuous domains and perform preliminary experimental studies.
Strengths: - Typically, such problems with non-Markovian rewards are hard, but under the assumption of a $\lambda$-reward representation, the authors are able to recover some near-optimality convergence results.
- Extension to the continuous domain
Weaknesses: I think the assumption of decay with $\lambda$ is quite strong (very restrictive), and it is hard to imagine practical problems satisfying it. E.g., the authors motivate the problem with examples from ice cream, finance, and neuroscience, but it is hard (arguably impossible) for the decay from revisiting to be exactly a constant factor $0<\lambda<1$. Arguably more interesting settings to consider would be:
- $\lambda$R as a lower or upper bound on the true rewards that one may earn after visiting states multiple times, i.e., the reward will decay at least by that amount, or at most by a certain amount, if visited again.
- Another interesting setting would be $\lambda$ changing with state visitation frequency and thus needing to be estimated.
Both of these settings would cover quite a large part of DMU-type problems.
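For concreteness, here is a toy sketch of the constant-decay setting the paper assumes (my own illustrative code; names are hypothetical), where each revisit of a state scales its reward by $\lambda$:

```python
# Toy sketch of the fixed-decay DMU setting: the reward collected at state s
# decays geometrically with the number of prior visits, r_t(s) = r(s) * lam**n_t(s).
from collections import defaultdict

def episode_return(states, base_reward, lam):
    """Total reward along a state sequence when each revisit scales reward by lam."""
    visits = defaultdict(int)
    total = 0.0
    for s in states:
        total += base_reward[s] * lam ** visits[s]
        visits[s] += 1
    return total

r = {"A": 1.0, "B": 0.5}
assert episode_return(["A", "A", "A"], r, 0.5) == 1.75  # 1 + 0.5 + 0.25
assert episode_return(["A", "A", "A"], r, 1.0) == 3.0   # lam = 1 recovers stationary rewards
```

The settings proposed above would replace the fixed `lam` with a bound or a visitation-dependent estimate.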
Lemma 3.1 is perhaps not the right way to state an impossibility/hardness result, and the proof needs to be more rigorous. I think it can be interpreted in multiple ways, and the proof considers only a specific format in which the reward is written and a specific definition of the value function. For the lemma statement to be true, the proof must rule out all the ways in which the value function and rewards can be defined. For example, consider a reward function $r(s) = F(\tau \cup s) - F(\tau)$, where $F(\tau)$ is a general set function and $\tau$ is the agent's trajectory. $F(\tau)$ denotes the cumulative value earned for a trajectory $\tau$ as per the reward function (eqn 3.1). This can be written as a Bellman recursion with $r(s_i) = F(s_{0:i}) - F(s_{0:i-1})$ (imagine $\tau = \{s_0, s_1, s_2, \dots\}$). I would suggest not having it as a lemma but just mentioning in the text that it is difficult to solve with the Bellman equation directly.
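A toy sketch of this construction (my own illustrative code, with a hypothetical coverage function as $F$):

```python
# Per-step reward as the marginal gain of a set function F over the trajectory
# prefix, r(s_i) = F(s_{0:i}) - F(s_{0:i-1}). With F = coverage (number of
# distinct states), rewards telescope so the episode return equals F(trajectory).
def F(prefix):
    """A simple submodular set function: number of distinct states covered."""
    return len(set(prefix))

def marginal_rewards(traj):
    """Rewards r(s_i) = F(s_{0:i}) - F(s_{0:i-1}) along a trajectory."""
    return [F(traj[: i + 1]) - F(traj[:i]) for i in range(len(traj))]

traj = ["s0", "s1", "s0", "s2"]
rs = marginal_rewards(traj)
assert rs == [1, 1, 0, 1]   # revisiting s0 earns nothing
assert sum(rs) == F(traj)   # rewards telescope to the total coverage
```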
Experiments: I think authors should consider more diverse reward functions, e.g., coverage functions, experiment design, and informative path planning objectives. For e.g., consider a camera-type measurement model, and if we observe the same region again, I will not get much information (there can be some noise model to establish \lamda type of decay). Similarly, in experiment design/ informative planning objectives: Information gained on the nearby states will decrease if I gather/collect measurements at my current location.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can you mention the assumptions behind Proposition 4.1? It is counterintuitive to me. $\lambda=0$ (FR) should be a much harder problem compared to $\lambda=1$ (SR). However, the error bound is better for $\lambda=0$?
I would suggest authors add an algorithm block for a high-level understanding of the algorithm.
I did not fully understand the experiments.
- Are your environments deterministic?
- In Fig 5.2 b), why does $\lambda=1$ perform well? I would expect it to stay at one of the blue cells forever, since it assumes it always gets the same reward, and on the plot it should show an exponential decay.
- Is there an assumption of finite state-action spaces, or can you extend it to continuous dynamics as well?
Typos: line 268 the the, 156 when when,
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: no societal negative impact
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your detailed review and helpful feedback! We are glad that you appreciated the convergence results and the extension of the $\lambda$R to continuous domains. We aim to address your concerns below:
- Assumptions: This is a thought-provoking point! We completely agree that examining modifications of the specific DMU assumptions represents an interesting avenue for future work. We’d like to note that the neuroscience, foraging, and economics works studying DMU that we cite in the paper all use a fixed value for the diminishing rate (and in fact use the equivalent of a fixed value across states, an assumption which our framework can relax). Previous work [1,2] that applies standard RL methods to DMU problems also assumes a constant diminishment rate. In our work we show that it is possible to construct principled and tractable dynamic programming algorithms to solve DMU problems in RL and use these algorithms to achieve better performance than standard methods. Therefore, we believe that our work is an important first step towards RL problems with more general reward diminishment schemes, both theoretically and empirically. Additionally, we present the continuous control results in Attachment Figures L3 and L4 as an interesting example case where the “true” $\lambda$ is actually 1, but picking $\lambda < 1$ can result in a performance gain due to the tendency towards value overestimation in deep RL. In this case, $\lambda$-SAC estimates $\lambda$ online to maximize performance. More detail can be found in the general response above and in Appendix H.
- Lemma 3.1: Thank you for alerting us to this! We agree that the lemma is unclear in its current form. Perhaps a more rigorous statement would be the following: “There does not exist a contractive operator $T^\pi$ that has fixed point $V^\pi$ and does not depend on $\Phi_\lambda^\pi$.” A sketch of the proof would be as follows: By the Banach fixed point theorem, $V^\pi$ is a unique fixed point, and so $T^\pi$ must be of the form given in the proof of Lemma 3.1. Hence, $T^\pi$ has a term containing $\Phi_\lambda^\pi$. For $T^\pi$ to not depend on $\Phi_\lambda^\pi$, we would need a function $f$ such that $\Phi_\lambda^\pi = f(\bar r, V^\pi)$. However, because $V^\pi = \bar r^\top \Phi_\lambda^\pi$, such a function cannot exist. In this sense the $\lambda$R is necessary for this problem setting, unlike the Markovian case in which using the SR is optional. We are happy to rewrite this as a remark rather than a lemma if you think it would be more appropriate. Regarding your concerns about our definition of reward and value being too narrow–as noted above, (1) our approach is consistent with studies of DMU in other fields, and (2) we believe theoretical results for specific settings are useful starting points for more general results in the future. Your suggestion to define reward as a difference of set-valued functions is interesting–thank you! However, can you please clarify your final point about writing a Bellman recursion for $r(s_i) = F(s_{0:i}) - F(s_{0:i-1})$?
- Alternative Reward Functions: Thank you, this is an interesting point! In this paper, we’re primarily concerned with a specific setting as a starting point for studying DMU using RL. We believe a detailed examination of these alternative reward functions is beyond the scope of our work, but they represent an exciting opportunity for future study, and we will add a discussion to the paper.
- Q1 (Proposition 4.1): Thank you for highlighting this! We agree that one of the most interesting, and perhaps broadly valuable, aspects of the $\lambda$R (and by extension, for $\lambda=0$, the FR) is that convergence is faster for lower values of $\lambda$. The assumptions are exactly those stated in the text, and the proofs can be found in Appendix B. We also show empirically in Figure E3/Attachment Figure L1 that dynamic programming indeed converges faster for lower $\lambda$. Intuitively, a lower $\lambda$ can be seen to speed convergence in a similar way to a lower discount factor in standard RL.
- Q2 (Algorithm Block): Good point! Indeed, there are pseudocode blocks for $Q_\lambda$-Learning (Algorithm 1), Fitted $Q_\lambda$ Iteration (Algorithm 2), and $\lambda$O Forward-Backward Learning (Algorithm 3) in the Appendix, but we can certainly add more detail.
- Q3 (Experiments): (i) While the experiments presented in the initial submission all use deterministic environments, the theoretical analysis does not depend on this property, and we present new policy evaluation and composition results demonstrating that strong performance is maintained with stochastic dynamics in Attachment Figures L1 and L2. (ii) This is a subtle detail! It’s because, as we note in the conclusion, all of the representations we consider are prospective in nature and therefore don’t naturally account for previous visits. The $\lambda=1$ policy moves on because the amount of reward is decreasing–that is, $\vec r(s)$ is going down. The source of its suboptimality is that at every step using $\lambda = 1$ assumes that the current level of reward will persist, while using the correct $\lambda$ accounts for future decay. (iii) The $\lambda$F and $\lambda$O representations we introduce (Section 4.1, Appendix H, and Figures 5.4 and H.1) are designed to handle the continuous case.
- Typos: Thank you for pointing these out! We’ll fix them.
Thank you very much once again for your comments and questions! We will certainly integrate your suggestions into the paper. We hope our response has provided clarification and addressed your concerns. If that’s the case, we would greatly appreciate it if you would consider raising your score. If not, we would be more than happy to continue discussion!
[1] https://openreview.net/forum?id=a0T3nOP9sB
[2] https://proceedings.neurips.cc/paper/2020/hash/da97f65bd113e490a5fab20c4a69f586-Abstract.html
---
Rebuttal Comment 1.1:
Title: Works that solve the same problem and a few follow up questions
Comment: Thank you for the response,
I would like to bring the authors' attention towards non-markovian rewards in RL, which deals with a similar problem.
Submodularity is a well-known property of set functions and is an equivalent characterization of the diminishing-returns property. A good read on this topic is Krause and Golovin, "Submodular Function Maximization". Chekuri and Pal, "A Recursive Greedy Algorithm for Walks in Directed Graphs", study planning on graphs, providing both theoretical upper and lower bounds; there are also works dealing with planning in tabular MDPs under submodularity, e.g., Wang et al., "Planning with Submodular Objective Functions", and in fact also in the reinforcement learning setting with a scalable policy gradient method, Prajapat et al., "Submodular Reinforcement Learning".
There is also very related work which deals with convex RL where the objective is defined over state visitation distribution induced by the policy, Zahavy et al "Reward is enough for convex MDPs", general utilities RL, Kumar et al "Policy Gradient for Reinforcement Learning with General Utilities".
A few follow-up questions:
- Can you please point me to a specific application and a section/line in the reference paper (or from neurology, economics) that deals with constant decay rate rewards?
- Regarding Proposition 4.1, did I understand correctly that $\lambda=1$ corresponds to the usual RL setting where the rewards are fixed? Then $\lambda=0$ should be a hard problem. In fact, there is quite some research demonstrating that hardness stems from a non-repeatable reward structure; please see Blum et al., "Approximation Algorithms for Orienteering and Discounted-Reward TSP". I am not sure why you get faster convergence for $\lambda=0$, while TSP-type problems are known to be NP-hard. Do you also recover optimality guarantees for $\lambda=0$, or am I maybe missing something?
- Regarding Lemma 3.1, sorry, it is difficult to parse for me. Are you proving it by contradiction? If you are confident, you may have it as a proposition for being an independent, interesting result. (feel free to make multiple posts below to explain the proof below)
Thanks,
---
Reply to Comment 1.1.1:
Title: Response Part 1/2
Comment: Regarding Non-Markovian Rewards: Thank you for the references. Indeed, there is a connection to submodular functions, which is partly what motivated our discussion of the $\lambda$ Operator (Section 4.1 and Appendix G). In particular, we note that submodularity is not as well-defined for continuous spaces, in that definitions which are equivalent in the finite case are no longer so in the infinite case. We appreciate the pointer to Wang, et al. "Planning with Submodular Objective Functions." It is certainly related, though differs in that it focuses solely on planning and not the specific form of diminishment we study. Similarly, we appreciate the pointer to Prajapat, et al. “Submodular Reinforcement Learning,” though we note that as it was only uploaded to arXiv a few weeks ago (after the submission date for this conference) we don’t think it affects the novelty of our work. However, we would be more than happy to include a discussion of this work as well. With respect to other examples of non-stationary RL, such as convex MDPs, we will absolutely include a discussion, as we have noted to Reviewer jk48. Something that we would like to emphasize is that, separate from DMU itself, we view the introduction of the $\lambda$R (and by extension, the $\lambda$F and $\lambda$O) as a useful contribution in that it unifies the SR, FR, and FB representations from the broader RL literature.
Q1: Here is a list of several references with exponentially decaying utility (with the equivalent of constant $\lambda$), either in discrete or continuous time [1, 2, 3, 4, 5, 6]. One interesting note is that it is common in economics/decision theory papers to use one (constant) $\lambda$ for positive outcomes and a different $\lambda$ for negative outcomes. All of the rewards we study in our experiments are positive, so this specific situation does not directly apply in our settings, but extending our results to this setting would be an interesting direction for the future!
Q2: Indeed, this is a subtle but important point. Proposition 4.1 is a convergence rate for policy evaluation under DMU (which includes the stationary case and the $\lambda=0$ case), not optimal control. Indeed, [7], which introduces the FR, describes the connection to TSP problems, and introduces a method “FR Planning” which finds provably shortest paths to a given goal given a fixed set of policies. In the case that there are multiple goals, this devolves to a TSP and is indeed NP-hard. However, computing the FR/$\lambda$R is more like computing the distance of a fixed route (policy) rather than finding the optimal policy. Thank you for bringing this up—we will emphasize it in the paper.
[1] Pine, A., Seymour, B., Roiser, J. P., Bossaerts, P., Friston, K. J., Curran, H. V., & Dolan, R. J. (2009). Encoding of marginal utility across time in the human brain. Journal of Neuroscience, 29(30), 9575-9581. Figure 3b.
[2] Wispinski, N., Butcher, A., Mathewson, K. W., Chapman, C. S., Botvinick, M., & Pilarski, P. M. (2022). Adaptive patch foraging in deep reinforcement learning agents. Transactions on Machine Learning Research. Figure 3c.
[3] Chateauneuf, Alain, and Michèle Cohen. "Risk seeking with diminishing marginal utility in a non-expected utility model." Journal of Risk and Uncertainty 9 (1994): 77-91. Corollary 4.
[4] Pratt, J. W. (1978). Risk aversion in the small and in the large. In Uncertainty in economics (pp. 59-79). Academic Press. Section 4.
[5] Pine, A., Shiner, T., Seymour, B., & Dolan, R. J. (2010). Dopamine, time, and impulsivity in humans. Journal of Neuroscience, 30(26), 8888-8896. Eq. 2.
[6] Rachlin, H. (1992). Diminishing marginal value as delay discounting. Journal of the Experimental Analysis of Behavior, 57(3), 407-415. Eq. 1.
[7] Moskovitz, T., Wilson, S. R., & Sahani, M. (2021, October). A First-Occupancy Representation for Reinforcement Learning. In International Conference on Learning Representations. | Summary: This paper studies the phenomenon of diminishing marginal utility in RL, where a state-based reward r(s) decays as the agent visits it more often. To address this problem setting, the work introduces a novel state representation, named the λ representation (λR). The authors show that a standard Bellman equation cannot be defined in the λR setting and introduce a recursive relationship instead.
Strengths: - The problem setting of DMU in RL is interesting.
- The writing is clear and easy to follow.
- The paper provides theoretical analysis of the λ representations.
Weaknesses: - Typos and grammar error: line 156 (when when ...), line 156 (Instead of keeping ...), line 177 (The mechanism)
- Missing baselines: the current experiments do not compare to other SOTA exploration baselines with intrinsic rewards. So it's hard to evaluate the effectiveness of the proposed methods.
- Experiments for continuous control are only conducted on HalfCheetah, a toy example that most RL baselines can easily solve. Experiments on more complex tasks, e.g., humanoid-run, hopper-hop, acrobot-swingup, or fish-swim from dm_control, would be more persuasive.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Since exploration is the natural setting for λR, why are there no comparisons to different exploration baselines, e.g., RND, state entropy, pseudo-counts, ...?
- I am curious how the λR will perform in more complex pixel-based tasks.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: - The application of the proposed method seems to be narrow, and there is a lack of comparisons to SOTA exploration baselines.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you very much for your detailed review and feedback! We are glad that you appreciated the problem setting, found the writing to be clear, and valued our theoretical analysis. We aim to address your concerns below:
- Typos: Thank you for pointing these out! We’ll correct them in the updated paper.
- Baselines: We think there may be a misunderstanding of our experiments. While both the FR and the SR have been used as exploration bonuses, we don’t examine the $\lambda$R’s application to exploration in this paper, but rather its use for accurate policy evaluation, control, and composition in settings where agents experience diminishing marginal utility from repeated exposure to rewarding stimuli. We completely agree that had we used the $\lambda$R as an exploration bonus that these would be valuable baselines to compare against, but they aren’t as applicable in the experimental settings we do consider, which are primarily focused on establishing the usefulness of the $\lambda$R for supporting policy evaluation, control, and composition for problems with DMU. We completely agree that exploration is another interesting application area–we believe it is out of the scope of the current work, but would absolutely like to pursue it in the future.
- Continuous Control Domains: Yes, this is a good point! We have added Hopper-v2 as an additional domain (Attachment Figures L3 and L4) and found that $\lambda$-SAC again matches SAC’s performance while being much less computationally demanding. Please see the general response above for a more detailed description.
- Q1 (Exploration): See response above re: baselines. We are primarily concerned with problem settings involving diminishing marginal utility and show that in order to do RL using dynamic programming in such settings, the $\lambda$R is required. We didn’t offer comparisons to exploration baselines because we didn’t perform any experiments focused on exploration, but rather focused on establishing the $\lambda$R as useful for evaluation, control, and composition for settings with DMU.
- Q2 (Further Pixel-Based Experiments): We certainly agree! We think the fact that the $\lambda$F was effective at facilitating pixel-based navigation is an encouraging first step, as successfully performing with 128 x 128 RGB observations is a good sign that the approach scales well. We hope to pursue further experimentation in the future!
Thank you very much once again for your comments and questions! We hope our response has provided clarification and addressed your concerns. If that’s the case, we would greatly appreciate it if you would consider raising your score. If not, we would be more than happy to continue discussion!
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I have updated the score.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you for your feedback and time! Please let us know if any other questions arise. | Summary: The paper considers the case of non-Markov rewards, where rewards at states decay over time (motivated by the diminishing marginal utility phenomenon). While successor representations (SR) have been used in standard MDPs, the decaying of rewards means that they cannot be used here, and so the paper develops a "successor-like" representation that can handle this case, which generalises SR and similar other representations. Theoretical results ensure that these representations can be learned using a Bellman backup (as in the case of SR), which can be used to evaluate and improve policies. Further extensions to the function approximation case (where features are required) are also demonstrated, and bounds related to composition (similar to the successor features bound) are presented. Results in tabular and pixel-based domains showcase how the representation can be used with model-free RL algorithms to solve this class of tasks, where taking into account the decaying rewards (instead of assuming rewards are stationary) is shown to be beneficial.
Strengths: The paper considers an interesting extension to the typical MDPs considered in reinforcement learning. The motivation regarding natural behaviours is an interesting one, but I think on a more fundamental level, any approach that attacks decision problems in RL that aren't Markovian in some way is of interest to the field.
This paper packs a lot into it - it develops many different ideas, ranging from the $\lambda$ representation in the tabular case to the function approximation "feature" case. It also shows how these can be used to evaluate and learn policies, compose existing value functions, and be applied in a deep RL context.
Aside from the extensive theory presented, the paper also does a good job of relating the approach to existing work in successor representations and demonstrating under what conditions that approach here simplifies to existing work. This helps to unify these otherwise disparate methods.
Weaknesses: As presented, the experiments demonstrate that the method can be applied in both tabular and function approximation settings, that using the right value for $\lambda$ is necessary to achieve the best performance, and that composition can be achieved just as in the SR/SF case. While these experiments confirm the utility of the representation and go hand in hand with the theory, it would have been nice to see the performance of the approach compared to different baselines. This could answer questions like how it compares to SR/FR/SM under a variety of conditions: when $\lambda < 1$ or when $\lambda =1$ or when states are only ever visited once (because it's continuous/very high dimensional).
The pixel domain was helpful to showcase that the approach can be applied in high-dimensional domains like this, but it would have been nice to see a more "compositional" domain as has been demonstrated in previous work to investigate composition (e.g. the domains where the agent collects objects of different colours/shapes with different priorities) [1,2,3].
I realise space is an issue, but it would have been helpful to see a very brief discussion on related work that focuses on non-Markov rewards (outside of the context of successor representations). e.g. [4]
Minor:
Some of the drawbacks of composition with SF are also inherited by this approach. For instance, the bound in Theorem 5.1 is pretty loose (see [2]), but I understand the point is to show it's the same bound with an extra term that is positive only when the $\lambda$ estimates are wrong, so it's not very important.
While I found the motivation for considering these forms of non-Markovian rewards interesting, it may be the case that many RL practitioners who are not interested in modelling animal/real-world behaviour will not adopt this since for many practical applications (or popular benchmarks like Atari or MuJoCo), such a formulation is not natural.
Some of the contractions used made it hard to parse sentences (e.g. L74). I'd recommend replacing these with the full words.
[1] Barreto, André, et al. "Fast reinforcement learning with generalised policy updates." Proceedings of the National Academy of Sciences 117.48 (2020): 30079-30087.
[2] Nangue Tasse, Geraud, et al. "Generalisation in lifelong reinforcement learning through logical composition." International Conference on Learning Representations. 2022.
[3] Alver, Safa, and Doina Precup. "Constructing a Good Behavior Basis for Transfer using Generalised Policy Updates." International Conference on Learning Representations. 2022.
[4] Gaon, Maor, and Ronen Brafman. "Reinforcement learning with non-markovian rewards." Proceedings of the AAAI conference on artificial intelligence. Vol. 34. No. 04. 2020.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: On line 291, it states that there is not yet a measure-theoretic version of the representation. Having said that, while there may not be strict theoretical justification, would there be any issue in practice with applying this approach to continuous state/action domains?
I'm slightly confused about the composition in the Four Rooms domain. There are four base policies but three goals. Is each policy to reach the centre of each room (and in three of the rooms, a goal exists)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations are well described in Section 7, including many that I would not have even considered.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your detailed feedback, and we’re glad that you liked the paper! We hope to address your concerns below:
- Baselines: We completely agree, and identifying appropriate baselines was a challenge for us, simply because (as far as we know) DMU has not been addressed in an RL context before. Experiments where we set $\lambda = 1$ are meant to compare our approach to standard RL, but if you have any specific suggestions for additional baselines, we’d very much appreciate any pointers!
- Compositional Domains: Thank you for pointing this out! We agree, one direction that we’d like to push in the future is to explore both more compositional tasks (e.g., environments where the agent has to pick up a key and proceed to a locked door) as well as novel ways to use the $\lambda$R to compose policies (e.g., the FR can be used to perform a form of shortest path planning over a set of base policies [1]).
- Discussion of Non-Markov Rewards: Yes, we certainly agree–thank you for pointing this out! We will add a section in the Appendix with a more detailed discussion.
- Looseness of the Bound: Yes, indeed it is a rather loose bound, but as you note, our hope was to draw the connection with the previously established bound for GPI :). We have slightly tightened the bound recently, replacing the $L_1(r)$ term with $L_\infty(r)$.
- Interest to the Broader Community: We believe this is an important point, and one that we’ve given careful consideration. It is certainly the case that the most natural area of application for DMU is in studying natural behavior. In that lens, we hope this paper establishes a formal framework for DMU within RL that can be applied in the future more extensively within this context. However, we also believe that addressing non-stationarity in environmental rewards is also an important topic for the design of artificial agents. If agents are designed with the implicit assumption that rewards are ever-present, it could lead to value overestimation and suboptimal behavior. While this theme is present in all of our experiments, we’d like to highlight $\lambda$-SAC (Appendix H and Attachment Figures L3 and L4) as an interesting example case.
- Contractions: Thank you! We will update the paper accordingly.
- Q1 (Measure/Set-Theoretic Formulation in Practice): Good question! The only continuous-domain experiments that we performed were using $\lambda$-features (Fig. 5.4), but we do not anticipate any issues with using the $\lambda$-operator in practice, even though it isn’t entirely theoretically justified. Continuous-domain experiments with the $\lambda$O are an exciting direction for future work that we intend to look at, especially because we could adjust FR planning (as mentioned above and detailed in [1]) to be compatible with the first-occupancy operator (i.e. $\lambda$O with $\lambda=0$) to construct a continuous-domain planning scheme. We can mention this in the paper.
- Q2 (FourRooms): Yes, that’s correct! We’ll make it more clear in the text.
[1] https://openreview.net/forum?id=JBAZe2yN6Ub
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. While there are still some fair questions about the motivation for the setting considered, and (as others have pointed out) many ways that the paper can be extended to incorporate other kinds of DMU-type problems, I think the paper packs a lot into it as it currently stands and am happy to stand by my original rating.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: We appreciate your feedback, and thank you for the time spent reviewing the paper! Please let us know if any additional questions arise. | Summary: The authors explore diminishing marginal utility in the context of reinforcement learning. Specifically, they study reinforcement learning when reward obtained at a state diminishes with each visit to that state (following a particular mathematical form). The authors show that, under such diminishing rewards, agent policies can be evaluated by using a novel state representation introduced in the paper. This representation generalises earlier state representations in the literature, such as the successor representation.
Strengths: Originality: The concept of diminishing rewards is well known and explored in other fields. The paper presents new results within the context of reinforcement learning.
Quality: The authors provide a thoughtful and useful analysis under the assumptions they make. This includes the discussion of how the proposed representation relates to existing state representations in RL. The recursion they have identified is interesting and potentially useful.
The paper is well structured. The writing is clear.
Weaknesses: The problem is not strongly motivated: why is studying diminishing marginal utility within the context of RL, with the specific assumptions made in this paper, important? Reasoning that goes beyond "people have diminishing marginal utility" would be useful. Why and how is this concept, in the form studied in the paper, useful for reinforcement learning agents?
The writing should clearly distinguish between two cases: (1) an objective quantity in the environment (e.g., available food) gets smaller, (2) the subjective value attached by the agent to an outside quantity gets smaller (e.g., the second ice cream cone does not taste as good).
In Section 6, the authors write that naturalistic environments often exhibit diminishing reward and give the example of foraging with diminishing food patches. I see a discrepancy between this example and the problem addressed in the rest of the paper. In the foraging example, the available food decreases because other agents consume it, and the amount of reduction should generally be a function of the time that passed between visits to the site. In contrast, the core assumption in the paper is that the reduction in reward in a given state is a function of how many times the agent itself visited that particular state, regardless of how much time passed since its last visit to that state.
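The contrast the reviewer draws can be made concrete with a toy sketch (illustrative only: the multiplicative $\lambda$-discounting form is an assumption suggested by the paper's $\lambda$ notation, and the time-based replenishment variant is hypothetical):

```python
# Two notions of "diminishing reward". The visit-based form reflects the
# paper's core assumption as described in the review; the time-based form
# reflects the foraging example, where resources regrow between visits.

def visit_based_reward(base_reward, num_prior_visits, lam):
    """Reward shrinks with each visit by the agent itself, regardless of
    how much time has passed since the last visit."""
    return base_reward * lam ** num_prior_visits

def time_based_reward(base_reward, steps_since_last_visit, regrowth=0.1):
    """Reward depends on elapsed time since the last visit: a long
    absence restores the full reward (hypothetical regrowth model)."""
    return base_reward * min(1.0, regrowth * steps_since_last_visit)

# Visit-based: the third visit pays lambda^2 of the base reward.
print(visit_based_reward(1.0, 2, lam=0.5))                # 0.25
# Time-based: after a long absence the reward is fully replenished.
print(time_based_reward(1.0, steps_since_last_visit=20))  # 1.0
```

Under the visit-based model the two calls above would disagree whenever the agent returns after a long delay, which is exactly the discrepancy the reviewer points out.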
Additional comments: Forward-backward representation should be explained in Section 2 (Preliminaries).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I do not have any particular questions but, as I noted above, I do not find the paper particularly well-motivated in its present form. Why and how is diminishing marginal utility, in the form studied in the paper, useful for reinforcement learning agents?
I have read the rebuttal and thank the authors for their additional thoughts on the subject.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the limitations have been discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your detailed review and careful consideration of our paper! We are glad that you appreciated the originality of the work, found it to be of high quality, well-structured, and clearly written. We aim to address your concerns below:
- Motivation: We believe that there are two central questions that can be asked regarding the usefulness of DMU and RL. (1) Does incorporating DMU into standard RL paradigms/agents improve their performance on problems of interest in machine learning? We believe the following four areas are of particular importance in answering this question: (i) faster convergence for policy evaluation with $\lambda < 1$ (Proposition 4.1 and Attachment Figure L1), (ii) reducing value function overestimation (e.g., $\lambda$-SAC, described in Appendix H and with additional results presented in Attachment Figures L3 and L4), (iii) greater understanding of non-stationary rewards in RL, and (iv) the set-theoretic formulation of the FR ($\lambda = 0$), suitable for continuous spaces, is novel. The FR facilitates navigation along shortest paths between goal states, which is of interest in robotics and other areas of machine learning. (2) Does modeling DMU within an RL framework provide new avenues of study for understanding natural behavior? This is the question for which we are most hopeful to provide an affirmative answer. The SR and FR are already of interest to neuroscientists [1,2,3] for the connections between the ways they encode spatial information and hippocampal activity. By generalizing these representations (and the problem settings for which they are suited) to DMU, we believe that the $\lambda$R can guide both further study in this area and offer new connections to other phenomena associated with DMU (e.g., foraging). Previous work on the SR and FR has already been grounded within the formalism of RL. When neuroscientists use these representations, then, they have clarity on what it is they can formally claim regarding behavior. We also believe these results may be of interest to economists interested in DMU.
Our primary goal in this area is to provide the same type of formally sound framework for DMU as has been established for other problem areas, so that the $\lambda$R can act as a solid foundation on which to build. We view the experimental results in Section 6 as a proof of concept for the $\lambda$R as a useful tool in this setting. We will emphasize these motivations for the work in the paper.
- The Nature of Diminishment: This is indeed a very good point, and one which we should emphasize and explore in more depth in the paper! It is especially interesting because in natural foraging, both the objective depletion of resources in the environment and subjective diminishment of perceived utility play a role in behavior (e.g., an animal may leave a berry patch before all berries have been consumed if another resource’s appeal supersedes the consumption of more fruit). One especially important point that you bring up is that of utility replenishment–i.e., that utility should be a function of the amount of time since the agent has last visited a particular state rather than just how many times it’s visited that state overall. As we note in our discussion of limitations in Section 7, the current framework we present models this replenishment process as being linked to the episodic nature of the environment, but in an ideal case, replenishment would take place gradually in a continuing environment. We explore this case in a fair amount of detail in Appendix I, providing some possible representations in line with such a framework as well as preliminary simulations. We agree this is the natural next step for studying DMU and foraging within RL!
- FB Representation Background: Yes, that is a good point–we will make the necessary adjustments.
Thank you very much once again for the helpful feedback! We hope we have adequately addressed your concerns. If not, we’re more than happy to continue the discussion!
[1] https://www.nature.com/articles/nn.4650
[2] https://www.jneurosci.org/content/38/33/7193
[3] https://www.cell.com/neuron/pdf/S0896-6273(23)00230-1.pdf | Rebuttal 1:
Rebuttal:
We’d like to sincerely thank all reviewers for their time and for their helpful feedback and suggestions for the paper. We believe that incorporating this feedback will make the paper stronger, and we look forward to a constructive discussion! We are glad that reviewers consistently appreciated the originality of the paper, the theoretical analysis provided, the consideration of both tabular and deep RL instantiations of our approach, and found the writing to be clear. Overall, we view this as exploratory work—DMU is a prominent factor in human and animal decision-making. RL is likely the most dominant approach to modeling sequential decision-making in machine learning and a common approach for doing so in neuroscience and other fields, yet until now (as far as we are aware) has not contained a framework that accounts for (and formally characterizes) DMU. The goal of this paper is to provide a basic approach for closing this gap.
We have also attached a pdf file with several additional experimental results:
- In Figure L1, we demonstrate the accelerated convergence of dynamic programming—in this case, policy evaluation—for lower values of $\lambda$ both with and without stochastic transition dynamics in the environment (in this case, FourRooms). This result augments the one in Figure E3.
- In Figure L2, we show that the usefulness of the $\lambda$R for supporting policy composition is maintained with stochastic transition dynamics.
- In Figure L3, we augment the results in Figure H1 in the following ways: (1) we add five additional random seeds to our HalfCheetah-v2 results, and (2) we repeat our experiment on Hopper-v2, a more challenging environment (also for eight seeds total). We found that, consistent with our initial results, $\lambda$-SAC is able to match the performance of SAC while only using a single critic (rather than two). It does this by using an online bandit approach to select values of $\lambda$ used for training the critic that result in higher reward (see Appendix H for more details). In the rightmost panel, we can see how the choice of $\lambda$ changes over the course of learning for each environment. Interestingly, in [1], it was found that strong performance in HalfCheetah-v2 is associated with optimistic/high value estimates, while strong performance in Hopper-v2 is associated with pessimistic/lower value estimates (i.e., it benefits more from the pessimistic evaluation performed when SAC takes the minimum across its two critics). Consistent with this finding, we observed that $\lambda$-SAC learns to select high values of $\lambda$ on HalfCheetah-v2, and lower values on Hopper-v2. We believe this is a promising example of a possible area of benefit that the $\lambda$R can bring even to standard RL problem settings where the reward is stationary.
- In Figure L4, we plot the average final performance of SAC and $\lambda$-SAC versus the FLOPS used by each agent (calculated using the `flopth` Python package) as a means of emphasizing that $\lambda$-SAC is able to match SAC’s performance while substantially saving on computational cost (~32% for each task).
We aim to address specific concerns in the responses below. In these responses, when we refer to the additional figures described above, we will name them as “Attachment Figures.” Thank you all once again!
[1] https://openreview.net/forum?id=a4WgjcLeZIn
Pdf: /pdf/b5c068dc3095e933e38d1515524b4e93ef6dd898.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
On student-teacher deviations in distillation: does it pay to disobey? | Accept (poster) | Summary: The paper performs a careful analysis of the discrepancy between the predictions made by the student and teacher in the context of knowledge distillation. Prior works has shown that (1) distillation improves student performance, so that it can sometimes outperform the teacher, and (2) the predictions made by the student are quite different from the predictions of the teacher in the end of distillation. The paper provides new insights into these observations, specifically:
- Student exaggerates the predicted probabilities of the teacher, i.e. it's less confident where the teacher is not confident and (sometimes) more confident where the teacher is confident
- The authors argue theoretically, and show empirical evidence that knowledge distillation exaggerates the inductive bias of gradient descent
- The authors show that knowledge distillation can hurt when the teacher is not interpolating the training data
Strengths: **S1.** The authors perform careful high-quality analysis with targeted experiments highlighting specific conclusions. For example, the observation that the low-end of the confidence distribution for the student underfits the teacher (section 3) is demonstrated very thoroughly on a range of image and text problems, with nice figures, such as Fig. 2.
**S2.** The theory in Section 4 is presented very clearly and easy to follow.
**S3.** The experiment in Fig 3(a) is quite interesting, showing that the student is less confident than the teacher on almost all of the mislabeled datapoints.
Weaknesses: **W1.** Some of the confidence scatter-plots in the appendix are less clean / interpretable than the ones in the main text. For example, Figure 12. Even in some panels in Fig. 5 the effect is not completely obvious.
**W2.** It would be very interesting to see how the training length impacts the observed confidence distribution. One of the main conclusions of [1] is that distillation performance keeps improving with longer training.
**W3.** [1] is actually reporting successful distillation results on ImageNet, while the authors argue in section 5.2 that distillation is known to hurt on ImageNet. What's causing the difference in the results?
**W4.** The theory is fairly simplistic, and only applies to linear models, and infinitesimal learning rates. However, the authors verify that some conclusions transfer to neural networks. But it is not very clear why they translate in the way they do. In particular, it's unclear why the authors measure the projection of the weights in the first layer on the eigenvalues of the covariance matrix of the data? Intuitively, I would expect the analogy of the result for linear models would be some result on converging along the eigenvectors of the Hessian of the loss instead (although, it is not constant wrt weights, so it's not clear what that would mean exactly)?
**References**
[1] [_Knowledge Distillation: A Good Teacher Is Patient and Consistent_](https://openaccess.thecvf.com/content/CVPR2022/html/Beyer_Knowledge_Distillation_A_Good_Teacher_Is_Patient_and_Consistent_CVPR_2022_paper.html)
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: **Q1.** Why do you think you see more student overconfidence in language tasks compared to vision (line 140)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations addressed adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for appreciating the breadth of our experiments and the clarity in our theory. Also, thanks for the very detailed review!
----
> Why do you think you see more student overconfidence in language tasks compared to vision (line 140)?
Indeed, this is a curious phenomenon — thanks for paying close attention to our observations! We suspect this may have to do with there being only two to four classes in language tasks unlike the more complex image tasks that have 10s or 100s of classes. This would change the magnitude of the probabilities of various classes, and thus change the dynamics of learning qualitatively (while interestingly, still preserving the exaggeration phenomenon in some way or the other).
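The effect of class count on the magnitude of predicted probabilities can be seen with a quick softmax calculation (an illustrative aside, not the paper's analysis):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

# With uninformative (uniform) logits, each class probability is 1/K, so
# the typical probability scale shrinks as the number of classes grows:
# 2-4 classes (language tasks) vs. 10s-100s of classes (image tasks).
for k in (2, 4, 100):
    p = softmax(np.zeros(k))
    print(k, p[0])  # 2 0.5, 4 0.25, 100 0.01
```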
----
> Some of the confidence scatter-plots in the appendix are less clean / interpretable than the ones in the main text. For example, Figure 12. Even in some panels in Fig. 5 the effect is not completely obvious.
First of all, thanks again for examining all our results carefully!
For Fig 5, we can quantitatively say that underfitting is prevalent here barring a couple of exceptions (e.g., Mobile-net self-distillation CIFAR100): In Table 3 and 4, we show that if we were to fit the bottom 25% of the teacher’s confident points, the slope is consistently > 1 in nearly all these plots (across test and train), except a few.
For Fig 12, we completely agree with you — we even note this in the caption. **However, this is a cross-architecture setting, where capacity mismatch can induce confounding deviations that are not covered by the bias exaggeration theory!** We reported cross-architecture settings for the sake of completeness.
We mention these exceptions in the main paper (lines 150 and 154) and discuss all of them clearly from line 661 in Appendix.
----
> It would be very interesting to see how the training length impacts the observed confidence distribution. One of the main conclusions of [1] is that distillation performance keeps improving with longer training.
This is a valuable ablation. **We have done this ablation in C.5, Lines 718-730.** We find that training longer still doesn’t fix the exaggeration. This is in line with Stanton et al., (“Does KD really work?”, Fig 6a) who find that significantly training longer only increases the student-teacher agreement on training data by a meager 2%! Possibly, the initial eigenspace regularization provided by KD traps the model in a local minimum of KD loss in the non-convex regime. This is an exciting fundamental question about distillation theory that our paper gives rise to.
> [1] is actually reporting successful distillation results on ImageNet, while the authors argue in section 5.2 that distillation is known to hurt on ImageNet. What's causing the difference in the results?
Our point was that **the _standard_ distillation recipe doesn’t work on ImageNet, as Cho and Hariharan note**. But there are certainly other sophisticated recipes that can get it working. For example, [1] uses augmentations, mixup and training for an extensively long time. We will make this clearer.
---
> The authors verify that some conclusions transfer to neural networks. But it is not very clear why they translate in the way they do. In particular, it's unclear why the authors measure the projection of the weights in the first layer on the eigenvalues of the covariance matrix of the data?
You’ve raised an interesting subtlety. There are multiple possible ways to informally generalize the theorem to the non-convex setting. In your interpretation, the theorem says: “there is an exaggerated bias when we project the weights onto the fixed Hessian of the loss landscape”.
Our interpretation is: “there is an exaggerated bias when we project the weights onto the (fixed) eigenspace of the data”. **To us, this was the most natural interpretation as it follows from the manner of the proof.** To extend this to an MLP, we made some intuitive choices:
1. We consider each layer as an individual set of weights, and plot the eigenspace trajectories specific to those weights (e.g., D.5 & D.6 for first layer, and D.7 for intermediate layer)
2. We consider the “data” as the previous layer’s output. For the first layer, the data is the raw features themselves. For an intermediate layer, this would be the previous layer’s representation. But since we want this data to be fixed over time, we simply choose the representation computed at the end of teacher’s training. (Note that both the student and teacher start from the same initialization, so these representations become compatible.)
3. Since the weights are matrices, ideally, we would have to plot the eigentrajectory of each row of weights in this matrix. This is infeasible. So, we perform an l2 norm reduction so that we can visualize a single trajectory.
Hopefully, this clarifies how we employ a reasonable informal generalization of the theorem. Note that in this framework, we still find a clear and neat difference in the two trajectories as predicted by the theorem.
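The three steps above can be sketched in a few lines of numpy (our own illustrative reconstruction; the function name, shapes, and variables are assumptions, not the paper's actual code):

```python
import numpy as np

def eigenspace_projection(W, X, top_k=5):
    """Project a layer's weights onto the top eigenvectors of the input
    covariance, then l2-reduce over rows to get one trajectory point.

    W: (d_out, d_in) layer weight matrix.
    X: (n, d_in) fixed "data" for this layer -- raw features for the
       first layer, frozen end-of-training teacher representations for
       an intermediate layer.
    """
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(X)                 # data covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
    top = eigvecs[:, ::-1][:, :top_k]        # top-k eigenvectors (columns)
    proj = W @ top                           # (d_out, top_k) coefficients
    return np.linalg.norm(proj, axis=0)      # l2 norm over the rows of W

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
W = rng.normal(size=(8, 16))
print(eigenspace_projection(W, X).shape)  # (5,)
```

Recomputing this quantity at each training step for both student and teacher would trace out the kind of eigenspace trajectories the rebuttal describes.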
--------
References
[1] Knowledge Distillation: A Good Teacher Is Patient and Consistent
[2]: Cho and Hariharan, On the Efficacy of Knowledge Distillation
----
**We hope you find our answers to your questions satisfactory. If so, we sincerely hope you’re able to view the paper in a more positive light and re-evaluate its score! Thank you again for your time and detailed review.**
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal!
Comment: Dear authors, thank you so much for the detailed rebuttal and providing new intuitions and clarifications! I am fully satisfied with the response. Given that I already vote for accept, I will keep my score, as I think it best describes my evaluation of the paper. | Summary: This paper explores the paradox in knowledge distillation where a “student” network deviates from the “teacher” network’s probabilities but still outperforms the teacher. The authors found that the student network exaggerates the teacher’s confidence levels across various architectures and data types. They also discovered that distillation amplifies the implicit bias of gradient descent, leading to faster convergence along top data eigendirections. This exaggerated bias is suggested as the reason for the student’s improved performance. The study bridges theory and practice, offering insights for future work to enhance distillation benefits by inducing careful deviations.
Strengths: - The paper's introduction is excellently crafted, presenting the problem, hypothesis, structure of the work, and results in a clear and engaging manner.
- The overall quality of the research paper is exceptional.
- The paper is highly coherent, presenting complex ideas in an understandable way.
- The contributions made in this paper are innovative and original, as far as I am aware.
- The findings significantly enhance our understanding of knowledge distillation behavior.
- The visual aids in the paper are straightforward and easy to comprehend.
- The experimental evaluation is comprehensive and thorough.
- The mathematical notation used in the paper is succinct, clear, and well-selected.
- The supplementary material provides a wealth of additional experimental evaluations.
- The paper is beautifully composed, making it a pleasure to read.
Weaknesses: I generally dislike scatterplots when the density of points is as high as in the presented figures. The region of highest density is saturated and it becomes impossible to distinguish between densities in saturated regions. Consider an *actual* density plot with a heatmap or a contourplot as choice of visualization instead.
This is purely subjective, but I feel that the authors tends to overuse italics. While it can be useful to highlight key terms, its frequent use in every sentence or every other sentence diminishes its effectiveness.
I found only a single typo on line 300 “similarities [17, 36], Several” -> “similarities [17, 36]. Several”.
Figure 1. (a): Why is this the only figure with a rasterized instead of a vectorized graphic?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: None
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors address the limitations of their work in Appendix A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are pleased to hear that you have enjoyed reading the paper. Thanks in particular for examining our supplementary results and for raising many positive points about work.
> Consider an actual density plot with a heatmap or a contourplot as choice of visualization instead.
You’re right that it is better to use a density plot. Currently we use a scatter plot, although with a transparency factor which is somewhere in between a scatter plot and a density plot. **We’ve provided some sample plots in our global response PDF. Please let us know if you think they can be improved further.**
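A minimal numpy sketch of the binning behind such a density plot (the confidence data here is hypothetical, generated only to illustrate the idea):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical teacher/student confidence pairs for 50k points.
teacher_conf = rng.beta(5, 2, size=50_000)
student_conf = np.clip(teacher_conf + rng.normal(0, 0.05, size=50_000), 0, 1)

# Bin the points instead of scattering them: per-cell counts remain
# distinguishable where an alpha-blended scatter plot would saturate.
counts, xedges, yedges = np.histogram2d(
    teacher_conf, student_conf, bins=100, range=[[0, 1], [0, 1]]
)
# Log-scale the counts before handing them to a heatmap renderer
# (e.g. plt.pcolormesh or plt.imshow), so sparse regions stay visible.
log_density = np.log1p(counts)
print(int(counts.sum()))  # 50000
```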
> Figure 1. (a): Why is this the only figure with a rasterized instead of a vectorized graphic?
Note that we rendered all our scatter plots as rasterized png files since it was faster to load. The vectorized pdf made it significantly harder to scroll over the pages, since they try to render 1000s of points.
> tend to overuse italics
Absolutely a fair point -– we will fix this.
And thanks for spotting the typo!
**Given your accurate and detailed summary of the paper, if you feel comfortable about increasing your confidence score, we would highly appreciate that! Thanks again for your positive review.**
---
Rebuttal Comment 1.1:
Comment: > You’re right that it is better to use a density plot. Currently we use a scatter plot, although with a transparency factor which is somewhere in between a scatter plot and a density plot. We’ve provided some sample plots in our global response PDF. Please let us know if you think they can be improved further.
IMO the density plot is much more informative -- well done.
> Note that we rendered all our scatter plots as rasterized png files since it was faster to load. The vectorized pdf made it significantly harder to scroll over the pages, since they try to render 1000s of points.
Yes, I recall having this issue when rendering vectorized scatterplots.
Having read the other reviews and rebuttals, I want to thank the authors again for their contributions! I found the work to be highly interesting. I will stand with my rating and increase my confidence from 3 to 4.
---
Reply to Comment 1.1.1:
Title: Thank you for valuing our contributions!
Comment: Dear reviewer,
**Thank you for going over all the reviews/rebuttals and updating your confidence. We are pleased to hear that you find the work highly interesting!** Once again, thanks for your valuable feedback. | Summary: This paper aims to understand the counter-intuitive phenomenon that the
student can sometimes outperform the teacher in terms of generalization
performance even when it deviates from the teacher's soft-labels during
training. Using a linear regression model, the authors provide a
theoretical result that explains this behavior. That is, through early
stopping, the student further exaggerates the top eigendirections and
suppresses the lower eigendirections than the teacher. Thus, the student
may deviate from the teacher's soft labels but attain a more favorable
implicit bias on the top eigendirections. The paper then relies on
numerical results to demonstrate that the similar behaviors occurs in
non-linear neural networks beyond regression.
Strengths: 1. The regime where the student deviates from the teacher is less explored. The paper is among the first to provide a theoretical understanding of the improved student performance in this regime.
2. The paper provides both theoretical results and ample numerical results for the phenomenon.
Weaknesses: 1. The theoretical result is for linear regression. It is unclear how the analysis can be extended to classification problems.
2. Compared to the implicit bias in linear regression (with early stopping), there are other ways that neural network training for classification problems can produce an implicit bias. Thus, it is difficult to tell whether the insight revealed by the theoretical result is the most prominent factor.
3. The de-emphasis of lower eigendirections could also be a double-edged sword. That is, if the second and third eigendirections correspond to ``useful'' parameters to learn, then the de-emphasis reported by the theoretical result may also hurt the student.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I wish to hear what the authors' thoughts are regarding the weakness points above, i.e., (i) whether the theoretical insights can be generalized to classification problems; (ii) the relevance of other types of implicit bias; and (iii) the setting where de-emphasizing lower eigendirections may hurt.
2. Can the authors also comment on the use of temperature in student-teacher training? Using temperature also alters the teacher's soft labels, and therefore it can be seen as another form of the student not trying to perfectly match the teacher. It has been demonstrated to improve the student performance.
Post rebuttal phase:
The reviewer wishes to thank the authors for their response, which clarifies the "double-edged sword" effect shown in some of the experiments. However, I feel that overall the impact of the implicit-bias/top-eigenvalue effect is not conclusive: it sometimes helps, but sometimes doesn't. It is unclear how these explanations will guide student-teacher training. Thus, I think I will keep my review score.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Some discussion of the limitations appears in the conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to provide your feedback on our paper. You’ve raised some interesting questions, some of which the paper addresses. We explain why below.
------
> whether the theoretical insights can be generalized to classification problems;
We’d like to note that **we have provided empirical proof of the theorem’s claim for 3 different classification settings spanning 3 different models (linear, MLP, CNN) in Fig 1b and Appendix D**. Here, our theoretical assumptions break in many different ways (we apply cross entropy + finite learning rate + multiclass + a different optimizer). Yet, we find a stark difference in the trajectory of the student and the teacher in the eigenspace — exactly as predicted by the theorem. We agree that it’s unclear how to extend the precise technicalities of our proof to these settings. But we hope you are convinced that these experiments consistently highlight the generality of our insights.
-----
> The de-emphasis of lower eigen-directions could also be a double-edged sword. That is, if the second and third eigen-directions correspond to "useful" parameters to learn, then the de-emphasis reported by the theoretical result may also hurt the student.
Absolutely! Thanks for raising such an insightful point! **Sec 5.2 is meant to address this double-edged nature**, but we will rephrase it to make this clearer.
Sec 5.2 says that, due to certain confounding factors, regularization may not always be a good thing i.e., it can be excessive, or as you say, a double-edged sword. You bring up one such confounding factor: usefulness of intermediate eigendirections. We highlighted another: the teacher’s top 1 train accuracy.
In fact, we can phrase our factor in terms of yours. One way a teacher may have low training accuracy is via aggressive early stopping; hence the teacher may have used only the top eigendirection and not relied as much on, say, a useful 2nd or 3rd direction. When a student learns from this imperfect teacher, it may further de-emphasize these useful 2nd or 3rd directions and thus suffer even more in accuracy. Regularization is thus counter-productive here. **We hope you see that this is not a weakness of our theory, but rather a universal fact about regularization that we have prominently noted in the paper.**
---
> Can the authors also comment on the use of temperature in student-teacher training? …it can be seen as another form of student not trying to perfectly match the teacher.
Agreed! To phrase this in terms of our theory, the top eigenvector can be thought of as an approximate classifier that assigns fuzzy class memberships. Thus, to fit de-fuzzified, spikier memberships, we need to rely on lower eigenvectors. Hence, a reasonable hypothesis is: “learning a softer version of the teacher's memberships => more reliance on top eigenvectors => more imperfect fit of the teacher”.
But again, this is a double-edged sword: if we try to learn overly-softened versions of the teacher's class memberships, we over-emphasize some “noisy” tail classes that the teacher picked up by happenstance. This may again demand the use of lower eigenvectors. Finding the sweet-spot level of softness and relating it to our theory requires a rigorous analysis and is certainly a very interesting question for future work.
**Having said all that, our work focused on the same-temperature case because this is the setting where it’s hardest to explain why there is any student-teacher deviation at all!**
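For concreteness, the temperature mechanism discussed above is the standard temperature-scaled softmax from classic distillation; the following minimal sketch (a generic illustration, not the paper's actual training code) shows how a larger temperature yields softer teacher targets and how the matching objective uses them:

```python
import numpy as np

def soften(logits, T):
    """Temperature-scaled softmax; larger T gives softer (fuzzier) class memberships."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T):
    """Cross-entropy between temperature-softened teacher and student distributions."""
    p_teacher = soften(teacher_logits, T)
    log_p_student = np.log(soften(student_logits, T))
    return float(-(p_teacher * log_p_student).sum(axis=-1).mean())
```

At T = 1 the student is asked to match the teacher's memberships exactly; T > 1 flattens the targets, which is the "another form of the student not perfectly matching the teacher" that the reviewer points to.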
----
> There are other ways that neural network training for classification problems can produce an implicit bias. Thus, it is difficult to tell whether the insight revealed by the theoretical result is the most prominent factor.
This is a thought-provoking question!
First, we re-iterate that in our MLP and CNN settings, we do see a clear difference in the eigenspace trajectories of the model (Appendix D).
But it is indeed right to wonder if distillation also exaggerates other sorts of biases in neural networks. We believe our work has empirically/theoretically identified the first exaggeration of this type in GD-trained models. It seems reasonable to say that finding other such exaggerations is a great open question for future work to tackle based on our insights.
------
**We hope that our answers convince you as to how our paper addresses some of your questions already; and for the other questions hopefully our response gives you a satisfactory new insight.** We are eager to know if you’ll be able to re-evaluate our paper's score (and contribution score) in light of this discussion. Thanks again for the insightful questions! Please let us know if you have any further ones.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. The discussion in Section 5.2 does read more like "a double-edged sword" now, and thanks for pointing out this interpretation. However, I feel that overall the impact of the implicit-bias/top-eigenvalue effect is not conclusive: it sometimes helps, but sometimes doesn't. It is unclear how these explanations will guide student-teacher training. Thus, I think I will keep my review score.
---
Reply to Comment 1.1.1:
Title: Thanks for acknowledging our response!
Comment: Dear reviewer,
We are grateful for your prompt response to our rebuttal!
**Thank you for acknowledging that we addressed your question regarding the "double-edgedness". We sincerely hope you also found our response to your other two concerns helpful!**
We also understand that it is unclear to you as to how our findings can end up providing concrete guidance for knowledge distillation (KD). While we believe that this work has the potential to give rise to practical guidance in the long-term, we acknowledge that evaluating this can certainly be subjective.
Nevertheless, we jot down some of our thoughts on this for the curious readers:
1. Without our finding, future empirical research may dedicate significant efforts into forcing the student to _precisely_ fit the teacher. **Our work warns that being pedantic about a precise fit would be counterproductive, thus potentially shaping empirical research.**
2. Our theoretical model opens the door to analyzing the many variations of KD. E.g., it is not hard to extend it to "progressive KD" with intermediate teacher checkpoints (https://arxiv.org/abs/2110.08532). An exciting possibility is that **one could (easily) brainstorm new KD objectives due to the simplicity of our theoretical framework**, with the hope of a *provably* better exaggerated bias. One could then empirically explore those variants in the deep learning setting. In this sense, our theory has the potential to inspire new practice.
3. As you note in the strengths, this is a first step towards bridging theoretical intuition and a remarkable practical phenomenon about KD. Such theory-practice work is always a challenging endeavor: deep learning practice is almost always messy, and any tractable theory almost always requires great simplifications. **We have gone to great lengths to bridge this theory-practice disconnect:** we report our observations on a wide range of image/language settings, and we verify our theory in multiple settings where assumptions break.
We hope this provides a glimpse of what we believe is the _long-term_ value of our findings. But to reiterate, we understand that the value/confidence one may assign to these possibilities is subjective.
Thanks again for engaging with our work and for your constructive feedback!
Regards,
Authors | Summary: This work delves into the understanding of knowledge distillation by studying the deviations between teacher and student models during the distillation process. It reveals two primary observations: students often underfit points that teachers find challenging, and the initial training phase is not crucial for distillation benefits as similar results can be achieved by switching from hard labels to teacher's soft predictions mid-training. To explain these observations, the authors propose two theoretical viewpoints: distillation acting as a regularizer in the eigenspace and as a denoiser of gradients. Empirical evidence supporting these theories was provided through experiments on various settings. Overall, the paper enriches our understanding of knowledge distillation, bridging the gap between theory and practice, while offering insights that could enhance the efficiency and effectiveness of knowledge distillation processes.
Strengths: 1. This paper conducts experiments across multiple model architectures and both vision and language tasks.
2. The observation is clearly presented and further explained from interesting perspectives.
3. This paper thoroughly reviews relevant literature and successfully positions itself within the context of existing research.
Weaknesses: 1. While Section 5.2 has presented the effect of teacher's interpolation level in CIFAR-100, a synthetic dataset might solidify such findings further. It would be intriguing to explore whether varying the teacher's interpolation would correspondingly impact the degree of the student's exaggeration.
2. This paper focuses on the setting of distilling from logits, it would augment the paper's breadth if some analysis concerning feature distillation were incorporated.
3. Not a weakness, but the authors might find a connection in this paper: Zhang et al., "Do Not Blindly Imitate the Teacher: Loss Perturbation for Knowledge Distillation". I understand the authors may not yet have had the opportunity to consider its relevance to this study given its recent release, but it appears to echo this paper's claim that not matching the teacher probabilities exactly can be a good thing.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for appreciating the breadth of our empirical findings and giving our paper a positive rating!
------
> It would be intriguing to explore whether varying the teacher's interpolation would correspondingly impact the degree of the student's exaggeration.
Interesting question! Based on our theory, the student’s exaggeration would arise independent of the teacher’s interpolation level. It is only the student’s generalization that is confounded by the teacher’s interpolation level.
**Indeed, as evinced by Figure A2 in our global response PDF, under both the interpolated and non-interpolated CIFAR-100 teacher, the confidence exaggeration appears.**
We’d also like to refer you to the plots of ImageNet in Fig 7 (where there’s no interpolation) where we do find confidence exaggeration.
We're afraid we are not sure why a synthetic dataset would be necessary: even in CIFAR100, we are already able to manipulate the teacher's interpolation level by training it to different extents. Did you have something else in mind? Please let us know! Thanks!
----
> 2. it would augment the paper's breadth if some analysis concerning feature distillation were incorporated.
This is an exciting follow-up exploration. There may be many interesting effects that may arise when the exaggerated bias is induced at a feature level rather than at the logit level.
But to gently push back on characterizing this as a weakness: given the existing poor understanding of distillation, we followed the spirit of work such as [1] in rigorously understanding at least the most widely used form of distillation. We believe this in itself is a significant first step. Furthermore, please note that, unlike prior works, we simultaneously attack this from both a theoretical and empirical viewpoint (across a breadth of settings) bridging the gap between the two.
-----
> 3. Not a weakness but the authors might find connection in this paper: Zhang et al., "Do Not Blindly Imitate the Teacher: Loss Perturbation for Knowledge Distillation". I understand the authors may not yet have had the opportunity to consider its relevance to this study given its recent release,...
Thank you for bringing up this (contemporaneous) work. Interestingly, their rationale for not fitting the teacher is complementary to ours. Their rationale is that overfitting to the teacher may be bad since the teacher can be inaccurate. In our viewpoint, even if the teacher is 100% accurate, it helps to not overfit to the teacher. Our theory says that this will help ignore certain lower eigendirections picked up by the teacher.
-------
[1] Stanton et al., Does Knowledge Distillation Really Work?
------
We hope our responses provide satisfactory answers to your questions. If so, we hope you will re-evaluate the paper in a more positive light. Thank you for your time! | Rebuttal 1:
Rebuttal: The response PDF contains Fig A2 as requested by Reviewer vVNQ (R2) and Fig A1 as requested by Reviewer zo9s (R4).
(Note: In case a link to the PDF is not visible, clicking on `Revisions` above should lead to a link.)
Pdf: /pdf/bf7de493c6d19241c3db46e2002fe20f8db756fa.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper investigates one of the surprising findings in the field of knowledge distillation, which is that often times, the student deviates from the teacher in the process of mimicking it, and that some times this results in the student performing even better than the teacher (e.g., self-distillation). The authors claim that the reason this happens is that the distillation process will systematically try to exaggerate the confidence of the teacher: if the original teacher's confidence was low, the distilled student will have an even lower confidence, and if the original confidence was high, the student's confidence will be even higher. The paper then explains that there is another exaggeration which happens in the distillation process - exaggeration of bias - and how this can be understood as a cause for the former exaggeration (of confidence). The authors conduct experiments on many kinds of datasets (e.g., CIFAR, ImageNet) where they verify their findings. The takeaway from this work are explanations of some of the unintuitive behaviors of the distillation process, as well as some tips for practical purposes (to achieve better performance through the distilled student).
Strengths: The paper is well written; the problem well motivated (why certain observations about knowledge distillation are surprising).
The authors have used the results of prior work properly to put forth a hypothesis which connects and expands on them (e.g., extending the result of Mobahi et al. to GD trained models). The mathematical proof connecting the two distinct properties - exaggeration in implicit bias in GD trained models and exaggeration in confidence is properly explained.
Results are shown which confirm their hypothesis on multiple different datasets.
Weaknesses: Even though the individual sections of the paper are well written (see strengths), the paper as a whole does appear to be an amalgamation of many different sections, and it is a bit difficult to figure out what the overarching theme is. A better narrative would have been if things were progressively going from one point to the other - for example, exaggeration of the implicit bias is the root cause --> which then causes an exaggeration in confidence level --> which then explains why students perform better than the teacher sometimes. Right now, it is not clear where the narrative is leading to for the most part, until the conclusion section.
One of the important conclusions of the paper (line 259-260) - "distillation can hurt the student when the teacher does not achieve sufficient top-1 accuracy on the training data." - seems to be contrary to many of the other results that people have observed. For example, the paper that the authors themselves cite, Cho and Hariharan [1], mentions that bigger (more accurate, as measured by top-1 accuracy) models are often not the most appropriate for performing knowledge distillation. Students achieve better performance using smaller (less accurate) teachers. How do the authors reconcile their results with [1]'s observations? Plus, I don't think this conclusion helps explain why self-distillation improves the performance of the new student, when the older student was less accurate (compared to the distilled student).
References
[1] On the Efficacy of Knowledge Distillation. Cho et al. ICCV 2019
**Post rebuttal update:**
I appreciate the rebuttal given by the authors and I'm increasing my rating by 1 point
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: It has been shown that the efficacy of knowledge distillation depends a lot on certain practical considerations [2]. It will be good if the authors can expound their experiment setup (augmentations used, number of iterations etc.).
References
[2] Knowledge distillation: A good teacher is patient and consistent. Beyer et al. 2022
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There is no section for limitations, and in fact I think it will be useful to have one. It is particularly useful in a field like knowledge distillation, where unintuitive student behaviors often emerge. So, if there are certain observations that the presented work cannot explain (e.g., see the weaknesses section), please try to have a discussion about them.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive score on the paper, and for appreciating how we connect the two disjoint lines of research!
--------
> "distillation can hurt the student when the teacher does not achieve sufficient top-1 accuracy on the training data." - seems to be contrary to many of the other results that people have observed.
This is a great question. **We explain why there is no contradiction here:**
1. First, a quick note: Our argument is also supported via orthogonal experiments in other papers [1, 2] who note that inaccurate teachers can hurt distillation. We have mentioned [1] already, but will also cite [2].
2. When one increases teacher size as Cho and Hariharan do, there are two confounding variables which have opposing effects on distillation:
- Effect A: Teacher’s top-1 accuracy: this must help the student
- Effect B: Capacity mismatch, specifically the teacher’s non-target logits become too rich/complex: this must hurt the student
**From Cho and Hariharan, we cannot conclude that Effect A (top-1 accuracy) hurts the student since even the opposing Effect B (capacity mismatch of non-target logits) is present**! In our work, we carefully isolate the effect of A by focusing on self-distillation. In self-distillation, intuitively, B becomes negligible since the student can represent the non-target logits of the teacher. In our controlled setting, we find that higher top-1 accuracy does help the student, thereby verifying the effect of A alone. Thus, we hope you appreciate the value in our controlled experiment which helps clarify an apparent contradiction in prior works.
--------
> it is a bit difficult to figure out what the overarching theme is…it is not clear where the narrative is leading to for the most part, until the conclusion section.
We appreciate your valuable feedback about making the overarching story clearer upfront. We apologize for the lack of clarity on this. To clarify, currently our narrative follows the actual course of inquiry in our research (we began by understanding what deviations exist, and then how they may have emerged.). For now, we have two simpler ideas for fixing the issue you bring up:
- Add a more explicit outline of our claim at the end of the introduction. In short: distillation exaggerates the implicit bias of GD $\implies$ This results in both (a) deviations from the teacher in the form of exaggerated confidence and (b) improved generalization (subject to other confounding factors). This reconciles how (a) can co-occur with (b).
- We will also add a graphical model visualizing the above mechanism.
Do you think this would help address your concern reasonably?
--------
> There is no section for limitations, and in fact I think it will be useful to have a section like this.
**We would like to note that the first appendix section lays out the limitations in detail,** as Reviewer zo9s and tbs9 have noticed. But based on your feedback, we will make sure to refer to this in the main paper clearly.
> It will be good if the authors can expound their experiment setup (augmentations used, number of iterations etc.).
We would like to note that **Appendix C.1 and Table 1 cover all experiment details.** We made sure to use hyperparameters as recommended in prior work, while also providing key ablations in Appendix C.5 (for batch size, learning rate, training time, distillation weight, evaluation metric).
--------
**We hope that our key clarification that there is no contradiction in our findings with prior work, helps you re-evaluate the paper with a more positive score. Thanks again for your feedback!**
[1]: Fotis Iliopoulos, Vasilis Kontonis, Cenk Baykal, Gaurav Menghani, Khoa Trinh, and Erik Vee. Weighted distillation with unlabeled examples.
[2]: Zaida Zhou, Chaoran Zhuge, Xinwei Guan, and Wen Liu. Channel distillation: Channel-wise attention for knowledge distillation. CoRR, abs/2006.01683, 2020.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: I thank the authors for their rebuttal. I particularly appreciated their clarification on the apparent contradiction between their theory and Cho and Hariharan's work. I would encourage the authors to somehow include this in their paper, even if it is in the appendix. This is because this is one of the major "surprises" that people have noticed working with knowledge distillation, and this clarification that the authors have provided will help shed some light on it. I also thank the authors for pointing out the relevant details present elsewhere in their paper.
Overall, I am more confident with the submission and I am increasing my rating from 5 to 6.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Dear reviewer,
Thank you for raising your score and acknowledging that our response addresses your concerns satisfactorily. We will certainly add an extended discussion regarding Cho and Hariharan to the paper. Thanks for raising a valuable question in your review!
Authors | null | null | null | null | null | null |
Order Matters in the Presence of Dataset Imbalance for Multilingual Learning | Accept (poster) | Summary: This paper presents a simple and effective multi-task learning strategy of a joint pretraining followed by fine-tuning, where pretraining is on the high-resource task and fine-tuning is on a mixture of high and low-resource tasks. This significantly improves performance on the low-resource tasks, while performing at par or even better sometimes on high-resource tasks. The authors show results on a machine translation task consisting of two high-resource and two low-resource language pairs and a language modeling task trained on five different languages.
Strengths: - The proposed idea is attractive in its simplicity, and offers good validation loss reductions on translation and language modeling tasks.
- The authors provide a fairly extensive empirical analysis of the proposed approach, and offer good baselines to compare against.
- The paper is written clearly and the overall narrative flows well.
Weaknesses: While the proposed approach has been empirically validated with multiple experiments, there remain unanswered questions about the initial pretraining on a high-resource language:
- How important is the pretraining objective?
- What is the influence of the chosen high-resource language on final performance on low-resource languages?
- Why does pretraining longer improve the loss for high-resource NMT tasks but degrade the loss for high-resource language modeling tasks (cf. Figures 9(a) and 9(b))?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - In Figure 9a, pretraining longer on English seems to benefit Gujarati but not Hindi. Any thoughts on why this might be?
- From line 260, "Longer pre-training helps more in terms of performance than exposure to the tokens of the low-resource task" -- This claim demands further probing. Is this dependent on the choice of high-resource language for pretraining? This claim holds for Gujarati but not Hindi. Are there features of the low-resource task (e.g., type-token ratio in the unlabeled text for the low-resource tasks) that makes transfer from the high-resource task more effective?
- In the many-task setting, as in Section 3.2, would it be beneficial to introduce an intermediate pretraining stage between pretraining and finetuning? Such strategies have been shown to be effective in prior work (e.g., Phang et al., "Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks", 2018). Also, such works on intermediate labeled-data tasks should be cited in the related work section.
- The authors should move up the BLEU results from the Appendix to the main draft. For NMT, it is important to show that validation loss reductions do translate to BLEU score improvements (especially for English->Hindi).
- A suggestion: Since pretraining is usually associated with self-supervised objectives, it would be useful to explicitly clarify that the pretraining and finetuning objectives are both cross-entropy losses for the NMT task.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations have been adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed feedback. We respond to the weaknesses and questions brought up below:
**W:** *Why does pretraining longer improve the loss for high-resource NMT tasks but degrade the loss for high-resource language modeling tasks (c.f., Figure 9(a) and 9(b)).*
**Response:** For all our experiments, when starting the fine-tuning phase of training we reset the cosine learning rate schedule with a warmup. Due to the warmup, the performance on the pre-trained high-resource task worsens a bit before improving again (see the En panel of Figure 8). Because of this, the high-resource task needs enough training steps to recover its old performance, and to continue improving.
In the language modeling experiment (Figure 9(a)), we kept the total training budget the same, which meant increasing pre-training length resulted in decreased fine-tuning length. Less fine-tuning time means less time to recover for En, hence the worse performance.
On the other hand, for the NMT experiment (Figure 9(b)), we kept the fine-tuning length the same regardless of the pre-training length, which is why the En-Fr performance could keep improving.
The implication of these results is that when there is a strict training budget, it's better to be conservative and pre-train for a shorter amount of time. However, if the goal is to obtain the best performance and there is no strict compute budget, it's better to pre-train for as long as possible before fine-tuning. Note that longer overall training is an option for our method (by pre-training for longer) but not for scalarization, because scalarization needs to always be training on the low-resource tasks, which will lead to overfitting when training for long.
We will update our final draft to be more clear about this.
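To make the schedule reset concrete, here is a small sketch (illustrative shapes and step counts, not the paper's exact hyperparameters) of a cosine learning-rate schedule with warmup that is restarted at the start of the fine-tuning phase:

```python
import math

def cosine_with_warmup(step, warmup_steps, total_steps, peak_lr):
    """Linear warmup to peak_lr, then cosine decay toward zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1 + math.cos(math.pi * progress))

def two_phase_lr(step, pretrain_steps, finetune_steps, warmup_steps, peak_lr):
    """Restart the schedule (including the warmup) at the fine-tuning boundary."""
    if step < pretrain_steps:
        return cosine_with_warmup(step, warmup_steps, pretrain_steps, peak_lr)
    return cosine_with_warmup(step - pretrain_steps, warmup_steps, finetune_steps, peak_lr)
```

The restart is why the pre-trained high-resource task dips in performance at the fine-tuning boundary before recovering: the learning rate drops back to zero and warms up again.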
**Q1:** *In Figure 9a, pretraining longer on English seems to benefit Gujarati but not Hindi. Any thoughts on why this might be?*
**Response:** In Figure 9a, as we pre-train for longer, we fine-tune for a shorter amount of time. In addition to this, Hindi is a high-resource task given our training budget. Therefore, less overall training on Hindi resulted in worse performance. Pre-training still benefits Hindi: Figure 8 shows that pre-training started Hi at a better position than random initialization, and resulted in more data-efficient training.
As an aside, we did not include Hindi in the pre-training phase because we wanted to keep our method simple. Including Hindi in the pre-training phase would require figuring out good sampling proportions not only for the fine-tuning phase, but for the pre-training phase. We can avoid under-training high-resource tasks by fine-tuning for longer.
**Q2:** *"Longer pre-training helps more in terms of performance than exposure to the tokens of the low-resource task" -- This claim demands further probing. Is this dependent on the choice of high-resource language for pretraining? This claim holds for Gujarati but not Hindi. Are there features of the low-resource task (e.g., type-token ratio in the unlabeled text for the low-resource tasks) that makes transfer from the high-resource task more effective?*
**Response:** We hope our response to Q1 helped answer a part of this question. In general, the relationship between the pre-training and fine-tuning tasks should have an impact on performance, although we are unsure what exactly about the relationship causes better transfer. We believe this is an important question that deserves separate investigation. In our work, we stuck to the simplest method and chose pre-training tasks (En-Fr for MT and En for language modeling) that are (and will always be) available in abundance, and are therefore perfect to pre-train on.
**Q3:** *In the many-task setting, as in Section 3.2, would it be beneficial to introduce an intermediate pretraining stage between pretraining and finetuning? Such strategies have been shown to be effective in prior work (e.g., Phang et al., " Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks", 2018). Also, such works on intermediate labeled-data tasks should be cited in the related work section.*
**Response:** Thank you for the missed citation– we will include intermediate pre-training works in the related work section of our final draft.
We have not tested with intermediate training in our context. In Phang et al., Intermediate training on a different but related task seemed to help in the monolingual (English) setting, which means it could possibly improve our method further. In our paper, we focus on cross-lingual transfer, but we believe that pre-training to utilize cross-task transfer is an interesting question to be studied in the future.
**Q4:** *The authors should move up the BLEU results from the Appendix to the main draft.*
**Response:** Thank you for the suggestion, we will move the BLEU results to the main section in our final draft.
**Q5:** *A suggestion: Since pretraining is usually associated with self-supervised objectives, it would be useful to explicitly clarify that the pretraining and finetuning objectives are both cross-entropy losses for the NMT task.*
**Response:** Thank you for the suggestion, we will clarify this in the final draft.
We hope that we clarified some of the reviewer’s concerns– we will update our final draft to make our points more clear. Please let us know if there are any further questions.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their detailed responses.
> The implication of these results is that when there is a strict training budget, it’s better to be conservative and pre-train for a shorter amount of time. However, if the goal is to obtain the best performance and there is no strict compute budget, it’s better to pre-train for as long as possible before fine-tuning.
I think this is an important point that the authors should explicitly highlight in the main draft.
I am raising my score to an Accept. | Summary: The paper proposes a method called pretraining and joint-finetuning, which pretrains a model on a high-resource task than finetunes the model with joint high- and low-resource tasks, benefiting from both static sampling method and naïve transfer learning. The method is verified on multilingual translation and multilingual language modelling. Experiments show improvement on the two scenarios in term of validation loss.
Strengths: 1. The proposed method is simple and easy to implement/reproduce.
2. The method can achieve lower validation loss compared to static sampling and naïve transfer learning.
3. The method is verified in a multilingual (> 2 languages) setting and in an extremely resource-imbalanced setting (English is 16745 times larger than the smallest language).
Weaknesses: 1. The algorithm is sensitive to the sampling proportion, resulting in a need to search for the proportion for every model trained. In addition to searching over the proportion of dataset size, there is a need for a grid search of N1 and N2 (N1 + N2 = N). The hyperparameters (proportion = 0.4 for two tasks, tau = 3.33 for > 2 tasks, and joint-finetuning steps = 50k) are not verified across different tasks.
2. Experiments are done only on multilingual setting, whose scope is a bit different from the title “multitask” learning. Is the proposed method still effective on multitask like NER + POS + sentiment analysis?
3. The improvement in performance is not significant when the evaluation metric is BLEU (from Fig. 26).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Definition of Pareto optimal points. According to the paper, “θ is Pareto optimal if it is not dominated by any other point”. However, for a parameter family for a certain architecture, it is impossible to test all parameters in the whole space. In this case, is it rigorous to say that a parameter θ is not dominated by any other parameters?
2. Inadequate references. There are plenty of papers on the catastrophic forgetting phenomenon in multilingual/multitask learning. I think the claims in this paper might have already been made by others. For example, the “order matters” claim has been found in “Overcoming Catastrophic Forgetting beyond Continual Learning: Balanced Training for Neural Machine Translation”. The authors should check previous literature more carefully.
3. Scalarization implementation. In Line 72 of the paper, static sampling is implemented by adding weights to the losses from different tasks (languages). Will the models trained with different proportions be fed different data? For example, suppose the En->Ro data is fixed; will a model with 0.1 En-Ro proportion be fed more En data than a model with 0.9 En-Ro proportion? If the answer is no (i.e., the data is the same, the only difference is the weights on losses), does the method have the same effect as sampling data in each batch?
4. Why the Figures do not always show all results of the ten proportions (e.g., Fig. 2 Left)?
Minor suggestions (mainly for presentation):
- I think having a separate section about the proposed method will be better.
- For figures of the loss that need a comparison (e.g., Fig. 3 and 8), the illustration will be clearer if the scale of the two subfigures is consistent.
After Author Response:
I strongly encourage the authors to address Questions 1 and 2, specifically the clarification of Pareto dominance and a more comprehensive discussion with previous literature. These enhancements would undoubtedly contribute to the rigor and solidity of the paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors consider an interesting possibility of multiple high-resource languages setting as a future work, which can be seen as a partition of tasks. However, they do not provide a set of hyperparameters effective across tasks, e.g., at least how many steps of joint-finetuning should be taken? Which sampling proportion should we use when training with new data/tasks?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback. Here is our response:
**W1a:** *The algorithm is sensitive to the sampling proportion, resulting in a need to search for the proportion for every model trained.*
**Response:**
Our method is not any more sensitive to the sampling proportion than scalarization, which is our baseline. Furthermore, the best sampling proportions will always depend on the task at hand. Therefore, even for scalarization, ideally one would search for a good sampling proportion.
In our experiments, we follow two different schemes for testing sampling rates depending on what we aim to achieve. In our NMT experiments, the purpose of testing sampling rates in a grid was to investigate whether pre-training can improve the Pareto front of scalarization. We emphasize that we tested sampling rates in a grid for the scalarization baseline as well, so that the comparison is fair. In our language modeling experiments, our goal was to model reality as much as possible (by using temperature sampling), while still comparing against the best baseline. This is why we **tuned the temperature parameter for the scalarization baseline**, and used that same temperature for our method.
**W1b:** *In addition to searching the proportion of dataset size, there is a need for grid search of N1 and N2 (N1 + N2 = N).*
**Response:** Our method does indeed introduce an extra parameter to decide on compared to scalarization: scalarization only needs to decide on N, but we need to decide on N1 and N2. Regarding this, we would like to emphasize 3 things:
- **We did not tune or perform a grid search in our experiments.** We chose values that were either the most beneficial to our baseline (scalarization), or seemed reasonable, and we continued using them without further tuning them.
- **Parameters N1 and N2 are precisely how our method can do early stopping.** Let’s imagine using scalarization, and one of the tasks starts overfitting. It’s not ideal to do early stopping since the high-resource tasks can benefit from training longer. If we instead pre-train on the high-resource task, we can get a head-start on the high-resource task such that once the low-resource tasks start overfitting, we can stop training.
- **Based on our experiments, we can recommend how one would decide on N1 and N2.** Our results show that pre-training doesn’t hurt performance, and the longer we pre-train for, the better the performance we get. This means that if a practitioner is strictly bound by a compute budget, they should probably pre-train for only a fraction of their total budget to make sure that all tasks are able to converge. On the other hand, a practitioner who mostly cares about the best achievable performance, and does not have a strict computational budget, should pre-train for as long as possible before joint fine-tuning.
**W1c:** *The hyperparameters (proportion = 0.4 for two tasks, tau = 3.33 for > 2 tasks, and joint-finetuning steps = 50k) are not verified among different tasks.*
**Response:** The performance of any given sampling rate depends on the task. Therefore, just like how it is unreasonable to believe that a single temperature parameter should be recommended for scalarization, we do not recommend a single proportion/temperature to be used for different tasks. We emphasize that we did not tune the sampling rate any more than the baseline scalarization.
The number of steps to fine-tune also depends on the task. Some tasks will take only 50k steps to converge (En-Ro, or En-Hi), while some tasks will take longer (as in the language modeling experiments).
**W2:** *Experiments are done only on multilingual setting, whose scope is a bit different from the title “multitask” learning. Is the proposed method still effective on multitask like NER + POS + sentiment analysis?*
**Response:** Please refer to our global response.
**W3:** *The improvement in performance is not significant when the evaluation metric is BLEU (from Fig. 26).*
**Response:** Our improvement on En-{Ro, Fr} is indeed weak in terms of the BLEU score. The improvements are much better for En-{Hi, Fr}, so we expect the amount of improvement to vary depending on the task.
**Q1:** *Is it rigorous to say that a parameter $\theta$ is not dominated by any other parameters?*
**Response:** You are right– we cannot rigorously guarantee that a given point is not dominated. We will update our writing to be more clear about this. However, we do believe that we did a reasonably good job of approximating the Pareto front by testing a grid of sampling rates and learning rates.
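To make the approximation concrete, here is a minimal sketch (our illustration, not the authors' code; all names are ours) of how non-dominated points can be identified among a finite grid of evaluated `(loss_task1, loss_task2)` pairs, where lower is better on both objectives:

```python
def pareto_front(points):
    """Return the non-dominated points from a finite set of loss tuples.

    A point q dominates p if q is no worse on every objective and
    strictly better on at least one (lower loss is better). Only points
    actually evaluated (e.g. on a grid of sampling/learning rates) are
    considered, matching the approximation discussed in the rebuttal.
    """
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p)))
            and any(q[i] < p[i] for i in range(len(p)))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Example: (2, 2) is dominated by (1, 2), so the front keeps the other two.
front = pareto_front([(1, 2), (2, 1), (2, 2)])
```

Points returned this way are only guaranteed to be non-dominated within the evaluated grid, which is exactly the caveat the response acknowledges.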
**Q2:** *Inadequate references.*
**Response:** Thank you for pointing this out, we will update our paper with more references on catastrophic forgetting.
**Q3:** *Scalarization implementation. In Line 72 of the paper, static sampling is implemented by adding weights to the losses from different tasks (languages). Will the models trained with different proportions be fed different data?*
**Response:** We apologize for the confusion. We follow the convention in the NMT literature and implement scalarization via proportional sampling, where the average number of data corresponding to task i in a batch is proportional to w_i. So we do not change the loss function, only the way we sample the data. We will update our draft to include this point, thank you for pointing it out.
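As a hypothetical illustration of this convention (function names, task names, and data are ours, not from the paper), each example in a batch can be drawn from task i with probability w_i, so the expected share of each task in the batch loss matches the scalarization weights:

```python
import random

def sample_batch(task_data, weights, batch_size, rng=random.Random(0)):
    """Draw a batch where each slot comes from task i with probability w_i.

    Illustrative sketch of scalarization via proportional sampling: the
    loss function is unchanged; only the data-sampling distribution
    reflects the weights.
    """
    tasks = list(task_data)
    batch = []
    for _ in range(batch_size):
        task = rng.choices(tasks, weights=[weights[t] for t in tasks])[0]
        batch.append((task, rng.choice(task_data[task])))
    return batch

# A 0.9/0.1 split means roughly 9 of 10 examples come from En-Fr on average.
batch = sample_batch({"en-fr": ["x1", "x2"], "en-ro": ["y1"]},
                     {"en-fr": 0.9, "en-ro": 0.1}, batch_size=8)
```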
**Q4:** *Why the Figures do not always show all results of the ten proportions (e.g., Fig. 2 Left)?*
**Response:** This is because we are only plotting the points that were not dominated (considering only the space of parameters obtained through our grid searches). In the presence of low-resource tasks, the validation loss plot does not mirror the training loss plot as in the high-high case (Figure 1), and certain sampling rates are more optimal than others.
Regarding the minor suggestions, thank you for the feedback– we will update our draft to reflect them.
---
Rebuttal Comment 1.1:
Comment: Reviewer V1qx, could you please review the latest response of the authors and indicate whether it addresses your feedback and whether you choose to maintain or change your rating?
---
Rebuttal Comment 1.2:
Comment: Thank you for the detailed explanation provided within the response, particularly in regard to parameter selection. Based on the improvements and clarifications, I am prepared to endorse the acceptance of this paper, increasing my score from 4 to 5. However, I strongly encourage the authors to address Questions 1 and 2, specifically the clarification of Pareto dominance and a more comprehensive discussion with previous literature. These enhancements would undoubtedly contribute to the rigor and solidity of the paper. | Summary: The paper presents a multi-task training approach that involves pre-training on high resource tasks, followed by joint fine-tuning on the full set of tasks. The intuition is that high-resource tasks need a larger number of steps to reach convergence, whereas low-resource tasks will risk over-fitting if trained for that same number of steps.
The authors show that the proposed approach improves the performance for neural machine translation of low-resource languages (en-ro and en-hi, pretrained on en-fr) and similarly improves performance for multilingual language modelling for the case of low-resource languages.
Strengths: * The authors show that _pretraining joint-finetuning_ is a simple approach to boost low-resource task performance in multi-task training. The experiments, which focus on language modelling and machine translation, are clearly designed and presented, with reasonable baselines.
* In this proposed training approach, the pre-training phase ends up taking up most of the training time. Since pre-training is conducted on the subset of high-resource tasks, this makes it an easier optimisation problem compared to the standard approach where all tasks are trained. The potentially trickier joint finetuning step requires overall fewer steps. Hyperparameter sweeps could plausibly be limited just to this final step, making them cheaper and faster compared to the baseline joint training approach.
* Between the paper and the appendices, enough details are given that reproducing these runs should present no issues.
Weaknesses: * I would have loved to see some more realistic MT experimental settings. Pretraining on en-fr for either en-zh or en-hi (or vice versa) feels very unrealistic. There are much closer language pairs that could have been used, which would have made the results more convincing.
* I am concerned about the results of the experiments of section 3.1.1. What's the effect on BLEU scores here? It would have been good to include these in Appendix A5, as was done for the other two language pairs.
Minor:
* Line 61: _if_ is attached to $\forall$
* Line 171: shows -> show
* Nitpick: In the abstract, the wording "experimental settings which range from neural machine translation (NMT) to multi-lingual language modeling" seems a bit out of place. These are exactly the two settings on which the proposed approach is being evaluated, there aren't any other experimental settings that fall within these two.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: * Line 128 refers the reader to Figures 21 and 22 in the appendix to show that the proposed approach does not hurt performance in the high-high case. It's not clear to me how the said figures show this. The clearest evidence for me would have been a version of Figures 26 and 27 for the English+Chinese case. Would you be able to provide such a plot?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors do a good job of listing and addressing limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed feedback and positive response. Our responses to the weaknesses and questions brought up are below:
**W1:** *I would have loved to see some more realistic MT experimental settings. Pretraining on en-fr for either en-zh or en-hi (or vice versa) feels very unrealistic. There are much closer language pairs that could have been used, which would have made the results more convincing.*
**Response:** In our MT experiments, we tried to choose task pairs with varying degrees of transferability from En-Fr (Ro is semantically closer to Fr than Hi and Zh are). If we only chose pairs that transferred well, it would be unclear whether our method showed improvement only because of the transferability, and whether it would work in settings where the tasks are not similar. We do agree that having more experiments in the realistic setting would make our arguments stronger.
**W2:** *I am concerned about the results of the experiments of section 3.1.1. What's the effect on BLEU scores here?*
**Response:** The key point we emphasize in section 3.1.1 is that for a high-resource task, the best performance achievable is bottlenecked by the amount of data seen for the task. The best En-Zh performance for pre-training (300k) joint fine-tuning (300k) is worse than the best En-Zh performance for joint training (600k) since the latter sees more En-Zh data than is possible for the former. At the same time, the best En-Zh performance for pre-training (300k) joint fine-tuning (300k) is equal to the best En-Zh performance seen by joint training (300k) which saw the same amount of En-Zh data.
The implication of these results is that when there is abundant data available for a task, and overfitting is not an issue, it is important to train on it as much as possible, and that pre-training on a different task cannot act as a proxy for more data on that task.
Lastly, we generated the BLEU score plot for En-{Zh,Fr} (see pdf attached in our global response). From the figure, we can see that the BLEU score version shows similar conclusions as the cross-entropy version, but with a more positive result– best performance for En-Zh is on-par with joint training for 600k steps.
**Q:** *Line 128 refers the reader to Figures 21 and 22 in the appendix to show that the proposed approach does not hurt performance in the high-high case. It's not clear to me how the said figures show this.*
**Response:** We apologize for the confusion. When we said the proposed approach does not hurt performance, we meant that our method does not worsen the best possible En-Zh performance achievable **given the same amount of En-Zh data** available for the model to see.
Elaborating on our response to W2, let’s imagine the Pareto front of pre-training on En-Fr for 300k steps and then joint training on En-{Zh, Fr} for 300k steps, and compare it to just joint training for 300k steps. The Pareto front should be improved for the En-Fr task performance since the model saw a lot more En-Fr data. For the En-Zh task performance, we can expect one of 3 possibilities:
1. The front is improved due to positive transfer despite seeing the same amount of En-Zh data
2. The front is unchanged
3. The front is worsened due to pre-training on En-Fr possibly resulting in worse initialization and/or slow training for En-Zh.
Figure 1 shows that case 2 is indeed what happens– the best En-Zh performance for pre-training (300k) joint fine-tuning (300k) is equal to the best En-Zh performance seen by joint training (300k). Pre-training on En-Fr cannot act as a proxy for more En-Zh data, because if it could, then the front would be improved. At the same time, pre-training also does not negatively impact En-Zh training– Figures 21 and 22 show that pre-training does not affect the learning efficiency for En-Zh (the slopes of the curves are similar to one another), and also did not result in a worse initialization for En->Zh.
We hope that our response and our newly generated BLEU score plot (see pdf in global author response) addressed some of the reviewer’s concerns. We will update our writing for section 3.1.1 to make this point more clear.
**Regarding the minor fixes**, thank you for the suggestion and for catching the typos– we will make the changes to the final version.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. Having read this as well as other reviews and rebuttals, I confirm my assessment of the paper. | Summary: This paper presents an analysis of multi-task training with the existence of low-resource tasks (in the context of the paper, they focused on multi languages). The paper argues that it is better to first "pre-train" on a high resource task and then "fine-tune" on a joint of high and low resource tasks. Experiments were conducted in neural machine translation and multilingual language modeling to support the hypothesis.
Strengths: The proposed hypothesis intuitively makes sense and the experiments have been conducted in a careful setting. The proposed solution is simple, but has its effectiveness verified in NMT and language modeling. The paper has a smooth flow and is easy to understand.
Weaknesses: 1. While the proposed solution is effective, the benefits of having a multilingual model (or in general a multi-task model) is that a single model can be used in various situations, without the need of (continue) training on different data splits and producing several models. I feel at the current stage the application of this finding might be limited.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: The paper focuses on a specific multi-task setup of multilinguality; however, the finding/hypothesis doesn't seem to have strong ties with multilinguality, and is therefore not necessarily confined to this setting. I am curious if the authors conducted any experiments on the general multi-task setup.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper presented some limitations on scaling the tasks and settings to be analyzed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback. Our responses to weaknesses and questions are below:
**W:** *[…]the benefits of having a multilingual model (or in general a multi-task model) is that a single model can be used in various situations, without the need of (continue) training on different data splits and producing several models.*
**Response:** We completely agree with the reviewer about the benefits of multitask models, and how one model can be used in various situations. We believe that our proposed method does not deviate from this. Our method can be thought of as a specific scheduling strategy for sampling rates to balance the tasks. While the current convention is to train on all the tasks at once with a fixed ratio, we propose to use a 100% sampling rate for high-resource tasks in the beginning of training, before training on all the tasks with a fixed ratio. Our method yields a single model that can be used on all tasks trained, and does not need to be trained further on different data splits.
**Q:** *I am curious if the authors conducted any experiments on the general multi-task setup.*
**Response:** In our work, we frame multilingual learning as a multi-task optimization problem (as in much prior work [1, 2, 3, 4, 5]), and focus on utilizing cross-lingual transfer. We have yet to conduct experiments on the “general multi-task setup” (multiple NLP tasks that may or may not be multilingual), where the focus would be on utilizing cross-task transfer. We will update our future work section to reflect this.
[1] Luong et al, 2015, Multi-task Sequence to Sequence Learning
[2] Firat et al, 2016, Multi-Way, Multilingual Neural Machine Translation with a Shared Attention Mechanism
[3] Arivazhagan et al, 2019, Massively Multilingual Neural Machine Translation in the Wild: Findings and Challenges
[4] Jean et al, 2019, Adaptive Scheduling for Multi-Task Learning
[5] Wang et al, 2020, Gradient Vaccine: Investigating and Improving Multi-task Optimization in Massively Multilingual Models
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications! I now believe that the finding of this paper is worth exploring on a wider spectrum of setups, including multi-task learning, or multilingual pre-training besides NMT. I am increasing my score from 5 to 6.
Rebuttal: **Q:** *Reviewers wUUC and V1qx both pointed out that our method focused on the multilingual setup, and were curious whether we ran experiments on the general multi-task setup.*
**Response**: In our work, we frame multilingual learning (NMT, language modeling) as a multi-task optimization problem, as in much prior work [1, 2, 3, 4, 5]. Using translation as an example, learning to translate to/from multiple languages, and learning multiple “tasks” (e.g. question answering, and summarization) can both be seen as learning multiple functions at once.
Since our work focuses on utilizing cross-lingual transfer, we have yet to conduct experiments on the “general multi-task setup” (multiple NLP tasks that may or may not be multilingual), where the focus would be on utilizing cross-task transfer. We will update our future work section to reflect this.
[1] Luong et al, 2015, Multi-task Sequence to Sequence Learning
[2] Firat et al, 2016, Multi-Way, Multilingual Neural Machine Translation with a Shared Attention Mechanism
[3] Arivazhagan et al, 2019, Massively Multilingual Neural Machine Translation in the Wild: Findings and Challenges
[4] Jean et al, 2019, Adaptive Scheduling for Multi-Task Learning
[5] Wang et al, 2020, Gradient Vaccine: Investigating and Improving Multi-task Optimization in Massively Multilingual Models
**Lastly**, we attach BLEU score plots for En-{Zh,Fr} as requested by Reviewer 9ZBe.
Pdf: /pdf/2809c41f7644c652202aec625834dcae8291f963.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper presents an empirical finding: in the context of data imbalance in multi-task learning, pre-training on high-resource tasks then fine-tuning on a mixture of high/low-resource tasks can achieves superior results, compared to standard weighted sampling training. Authors applied the proposed training method *pre-training joint fine-tuning* to neural machine translation (NMT) and multi-lingual language modeling. Both experimental results shows that the proposed method allows the high-resource task to converge faster while preventing overfitting in low-resource tasks.
Strengths: - Achieving a good balance between low-resource and high-resource languages during multitask training is a critical research area, particularly in the context of large-scale language modeling. The proposed approach of pre-training followed by joint fine-tuning is a timely solution that addresses this challenge.
- The results obtained from the empirical study demonstrate the superiority of the pre-training joint fine-tuning approach over the conventional method of weighted sampling during training. The model trained using this approach consistently outperforms existing methods, showcasing its effectiveness in addressing the challenges posed by data imbalance in multi-task learning.
- An additional ablation study showed that pre-training joint fine-tuning has a regularization effect (avoiding overfitting to the low-resource language). This improvement cannot be achieved by simply increasing the dropout ratio.
Weaknesses: - In the main paper, authors only reported perplexity metrics in the evaluation results. For multi-lingual language modeling task, I would like to see the evaluation on downstream tasks as in the mT5 paper. It would be helpful to demonstrate the impact of the proposed method if authors can show the improvement in downstream tasks on XQuAD or XTREME benchmark.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Why temperature based sampling is only applied in the language modeling task not the NMT task? What is the advantage of temperature based sampling compared to scalarization in the large scale dataset?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: I am curious to see what if the authors applied this method in a more realistic setting: training LM on the whole mC4 dataset, and then evaluate the performance (low-resource language vs high-resource language) on different downstream tasks as in mT5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback. Here is our response to the weaknesses and questions pointed out by the reviewer.
**W1a:** *In the main paper, authors only reported perplexity metrics in the evaluation results.*
**Response:** For the NMT experiments, we report BLEU score evaluations in Appendix A.5.
**W1b:** *For multi-lingual language modeling task, I would like to see the evaluation on downstream tasks as in the mT5 paper.*
**Response:** We agree with the reviewer on the importance of evaluating our proposed method on downstream tasks. We did not evaluate on downstream tasks for our paper because almost all languages included in the downstream tasks for evaluating cross-lingual generalization ability (e.g. XTREME) are high-resource in mC4 given our training budget (with the exception of Swahili and Yoruba). Furthermore, we believe that it will be more meaningful to evaluate on such downstream tasks after pre-training on many more languages for a longer time. We leave the scaling up of our method for future work to study, and we will expand on our limitations section to reflect on the reviewer’s comment.
**Q1:** *Why temperature based sampling is only applied in the language modeling task not the NMT task?*
**Response:** We do not need to do temperature sampling for the NMT tasks because we already search through the possible sampling rates in a grid. In other words, the sampling rates we have tried for NMT already include temperature sampling.
In the NMT experiments, we only have two tasks (language-pairs), which allows us to test sampling ratios in a grid (e.g. For En->{A, B}, we can try sampling rate x for En->A and 1-x for En->B, where x is in {0.1, 0.2, …, 0.9}). Doing so for more than 2 tasks will require testing exponentially many sampling ratios, which is why for the 5 task case in the language modeling experiment, we resort to temperature sampling.
**Q2:** *What is the advantage of temperature based sampling compared to scalarization in the large scale dataset?*
**Response:** Temperature sampling is a heuristic to obtain sampling rates to be used for scalarization (since for higher number of tasks, testing a grid of sampling rates is not feasible). Many prior work used temperature sampling with various temperatures, and its advantage is that it is simple, intuitive, and can be controlled with a single parameter. | null | null | null | null | null | null |
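For readers unfamiliar with the heuristic, here is a minimal sketch of temperature-based sampling (our illustration, not the authors' code): the rate for task i is taken proportional to n_i^(1/tau), where n_i is the dataset size; tau = 1 recovers proportional sampling, and large tau approaches uniform sampling over tasks.

```python
def temperature_sampling_rates(sizes, tau):
    """Sampling rate for each task: p_i proportional to n_i ** (1/tau).

    tau = 1 recovers proportional sampling; as tau grows, the rates
    approach a uniform distribution over tasks. Illustrative sketch
    of the standard heuristic, not the authors' implementation.
    """
    weights = {task: n ** (1.0 / tau) for task, n in sizes.items()}
    total = sum(weights.values())
    return {task: w / total for task, w in weights.items()}

# With extreme imbalance (cf. the 16745x ratio mentioned by a reviewer),
# a moderate tau upweights the small task without discarding the large one.
rates = temperature_sampling_rates({"en": 16745, "xx": 1}, tau=3.33)
```

Its appeal in the large-scale setting is exactly what the response notes: a single scalar parameter controls the entire mixture, avoiding a grid search that grows exponentially in the number of tasks.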
Tempo Adaptation in Non-stationary Reinforcement Learning | Accept (poster) | Summary: This paper studies reinforcement learning in non-stationary environments. The authors propose a proactive tempo-control model-based (PTM) framework to adjust how often the learning agent to adjust its policy to the environment tempo. Two different variants of PTM are considered: The PTM-T variant uses natural policy gradient as the base algorithm and achieves sublinear dynamic regret. The regret bound is further improved when the forecaster assigns different weights to the observed data. The second variant PTM-G is more empirical and is shown to outperform four existing baselines on three different Mujoco tasks.
Strengths: This paper studies an interesting and important problem in reinforcement learning: The quick adaptation of the learning algorithm to a non-stationary environment. The proposed solution of proactive tempo-control is interesting and novel as far as I know. The theoretical derivations look solid and convincing. This paper also includes a decent amount of numerical results with ablation studies in addition to the theoretical contributions, which also looks encouraging.
Weaknesses: 1. While I am sure that the authors have a lot of interesting results to share, the writing of the paper fails to convey some of the important ideas, which prevents me from fully appreciating the contributions of the work. The presentations of the methodology and results in the paper are somewhat unclear and sometimes can be confusing. There are a few things that are unclear to me and can certainly be improved:
a. The main theoretical results are not clearly stated. The authors build up the theoretical results through a series of theorems and propositions, but it is hard to keep track of what the strongest results are, given that there are numerous results presented. The reader needs to combine a few theorems to see the exact form of the dynamic regret bound, which is not reader-friendly. The authors could consider providing the strongest, take-home results of the work in one theorem.
b. The main text of the paper is not very self-contained, which adds to the confusion. The definitions of some notations and even algorithm descriptions (e.g., Algorithms 1 and 2) are deferred to appendices, which really should have been included in the main text. My suggestion is that some of the intermediate results in Section 4 can be moved to the appendices to allow more space for many other sections.
2. The main setup of the problem formulation is a bit unclear to me. The authors mentioned the main “trade-off” between interacting with the environment more often to gather data vs. performing more policy optimization using collected samples. Why is it even a trade-off and why can’t we do both of them at the same time? The only reason that I can think of is computation complexity, which is less critical and was not even discussed in the paper.
3. It is unclear to me how the problem formulation is related to the standard setup in non-stationary RL, e.g., [Cheung et al., 2020]. The definition of the time elapsing variation budget is a bit ad hoc and should be better explained. It is also quite confusing that the formulation considers both a finite-horizon $H$ and a reward discounting factor $\gamma$, while usually people only choose one of the two. The definitions of the values functions are also missing (which are included in the appendix but should definitely have been included in the main text).
4. The assumption that $B_p(c\,\Delta_t) = c^{\alpha_p} B_p(\Delta_t)$ seems a bit ad hoc to me and probably needs more justification. Similarly, it is also unclear what Assumption 1 means and the authors could provide some examples on how it is satisfied.
5. In the simulations, it seems that some of the selected comparison baselines are originally designed for the stationary environment. It would be unfair to compare with these algorithms in non-stationary environments. I wonder how the proposed method compares to some existing solutions specifically designed for non-stationary RL (e.g., some simple sliding-window or restarting-based algorithms).
6. A related work section is missing (and again is included in the appendix but should have been included in the main text to be self-contained). There are also a few important closely related works that are not discussed in the references. Just to name a few that I am aware of:
a. Wei, Chen-Yu, and Haipeng Luo. "Non-stationary reinforcement learning without prior knowledge: An optimal black-box approach." Conference on learning theory, 2021.
b. Mao, Weichao, et al. "Near-optimal model-free reinforcement learning in non-stationary episodic mdps." International Conference on Machine Learning, 2021.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please consider first answering my questions in the “Weaknesses” section. The additional questions listed below are comparably not major.
1. How does the environment tempo formulation deal with the difference between the frequency of environment non-stationarity (i.e., frequent, slow, gradual variations) and the magnitude of non-stationarity (i.e., abrupt variations).
2. In Eq (2.1), the authors measure the transition variations in terms of the infinity-norm, which is different from the more popular metric of 1-norm in existing non-stationary RL works (e.g., Cheung et al., 2020). I wonder why such a difference occurs.
3. Since the authors mentioned the word “meta-algorithm”, I wonder how the proposed method is related to meta-learning?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The limitations are briefly discussed in the conclusions section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: $\textbf{1) W 1,2: clarity of main contribution and meaning of tradeoff}$
Thanks for pointing out the lack of clarity of the main theoretical results, the corresponding contribution, and the meaning of the “trade-off”. We would like to emphasize that our work's main contribution is interpreting the non-stationary MDP as “$\textit{wall-clock time}$” goes by, not as “$\textit{episode}$” goes by; shifting the perspective from episode to time is a more realistic setting that matches non-stationary RL's motivation: RL for real-world applications. Raising this issue recalls our new problem setting stated in the introduction (line 2 $\sim$ 5). This new setting gives rise to an additional problem: how to choose the optimal $K$-length time sequence at which the agent interacts with the environment. In this respect, our main theoretical result is that we determine the optimal $K$-length time sequence via the optimal training time, and we propose the PTM framework to find that optimal training time.
Now, we would like to address the “trade-off”. Yes, the trade-off exists between the agent's training time and how fast the environment changes, but it is over a fixed “time duration”, not a fixed number of episodes ($K$). For better understanding, recall the example from the introduction in more detail. For a fixed time duration 0[s]$\sim$15[s], robot A executes episodes every 1 second, at times [0,1,...,15], and trains its policy between times $t$ and $t+1$. Robot B executes episodes every 2 seconds, at times [0,2,...,14], and trains its policy between times $t$ and $t+2$. Assume one policy update takes 1 second. Then robot A can perform one update between interactions, while robot B can perform two. Robot A interacts with the environment for 15 episodes and performs one update at a time, so it has more information about the environment but a more uncertain approximate optimal policy. Robot B interacts with the environment for 8 episodes and performs two updates at a time, so it has less information about the environment but a better approximate optimal policy than robot A.
The above example clearly shows the existence of the trade-off (lines 53~54) and supports our main contribution: choosing the optimal training time is a significant problem that should be addressed.
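The robot A/B arithmetic above can be sketched roughly as follows (our own hypothetical helper, not the paper's code; the counting convention for the endpoint episode may differ slightly from the text):

```python
def tempo(duration_s, interval_s, update_cost_s=1):
    """Interaction instances in [0, duration_s] at the given interval,
    and how many policy updates fit between consecutive interactions."""
    episodes = len(range(0, duration_s + 1, interval_s))
    updates_per_gap = interval_s // update_cost_s
    return episodes, updates_per_gap

# Robot B: episodes every 2 s over 0..15 s -> 8 episodes, 2 updates per gap.
# Robot A: episodes every 1 s -> twice the episodes, but only 1 update per gap.
```

The trade-off is visible directly: shrinking `interval_s` buys more interactions (information) at the cost of fewer policy updates between them.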
$\textbf{2) W3: necessity of defining the time-elapsing variation budget}$
Thanks for pointing out the need to justify the time-elapsing variation budget relative to the standard variation budget [Cheung et al. 2020]. We need a new definition because the environment's time and the agent's episodes are no longer synchronized. The standard setup [Cheung et al. 2020] defines the variation budget as the difference between two consecutive MDPs, summed over the total $K$ episodes. In our setting, however, how the agent chooses the interaction times $[t_1,t_2,...,t_K]$ leads to different variation budgets even for the same total of $K$ episodes ($K$ interaction time instances), since the environment changes as time goes by. Therefore, two agents that execute the same $K$ episodes but different interaction time sequences experience different variation budgets. This is why we define the variation budget with respect to the environment's time, which we name the “time-elapsing variation budget”.
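As a toy illustration (not the paper's exact definition), consider a scalar environment quantity drifting linearly in wall-clock time: two agents that both run K = 8 episodes but choose different interaction times accumulate different budgets.

```python
def variation_budget(times, f):
    """Sum of |f(t_{k+1}) - f(t_k)| over the chosen interaction times."""
    return sum(abs(f(t2) - f(t1)) for t1, t2 in zip(times, times[1:]))

drift = lambda t: 0.1 * t        # linearly drifting environment quantity
fast = list(range(0, 8))         # t = 0, 1, ..., 7   (K = 8)
slow = list(range(0, 16, 2))     # t = 0, 2, ..., 14  (K = 8)
# Same episode count K, but the slower schedule spans more wall-clock
# time and therefore experiences twice the variation budget.
```

This is the point of the definition: an episode-indexed budget would assign both schedules the same value, while the time-elapsing budget distinguishes them.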
$\textbf{3) W4: Necessity of the property}$ $B_p(c\,\Delta_t) = c^{\alpha_p} B_p(\Delta_t)$
The main reason we added this assumption is to justify that our new definition, the time-elapsing variation budget, covers and can represent standard MDPs, so that our practical setting remains compatible with previous non-stationary RL methods (line 118 $\sim$ line 121). $\alpha_p = \alpha_r = 0$ represents a stationary environment (typo in the main paper, lines 120 $\sim$ 121) and $\alpha_p = \alpha_r = 1$ a linearly drifting environment. The validity of this assumption is well supported by the result of our main theorem, Theorem 3: in case 1, $\max(\alpha_r, \alpha_p) = 0$, the optimal $G^*$ is infinity. This makes sense, since in a stationary environment a large number of NPG gradient steps (iteration numbers of policy updates) guarantees a policy closer to the optimal policy.
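A quick numeric sanity check of the scaling property under an assumed power-law form $B(\Delta) = \lambda \Delta^{\alpha}$ (our own illustration, not the paper's code):

```python
def budget(delta, alpha, lam=1.0):
    """Assumed power-law time-elapsing budget: B(delta) = lam * delta**alpha."""
    return lam * delta ** alpha

# Homogeneity B(c * delta) = c**alpha * B(delta) holds for any c > 0:
# alpha = 0 gives a constant (stationary-like) budget, alpha = 1 linear drift.
for a in (0.0, 0.5, 1.0):
    assert abs(budget(3.0 * 2.0, a) - 3.0 ** a * budget(2.0, a)) < 1e-12
```

Under this form the two special cases cited in the rebuttal fall out directly: $\alpha = 0$ makes the budget independent of the elapsed time, and $\alpha = 1$ makes it grow linearly with it.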
$\textbf{4) W5: appropriateness of the baseline algorithms}$
We admit that the MBPO algorithm is built for a stationary environment. The main reason we included MBPO is to compare MBPO against the PTM-G framework (line 335 $\sim$ 338). Note that PTM-G is also a model-based policy optimization framework, but the key deviation from MBPO is that PTM-G predicts the future non-stationary model using the MDP forecaster, whereas MBPO fits a stationary model from past observed trajectories. This structural difference means that comparing MBPO and PTM-G sheds light on the forecaster performance of the PTM-G framework. We selected the remaining three algorithms as baselines because they were proposed to deal with non-stationarity and have been used as baselines in [13].
$\textbf{5) Q1}$ : The environment tempo, which is a time-elapsing variation budget, can capture both properties of non-stationarity since it is defined with respect to time, not with respect to episodes.
$\textbf{6) Q2}$ : We apologize for the typo. The variation of the transition probability should be $\sum_{k=1}^{K-1} \sup_{s,a} \| P^{t_{k+1}} ( \cdot | s,a) - P^{t_{k}} ( \cdot | s,a) \|_1$.
$\textbf{7) Q3}$ : We first note that our PTM framework is not related to meta-learning. We use the term “meta-algorithm” because the PTM framework 1) first provides an MDP forecaster that creates a future MDP and 2) then obtains the optimal policy for that model. The term “meta-algorithm” reflects that any existing RL algorithm that obtains an optimal policy for a given model can be used in step 2) (line 120 $\sim$ line 151).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the responses, which helped address some of my concerns. I am still conservative about whether the time synchronization issue is an important one, but I am happy to increase my score to the positive side.
---
Rebuttal 2:
Title: Looking for the feedback!
Comment: Dear Reviewer 3oD2,
We first thank you for the positive feedback on our work's results in the initial official review. Since the discussion deadline is approaching, we wonder whether our response above fully addressed your concerns. We understand that the low rating mainly comes from the lack of clarity in the paper's writing, especially the theoretical analysis, and the appropriateness of the baselines in the experiments. We believe we have addressed these doubts and concerns clearly in our rebuttal. In particular, we also highlighted our work's main contribution and motivation to help resolve the above concerns.
__We wonder whether our comments are fully addressed to you, or if not, we would love to hear back from you!__
Best regards,
Authors | Summary: The paper introduces a novel framework for handling non-stationary environments in reinforcement learning (RL) called Proactive Tempo-control Model-based (PTM) framework. The authors argue that in non-stationary environments, an additional factor emerges alongside the classical exploration-exploitation trade-off: the tempo of adaptation. The PTM framework is designed to adjust the policy-update tempo (algorithm tempo) to match the environment tempo. The authors propose two instances of the PTM framework, PTM-T and PTM-G, and provide theoretical analysis and empirical evidence to support their claims.
Strengths: - The concept of tempo adaptation in non-stationary RL environments is an interesting approach.
- The proposed PTM framework could have implications for the field of RL, particularly in real-world applications where environments are often non-stationary.
- The paper provides theoretical analysis for the proposed method.
Weaknesses: - Although the author tells an interesting story about "adapt to adapt," the proposed PTM algorithm essentially attempts to learn a gradually changing MDP, which does not seem to have any fundamental difference from some previous works like [1][2].
- There is a significant gap between the theoretical results of the paper and the practical methods. Although the author has used a considerable amount of theoretical analysis in the main text to try to demonstrate that PTM method can achieve smaller dynamic regret with sublinear iterations, only simple performance curves are shown in the experimental section without further demonstrating the relationship between dynamic regret and iterations, which should explain the principle of improvement in the PTM algorithm.
- As cited and discussed by the author (line 29-30), some previous works have also been able to effectively address non-stationarity. However, in the selected baseline, MBPO is just a standard model-based RL algorithm without specific optimization for non-stationarity. Additionally, other baselines seem to be relatively outdated. Why not compare with newer references like [3], etc?
- This paper uses a lot of complex and scattered math symbols, making it confusing for readers when reading the theoretical part. For example, in line 207, the author uses an icon symbol to represent "Alg". In line 264, the author uses diamond and square symbols to represent $r$, $p$ and $\mathcal{P}$, etc. Why not simply use the letters or with some subscriptions directly? In some contextual formulas (e.g. formula 4.2, 4.3, 4.4, 4.6 and line 271-276), the author interchangeably uses these icon symbols and original letters. When readers read through this paper, they may need to repeatedly recall what these complex symbols actually represent.
[1] Deep reinforcement learning amidst continual structured non-stationarity. Xie, et al. (ICML 2021)
[2] Meta-reinforcement learning by tracking task non-stationarity. Poiani, et al. (IJCAI 2021)
[3] Factored adaptation for non-stationary reinforcement learning. Feng, et al. (NeurIPS 2022)
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - In proposition 3, how are the different cases of environment tempos reflected in the experimental setting?
- In Figure b-2, the learning curves of the algorithms seem to be in their early stages. Why not plot the complete curve until convergence?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: - The writing and logic of this paper is not good enough, making it hard for readers to follow.
- There is a gap between theory and practice, and there are no detailed experimental demonstrations for theoretical conclusions.
- Additionally, there is a lack of experiments comparing with the recent SOTA baselines.
- Although the author proposes an interesting perspective on "adapt to adapt," fundamentally it does not differ from works on actively learning changes in MDPs.
I suggest that the author reorganize the use of math symbols and consider simplifying unnecessary theoretical analysis in the main text. Furthermore, provide a more detailed explanation of the PTM-G algorithm in Section 4.2 and supplement with additional experiments as mentioned above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your constructive comments; the following are our answers to the weaknesses and questions the reviewer raised.
$\textbf{1) W1: clarity of contribution and difference from related works}$
First, we would like to emphasize that our work's $\textbf{main contribution}$ is interpreting the non-stationary MDP as “wall-clock time” goes by, not as “episode” goes by; shifting the perspective from episode to time is a more realistic setting that matches non-stationary RL (NSRL)'s motivation: RL for real-world applications. Raising this issue recalls our new problem setting stated in the introduction (line 2 $\sim$ 5). This new setting gives rise to an additional problem: how to choose the optimal $K$-length time sequence at which the agent interacts with the environment. In this respect, our $\textbf{main theoretical result}$ is that we determine the optimal $K$-length interaction time sequence by finding an optimal training time for the agent. In this sense, we propose the PTM framework to 1) find the optimal training time via a forecasting model-based method and 2) claim that our method yields a better online return, where 1) is supported by Fig 3-(a) and 2) by Table 1.
Therefore, we claim that $\textbf{we unearth a hidden assumption}$: the agent and the environment have independent timelines, and our method shows how to synchronize the two timelines theoretically, with empirical results. This supports that we have 1) a different problem setting and 2) a different purpose for the algorithm, distinct from existing non-stationary RL methods.
Also, please note that the fundamental goal of NSRL methods is to learn a gradually changing MDP [p1]. So if your understanding of our proposed method matches learning a gradually changing MDP well, then the proposed method has solved the standard NSRL goal, which supports the credibility of our work.
[p1] Milano et al, 2021, State of the Art on: Non-Stationary Reinforcement Learning
$\textbf{2) W2: gap between theoretical results and a practical method}$
We would like to provide two points on how the theoretical results (PTM-T) are closely related to the practical algorithm (PTM-G).
First, we recommend interpreting the theoretical analysis not as a “theoretical justification or theoretical proof” of the practical algorithm, but rather as “theoretical grounds or insights”; the ablation study (subsection [6.2]) supports this. Our theoretical analysis can be grouped into two main contributions: 1) subsection [4.1.1] highlights the existence of an optimal tempo, and 2) subsection [4.1.2] highlights that improving forecaster accuracy is important in the PTM framework.
First, the motivation of the theoretical analysis on “the existence of an optimal tempo” is to support our new problem setting: an additional factor, tempo, arises in a non-stationary environment, and how to adapt the policy-update tempo to the environment tempo is significant (line 55 $\sim$ line 61). The existence of an optimal tempo is supported by the experimental result in Figure 3-(a) and lines 323 $\sim$ 329.
Also, the motivation of the theoretical analysis on “improving forecaster accuracy” is to emphasize that the “MDP forecaster” serves a significant role in obtaining an accurate optimal interaction time sequence. To be specific, finding the optimal tempo by taking the minimum of the upper bound of $\mathfrak{R}_\mathcal{I} + \mathfrak{R}_A$ still leaves room for improvement: a tighter upper bound provides a much more accurate optimal tempo.
In Proposition 1, let us focus on the factor $\delta$ to provide a compact representation of the dynamic regret of PTM-T (Theorem 1). We emphasize that the approximation gap $\delta$ emerges not only from finite sample trajectories (Lemma 3), but also from the forecasting error between $\mathcal{M}^{k+1}$ and $\widehat{\mathcal{M}}^{k+1}$. This is straightforward, since the MDP forecaster yields a lower bound $\delta_{\text{min}}$ on $\delta$, and $\delta_{\text{min}}$ provides a lower bound on $\epsilon$ and subsequently a lower bound on $\mathfrak{R}_\mathcal{I}$. This shows that the MDP forecaster serves as a common factor that controls both $\mathfrak{R}_\mathcal{I}$ and $\mathfrak{R}_{A}$.
Second, the experimental results of PTM-T, which exactly match the setting of the theoretical analysis, are stated in the introduction (Figure 1-(c)) and motivate the work.
$\textbf{3) W3 : appropriateness of baseline algorithms}$
First, we would like to clarify that we used MBPO as a baseline to compare MBPO against PTM-G (line 335 $\sim$ 338). PTM-G is a model-based policy optimization framework. The difference is that MBPO fits a stationary model while PTM-G predicts the non-stationary model via the MDP forecaster. This structural difference supports that comparing MBPO and PTM-G highlights the forecaster performance of PTM-G. Please note that the role of the forecaster is significant, as mentioned in section [4.1.2], and we provide theoretical support (Theorem 3, Remark 1) and experimental support (Figure 3-(b-1), (b-2)) for its significance.
Second, we provide the reasons $\textbf{why}$ we selected the three baseline algorithms ProOLS, FTML, and ONPG. We claim these are appropriate baselines for comparing the effectiveness of PTM-G. To be specific, ProOLS predicts future policy evaluations based on past observation data, but it is a model-free method. ProOLS and PTM-G both share the “forecasting” component, but ProOLS is model-free while PTM-G is model-based, so comparing ProOLS and PTM-G can highlight the effect of the “model-based” approach of PTM-G. The other two were also used as baselines in non-stationary RL works [13].
$\textbf{4) Q1}$ : The environment tempo's corresponding scenarios are given in lines 120 $\sim$ 121; $\alpha_r=\alpha_p=0$ is a stationary environment (typo in the main paper).
$\textbf{5) Q2}$ : We rolled out 200 episodes. The model error of every forecaster except f=ARIMA increases as episodes go by.
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's response. Some of my concerns have been addressed. However, I still believe that there are some shortcomings in this work:
- Although the authors attempt to explain the theoretical analysis as "theoretical grounds or insights" in their response, such a lengthy and hard-to-follow theoretical analysis is only used to introduce relatively simple algorithm designs as insights. The actual methods and theoretical analysis do not seem to align well. Additionally, as I mentioned in my review, all experimental results in the paper focus on performance. These indirect results do not effectively demonstrate the rationality and necessity of each component in method design and theoretical analysis. Therefore, I still think it is necessary to add separate indicators specifically for these theoretical claims.
- The authors do not respond to my Weakness 4. I still believe that improvements can be made regarding complex symbol usage in the paper. Furthermore, the author should consider reorganizing the writing by moving some theoretical analysis to the appendix while retaining only the most important parts and adding clearer logical structure. This will help readers better grasp key insights into method design.
For now, I will maintain my current score and continue paying attention to other reviewers' opinions and discussions.
---
Reply to Comment 1.1.1:
Comment: Thanks again for the constructive feedback, and we appreciate the weaknesses of our work that you raised. We also thank you for raising the additional concerns that our rebuttal had not fully addressed.
The following comments __correct some misunderstandings__ arising in Reviewer WHPE's official comment.
We also welcome further discussions if there exist concerns to be fully clarified.
---
$\textbf{[Bullet 1] Regarding (1) all experiments focus on \textcolor{purple}{performance}, (2) so experiments do not match \textcolor{teal}{each component in the method design},}$ $\textbf{(3) which creates a critical gap between the theoretical results and experiments}$
Thanks for re-raising this issue.
We admit that __misunderstandings__ of (1) and (2) above can lead to (3), which makes the reader believe the gap between theorems and experiments is large.
So we want to __correct__ this as follows.
First, not all experiments are focused on the performance comparison between PTM-G and the four baselines.
To be specific,
* __Table 1__ in __section [6.1. Performance compare] / Figures 5 $\sim$ 13__ in the __Appendix__ : Focused on the __performance__. These are the numerical results of the average return over the last 10 of 200 episodes for PTM-G and the four baselines. Figures 5 $\sim$ 13 in the Appendix show the whole training results, of which Table 1 is the summary.
* __Figure 3__ in __section [6.2 Ablation study]__ : Focused on the importance of __each components in method design__ .
- Figure 3-(a) : Focused on the existence of optimal training time $G^* \leftarrow$ supported by Proposition 3 / Section 4.1.1
- Figure 3-(b-1),(b-2): Focused on the importance of MDP forecaster design $\leftarrow$ supported by Theorem 3 /Section 4.1.2
We also mentioned the above points in our rebuttal as the answer to $\textbf{W2}$, but we understand our answer was not fully convincing.
We believe the bullets above will help the reviewer understand the structure of the paper and take the first step toward resolving this concern (especially how the experiments relate to the theorems, and which experimental results show performance versus the importance of each component).
__We kindly recommend: 1) please take a look at the bullets above, 2) then please go back to our rebuttal comments on [(2) W2: the gap between theoretical results and a practical method].__
We would love to hear back from the Reviewer WHPE if this concern is fully addressed, or if not, we welcome some further discussions.
---
$\textbf{[Bullet 1] Regarding (1) understanding the theoretical analysis as ``theoretical grounds or insights'',}$ $\textbf{(2) but complex and hard-to-follow writing on the theoretical analysis makes it hard to understand}$
Thanks for raising this writing issue with the theoretical analysis. We acknowledge that the reader currently has to take on a burden to fully understand it, and we admit it is hard to follow at first glance. We will reorganize the theoretical analysis with fewer notations and make it much clearer in the camera-ready version if accepted.
---
$\textbf{[Bullet 2] Regarding the paper writing issue}$
We admit the paper is not reader-friendly, so the reader has to take on the burden of collecting the theorems and experiments to understand our main contributions.
We would love to say that we will organize the paper much more straightforwardly and clearly (as you suggest, making the theorems shorter and more direct, and using fewer notations to make the paper more readable) in the camera-ready version if accepted. (We omitted addressing your Weakness 4 and apologize. The same issue was also raised by reviewer EXsm and we have answered it there.)
Title: Correction of misunderstandings, and thanks for the feedback.
---
Rebuttal 2:
Title: Kindly request your confirmation on a misunderstanding.
Comment: Dear Reviewer WHPE,
We sincerely appreciate your active involvement in clarifying the misunderstandings. The comprehensive feedback has greatly helped us resolve the concerns at hand. __We kindly request your confirmation on whether the concern outlined in bullet point 1 was resolved. We believe it is a misunderstanding__, as we in fact have both 1) experiments on performance between the baselines and our proposed algorithm, and 2) experiments supporting our algorithm's component design in the main paper. __We have clearly addressed this above__ : $\textbf{[Bullet 1] : Regarding (1) all experiments }\sim$. We also acknowledge that the $\textbf{other two bullet points}$ both concern the paper writing issue, namely a lack of reader-friendliness and intricate mathematical notation that requires readers to combine multiple theorems to comprehend our contribution. If the paper is accepted, we will refine the presentation of the mathematical notation to make it more succinct, while simplifying the theorems to enhance their accessibility in the final version.
Best regards,
Authors of the paper 8588
---
Rebuttal 3:
Title: Kind reminder that we do have the experiments in the paper that the reviewer raised in comments.
Comment: Dear Reviewer WHPE,
We genuinely appreciate your effort and detailed feedback to improve the paper. Since this is the last minute of the discussion period, we reiterate that __we__ $\textbf{do have the experiments}$ __in the main paper__ ($\textbf{Table 1, Figure 3}$) __and in the appendix__ ($\textbf{Figures 5} \sim \textbf{13}$) __that the reviewer was concerned about in bullet 1__. This is a kind reminder that the concern outlined in bullet point 1 was addressed. We would love to hear back on whether this misunderstanding has been resolved. Thanks!
Best regards,
Authors of the paper 8588
---
Rebuttal Comment 3.1:
Comment: I appreciate the author's response, but as mentioned earlier, despite my concerns being partially addressed, I still believe that this paper has not fully met the standards for acceptance. Therefore, I will not change my current ratings. | Summary: This work introduces the tempo of adaptation in a non-stationary RL problem. The authors provide Proactive Tempo control Model-based (PTM) framework, and two specific instances PTM-T and PTM-G. By adjusting the tempo of the algorithms, the proposed algorithm can match the tempo of the environment to address non-stationarity. This claimed property is demonstrated by both theoretical analysis and empirical results. The proposed PTM framework helps the RL algorithm to be implemented in real-world settings. The authors show that the PTM framework achieves a higher online return than the existing methods and provides empirical evidence of the existence of an optimal algorithm tempo with the comprehensive experimental evaluation of various Mujoco tasks.
Strengths: (1) Interesting topic for real-world applications: This paper focuses on a practical setting for non-stationary RL by introducing a “time-elapsing variation budget”, which not only measures non-stationarity but also considers actual time taken. This setting is common and has great potential in real-world applications, and this work will encourage more interest in this direction.
(2) Solid theoretical analysis: This paper provides a dedicated mathematical formulation for the tasks and provides a detailed theoretical analysis and discussion of the proposed methods under the tabular settings. They also provide a discussion on more general cases in the supplementary material.
(3) Comprehensive experimental evaluation: This work provides comprehensive experimental results both with sufficient baseline methods and testing tasks to demonstrate the strength of their work.
Weaknesses: (1) Complicated symbolic notation: The author uses too many fancy symbols for notations that are hard to identify, write, pronounce, and memorize. I would be more than happy to see a simplified version, and I believe by reducing the symbolic complexity, this paper will be much easier to follow.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors:
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 4 excellent
Limitations: The authors adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: $\textbf{1) W1: complicated symbolic notation}$
Thanks for pointing out that the paper contains too many notations, which makes it hard for readers to follow. We will reduce the notation and make the theorems much more straightforward in the camera-ready version if the paper is accepted.
$\textbf{2) Regarding the low-confidence score: clarification of main contribution}$
Besides your constructive comments, we also appreciate your candid low-confidence score, and we understand it, since the message of our paper was not fully clear to the reviewer. In this spirit, we would like to clarify our motivation and main contribution, how our theoretical analysis relates to them, and how our experiments support them.
First, we would like to emphasize that our work’s $\textbf{main contribution}$ is interpreting the non-stationary MDP as changing as “$\textit{wall-clock time}$” goes by, not as “$\textit{episodes}$” go by, and we claim that shifting the perspective from episodes to time is a more realistic setting that matches the motivation of non-stationary RL: RL for real-world applications. Raising this issue leads to our new problem setting stated in the introduction (lines 2 $\sim$ 5). This new setting gives rise to an additional problem: how to choose the optimal $K$-length time sequence at which the agent interacts with the environment? In this respect, our $\textbf{main theoretical result}$ is that we determine the optimal $K$-length interaction time sequence by finding an optimal training time for the agent. Accordingly, we propose the PTM framework to 1) find the optimal training time with a forecasting model-based method and 2) show that our method yields a better online return, where 1) is supported by Fig 3-(a) and 2) is supported by Table 1.
Therefore, we claim that $\textbf{we unearth a hidden assumption}$, namely that the agent and the environment should have independent timelines. Our proposed method shows how to align the two timelines theoretically and validates this empirically, which supports that our problem setting and the purpose of our algorithm differ from those of existing non-stationary RL methods.
We have also summarized our motivation and main contribution in the “global response”.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response and extra clarification on the contribution. Regarding W1, I agree that it would be beneficial to reduce the notation and make theorems more straightforward. After reading other reviews and the corresponding response, I would love to increase my score in favor of acceptance. | Summary: This paper presents a solution to nonstationary RL issues through a model-based framework called Tempo-Control (PTM). Specifically, the authors identify a new trade-off in nonstationary RL problems: the trade-off between learning an accurate model and learning an optimal policy. Based on this, a new framework is designed to allow the agent to find an appropriate tempo for adaptation. The authors provide theoretical analysis for the tabular version of the framework and a strategy for complex RL scenarios. Empirical evaluation results confirm the effectiveness of the scheme as well as its theoretical optimality properties. Overall, this paper offers insightful analysis on nonstationary RL problems and rigorous theoretical study on the method. I am inclined to give an "accept" rating.
Strengths: [**About the framework's motivation**] The author's analysis of the trade-off between adaptation tempo and environment tempo is quite insightful. Additionally, the critique of the existing three research lines in nonstationary RL could provide inspiration to the field of nonstationary RL or RL in general.
[**About the theoretical analysis**] The paper offers rigorous theoretical analysis for the tabular version, providing solid theoretical guarantees for subsequent applications.
[**About method design**] The modeling method for adaptation tempo and environment tempo is clear and simple, offering strong reproducibility and portability.
[**About experiments**] The experiments use common nonstationary (deep) RL benchmarks, and the results are quite good, with a significant margin over the baselines. Furthermore, the ablation studies also confirm the previous theories on optimality.
Weaknesses: Note: Most of the weaknesses listed below are more like questions, discussion points, or suggestions, rather than outright flaws. Any clarifications provided by the authors would be very welcome and appreciated. Given the limited time for rebuttal, it is not necessary to fully supplement experiments for comparison with these papers. Some clarifications and discussions on these methods would be greatly appreciated.
[**About the latent variable**] The authors assume that the non-stationary variable $\hat{o}_k$ is observable, but in real complex scenarios, it is challenging to observe the non-stationary variable. Additionally, certain specific assumptions are required about this latent variable (such as the function of the latent variable's change with time, dimensions, etc.). Could the authors provide more clarification on this point?
[**About the presentation**] The authors might consider placing PTM-G after the third section and moving Algorithm 3 from the appendix to the main text. This could help readers better understand the overall framework design and flow.
[**About the experiment**] In Fig. 3 (b-2), it seems that convergence has not been reached at the end; consider providing a longer range for the x-axis. Also, the x-axis is not labeled in Fig. 3.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I have included the questions in the above section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have given a detailed analysis of the limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: $\textbf{[About the latent variables : assumption validity]}$
Thanks for pointing out the validity of the assumption.
First, we would like to mention that the biggest reason for the observable $\mathcal{O}$ assumption is to avoid widening the gap between the theoretical analysis (PTM-T framework) and the solid experimental results (PTM-G framework). Please note that most relevant existing works train a probabilistic network that encodes the trajectories of episodes $1 \sim k$ (which represent those episodes’ environments) into a latent space (which represents the non-stationary variables), learn the network parameters by a Bayesian method, and then infer the future non-stationary variable from the latent space [9-12]. The problem with Bayesian inference is that its posterior distribution is intractable. For this reason, we have not used the existing method, since the intractability of the practical algorithm would widen the gap between the theoretical analysis and the practical algorithm.
Also, we would like to recall that Assumption 1 is not necessary for the PTM-T framework, which $\textit{estimates}$ the non-stationary variables and then predicts the future variables (please refer to lines 186 $\sim$ 187). The practical algorithm, the PTM-G framework, exploits the advantage of Assumption 1 to forecast the future variables (the future MDP model) so as to fully support the ablation study. Throughout the paper, we presented the theoretical analysis on “improving forecaster accuracy” (Section 4.1.2) to emphasize that the “MDP forecaster” plays a significant role in the PTM framework, and we ran experiments to support its theoretical importance (Figure 3-(b-1), (b-2)). We thought that inferring the non-stationary variables would make the ablation study checking the MDP forecaster's performance (Figure 3-(b-1), (b-2)) unclear, since the inference takes $w$ past observations.
$\textbf{[About the presentation]}$
Thanks for pointing out the location of the PTM-G algorithm. We will move it accordingly if the paper is accepted.
$\textbf{[About the experiment : Figure 3-(b-2) issue]}$
Thanks for pointing out the convergence issue. Due to the limited time of the rebuttal period, we have not been able to run additional experiments. However, we can state that we rolled out 200 episodes, and the model error of every forecaster except f=ARIMA increases as episodes go by.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. Most of my concerns have been addressed. I think it would be better if the authors could include the explanation of the validity of the assumption in the future version.
Meanwhile, after reading the other reviews and the corresponding rebuttals, I would keep my original rating. As to the experimental part, I think the current baselines and benchmark choices have been already somewhat convincing to show the effectiveness of this approach. | Rebuttal 1:
Rebuttal: We appreciate constructive comments from all reviewers.
We would like to organize our $\textbf{motivation and main contribution}$ in this global response to help reviewers better understand our paper. We have also elaborated on further details in our responses to each reviewer.
$\textbf{1. Motivation}$
$\textbf{We unearth a hidden assumption of non-stationary reinforcement learning (RL)}$ that critically holds back its fundamental motivation: real-world applications of RL. We call it a $\textit{time synchronization}$ issue between the agent and the environment. In the real world, the environment changes as $\textit{wall-clock time}$ goes by, not as $\textit{episodes}$ go by, where wall-clock time flows independently of the episode count, which is controllable by the agent. The environment's $\textit{time}$ (wall-clock time) cannot be controlled by any entity, including the agent or the training algorithm.
To solve this issue, we consider a $\textbf{practical setting}$ for non-stationary RL where the agent chooses a time sequence at which to interact with the environment and fully spends each time interval (the time between the $k^{th}$ and $(k+1)^{th}$ interaction times) training its policy.
$\textbf{2. Main contribution}$
Our main contribution is that finding the optimal time sequence entails an additional factor, $\textbf{tempo}$, and an additional trade-off that should be addressed in addition to the traditional exploration-exploitation trade-off, that is $\textbf{the trade-off between the environment tempo and the agent tempo}$. We propose a $\texttt{PTM}$ framework that finds the optimal training time by balancing how long the agent should train its policy (agent tempo) against how fast the environment changes (environment tempo). Theoretically, this work establishes an optimal training time as a function of the degree of the environment's non-stationarity and achieves a sublinear dynamic regret at the same time. Our experimental evaluation on Mujoco non-stationary environments shows that the $\texttt{PTM}$ framework achieves a higher online return than the existing methods, and it provides empirical evidence of the existence of an optimal algorithm tempo. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes to look at non-stationarity in reinforcement learning (RL) from a new point of view. While previous approaches did not generally consider the practical implications of the time elapsed while learning a policy, the idea of the current paper is to consider exactly these implications.
A trade-off is posed between spending time learning a stronger policy versus collecting more samples and learning a better model. After defining the problem, a practical solution is proposed. A forecaster fits a model of the process driving the non-stationarity, as well as a model of the Markov decision process (MDP) conditioned on it. An RL algorithm, potentially any algorithm, is then plugged into the forecasted MDP and is optimised for several iterations.
A theoretical analysis is conducted to demonstrate that, as anticipated, the regret drastically depends on the number of policy iterations. Fortunately, under certain conditions on the MDP and the algorithm, it is shown that a sub-linear regret can be obtained in the proposed framework. It follows that there exists an optimal number of iterations which depends on the MDP itself.
Finally, an empirical study demonstrates that a practical algorithm, based on the previous theory, is able to compete and outperform several baselines, in a range of non-stationary problems. Furthermore, an ablation study is proposed to illustrate the theoretical results.
Strengths: Despite my rating in "contribution", this paper comes with an interesting idea.
In a practical problem, the time spent while learning a policy may be time not spent to sample new data. It is an interesting theoretical perspective. As expected, the authors demonstrate that it is critical to take this time into account. I believe that this theory could have great implications and am not aware of previous works going in the exact same direction.
The practical algorithm follows a simple idea, which I like. It comes with both theoretical and empirical achievements.
The theoretical analysis is helpful and well conducted. I have not checked the proofs in detail, but many proof steps are classic in the literature. The conclusion of the theoretical analysis is great: "tempo" in the MDP and in the algorithms should be in harmony.
The experimental section is concise but holds enough value to underline the validity of the framework.
Weaknesses: My rating for the "contribution" results from my doubts about the definition of the framework. Despite originating from an interesting idea, I question the following points:
- The non-stationarity is not defined clearly enough. An agent is placed in an MDP where it can either sample a trajectory or train its policy. Why wouldn't the MDP change during the trajectory? If the agent can choose when to sample data in a stationary MDP, and the non-stationarity is bounded as in line 119, it seems that too much control over the non-stationarity is offered to the agent.
- The terms B_p(G) and B_r(G) are not given more details in the main paper. I understand that they satisfy the equation of line 119, but what is the meaning of Delta_t=1? This is also important to understand the experiments: does a step of non-stationarity correspond to Delta_t=1 and therefore to an iteration of NPG?
- I am concerned about the result of Theorem 2. I understand that, for a fixed K, there should be an optimal tempo for the policy iteration. But I miss a result for an agent that would have a budget in terms of time. How does a well trained agent sampling less trajectories compare to a poorly trained agent sampling more trajectories in the same time? The optimal tempo could be impacted by these considerations.
- A related comment about the regret. In the regret formula, the reference is the optimal value function in MDP k, that is, at sampling times synchronised with the algorithm. What if the algorithm is compared to an optimal policy at a different tempo?
- These two previous comments can be gathered as follows: what if the agent is compared to any agent acting in the same amount of time?
- I expect that the training takes more time as more data is collected. Delta_t could refer to the longest time of them all, but how can the agent be sure that Delta_t has always elapsed before sampling? This is related to my comment on the agent having an impact on the non-stationarity of the environment.
- When the agent optimises its policy, as explained in 3.2, paragraph 2), a synthetic trajectory is rolled out. What if the rollout takes a comparable time to sampling a trajectory? Shouldn't the number of rollouts add to the time elapsed? In expectation it is irrelevant, but it becomes necessary for a high-probability result. Similarly, why is the result of Proposition 3, case 1, suggesting G=\infty? Only a single trajectory is needed?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Why is the framework considering non-stationarity inter-episode and not intra-episode?
- Proposition 1 offers a sublinear regret with a condition that binds H and epsilon, this doesn't seem great for applications where H is fixed beforehand. What do the authors think about it?
- What does \Delta_t=1 correspond to in the experiments?
- Is there a simple way to compute a regret that compares to the best algorithm given a fixed time elapsed instead of number of episodes (K) fixed?
- What would be the optimal number of iterations in that case?
- In the experiments, why is the performance compared over the last 10 episodes only? What about the regret throughout learning?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Extra:
- the text in figures 1.c) and 1.d) is too small
- typo line 302: "the state-of-the-art model-based model-free algorithm"
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive comments. The following are our answers to the weaknesses and questions that the reviewer raised.
$\textbf{1-1) W1,Q1: validity of inter-episode changing MDP}$
We have defined the non-stationarity as changing across episodes while remaining stationary during an episode (line 99). We have two reasons why we think an episode-varying MDP is still reasonable for our paper. First, an episode-varying MDP can be regarded as an MDP that changes discretely over certain intervals in the infinite-horizon case, which supports that our work still solves non-stationary environments. Second, most related theoretical works assume a step-varying MDP, but many related empirical works on complex non-stationary environments assume the MDP changes across episodes. For more details, please refer to recent non-stationary RL papers with inter-episode changes: [9,11,13,18].
$\textbf{1-2) W1 : Issue of $B_p(c\Delta_t)=c^{\alpha_p}B_p(\Delta_t)$ provides too much control on the agent}$
The environment property (line 119) is an assumption given as a prior, not a property that the agent can control ($c, \alpha_p$ are given, not chosen by the agent). The only parameter the agent chooses is $\Delta_t$, which represents a real-world scenario where the agent learns the environment model based only on observations from the $K$ interaction times $[t_1,t_2,..,t_K]$ where $\Delta_t = t_{k}-t_{k-1}, \forall k \in [K]$. Different interaction times yield different observations, which lead to different learned models.
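For intuition, the scaling property of line 119 can be sketched with a toy example. This is a hypothetical power-law form of the time-elapsing variation budget (the constant `C` and exponent `alpha_p` below are made-up illustrative values, not taken from the paper); any such form satisfies $B_p(c\Delta_t)=c^{\alpha_p}B_p(\Delta_t)$:

```python
# Hypothetical time-elapsing variation budget B_p(dt) = C * dt**alpha_p.
# For this form, B_p(c * dt) = c**alpha_p * B_p(dt): the agent's choice of
# Delta_t rescales the budget, but the exponent alpha_p (how fast the
# environment drifts) stays fixed by the environment.

def variation_budget(dt, C=0.5, alpha_p=0.8):
    """Illustrative (assumed) budget; C and alpha_p are environment priors."""
    return C * dt ** alpha_p

c, dt = 3.0, 2.0
lhs = variation_budget(c * dt)         # B_p(c * dt)
rhs = c ** 0.8 * variation_budget(dt)  # c**alpha_p * B_p(dt)
```

Here `lhs` and `rhs` agree up to floating-point error, regardless of which $\Delta_t$ the agent picks.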
$\textbf{2) W2,Q3 : lack of details on time elapsing variation budget, meaning of $\Delta_t=1$}$
$\Delta_t=1$ means the agent executes an episode every second, i.e., the interval between the $k^{th}$ and $(k+1)^{th}$ episodes is one second. In Figure 3-(a), $\Delta_t = [1,2,3,4,5]$ indicates how many of the environment's episodes the agent skips (we have added this information in lines 325 $\sim$ 326). Also, $\Delta_t$ is the training time, which is proportional to the number of gradient steps of NPG; we set $\Delta_t=1$ to correspond to $38$ iterations.
$\textbf{3) W3,4,5 : the clarity of Theorem 2}$
First, we emphasize that our work’s main contribution is interpreting the non-stationary MDP as changing as “wall-clock time” goes by, not as “episodes” go by, and shifting the perspective from episodes to time is a more realistic setting that matches the motivation of non-stationary RL: RL for real-world applications. Raising this issue leads to our new problem setting stated in the introduction (lines 2 $\sim$ 5). This new setting requires solving an additional problem: how to choose the optimal $K$-length time sequence at which the agent interacts with the environment? In this respect, our main theoretical result is determining the optimal $K$-length time sequence via the optimal training time, and we propose the PTM framework to find that optimal training time.
Now, we address your reading that “for fixed K, the optimal tempo exists”. Yes, the optimal tempo exists, but for a fixed “time duration”, not for a fixed number of episodes $K$. For better understanding, recall the example in the introduction. Say that for the fixed time duration 0[s] $\sim$ 15[s], robot A executes episodes every second, at times [0,1,..,15], and between times $t$ and $t+1$ it trains its policy. Robot B executes episodes every 2 seconds, at [0,2,..,14], and between times $t$ and $t+2$ it trains its policy. Assume one policy update takes 1 second. Then robot A can update once per interval, and robot B can update twice per interval. Robot A interacts with the environment for 15 episodes and updates the policy once per interval, so it has more information about the environment but a more uncertain approximately optimal policy. Robot B interacts with the environment for 7 episodes and updates the policy twice per interval, so it has less information about the environment but a better approximately optimal policy than robot A.
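The arithmetic of this robot example can be written as a small sketch (illustrative only; the helper name and the assumption that one policy update takes `update_time` seconds are ours, not the paper's):

```python
# Fixed wall-clock window [0, total_time]: a larger interaction interval
# delta_t means fewer episodes (less data) but more policy updates per gap.

def tempo_schedule(total_time, delta_t, update_time=1):
    """Interaction times t_k = k * delta_t within [0, total_time], and the
    number of policy updates fitting between consecutive episodes."""
    times = list(range(0, total_time + 1, delta_t))
    updates_per_gap = delta_t // update_time
    return times, len(times), updates_per_gap

# Robot A: an episode every second   -> more data, 1 update per gap.
times_a, eps_a, upd_a = tempo_schedule(15, 1)
# Robot B: an episode every 2 seconds -> less data, 2 updates per gap.
times_b, eps_b, upd_b = tempo_schedule(15, 2)
```

The trade-off is visible directly: robot B samples roughly half as many episodes as robot A but trains twice as long between them.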
$\textbf{4) W 6: relationship between training time and the amount of data}$
We would like to clarify that the amount of data in the buffer does affect the training time. We have defined the training time as the time taken by one iteration of training, and in the theoretical analysis, one iteration of the policy update is the same as one gradient step of the NPG algorithm. We partially agree that the batch size of the trajectory data could affect the “policy update for one gradient step”, which corresponds to $B_p(1)$ and $B_r(1)$, but the number of policy iterations $G$ does not change if we fix the batch size of the trajectories used for the policy update.
$\textbf{5) W7: including rollout time in training time; the meaning of $G=\infty$}$
We have included the sample complexity (Lemma 3 in the Appendix) and the approximation gap $\delta$ due to a finite data sample. Lines 216 $\sim$ 217 show how Lemma 3 interleaves with our Proposition 1. A small $\epsilon$ enables a tighter upper bound for $R_{Alg_\tau}$, but also requires the approximation gap $\delta$ to be small. A small approximation gap $\delta$ requires a large sample complexity by Lemma 3. This supports the point you raised. However, we have not included the “data collection time” in the training time, and we claim that this does not harm our work, since we have defined the training time as the time consumed by one policy iteration.
Also, note that $G=\infty$ for the case $\max(\alpha_r, \alpha_p) = 0$ matches our intuition, since $\max(\alpha_r, \alpha_p) = 0$ corresponds to a “stationary environment” (lines 120 $\sim$ 121; typo in the paper: $\alpha_p=\alpha_r=0$ is the stationary environment), and in a stationary environment, a larger number of policy iterations guarantees a policy closer to optimal.
$\textbf{6) Q2}$
In Proposition 1, we think the word “given” in line 213 causes the misunderstanding. What we meant was “we choose $\epsilon > 0$ that satisfies $H >$ (terms)”. Therefore, $H$ is fixed once the MDP is given, and for a given $H$, we choose $\epsilon$.
$\textbf{7) Q4,5}$
The definition of dynamic regret requires obtaining the optimal policy at each episode. In complex environments, it is hard to define what the “optimal policy” is.
---
Rebuttal 2:
Title: Looking forward to your feedback!
Comment: Dear Reviewer 4AXz,
We first thank you for the positive feedback on our work's idea in your initial official review. Since the discussion deadline is approaching, we wonder whether our response above fully addressed the concerns in your initial review. We appreciate your comment that the low rating comes from some doubts about the definition of the framework, and we believe we have addressed those doubts and concerns clearly in our rebuttal. We also highlight our work's main contribution and motivation to help resolve your doubts about the framework. In addition, we believe the reviewer raised a constructive issue (trajectory sampling time), which is an interesting point, and we hope to discuss it further.
__We wonder whether our comments have fully addressed your concerns; if not, we would love to hear back from you!__
Best regards,
Authors
---
Rebuttal Comment 2.1:
Comment: Thank you for the detailed answer.
It resolves some of my doubts but there remain some. I comment all answers.
1-1) I have taken note of the authors' answer to my comment on the exact definition of the non-stationarity. It seems to me that something is missing. I read the authors' answer to another reviewer, recalled here: "First, We would like to emphasize that our work’s [main contribution] is interpreting the non-stationary MDP as “wall-clock-time” goes by, not as “episode” goes by, and we claim shifting the perspective from episode to time is a more realistic setting that matches with non-stationary RL’s motivation: RL for real-world application".
I believe that this is a refreshing idea with interesting implications, but it seems that the idea wasn't applied fully.
Why would an infinite horizon process suffer from non-stationary shocks exactly when an episode ends while it remains deterministic during an episode? This doesn't seem realistic to me.
1-2)
It was clear to me that the non-stationary dynamics (directed by c and alpha_p) are not controlled by the agent.
My doubt was related to how the agent, in that setting, could "choose" which non-stationary MDP to sample from. Imagine a scenario where the agent selects to spend 1 unit of time learning a policy for 1 iteration. Then Delta_t should be one. At that point, the agent interacts with the stationary MDP M_1 for the duration of an episode.
If instead the agent chooses to wait an extra 0.1 units of time before sampling, the stationary environment that the agent is now sampling from is M_{1.1}. This has an impact on the resulting return and the learned model as well.
2)
It seems that you set Delta_t = 1 to be one second, and also equal to the time it takes to sample an episode and to 38 iterations of NPG. I think it should be set to one of these and the rest should be deduced.
3) There may be a misunderstanding of the setting on my side. But changing k or Delta_t should change the total time elapsed.
The authors have addressed my concerns in answers 4), 5) and 6).
7) I agree that this is a more complex quantity, but I believe it is a quantity that we would like to compare to.
---
Rebuttal 3:
Title: Thanks for the fast response and further clarification.
Comment: Dear Reviewer 4AXz,
Thanks for the response and helpful feedback. We provide further clarifications on your comments below.
$\textbf{0) wall-clock time consumption when the agent takes steps in an episode}$
__We believe the critical misconception comes from the "consideration of wall-clock time consumed while the agent takes steps (i.e., while rolling out a trajectory in an episode)"__. Throughout the paper, we __have not considered__ the time consumed while the agent takes steps in an episode. Our work is based on the assumption that one step takes an infinitesimal amount of time. Namely, at $t=0$ the agent starts the 1st episode in $\mathcal{M}[t=0]$ and takes $H$ steps. After taking $H$ steps, the agent is still located in $\mathcal{M}[t=0]$, since we assume the rollout time is infinitesimal. Then the agent starts to train the policy. If $\Delta_t=4$, then at $t=4$ the agent starts the 2nd episode in $\mathcal{M}[t=4]$. We believe the reviewer has assumed that the time consumed collecting samples in an episode is proportional to the number of steps $H$.
Nevertheless, we claim that __a constant time consumption when taking steps does not harm our work__.
This is because the horizon $H$ is given by the environment, so the time to collect samples in an episode is a constant known a priori (say, 2 seconds to finish a trajectory of $H$ steps). The 2-second delay that accumulates every episode does not harm our theoretical analysis or practical experiments, since all we need is to resize the policy training time $\Delta_t \leftarrow \Delta_t + 2$.
$\textbf{1-1) How does an inter-episode changing MDP match up with the paper's infinite-horizon problem setting?}$
Our problem is based on a __finite__ (not infinite) horizon, __inter-episode changing__ MDP where the horizon is __fixed__ for all MDPs. (We stated this in Section 2, lines 95 $\sim$ 100.)
For further clarification, we emphasize that our analysis is built upon a finite horizon $H$.
Specifically, in the experiments, $\texttt{PTM-T}$'s tabular environment is based on $H=13$ (line 477 of Appendix A.1-1), and $\texttt{PTM-G}$'s Mujoco environment is based on $H=100$ (line 1026 of Appendix E.2).
$\textbf{1-2) How can the agent ``choose'' which non-stationary MDP to start an episode in?}$
__The agent can "choose" (or more precisely "pre-determine") the policy training time $\Delta_t$ based on Proposition 3__. Proposition 3 enables the agent to pre-determine the optimal training time $\Delta^*_t (=G^*)$, and thereby the optimal interaction time sequence $t^*_k=t^*_1+(k-1) \Delta^*_t$ for all $k$, before starting the 1st episode at $t=t^*_1$. Then, the agent starts episodes $1,2,..,K$ at times $t^*_1,t^*_2,..,t^*_K$.
Intuitively, choosing the interaction time sequence __in advance__ is possible since the agent knows the ``time-elapsing variation budget'' as prior information. Please note that existing works also assume the agent knows a variation budget a priori [13-17,22,37,38] to compute the dynamic regret. Also, Proposition 3 needs $B_p(1),B_r(1)$ to compute $\Delta^*_t (=G^*)$.
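A minimal sketch of such a pre-computed schedule (the function and the numeric values are hypothetical; in the paper, Proposition 3 is what actually supplies $\Delta^*_t$):

```python
# Once the optimal training time Delta_t* is fixed, the whole interaction
# schedule t*_k = t*_1 + (k - 1) * Delta_t* is determined before episode 1.

def interaction_schedule(t1, delta_t_star, K):
    """Pre-determined interaction times for episodes k = 1..K."""
    return [t1 + (k - 1) * delta_t_star for k in range(1, K + 1)]

schedule = interaction_schedule(t1=0.0, delta_t_star=4.0, K=5)
# -> [0.0, 4.0, 8.0, 12.0, 16.0]
```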
$\textbf{2) Issue of $\Delta_t$ = 1}$
In the theoretical analysis, we set the policy training time $\Delta_t$ [sec] equal to the number of policy iterations $G$.
In the experiment, we set 38 policy iterations equal to one second, which means $\Delta_t = \lfloor G / 38 \rfloor$.
$\textbf{3) Issue of changing $K$ or $\Delta_t$ changes total time elapsed}$
Changing $K$ or $\Delta_t$ __does not change__ the total time elapsed. Please recall the robot example in the answer to W3 above. __For a fixed time duration [0,T]__, the agent can choose different training times (= different $K$). Also, regardless of how the agent chooses $K$, the sublinear regret is guaranteed by Propositions 1, 2, and 3.
$\textbf{4) Showing the whole training procedure as a dynamic regret}$
We first note that most existing empirical non-stationary RL works on __high-dimensional MDPs__ $[9,10,11,14]$ show algorithm performance via the __average reward return__. Namely, they contain $\textcolor{red}{(1)}$ figures over the whole training time: average reward (y-axis) / total episodes (x-axis), and $\textcolor{blue}{(2)}$ a summary table: run multiple experiments with different seeds, then average the return (from the last episode) over the seeds. Also, some existing empirical works in the non-stationary __bandit setting__ $[13]$ show algorithm performance via a __regret__.
__Our work's environment is a high-dimensional MDP, and we provide result $\textcolor{red}{(1)}$ as Figures 5 $\sim$ 13 in the Appendix, and result $\textcolor{blue}{(2)}$ as Table 1 in the main paper__.
Also, to answer W7 (why take the last 10 episodes): our Table 1 shows results that first take the average over the final returns of the last 10 episodes, and then take the average over multiple runs. Rather than showing only the last episode's result in a table, we used this metric to report a more reliable result, since the agent is learning continuously in a non-stationary environment.
---
Rebuttal Comment 3.1:
Comment: It seems that we are misunderstanding each other. I have not assumed that the time consumed to collect samples in an episode is proportional to the step count $H$. This is clear to me. However, my doubt is that, in a real non-stationary scenario, the environment could also change during an episode, and that raises some questions about the current framework.
Let me give an example: assume that, in practice, the sampling time for one episode is 1 second. Let $t_1$ be the real time elapsed and $t_2$ be the time elapsed as in your framework. When $t_1=1$, the sampling is done. The training of the policy starts and $t_2$ is still 0. I imagine that equation (2.1) applies only to $t_2$ in your framework but, for a real dynamical system, it would apply to $t_1$. Then, since 1 second has elapsed in real time, the environment could possibly have changed during sampling. My doubt is: why can we consider the environment stationary during an episode and only non-stationary in between episodes?
We are also misunderstanding each other on the total time elapsed. In your robot example, I understand that robot B samples only half as much. But both robots interact $K$ times. Therefore, robot A finishes its sampling and training at time $K$ while robot B finishes at time $2K$. Isn't that the case?
---
Rebuttal 4:
Title: Further clarification on misunderstandings - comment 1
Comment: Dear Reviewer 4AXz,
Thank you for your prompt response!
We genuinely appreciate your effort in clarifying the misunderstandings. Your detailed feedback has enabled us to address the concerns more accurately. We would be grateful if you could confirm whether the issues you raised in points 2), 3), and 4) of your previous comment have been addressed satisfactorily. If not, we are more than willing to provide further clarification.
$\textcolor{red}{\text{For clarity, we broke down your comments into comment 1 (first paragraph) and comment 2 (second paragraph)}}$
$\textbf{[Comment 1-1] Different timelines between real-time and framework. How do you apply to equation 2.1?}$
We recognize the discrepancy between the time elapsed in real-time, denoted as $t_{real}$, and the time within our framework, denoted as $t_{frame}$. Let's define the sampling time for a single episode as $\Delta_{sp}$ and the training time as $\Delta_{t}$. Initiating the system with $t_{real} = t_{frame} = 0$, the agent starts the first episode immediately. When the agent embarks on the second episode, $t_{real}$ is $\Delta_{sp} + \Delta_{t}$ while $t_{frame}$ is $\Delta_{t}$. Extending this logic, for the $k^{th}$ episode, $t_{real}$ is $(k-1)\Delta_{sp} + (k-1)\Delta_{t}$ and $t_{frame}$ is $(k-1)\Delta_{t}$.
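The two-timeline bookkeeping above can be sketched in a few lines of Python (a hedged illustration; the function name and the sampling/training durations are hypothetical, with $\Delta_{sp}$ written as `d_sp` and $\Delta_t$ as `d_t`):

```python
# Sketch: at the start of the k-th episode,
#   t_real  = (k-1) * (d_sp + d_t)   (real time: sampling + training cycles)
#   t_frame = (k-1) * d_t            (framework time: training only)
# d_sp is the sampling time per episode, d_t the training time; values
# below are illustrative.

def timelines(k, d_sp, d_t):
    """Return (t_real, t_frame) at the start of the k-th episode."""
    t_real = (k - 1) * (d_sp + d_t)
    t_frame = (k - 1) * d_t
    return t_real, t_frame

# With 1 s sampling and 3 s training, at the start of episode 3:
# t_real = 8 while t_frame = 6.
```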
This observation is indeed pertinent. To rectify it within our framework, we can adjust for the time discrepancy. Specifically, we can modify $\Delta_{t} \leftarrow \Delta_{sp} + \Delta_{t}$. When the agent determines $\Delta_{t}$, it should incorporate the constant term $\Delta_{sp}$, which is given as environment information. This assumption is realistic since $\Delta_{sp}$ can be estimated from the horizon $H$ and the duration of each step, including when the agent acts, the environment reacts, and feedback is offered.
An implication of this is in the optimization of the update tempo (e.g., $G(=\Delta_t)$) in Proposition 3. By the definition provided in our paper, which solely considers the training time $\Delta_t$, we should replace $\Delta_{t}$ with $\Delta_{sp} + \Delta_{t}$. This results in a simple modification to the optimal $G$ in Proposition 3, producing a lower bound on $G^*$, that is, $G^*_{Alg} = \min ( \sqrt{k_{Alg} / k_B} , \Delta_{sp})$.
$\textbf{[Comment 1-2] My doubt is: why can we consider the environment stationary during an episode and only non-stationary in between episodes?}$
We first note that our MDP setting (piecewise nonstationary MDP) is also utilized in existing non-stationary RL work $\textbf{[9]}$(Chen et al. 2022). Also, this is an important question that touches on fundamental assumptions that underpin our work. We elaborate as follows:
- Just as continuous signals in signal processing are transformed into discrete steps through quantization, we adopt a similar strategy. There is no difference in terms of accumulated change within a given period, but there may be differences within each sampling interval, where the analog signal changes continuously while the discretized signal is held constant. Our model mirrors this by transferring the accumulated change from one episode to the next, thereby treating individual episodes as quasi-stationary.
- In signal processing, discretization's validity hinges on the sampling frequency outstripping the signal change rate. In our context, this translates to the environment being relatively stationary throughout an episode's real-time span, though changes accrue and carry over to subsequent episodes.
- Our assumption stands because controllability is often feasible. Consider high-frequency trading: with a trading capability of once every second and an MDP horizon of 10, the sampling time spans 10 seconds, a period during which markets might witness substantial shifts. In contrast, if trades occur every millisecond or even faster, a 10-step MDP horizon implies a mere 10 milliseconds of sampling time. Here, expecting the market to remain stationary is more reasonable.
- The MDP formulation, influenced by numerous external variables like actuation frequency, precedes algorithmic design. Provided the environment under study remains relatively stationary during our sampling time per the MDP formulation, our algorithm remains applicable. The foundational assumption is that, within each real-time MDP episode, the environment retains its stationary nature.
- While we treat each episode as stationary, this doesn't negate our engagement with nonstationary environments. Cumulative changes, in effect, shift to the next episode, much like how signal discretization functions. As long as the "baseline resolution" (i.e., the real-time duration of an episode) is relatively brief compared to the intrinsic tempo of the environment, this presumption stands firm. Empirically, our experiments validate this assumption does not hinder performance across the tested environments.
- Though intriguing, the co-design of environment with algorithm remains a complex domain for future exploration.
---
Rebuttal 5:
Title: Further clarification on misunderstandings - comment 2
Comment: $\textbf{[Comment 2] Misunderstanding on the total time elapsed}$
Thank you for highlighting this matter again. We've expanded upon the robot example to clarify any misunderstandings. Given a fixed real-time duration $t_{real} \in [0,T]$, let's define the sampling time as $\Delta_{sp}$. Suppose robot A has a training policy duration $\Delta_t$ (denoted as $\Delta_A$), and robot B's is twice that, i.e., $2\Delta_t$ (denoted as $\Delta_B$).
The number of episodes for robot A is then calculated as $K_A = \lfloor T / (\Delta_t + \Delta_{sp} ) \rfloor$ and for robot B as $K_B = \lfloor T / (2\Delta_t + \Delta_{sp} ) \rfloor$. Notably, the interaction count (number of episodes) varies: $K_A > K_B$, but the policy training time is inversely related, with $\Delta_A < \Delta_B$.
We wish to underline that this __inverse relationship__ is pivotal. It brings forth the trade-off dilemma and illuminates the quest for an optimal training duration, culminating in Proposition 3.
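The episode counts in this robot example can be checked with a short hedged sketch (variable names and the numerical values are illustrative; $\Delta_{sp}$ is `d_sp` and $\Delta_t$ is `d_t`):

```python
import math

# Sketch of the robot example: for a fixed real-time budget T, a robot
# that trains for d_t seconds per episode (plus d_sp seconds of sampling)
# completes K = floor(T / (d_t + d_sp)) episodes. Values are illustrative.

def num_episodes(T, d_t, d_sp):
    return math.floor(T / (d_t + d_sp))

T, d_sp, d_t = 100, 1, 4
K_A = num_episodes(T, d_t, d_sp)      # robot A: training time d_t
K_B = num_episodes(T, 2 * d_t, d_sp)  # robot B: training time 2 * d_t
# K_A = 20 > K_B = 11: robot B trains longer per episode but
# interacts fewer times within the same [0, T].
```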
---
Rebuttal Comment 5.1:
Comment: Dear authors,
I confirm that your answer in point 4 addresses my concerns. The one in point 3 is related to my comment (Comment 2, as you have called it). Point 2 partially addresses my concern, but it is not critical in my evaluation of the paper.
Regarding your answers to my latest comments,
Comment 1-1: I appreciate the clarification of the framework and believe that it could help readers that, like me, find that the framework lacks some realism.
Comment 1-2: I am aware that certain related works consider piecewise non-stationary MDPs, but the novelty of your framework is to consider wall-clock time. Therefore, I consider that, if non-stationarity is not modelled at the intra-episode level, a discussion should be provided for why this is the case. I like the examples that you have given with the discretisation and think that they would find their place within the paper. I also note the example on high-frequency trading and think that this could be the occasion to warn the reader that the framework is suited for environments where $\Delta_{sp}$ is negligible compared to $\Delta_t$.
Comment 2: I understand that the bounds in the different results depend on $K$, but I assumed this quantity was fixed. It seems to be nowhere indicated that $K$ is a variable. Sentences such as "the total K episodes" (l. 99) or "Then, at the end of episode $k \in [K]$" (l. 133) misled me. Moreover, as I understand Proposition 3, it gives, for a fixed $K$, the optimal number of policy iterations. It doesn't seem to answer the question: what is the optimal number of policy iterations given that I have a time budget of $T$? Furthermore, for a fixed $K$, changing $G$ changes the real time elapsed. It seems that this bound, by considering different values of $G$ for a fixed $K$, compares the regret of different processes that are at different points in time. In addition, the definitions of $B_p(\Delta_t)$ and $B_r(\Delta_t)$ depend on $K$, but this is not explicit from the notation. This adds to the belief that $K$ is a constant.
The authors have satisfactorily addressed one of my two main concerns. I note that the other reviewers are leaning toward the positive side. I am happy to follow their decision and raise my score. Note that I still consider my comment 2 not to have been solved yet.
---
Rebuttal 6:
Title: Further clarification on comment 2
Comment: Dear reviewer 4AXz,
Thanks for the constructive and insightful feedback, which helped us improve the quality of the paper.
We have provided further clarification on comment 2.
Although the discussion period is closing soon, we welcome the reviewer's further concerns anytime and we will try to answer as fast as possible.
$\textbf{[Comment2] Misunderstanding on the total time elapsed}$
Thanks for re-raising this issue. We agree that the reader can naturally assume $K$ to be fixed while going through the paper, and that explicitly stating that $K$ is a variable would prevent this misunderstanding. We will clearly state that $K$ is a variable the agent should choose beforehand in the camera-ready version if accepted.
However, it is straightforward to restate the regret bound in terms of $T$, since $K$ and $T$ are related in closed form, i.e., $K := \lfloor T / (\Delta_{sampling} + \Delta_{training})\rfloor$ [Eq. (1)]. Please note that inserting Eq. (1) into all theorems and propositions of the paper also guarantees that their regret bounds are sublinear in $T$. This shows that rewriting all results stated in terms of $K$ in terms of $T$ does not harm our work. It also leads to straightforward answers to the reviewer's additional concerns, as follows.
$\textbf{[Comment2-1] It doesn't seem to answer the question: what is the optimal number of policy iterations given that I have a time budget of $T$}$
__The optimal number of policy iterations does not change__. Intuitively, this is because it depends on how fast the environment changes ($B_r(G),B_p(G)$) and on environment (or MDP) constants ($\alpha_r,\alpha_p$), as stated in Proposition 3. Namely, for a given $K$, denote its optimal number of policy iterations by $G^*$. Then any $T \in [K(\Delta_{sampling} + \Delta_{training}),(K+1)(\Delta_{sampling} + \Delta_{training}))$ has the same $G^*$.
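The constancy of $K$ (and hence of $G^*$) over each such interval of $T$ can be checked numerically; here is a minimal hedged sketch (the per-episode duration `d` is an illustrative stand-in for $\Delta_{sampling} + \Delta_{training}$):

```python
import math

# Sketch: K as a function of the time budget T, via Eq. (1):
#   K(T) = floor(T / d),  d = d_sampling + d_training.
# The value of d is illustrative only.

def K_of_T(T, d):
    return math.floor(T / d)

d = 5.0
K = 7
# K(T) stays equal to 7 for every T in the interval [K*d, (K+1)*d),
# so the optimal G* attached to K = 7 is unchanged across that interval.
same_K = all(K_of_T(T, d) == K for T in [35.0, 36.3, 39.9])
```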
$\textbf{[Comment2-2] Why the theoretical results are stated in terms of $K$, not $T$}$
The main reason we stated all results in terms of $K$ is to emphasize that, __regardless of how the agent chooses $K$, we can compute the optimal $G^*$ while also guaranteeing sublinear regret in $K$.__
We also provide further clarification on some sub-concerns in Comment 2, as follows.
- __Moreover, as I understand proposition 3, it gives, for a fixed $K$__ $\rightarrow$ What we meant by a fixed $K$ is that $K$ is pre-determined by the agent, so it is fixed with respect to the agent during the whole time $t \in [0,T]$, not that $K$ is given as a constant.
- __Sentences such as during "the total K episodes" (l. 99) or "Then, at the end of the episode" (l. 133) mislead me__ $\rightarrow$ Same reason as above: "total K episodes" and "end of the episode" are all with respect to the "agent", since the agent executes the 1st episode after determining the optimal $K$ (= optimal $G$).
- __The definition of the time-elapsing variation budget depends on K__ $\rightarrow$ Yes, this is true. For a fixed time duration $[0,T]$, the agent can choose $K$ and the interaction time instances {$ t_1,t_2,\dots,t_K $}. How many times the agent interacts with the environment (=$K$) yields a different evaluation of the environment's non-stationarity (= time-elapsing variation budget). This is well illustrated by our robot example.
---
Rebuttal Comment 6.1:
Comment: Dear authors,
You have addressed my last concern with this answer. I will align on the acceptance side. I stress, however, that the authors must work on the clarity/presentation of the paper. This has caused me some misunderstandings, and the issue has been raised by the other reviewers as well. | null | null | null | null | null | null
Statistical and Computational Trade-off in Multi-Agent Multi-Armed Bandits | Accept (poster) | Summary: This paper studies the Multi-Agent Multi-Armed Bandits problem with a factor-graph reward structure, which is motivated by the real-world antenna tilt optimization problem. It first proposes an asymptotic lower bound that involves an optimization problem with an exponential number of variables and constraints. Then, this optimization problem is relaxed by approximation techniques motivated by probabilistic graphical models. Finally, this paper proposes a learning algorithm based on the approximated lower bound and supports it with both synthetic and real-world experiments.
Strengths: - The relaxation contains novel applications of various techniques from probabilistic graphical models to this concrete problem.
- The proposed algorithm can be computationally efficient and is flexible to lie at different points on the trade-off curve between statistical and computational complexity.
- The proposed algorithm enjoys good asymptotic regret and the theoretical guarantee is corroborated with real-world experiment.
Weaknesses: Although I don't view this as a serious issue, it seems this paper focuses a lot on how to solve the problem in a computationally efficient way, while the analysis of the algorithm's performance is somewhat limited. For example, we do not know how far the approximation is from the exact lower bound, or how the algorithm with the locally tree-like reduction will perform in general.
### Suggestions on Writing
- It may be better to briefly mention the rough magnitude of $C^{\diamond}_{\theta}(m)$ again after stating Theorem 6.1.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - What is the intuition behind using $\min_{a_i\in\mathcal{A}\_i}\frac{N_{t, i, a_i}}{w_{t, i, a_i}}$ as the action selection for exploration? In particular, how does this selection rule approximately satisfy the lower-bound interpretation that action $a$ should be selected about $v^\star_a\log(T)$ times in expectation?
- It seems the variables $w=(w_i)\_{i\in[N]}$ are undefined for $C^{\diamond}_{\hat{\theta}_t}(m)$ when $\diamond=\mathrm{L}$. How will the ESM algorithm proceed when $\diamond=\mathrm{L}$?
- Is it possible to analyze how the algorithm will perform when $\diamond=\mathrm{L}$ and the factor graph is not acyclic? (It certainly cannot be better than the lower bound.) Also, can we see how it will perform empirically?
- Is it possible to quantify the difference between $C^\star_{\theta}$ and $C^{\diamond}_{\theta}(m)$? That is, how far is the approximation from the exact lower bound?
- In the antenna tilt experiment, does the choice of action set $\mathcal{A}_i=\lbrace 2^\circ, 7^\circ, 13^\circ\rbrace$ contain any special considerations (why these three degrees)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations are addressed well in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer pqkD for the constructive and comprehensive review. We address questions in the order they are presented.
- The intuition here is that in the exploration phase, we sample local actions that are the farthest from satisfying $N_{t,i,a_i} \approx w_{t,i,a_i} \log(T)$, for all agents $i\in[N]$. This is naturally achieved by selecting local actions that minimize the ratio $N_{t,i,a_i}/w_{t,i,a_i}$. Now, the interpretation of the lower bound in terms of group variables $\tilde{v}$ (see Lem 4.2 and lines (124-126)) is that an optimal algorithm should sample each group arm $a_e$ about $\tilde{v}^\star\_{e,a_e}\log(T)$ times. To see why the local action selection above will eventually satisfy this, we need to consider that (i) the condition in the exploitation phase of ESM enforces that $N_{t,e,a_e} \ge \tilde{v}\_{t,e,a_e}(1+\gamma)\log(T)$, and (ii) $C^\diamond\_{\hat{\theta}\_t}(m)$ is defined as an optimization problem over a constraint set $\tilde{\mathcal{V}}\_{\diamond}$ which imposes consistency between local variables $w$ and group variables $\tilde{v}$. These constraints are key to ensuring that this local sampling will ultimately achieve $N\_{t,e,a_e} \approx \tilde{v}\_{t,e,a_e}(1+\gamma) \log(T)$. We will include this discussion in the final version of the paper.
- For $\diamond=L$, the variables $w=(w_{i,a_i})\_{i\in[N], a_i \in\mathcal{A}_i}$ are defined as $w\_{i,a_i} = \sum\_{b\in\mathcal{A}:b_e \sim a_i} \tilde{v}\_{e,b_e}, \forall e\in[\rho],\forall i\in\mathcal{S}_e,\forall a_i \in\mathcal{A}_i$ (recall that the notation $a_e\sim a_i$ means that the $i$-th element of $a_e$ equals $a_i$). This definition directly follows from the one of $\tilde{\mathcal{V}}\_L$ in Sec. 5.1.1.
In practice, when running ESM, we proceed as follows: to solve the optimization problem defined by $C^L\_{\hat{\theta}\_t}(m)$, we instantiate both the group variables $(\tilde{v}\_{t,e,a_e})_{e\in[\rho], a_e \in\mathcal{A}_e}$ and the local variables $(w\_{t,i,a_i})\_{i\in[N], a_i\in\mathcal{A}_i}$, and the corresponding constraints, as defined by $\tilde{\mathcal{V}}\_{L}$, so that both variables are naturally returned by the solver. We will include this clarification in the final version of the paper.
- Thank you for your question. We acknowledge that quantifying the tightness of the approximation is an important point, which requires further investigation. However, we believe that addressing this point in its generality (for any $m$ and for both $\diamond = \\{L,MF\\}$) is very challenging due to the intricacy of the lower bound optimization problems involved in the approximations. Observe however that we presented different results on the quantification of our approximations in the paper, as summarized in the following points.
(i) We proved in Lem. 5.4 that for any $m$ and any $\diamond \in \\{L, MF\\}$, $C^\diamond\_\theta(m+1)\le C^\diamond\_\theta(m)$, i.e. the approximations tighten as we increase $m$, and that $C^\diamond\_\theta(K^N-1) = C^\diamond\_\theta$.
(ii) We proved in Lem. 5.1 that for acyclic factor graphs, it holds that $C^L_\theta = C^\star_\theta$ and hence the approximation is tight.
(iii) We proved in Lem. 5.3 the following scaling for the MF approximation: for any $\psi$ we have that $$C_\theta^{\psi-MF} \le \rho/\Delta_\min^2 \sum_{e\in [\rho], a_e\in\mathcal{A}_e}\theta_e(a_e^\star)- \theta_e(a_e).$$
We believe that these results complement each other and provide an overall understanding of the statistical properties of our approximations.
Finally, we would like to highlight that the main contribution of the paper is to break the combinatorial nature of the problem. In other words, going from a regret scaling as $\Theta(K^N)$ to one scaling as $\Theta(\rho K^d)$ (this scaling is guaranteed with our approximations) in a computationally efficient manner is the main contribution of the paper.
- Thanks for the very interesting question. Analyzing the performance of ESM with $\diamond=L$ for factor graphs with cycles is actually one of the future directions we want to explore. Although we do not have any conclusive results on the analysis yet, we present a few empirical results in the attached document.
We consider the cyclic factor graph in Fig. 3 (left) of the document accompanying this rebuttal with $K=3$ local actions. We compare the performance of ESM for $\diamond=\\{\star,L\\}$ and $m=\tilde{A}$. In Fig. 3 (center), we show the lower bound as $C^\diamond_\theta\log(T)$ for $\diamond=\\{\star,L\\}$ and in Fig. 3 (right), we show the results for the regret of ESM. As expected, although $C^L\_\theta\le C^\star\_\theta$, $ESM(\diamond=L)$ cannot outperform $ESM(\diamond=\star)$.
In general, for many instances, we empirically observed that $C^{L}\_{\theta}$ and $C^\star\_\theta$ are usually close. For example, in Fig. 12 (App. K), we compare these two quantities for $100$ randomly generated instances (of the group means) for a ring factor graph (see Fig. 3). As shown in this figure, $C^L\_\theta$ is very close to or indistinguishable from $C^{\star}\_{\theta}$ for all the instances. We observed similar behavior for other types of cyclic graphs. Based on these empirical observations, we believe it might be possible to use ESM with $\diamond=L$ even for cyclic factor graphs, and we will explore this direction in the future. We will include the above discussion in the final version of the paper.
- The choice of the tilt angles in our experiments has no particularly deep explanation and is based on simulation-dependent observations.
The choice of the maximum tilt ($13^\circ$) is also backed by the intuition that one may not want to select tilts with excessive values as these create blind spots in the coverage where network users are not connected to the network (see also App. L for additional details). We will include this discussion in the final version of the paper.
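To make the local action-selection rule from the first point above concrete, here is a minimal hedged sketch (the data structures and numerical values are illustrative, not ESM's actual implementation; ESM additionally maintains the group counts and the exploitation test):

```python
# Sketch: in the exploration phase, each agent i picks the local action
# minimizing the ratio N[i][a_i] / w[i][a_i] between its empirical count
# and its target allocation from the (approximated) lower bound.
# Counts and allocations below are illustrative.

def explore_action(N_i, w_i):
    """Pick the local action a_i minimizing N_i[a_i] / w_i[a_i]."""
    return min(w_i, key=lambda a: N_i.get(a, 0) / w_i[a])

N_i = {"a1": 10, "a2": 3, "a3": 7}        # empirical local counts
w_i = {"a1": 2.0, "a2": 1.5, "a3": 1.0}   # target allocation variables
a = explore_action(N_i, w_i)  # "a2": ratio 2.0, vs 5.0 for a1 and 7.0 for a3
```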
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you very much for your rebuttal and my concerns have been well-addressed!
---
Reply to Comment 1.1.1:
Comment: We are glad to have addressed your concerns. Thank you very much for taking our rebuttal into account. | Summary: This paper proposes the ESM algorithm for the regret minimization problem in Multi-Agent Multi-Armed Bandits, where the rewards are defined through a factor graph. The ESM algorithm attains a delicate trade-off between statistical efficiency and computational efficiency in Multi-Agent Multi-Armed Bandits. The ESM algorithm is inspired by simple upper bounds on the regret lower bound, which characterizes the minimal expected number of times each global action should be explored and is computationally intractable to exploit in the design of efficient algorithms. By tuning this upper bound, ESM explores the trade-off between the achievable regret and the complexity of computing the corresponding exploration process. ESM has a regret that asymptotically matches this upper bound. The regret and computational complexity of ESM are assessed numerically, using both synthetic and real-world experiments in radio communication networks.
Strengths: This paper establishes a lower bound for the Multi-Agent Multi-Armed Bandits, where the rewards are defined through a factor graph.
Simple upper bounds on the regret lower bound are established, which enables the design of ESM.
An asymptotic regret upper bound for ESM is derived.
Weaknesses: The regret lower bound looks not new and the proofs do not contribute new techniques. They look like straightforward applications of techniques from structured bandits.
Though upper bounds of the regret lower bound are established, the tightness of the upper bound is not proved. It is unknown how tight the upper bound is. Thus, it is unknown how much statistical efficiency is lost.
The regret upper bound of ESM is asymptotic, which looks much weaker than a finite-round regret upper bound.
The gap between the regret upper bound of ESM and the regret lower bound is not quantified. It is unknown how tight the regret upper bound is.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the weakness part.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: See comments above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer EJNU for the comprehensive and detailed review. We address the main questions in the following.
- *"The regret lower bound looks not new and the proofs do not contribute new techniques. They look like straightforward applications of techniques from structured bandits."*
We agree that the lower bound is derived using classical techniques. We clearly state, after Theorem 4.1, that the bound is derived by using "classical change-of-measure arguments" and again in App. A, that the bound "is a direct consequence of an analogous bound in the combinatorial semi-bandit feedback setting, first given in [9, Th. 1] and leverages general techniques for controlled Markov chains".
However, we do not understand why the reviewer is considering this point as a weakness. Presenting the lower bound serves two important purposes in the paper:
1. Previous works in the literature on MAMABs do not provide any lower bound.
2. More importantly, the lower bound is used in our paper as a starting point for the lower bound approximations and the algorithm design.
- *"Though upper bounds of the regret lower bound are established, the tightness of the upper bound is not proved. It is unknown how tight the upper bound is. Thus, it is unknown how much statistical efficiency is lost -- The gap between the regret upper bound of ESM and the regret lower bound is not quantified. It is unknown how tight the regret upper bound is."*
Thank you for raising this point. We are aware that a complete analytical quantification of the lower bound approximation is lacking and we believe that this is an important point for which further investigation is required. We would like to mention that addressing this point in its generality (for any $m$ and for both $\diamond \in \\{\text{L, MF}\\}$) is very challenging due to the intricacy of the lower-bound optimization problems involved in the approximations.
However, we would also like to mention that we presented different results on the quantification of our approximations as summarized in the following points.
(i) We proved in Lem. 5.4 that for any $m$ and any $\diamond \in \\{\text{L, MF}\\}$, $C^\diamond_{\theta}(m+1)\le C^\diamond_{\theta}(m)$, i.e. the approximations tighten as we increase $m$, and that $C^\diamond_{\theta}(K^N-1)\le C^\diamond_{\theta}$.
(ii) We proved in Lem. 5.1 that for acyclic factor graphs, it holds that $C^L_\theta = C^{\star}_{\theta}$ and hence the approximation is tight.
(iii) We proved in Lem. 5.3 the following scaling for the MF approximation: for any $\psi$, we have that $$C^{\psi\text{-MF}} \le \rho/\Delta_{\min}^2 \sum_{e\in [\rho], a_e\in\mathcal{A}_e}\theta_e(a_e^\star)- \theta_e(a_e).$$
We believe that these results complement each other and provide an overall understanding of the statistical properties of our approximations.
In Fig. 2 of the accompanying document attached to this rebuttal, we show a sample of the approximation ratio $C^\diamond_{\theta}(m)/C^\star_{\theta}$ for $m \in \\{1, 3, 18, 37, 75, 100 \\}$, and $\diamond \in \\{\text{MF}, \text{L}\\}$ for different graph topologies. The results show that $C^{\text{MF}}_{\theta}(m)$ is close to $C^{\star}\_\theta$ (they are equal up to a small constant) and decreases with $m$. The same holds for $C^\text{L}\_{\theta}(m)$, and for $m$ large enough the approximation is tight: $C^L\_{\theta}(m) = C^{\star}\_{\theta}$, as predicted by our results.
Finally, we would like to mention that the main contribution of the paper is to break the combinatorial nature of the problem. In other words, going from a regret scaling as $\Theta(K^N)$ to one scaling as $\Theta(\rho K^d)$ (this scaling is guaranteed with our approximations) in a computationally efficient manner is the main contribution of the paper.
- *"The regret upper bound of ESM is asymptotic, which looks much weak than the finite round regret upper bound"*
Although we present the regret upper bound of ESM in the asymptotic form (as $T\to \infty$), we also obtain a finite time upper bound in App. D, as a byproduct of the proof of Thm. 6.1. The bound is: For any $\kappa >0$, and $T\ge 1$,
$$
R^{\pi}(T) \le \theta(a^\star) \left(M(\gamma,\rho) + \frac{2\tilde{A}}{\varepsilon\delta(\kappa)^2}\right) + ((1+\kappa)C^{\diamond}_{\theta} + 2 \varepsilon\psi^{\diamond}(\theta)(1+\gamma))\log(T),
$$
where $M(\gamma,\rho)$ is a constant depending only on the exploration constant $\gamma$ and the number of groups $\rho$, $\psi^{\diamond}(\theta)= \tilde{A} \\|\tilde{v}^\diamond(\theta) \\|\_\infty \sum_{e\in[\rho],a_e\in \mathcal{A}_e} (\theta_e(a^\star_e) - \theta_e(a_e))$ (see App. D for details). We will highlight the finite-time regret upper bound by using the extra space in the final version of the paper.
---
Rebuttal Comment 1.1:
Comment: I do not buy your claim that "Previous works in the literature on MAMABs do not provide any lower bound." There is a vast literature on MAMABs. It has been shown that the DPE algorithm is optimal for some variants of MAMABs, in the sense of matching upper and lower bounds [1]. Though they consider a setting different from this paper's, it does not mean previous works on MAMABs do not provide any lower bound.
To the best of my knowledge, MAMABs normaly study the setting that agents do not communicate or with restricted communication. It seems that this paper does not consider such factors. To me, the setting of this paper is more like a variant of combinatorial bandits instead of MAMABs.
This paper is placed more like a theoretical paper, since the experiments are just numerical simulations. From a theoretical point of view, showing or even sufficient discussion on tightness of bound is of great value. Furthermore, if the bound is not tight, it is difficult to judge how much statistical efficiency is lost and how insightful the result is.
[1] Po-An Wang et al. *Optimal Algorithms for Multiplayer Multi-Armed Bandits*. In Proc. of AISTATS, 2020.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comments.
- *I do not buy your claim that "Previous works in the literature on MAMABs do not provide any lower bound." There is a vast literature on MAMABs. It has been shown that the DPE algorithm is optimal for some variants of MAMABs, in the sense of of matching upper and lower bound[1]. Though they consider a setting different from this paper, it does not mean previous works on MAMABs do not provide any lower bound.*
Regarding the lower bound. Apologies for the confusion in our rebuttal. We agree that for the MAMAB problem considered in [1] (which is very different from ours), lower bounds exist. What we intended to say in the rebuttal is that there exists no lower bound for the MAMAB problem considered in our paper, and also studied in [2, 3, 35]. What is important is that, as mentioned in our rebuttal, the lower bound is used in our paper as a starting point for the lower bound approximations and the algorithm design.
- *To the best of my knowledge, MAMABs normaly study the setting that agents do not communicate or with restricted communication. It seems that this paper does not consider such factors. To me, the setting of this paper is more like a variant of combinatorial bandits instead of MAMABs.*
The terminology “MAMAB” has indeed been used for different kinds of problems. In Wang et al. [1] (mentioned in your comments), all agents share the same action set and the reward distributions are the same across agents. The only interactions between the agents are when multiple agents select the same action (they experience a collision). That is why in [1], limiting the communication between agents is possible. The MAMAB we study is, from the agents’ coordination and learning perspective, more challenging because the rewards received by an agent truly depend on the actions selected by neighboring agents (i.e., agents in the same group). We agree that our MAMAB can be seen as a variant of a combinatorial bandit problem, as we explain at the beginning of the introduction. We also make a detailed connection between our problem and combinatorial bandits in Appendix E.
- *This paper is placed more like a theoretical paper, since the experiments are just numerical simulations. From a theoretical point of view, showing or even sufficient discussion on tightness of bound is of great value. Furthermore, if the bound is not tight, it is difficult to judge how much statistical efficiency is lost and how insightful the result is.*
Although the paper investigates the MAMAB problem theoretically, it also provides extensive simulations on the antenna tilt optimization problem in a realistic radio network simulator. We believe that these experiments are valuable to validate the practical applicability of our work. Note that experiments of this kind are not common in the bandit literature.
About the tightness of the lower bound approximations, as we stated in the rebuttal, we believe that a complete characterization of the approximation is difficult. Nevertheless, we summarized our main results regarding the tightness of our approximations in the three points (i), (ii), and (iii) in the rebuttal.
Finally, we would like to state that the main contribution of the paper is to break the combinatorial nature of the problem. Without exploiting the structure of the problem, one would get a regret scaling as $\Theta(K^N)$, hence exponentially growing with the number of agents. With our approach, we provably design a computationally efficient algorithm that achieves a regret scaling as $\Theta(\rho K^d)$.
Thank you again for the discussion. We would be happy to answer further questions if any. | Summary: This paper focuses on an interesting problem, which is a version of multi-armed bandits in which there are multiple agents and an action needs to be selected for each one of them. This is a problem that has been studied before in the literature, and the authors here provide improved bounds on the regret by approximately a factor of $K^d$, where $K$ is the number of actions per agent and $d$ is the degree of the graph that defines the reward function. Moreover, the authors' algorithm gives a knob that controls the tradeoff between the regret and the time spent solving the multi-agent optimization problem of each round.
The authors finally provide an experimental evaluation of their algorithm on a synthetic and a real dataset. The plots show that the regret is low and improves over a simple baseline and over previous work.
Strengths: - The paper improves over the state of the art in theoretical guarantees
- The paper is well written, easy to follow
- The experimental evaluation shows good results
Weaknesses: - The theoretical results are shown to be an improvement against [2] and [36] but the experimental part does not compare against [2] and [36].
- The related work pointers seem incomplete. The vast literature of combinatorial bandits seems to be very related (again combinatorial problems need to be solved in each round) but is not mentioned.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Why do you not compare against [2] and [36] in the experiments and rather choose weaker baselines?
- How does your problem compare against combinatorial bandits and why can't techniques from there be used in your setting?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer 7bJY for the comprehensive and detailed review. We address the main questions in the following.
- *"The theoretical results are shown to be an improvement against [2] and [36] but the experimental part does not compare against [2] and [36]. Why do you not compare against [2] and [36] in the experiments and rather choose weaker baselines?"*
Thank you for noticing this issue, which is due only to a mistake we made when referencing a paper. Unfortunately, we realized that we misplaced a reference in the experiment section: [3] should be replaced by [2] in line 255. In our experiments, we do not choose a weaker baseline: the MAUCE algorithm shown in Figs. 4, 5, and 6 actually corresponds to reference [2]. We thank the reviewer for highlighting this issue, which we will fix in the final version of the paper.
Now, to the best of our knowledge, the MAUCE algorithm [2] is the state of the art in MAMABs and the best competitor to our algorithm. As stated in lines (73-75), the HEIST algorithm proposed in [36] is very similar to MAUCE: it is a UCB-type algorithm but with a different constant in the UCB term. However, HEIST has worse regret guarantees for most MAMAB instances: its asymptotic regret scales as $O(K^N\Delta_{\max}/\Delta_{\min}\log(T))$, while the scaling of the regret of MAUCE is $O(\rho^2 K^d\Delta_{\max}^2/\Delta_{\min}^2\log(T))$ (please refer to Sec. 2 and App. M.1 for details). Hence, in the experiments, we chose to compare our algorithm to MAUCE only. However, for the sake of completeness, we have now performed a few additional experiments testing the performance of HEIST [36]. The results of these experiments are included in Fig. 1 of the accompanying document attached to this rebuttal. We will include these experiments in the final version of the paper.
- *"The related work pointers seem incomplete. The vast literature of combinatorial bandits seems to be very related (again combinatorial problems need to be solved in each round) but is not mentioned. How does your problem compare against combinatorial bandits?"*
Thank you for your question. Although we acknowledge your concern, we believe that related works in combinatorial bandits (and the relationship between combinatorial bandits and MAMABs) have been covered in the paper and the appendices, as we explain below.
1. First, the connection between combinatorial bandits and MAMABs is presented in App. E, where we explain that our MAMAB setting can be interpreted as a particular instance of combinatorial bandits with semi-bandit feedback, and we provide an example of this connection.
2. Second, in lines (78-87) of the related work section, we mention different related works in the combinatorial semi-bandit feedback literature ([24, 9, 11, 12, 35]) and we discuss the closest and most relevant work [11] in the main body of the paper. In the current submission, other related works in combinatorial bandits are discussed in detail in App. M.2, for reasons of space.
We will include this discussion in the main body of the paper by using the extra space in the final version of the paper. We have further included a few additional references in the answer to the next question, which are reported at the end of this rebuttal. Moreover, should the reviewer be aware of other relevant works in combinatorial bandits that are not already covered in the current manuscript, we would welcome the opportunity to include them.
- *"Why can't techniques from there be used in your setting?"*
We believe that this point has been addressed in App. M.2 and App. H, and we are glad to provide further clarification. As the reviewer suggests, in principle one could indeed use combinatorial bandit methods such as those referred to in Sec. 2 or App. M.2, but these would most likely yield computationally inefficient algorithms, as we explain in the following.
For example, combinatorial bandit algorithms based on Thompson Sampling (TS) techniques [34, a, b] usually require, at each time step, a maximization step of the type $\max_{a\in\mathcal{A}} \sum_{e\in[\rho]} \theta_e(a_e)$. We show in Lem. H.1 that, for general factor graphs, performing this operation is $\mathcal{NP}$-hard.
Other combinatorial bandit algorithms, employing e.g. Upper Confidence Bound (UCB) methods such as [16, 9, c], often involve maximization steps that are even harder than the one presented above. For example, the algorithms proposed in [16, 9, c] require solving an index maximization problem of the type
$$
\max_{a\in\mathcal{A}} \sum_{e\in[\rho]} \theta_e(a_e) + \sqrt{\sum_{e\in[\rho]} c\log(T)/N_{e,a_e}(t)}.
$$
For many combinatorial structures (e.g. even for $m$-sets), there is no polynomial time algorithm to solve the task above [11]. We will include the above discussion in the final version of the paper.
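As a toy illustration of why such an index is hard to maximize, the brute-force computation below enumerates all $K^N$ joint actions of a small factored instance (the factor graph, values, and counts here are entirely hypothetical and only illustrate the exponential enumeration, not any cited algorithm):

```python
import itertools
import math

# Toy factored instance: N agents with K actions each, and rho = 3
# pairwise factors on a line graph. All numbers are made up.
N, K = 4, 3
factors = [(0, 1), (1, 2), (2, 3)]
pairs = list(itertools.product(range(K), repeat=2))
theta = {e: {a: 0.1 * (a[0] + a[1]) for a in pairs} for e in factors}
counts = {e: {a: 5 for a in pairs} for e in factors}   # pulls so far
c, T = 2.0, 1000

def ucb_index(joint):
    """Sum of local mean estimates plus a joint confidence bonus."""
    mean = sum(theta[e][(joint[e[0]], joint[e[1]])] for e in factors)
    bonus = math.sqrt(sum(c * math.log(T) / counts[e][(joint[e[0]], joint[e[1]])]
                          for e in factors))
    return mean + bonus

# Brute force is the only generic option: K**N = 81 joint actions here,
# but the count grows exponentially with the number of agents N.
best = max(itertools.product(range(K), repeat=N), key=ucb_index)
print(best)  # (2, 2, 2, 2): every local mean is maximized at action 2
```

Because the bonus couples all factors under a single square root, the objective does not decompose across agents, which is why no polynomial-time oracle is available for general structures.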
**Additional references**
[a] Siwei Wang and Wei Chen, *Thompson sampling for combinatorial semi-bandits*. In Proc. of ICML, 2018.
[b] Pierre Perrault, Etienne Boursier, Vianney Perchet, Michal Valko, *Statistical Efficiency of Thompson Sampling for Combinatorial Semi-Bandits*. In Proc. of NeurIPS, 2020.
[c] Wei Chen, Yajun Wang, Yang Yuan. *Combinatorial Multi-Armed Bandit: General Framework, Results and Applications*. In Proc. of ICML, 2013.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, you indeed answered the raised questions. I will revise my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Many thanks for considering our rebuttal and for revising the score. | Summary: This paper consider the regret minimization problem in MAMABs, which typically has an exponentially large action space in relation to the number of agents. The authors first establish a lower bound result for general MAMAB by generalizing techniques from single agent bandit literature. More precisely, the lower bound is obtained by solving a convex optimization problem with exponentially many variables and constraints. Next, the authors present an equivalent characterization of the program for reward structures with acyclic factor graphs and provide mean-field approximation programs for general graph models. Finally, the authors design computationally tractable algorithms based on the proposed equivalent(approximation) program for acyclic factor graphs(general graphs).
Strengths: 1. Most writing is clear and easy to follow, even for those without extensive knowledge in MAMABs. Most claims and definitions are well-supported by existing literature.
2. Both the proposed lower bound and the proposed approximation scheme are interesting.
3. The authors also provide simulations and real-world applications, which further enhance the relevance and practicality of the proposed methods.
Weaknesses: While the paper overall is well-written and informative, I found some parts of the presentation to be unclear. Specifically, there were several points where I had questions and would have appreciated more explanation or clarification.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What is the trade-off when letting $\varepsilon, \gamma\to 0$ in Algorithm 1? It appears that Theorem 6.1 always encourages selecting small values for $\varepsilon,\gamma$
2. In the acyclic factor graphs setting, while the proposed characterization is statistically tight, what is the computational complexity of the proposed algorithm to achieve optimal regret(i.e. select $m = K^N - 1$)? Will it achieve the same polynomial-time complexity as in [11] in this setting?
3. In line 155, the authors said "Unfortunately, for general graphs, we have that $C^L_\theta < C_\theta^\star$ (a direct consequence of [36, Prop. 4.1]), and hence it is impossible to devise algorithms based on this approximation."
I am wondering why it is impossible to propose an approximation algorithm based on the $C^L_\theta$ value (since the proposed approximation program in the mean-field approximation is also not guaranteed to have a value equal to $C_\theta^\star$)
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer Pi4y for the constructive and comprehensive review. We clarify the main questions in the following.
1. *"What is the trade-off when letting $\varepsilon,\gamma \to 0$ in Algorithm 1? It appears that Theorem 6.1 always encourages selecting small values for $\varepsilon,\gamma$*
The choice of $\varepsilon, \gamma$ in our algorithm does not really involve any trade-off. These parameters are "safety parameters" that should just be strictly positive for our analysis to hold. The parameters impact the amount of exploration performed by ESM. When decreasing both these parameters the exploration of ESM also decreases, and this in turn improves the statistical performance of the algorithm. More precisely, in the proof of Thm. 6.1 (see App. D), we show that the regret upper bound is minimized when $(\varepsilon,\gamma) \to (0,0)$. We will include these clarifications in the final version of the paper.
2. *"In the acyclic factor graphs setting, while the proposed characterization is statistically tight, what is the computational complexity of the proposed algorithm to achieve optimal regret (i.e. select $m = K^N-1$)? Will it achieve the same polynomial-time complexity as in [11] in this setting?*"
In the acyclic factor graph setting, the leading term in the computational complexity of ESM is given by the cost of determining the $m$ smallest gaps. Finding the best algorithm for solving this task is still an active area of research (see [15] and references therein for details). For example, the elim-$m$-opt algorithm has complexity $O((m + 1)NK^{A_{\mathcal{O}}+1})$ (see line 203 and App G). Hence, for $m = K^N-1$ the computational complexity of ESM in this case would be exponential in $N$. In fact, in practice, our method is meant to be applied when $m$ does not grow exponentially in $N$ (we will clarify this), because if it does, then the algorithm we obtain is at least as complex as the initial lower bound problem. As explained in Sec. 5.2, in practice selecting $m \simeq \tilde{A}$ yields a good trade-off between statistical complexity and computational complexity (see Fig. 2), and the latter will be polynomial in this case.
About the computational complexity of [11], we believe that even for acyclic factor graphs, the algorithm of [11] would not lead to a polynomial-time computational complexity. Indeed, to prove a polynomial sample complexity of their algorithm, the authors postulate the existence of a *"budgeted linear maximization"* oracle (Assumption 2 in [11]), and we are not aware of the existence of such an oracle for these types of MAMABs instances. We will include these considerations in the final version of the paper.
3. *"In line 155, authors said "Unfortunately, for general graphs, we have that $C^L_{\theta} < C^\star_{\theta}$ (a direct consequence of [36, Prop. 4.1]), and hence it is impossible to devise algorithms based on this approximation." I am wondering why it is impossible to propose an approximation algorithm based on the $C^L_{\theta}$ value (since the proposed approximation program in the mean-field approximation is also not guaranteed to have a value equal to $C^{\star}_{\theta}$)."*
This is an interesting question! We cannot devise algorithms based on $C^{\text{L}}_{\theta}$ for MAMABs instances with cyclic factor graphs because it would contradict the lower bound result in Thm. 4.1.
To see why, note that:
(i) for cyclic factor graphs, we have that $C^L_{\theta} < C^\star_{\theta}$, and
(ii) the lower bound dictates that any (consistent) algorithm must satisfy $R^{\pi}(T)/\log(T) \ge C^\star_{\theta}$ as $T\to \infty$ (see Thm. 4.1).
Now, if one could devise an ESM-based algorithm targeting $C^L_{\theta}$, we would get an algorithm whose upper bound attains $R^{\pi}(T)/\log(T) \le C^L_{\theta} < C^\star_{\theta}$ as $T\to \infty$ (see Thm. 6.1), which clearly contradicts the lower bound.
On the other hand, for the Mean-Field (MF) approximation, we have that $C_{\theta}^{\psi\text{-MF}} > C^\star_{\theta}$, for any $\psi$, and hence the problem described above for $C^L_{\theta}$ is avoided. We will include these clarifications in the final version of the paper.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the helpful response. My original score remains unchanged. | Rebuttal 1:
Rebuttal: We thank the reviewers for their comprehensive and detailed reviews. We address the reviewers' questions individually in the rebuttals below. As requested by the reviewers, we have run additional experiments during the rebuttal period. We present the results of these experiments in the accompanying document attached to this rebuttal.
Pdf: /pdf/0531e547039d7749023c0be245a25e02ceca819e.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Dual Self-Awareness Value Decomposition Framework without Individual Global Max for Cooperative MARL | Accept (poster) | Summary: The paper focuses on value decomposition without IGM constraints by proposing Dual self-Awareness Value dEcomposition (DAVE) framework. DAVE is inspired by dual self-awareness studied in psychology and uses an ego model, i.e., a policy for each agent for actual action selection and an alter ego model, i.e., a value function to address credit assignment without the IGM constraint. To avoid premature convergence to poor local optima, an anti-ego exploration method is proposed. DAVE is evaluated in several domains and compared with many state-of-the-art MARL approaches.
Strengths: The paper proposes an interesting approach inspired by knowledge from psychology.
The evaluation is structured well by starting with small and tractable problems first and scaling up. Many state-of-the-art MARL algorithms are used as baseline for sufficient comparison.
Weaknesses: The paper strongly focuses on the omission of IGM, which is motivated as a limitation of MARL. However, DAVE introduces additional complexity that at least doubles the computational effort compared to the baselines, e.g., two networks are used per agent and the exploration mechanism adds more parameters on top of that (note that despite the doubled parameters, DAVE neither performs twice as well nor learns twice as fast to fully compensate for the additional effort). Therefore, the fairness of the comparison is questionable. Furthermore, I am not sure about the purpose of the individual Q-functions, since they are not used to maximize the joint Q function as usual. The ego model/policy is used instead; thus, the individual Q-functions could be completely omitted, as far as I understood the text.
The initial motivation of dual self-awareness in the introduction implies that ego and alter ego models have a symmetric relationship. However, the ego model represents a policy while the alter ego model represents a value function, which is sometimes called "alter ego model" and sometimes "alter ego value function model", which is confusing to read.
The maximization of the joint Q function through sampling reminds me of FACMAC, where agents select their actions in a similar way (albeit for continuous action spaces).
Despite the nice structure of the experimental presentation, the SMAC results are not particularly overwhelming because SMAC is a widely solved benchmark, where most baselines already achieve high win rates. Furthermore, the results are not in line with previous work, e.g.
- MAVEN reaches an average win rate of 40% at 5 million steps in 6h_vs_8z in the original paper, while it remains at a 0% win rate in this paper with the same amount of steps.
- QMIX reaches 100% win rate in 10m_vs_11m and 80% in 3s_vs_5z in the SMAC paper, while it is notably lower in this paper.
The paper heavily depends on the appendix indicating its lack of self-containment. The paper's content or the approach need more simplification to give a sense of completeness of the presentation.
Figure labels and numbers are only readable on a sufficient zoom scale which is problematic when printed.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: I do not fully understand the purpose of the alter ego models. Since the policies are trained to directly maximize the joint Q function, why introduce a "detour" to the mixing network through the individual Q-functions? Wouldn't simply feeding the resulting joint action (e.g., as a one-hot vector) be sufficient to compute the TD-target for Eq. 4?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: Some limitations like the additional compute through sampling actions are mentioned in the text.
Potential negative societal impact is not discussed at all.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful and interesting comments! We are glad that you have read our paper carefully. We hope we can address your concerns below.
**Q1**: Wouldn’t simply feeding the resulting joint action (e.g., as one-hot vector) be sufficient to compute the TD-target for Eq. 4?
**A1**: We think this is a valuable question. We do not employ a monolithic critic whose input is the global state and the joint actions of all agents because our method can scale better to tasks with a larger number of agents and actions. For example, as shown in Figure 16(c), the *Humanoid* task involves 17 agents, each with 31 actions. Then the dimension of the sparse one-hot vector corresponding to the joint action is $17\times31=527$. However, if each agent has an alter ego model to output an individual Q-function as in DAVE, it can avoid generating the sparse high-dimensional one-hot vector mentioned above. In addition, we also tested the performance of DAVE when introducing sparse one-hot vectors corresponding to joint actions in complex environments. The performance did not change significantly but the algorithm lost scalability. So the value decomposition mechanism in DAVE is not a detour. We apologize for not explicitly mentioning this point in the main text and will incorporate it in the next version of our paper.
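The dimension counts behind this argument can be checked with a few lines of arithmetic (a toy sketch using the Humanoid numbers from the rebuttal; the variable names are ours):

```python
# Dimension bookkeeping for the Humanoid example: 17 agents, 31 actions each.
n_agents, n_actions = 17, 31

# A monolithic critic fed a concatenated one-hot joint action:
joint_onehot_dim = n_agents * n_actions        # 527-dimensional, sparse input
# ...while the number of distinct joint actions is exponential:
n_joint_actions = n_actions ** n_agents        # 31**17, astronomically large

# With one alter ego model per agent, each network only outputs its own
# n_actions Q-values, independent of the number of agents:
per_agent_output_dim = n_actions               # 31
print(joint_onehot_dim, per_agent_output_dim)  # 527 31
```

The per-agent decomposition thus keeps every network's interface fixed at `n_actions`, which is the scalability point the authors make.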
**Q2**: Despite doubled parameters, DAVE does neither perform twice as well nor learn twice as fast to fully compensate for the additional effort.
**A2**: We give the relative size of neural network models for some value decomposition variants over QMIX. The model size here refers to the number of parameters each algorithm needs to train in the *MMM2* scenario.
| Algo | QMIX | DAVE | CW-QMIX | OW-QMIX | MAVEN | QPLEX |
| ------------- | :--: | :--: | :-----: | :-----: | :---: | :---: |
| Relative Size | 100% | 132% | 373% | 373% | 167% | 142% |
The number of parameters in the value decomposition framework mainly depends on the mixing network structure. The DAVE framework only has three more fully connected layers (an additional ego model) than QMIX, so DAVE has the smallest change compared to QMIX among the variants. Nevertheless, DAVE still performs better than other baselines related to IGM. The most important thing is that DAVE can quickly converge to the optimal solution in the matrix game, while QMIX cannot.
**Q3**: The SMAC results are not particularly overwhelming because SMAC is a widely solved benchmark, where most baselines already achieve high win rates.
**A3**: Our aim is to show that our IGM-free framework can achieve performance similar to or better than that of IGM-based methods, especially on non-monotonic problems. In addition, we also tested the performance of DAVE and some baselines in other, more challenging environments, including SMACv2 and MA-MuJoCo, as shown in the appendix.
**Q4**: The results are not in line with previous work.
**A4**: We think there are two reasons. The first and most important point is that the version of StarCraft we use is 4.10 instead of the 4.6.2 used in [1,2]. Performance is *not* always comparable between versions. Secondly, there are some small typos in the original paper of MAVEN. The X-axis in Figure 4(b) in [2] is strange: the "0m" scale is not at the origin and there are two "4m" scales. We also point out some other typos about MAVEN in Appendix D.3. All experiments in this paper are carried out with 6 random seeds in order to avoid the effect of any outliers.
**Q5**: The agents in FACMAC select their actions in a similar way as DAVE (albeit for continuous action spaces though).
**A5**: The motivation of DAVE is completely different from that of FACMAC. FACMAC is IGM-based and focuses on continuous action space tasks, while DAVE is IGM-free and focuses on non-monotonic tasks. Although FACMAC proposes an IGM-free variant, FACMAC-nonmonotonic, it performs worse than IGM-based methods in most scenarios, even in simple matrix games.
We will try to simplify the paper's content while keeping the experiments sufficient and change the minimum text size in figures to 9pt. We really appreciate your comments and they really help us improve our paper. And we also appreciate it if you have any further comments or improve your score.
**Reference**
[1] Samvelyan, Mikayel et al. The StarCraft Multi-Agent Challenge. 2019.
[2] Mahajan, Anuj et al. MAVEN: Multi-Agent Variational Exploration. 2019.
---
Rebuttal Comment 1.1:
Title: Answer
Comment: Thank you for your response. I have read it through and raised my score. | Summary:
This paper proposes a different approach for multi-agent RL algorithms that does not depend on individual global max (IGM), but instead builds a new framework, DAVE, based on dual self-awareness. The algorithm is shown to perform favorably in several testing cases.
Strengths: The paper is well organized and the sections’ structure serve their purpose in illustrating this new framework. The experimental results also corroborate the authors’ claims and show that the proposed framework/algorithm, DAVE, performs better than other MARL algorithms.
Weaknesses: The main weakness of this paper is the lack of justification of the differences between DAVE and actor-critic models. In addition, since this paper proposes a new algorithm, it would be nice to have the pseudocode in the main text. Overall, a better analysis of the sampling procedure is needed to justify the new DAVE framework.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Question: from figure 2, it appears that the alter ego model calculates the q value function and is similar in concept to a critic network. The ego model also computes the policy which is the same as an actor network. How is the dual self-awareness model different from an actor-critic model? The description in lines 193-194 appears to limit the self-awareness model to be a special case of a general actor-critic model.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations:
The proposed model “dual self-awareness” requires a specific sampling procedure to get the actor policy which renders this framework as a special case of actor-critic algorithms.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the sincere comments! We thank you for pointing out the many strengths of our paper. We hope we can address your concerns below.
**Q1**: How is the dual self-awareness model different from an actor-critic model?
**A1**: We apologize for misleading you into thinking that DAVE is a special case of actor-critic algorithms. We bolded "DISTINCT" in lines 193-194 to mean that DAVE is *completely* different from other actor-critic and policy-based algorithms. We then pointed out the reasons as follows:
> The actions sampled from the ego policies remain independent and identically distributed (IID), so the ego policy is trained through *supervised learning* rather than *policy gradient* used in policy-based methods.
All actors in the actor-critic framework are updated in a policy-gradient manner, so the ego policies cannot be regarded as the actor model of the actor-critic framework. The ego policy model in DAVE is more like Best-of-N sampling, or rejection sampling [1] in the recently popular large language model field. We do not use policy gradients to update the ego policy because they are not effective for non-monotonic tasks, as shown in Appendix D.2. Details of the ego policy update can be found in Appendix B.
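To convey the flavor of such a Best-of-N-style supervised update, here is a self-contained toy sketch on a non-monotonic matrix game. This is our own simplification, not the actual DAVE implementation: a single state, a fixed joint-Q table standing in for the alter ego models, and plain softmax logits standing in for the ego networks.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_actions, M, lr = 2, 3, 16, 0.5

# Toy non-monotonic payoff: the global optimum (0, 0) is surrounded by
# heavy penalties, the classic failure case for monotonic decompositions.
Q_joint = np.array([[  8., -12., -12.],
                    [-12.,   0.,   0.],
                    [-12.,   0.,   6.]])

logits = np.zeros((n_agents, n_actions))     # independent ego policies

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for _ in range(200):
    probs = np.array([softmax(l) for l in logits])
    # Sample M joint actions, each agent drawing IID from its ego policy.
    samples = [[rng.choice(n_actions, p=p) for p in probs] for _ in range(M)]
    # Score the samples with the (alter ego) joint Q and keep the best one.
    best = max(samples, key=lambda a: Q_joint[a[0], a[1]])
    # Supervised (NLL / cross-entropy) step toward the best joint action --
    # a Best-of-N-style target, not a policy gradient.
    for i, a in enumerate(best):
        grad = -probs[i]
        grad[a] += 1.0
        logits[i] += lr * grad

greedy = tuple(int(np.argmax(l)) for l in logits)
print(greedy)
```

Because the target is the best of M sampled joint actions under the joint Q, the update can pull both agents toward (0, 0) even though each agent's marginal payoff at a uniform policy is poor, which is the situation where a pure policy gradient tends to get stuck.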
Thank you so much for your helpful comments. We would also welcome any further comments, and hope you might consider raising your score.
**Reference**
[1] Touvron, Hugo et al. Llama 2: Open Foundation and Fine-Tuned Chat Models. 2023. | Summary: This paper proposes a novel MARL algorithm, Dual self-Awareness Value dEcomposition (DAVE), which avoids the IGM constraint obeyed by most prior work. The algorithm introduces three different policies, the Ego Policy, Alter Ego Policy, and Anti-Ego Policy, where the Ego Policy tries to fit the optimal actions of the Alter Ego Policy and the Anti-Ego Policy serves as an exploration strategy. Empirical results show that DAVE outperforms the baseline methods on several cooperative tasks.
Strengths: 1. The proposed DAVE algorithm avoids the IGM principle, which typically requires consistency between local and global optimal actions. This deviation enables DAVE to leverage the expressive capabilities of neural networks, particularly the mixing network component.
2. The Anti-Ego policy, a key component serving as DAVE's exploration strategy, adopts the idea of count-based exploration techniques such as RND [1] and ensembles in model-based RL. It quantifies the familiarity of $(s,a)$ pairs by evaluating the reconstruction loss of an auto-encoder.
3. Experimental results on Matrix Games show that DAVE does avoid local optima. Additionally, experiments conducted on SMAC, MA-MuJoCo indicate that DAVE-QMIX performs comparably to QPLEX and other baseline algorithms.
[1] Burda, Y., Edwards, H., Storkey, A., & Klimov, O. (2018). Exploration by random network distillation. arXiv preprint arXiv:1810.12894.
Weaknesses: 1. **Methodology**:
*a*) The Ego Policy only approximates the global optimal actions $u^{*}$ via an NLL loss, which means the equivalence between the Ego Policy and the global optimum is not strictly guaranteed, whereas it holds strictly under the IGM principle.
*b*) The basic idea of the exploration strategy is not particularly novel. Moreover, while sampling from the Anti-Ego Policy guarantees the selection of low-Q actions, it does not guarantee that suboptimal-Q actions have been explored enough in the environment when $M$ is not large enough, because reaching state $s$ is not guaranteed. As a result, the Q-values associated with these actions are prone to overestimation or underestimation, and the exploration policy tends to overlook this part of the action space.
2. **Experiments**: The results on SMAC and MA-MuJoCo are a bit confusing to me; I have listed my concerns in the *Questions*.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The optimization of the $Q_{\text{tot}}^{\text{alter}}$ network heavily relies on the action sample set $U^{\text{ego}}$ due to the computation of the target. However, unlike common MARL algorithms where the policy receives gradients through backpropagation, in DAVE the Ego policy and the $Q_{\text{tot}}^{\text{alter}}$ network are trained separately. Although *Proposition A.1* guarantees that the objectively existing $u^*$ will be sampled when $M$ is sufficiently large, it does not ensure convergence to the optimal point during optimization.
Since there are two different models to optimize and they are highly related, how do you ensure their convergence?
2. There are concerns regarding whether Equation (8) truly returns "the most novel joint action." As mentioned in *1.b)* in the *Weaknesses* part, reaching state $s$ can be challenging. However, the action set $\hat{U}^{\text{ego}}$ is sampled from the Anti-Ego Policy. When $M$ is limited, the suboptimal-Q actions are hard to sample both in the true interaction process (due to the difficulty of reaching $s$) and in the Anti-Ego Policy sampling process (as their Q-values are not sufficiently low).
3.
*a*) Most figures in the *Experiment Section* display the median performance. Can you replace them with the mean performance as it is more often used in other works?
*b*) The results on SMAC of DAVE is based on QMIX as stated in the paper. To facilitate better comparison, it is suggested to use a more distinguishable color for the QMIX curve.
*c*) If I've identified the QMIX curve correctly, DAVE exhibits significant superiority over QMIX in Figure 6. However, in the Appendix (Figures 13, 15, 17), DAVE shows only a slight advantage over QMIX; can you analyze the reason for this?
4. Line 312: "The experimental results show that DAVE without anti-ego exploration still performs better than other baselines." Actually, I do not get this conclusion. On the contrary, the green curve for "DAVE w/o Exploration" shows a significant drop compared to the QMIX baseline. Does this mean that the exploration component, rather than the relaxation of the IGM principle, plays the most important role in DAVE?
5. It is unclear why the curve for *M=100* does not outperform the others in Figure 7 (2s_vs_1sc, 5m_vs_6m). Further explanation is required to address this discrepancy.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This work mainly targets cooperative multi-agent tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and constructive comments! We are glad that you have read our paper and the supplementary material carefully! We hope we can address your concerns below.
**Q1**: The Ego policy and $Q_\text{tot}^\text{alter}$ network are trained individually. How to ensure the convergence of these two highly related models?
**A1**: The training of these two models is complementary rather than independent. After evaluation by $Q_\text{tot}^\text{alter}$ network, the action $\boldsymbol{u}^\ast$ with high $Q_\text{tot}^\text{alter}$ will be used to train the ego policy. On the other hand, the new trajectory obtained by the ego policy interacting with the environment will help the $Q_\text{tot}^\text{alter}$ network to fit the real return. The above-mentioned update paradigm of DAVE is similar to Best-of-N sampling, also known as Rejection Sampling [1], which is an effective method in the recently popular large language model field.
**Q2**: For the anti-ego exploration mechanism, it does not guarantee that suboptimal-Q actions have been explored enough in the environment when the $M$ is not large enough. Does Equation (8) truly return "the most novel joint action"?
**A2**: Your analysis is very detailed and accurate. When $M$ is limited, it does affect the performance of the dual self-awareness framework and anti-ego exploration mechanism in DAVE. We illustrate this in both Figure 7 and Section 6. However, for most of the existing multi-agent benchmarks and real tasks, $M=100$ or even $10$ can make DAVE's performance close to or better than the existing IGM-based method. The resulting computational cost is entirely affordable.
**Q3**: Can you replace the median performance in the Experiment Section with the mean one?
**A3**: Thanks for your reminder. We will replace the aggregate function from median to mean.
**Q4**: It is suggested to use a more distinguishable color for the QMIX curve in figures.
**A4**: Thank you for your advice. We will correct it in the next version of our paper.
**Q5**: DAVE exhibits significant superiority over QMIX in Figure 6 but shows only a slight advantage over QMIX in the appendix.
**A5**: There are two main reasons. The first and most important is the illusion created by the different ranges of the X-axis. In the appendix, we run all experiments for 5M or 6M steps, but in Figure 6, most tasks are run for 2M steps. This makes DAVE's advantage over QMIX *appear negligible* in the appendix; it is a visual artifact. For example, in the *terran_5_vs_6* task in Figure 15, DAVE achieves a 15% win rate at 2M steps, while QMIX only reaches it at around 3M steps.
The second reason is that SMACv2 is more challenging than SMAC. Since the position, number, and category of agents are constantly changing in each episode, there may be situations where our agents are at a complete disadvantage. So there is no guarantee that the winning rate can theoretically reach 100%. Therefore, DAVE's performance can only be closer to a certain winning rate value than QMIX, but cannot exceed this value.
**Q6**: I cannot draw such a conclusion that DAVE without anti-ego exploration still performs better than other baselines.
**A6**: In Figure 5 Right, although the median of DAVE w/o Exploration has always been 10, its upper quantile reaches 12 first compared to other baselines. If we change the aggregation function from median to mean, the result will be more obvious. As shown in Figure 20 in the PDF we added, although the mean test return of DAVE w/o Exploration is lower than that of DAVE, it is still higher than other baselines. Furthermore, in the single-state matrix game, we implement various algorithms under uniform visitation to disregard the influence of exploration capabilities, as described in lines 269 to 270. Therefore, in Figure 4, DAVE is equivalent to DAVE w/o Exploration.
**Q7**: It is unclear why the curve for *M=100* does not outperform the others in Figure 7.
**A7**: This discrepancy is due to the randomness of sampling. In the *2s_vs_1sc* scenario, the performance of DAVE with $M=100$ remains stable at a 100% win rate while the others are unstable. In addition, on the *MMM2* map, the performance of DAVE with $M=100$ is also significantly better than the others. In most scenarios the difference in performance is not obvious, but this also shows that in common multi-agent tasks, $M=10$ or even $M=5$ is enough.
**Q8**: The equivalence between the Ego Policy and the global optimum is not strictly guaranteed while it is held strictly by the IGM principle.
**A8**: The IGM-based method cannot guarantee its strict convergence to the global optimum due to the limited family of functions. However, DAVE has a better chance of converging to the global optimal solution than the IGM-based method. This is why IGM-based methods tend to fall into local optimal solutions in matrix games, while DAVE can quickly converge to global optimal solutions.
We really appreciate your comments; they have genuinely helped us improve our paper. We would also welcome any further comments, and hope you might consider raising your score.
**Reference**
[1] Touvron, Hugo et al. Llama 2: Open Foundation and Fine-Tuned Chat Models. 2023. | Summary: This paper presents DAVE, an IGM-free value decomposition method, to enhance coordination ability in MARL. Drawing inspiration from the concept of dual self-awareness in psychology, DAVE consists of two components: an ego policy responsible for executing actions, and an alter ego value function involved in credit assignment and value estimation. By incorporating an explicit search procedure, DAVE eliminates the need for the IGM assumption and updates the actor using supervised signals instead of policy gradients. Additionally, the authors propose a novel anti-ego exploration mechanism to prevent the algorithm from being trapped in local optima. The method demonstrates impressive performance across a range of cooperative tasks, highlighting its effectiveness.
Strengths: 1. By employing NLL instead of policy gradients to update actors, DAVE effectively addresses the challenges associated with using unconstrained (without IGM assumption/monotonicity) factored critic.
2. The utilization of AE loss in Anti-Ego Exploration to assess the novelty of joint-action is intriguing, albeit bearing some similarities to RND [1].
3. The paper is well written and easy to follow. The integration of psychological concepts with algorithm design is a unique aspect that captured my interest. Furthermore, each component of the overall framework is thoroughly elucidated.
4. DAVE demonstrates impressive performance in both matrix games and SMAC, with clear discussions regarding its limitations.
Weaknesses: 1. While the authors claim that DAVE is the first method to completely eliminate the IGM assumption in value decomposition approaches, it's worth noting that FACMAC-nonmonotonic [2] is also fully IGM-free. Although DAVE may outperform FACMAC-nonmonotonic, the contribution of DAVE might be overstated.
2. The discussion on related works is insufficient, and it would be beneficial to include references to value decomposition actor critic methods [2][5].
3. Minor 1: The idea of applying NLL loss bears similarities to previous works that combine Cross-Entropy Methods with Policy Improvement [3][4] (discussion can be included).
4. Minor 2: It appears that exploration has a significant impact on the performance of DAVE, making it a crucial component that cannot be disregarded, unlike simple epsilon-greedy exploration. It would be advisable to include comparisons between DAVE and other MARL exploration methods, such as EITI/EDTI[6] and CMAE[7], in the appendix.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Considering that [2] states, "On SMAC, FACMAC-nonmonotonic performs similarly to FACMAC on easy maps, but exhibits significantly worse performance on harder maps," it would be worth discussing why the authors chose to compare DAVE with FACMAC-nonmonotonic instead of FACMAC (fairness may not be an issue).
2. There appears to be a notable discrepancy between the performance curves reported in Figure 6 of the current paper and Figure 7 in [2], despite both versions using SMAC 4.10. Could you explain the reasons behind this divergence?
3. While the elimination of the IGM assumption is touted as a significant contribution of DAVE, it would be valuable to explore the performance of DAVE with a monotonic mixing network (or others with IGM assumption). Does the unconstrained factored mixing network indeed enhance performance?
[1] Burda, Y., Edwards, H., Storkey, A., & Klimov, O. (2019, May). Exploration by random network distillation. In ICLR, 2019.
[2] Peng, B., Rashid, T., Schroeder de Witt, C., Kamienny, P. A., Torr, P., Böhmer, W., & Whiteson, S. (2021). Facmac: Factored multi-agent centralised policy gradients. *In NeurIPS, 2021*.
[3] Simmons-Edler, R., Eisner, B., Mitchell, E., Seung, S., & Lee, D. (2019). Q-learning for continuous actions with cross-entropy guided policies. *arXiv preprint arXiv:1903.10605*.
[4] Neumann, S., Lim, S., Joseph, A., Pan, Y., White, A., & White, M. (2018). Greedy Actor-Critic: A New Conditional Cross-Entropy Method for Policy Improvement. In ICLR, 2023.
[5] Wang, Y., Han, B., Wang, T., Dong, H., & Zhang, C. (2020). Off-policy multi-agent decomposed policy gradients. *arXiv preprint arXiv:2007.12322*.
[6] Ackermann, Johannes, et al. "Reducing overestimation bias in multi-agent domains using double centralized critics." *arXiv preprint arXiv:1910.01465* (2019).
[7] Wang, Tonghan et al. “Influence-Based Multi-Agent Exploration.” *ArXiv* abs/1910.05512 (2019)
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: As mentioned in Weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and inspiring comments! We hope we can address your concerns below.
**Q1**: Why did the authors choose to compare DAVE with FACMAC-nonmonotonic instead of FACMAC?
**A1**: FACMAC is an IGM-based method: in its canonical implementation, the mixing network is a non-linear monotonic function, as in QMIX. Moreover, FACMAC focuses on continuous action space tasks rather than non-monotonic tasks. DAVE, by contrast, focuses on non-monotonic tasks, so the baselines we selected are methods related to relaxing the IGM assumption, apart from the base algorithm QMIX. The comparison between an IGM-based method and its DAVE variant on different tasks is the most indispensable, so we compare DAVE-QMIX with QMIX, and DAVE-QPLEX with QPLEX in the appendix. The DAVE variant outperforms or matches its base algorithm in almost all scenarios. Thanks for your suggestion; we will add a comparison of DAVE-FACMAC and FACMAC in the next version of our paper.
**Q2**: There appears to be a notable discrepancy between the performance curves reported in Figure 6 of the current paper and Figure 7 in the original FACMAC paper.
**A2**: Figure 6 in our paper contains only 2 tasks that also appear in Figure 7 of the original FACMAC paper, namely *MMM2* and *2c_vs_64zg*. Likewise, only two algorithms are shared, namely QMIX and QPLEX. In our paper, the final win rates of these two algorithms in these two scenarios are similar to those in Figure 7 in [1]. The difference in the training curves may be due to the relatively large variance of the median.
**Q3**: It would be valuable to explore the performance of DAVE with a monotonic mixing network.
**A3**: Thank you very much for the reminder! This is indeed a valuable experiment. We added some experimental results, mainly the performance of DAVE with a monotonic mixing network on different domains. As shown in Figure 18 and 19 in the PDF we added in the global response, DAVE with a monotonic mixing network still cannot converge to the global optimal solution in matrix games. In addition, in the complex SMAC environment, the performance of DAVE with a monotonic mixing network is similar to or even better than that of QMIX, but worse than that of vanilla DAVE. This is also substantial proof that the unconstrained factored mixing network indeed enhances performance.
**Q4**: Although DAVE may outperform FACMAC-nonmonotonic, the contribution of DAVE might have been overestimated.
**A4**: Thank you for your correction. We will state our contribution more rigorously. We previously claimed that DAVE was the first method to completely eliminate the IGM assumption because we considered that FACMAC-nonmonotonic simply changes the mixing network of FACMAC from monotonic to non-monotonic, without considering the resulting consequences in depth. This is also why FACMAC-nonmonotonic performs poorly on most tasks.
**Q5**: It would be beneficial to include some references and discussions.
**A5**: Thank you for your suggestion. More discussion of related work can be found in Appendix E. And we will supplement the relevant literature and experiments you mentioned.
We really appreciate your comments; they have genuinely helped us improve our paper! We would also welcome any further comments.
**Reference**
[1] Peng, B., Rashid, T., Schroeder de Witt, C., Kamienny, P. A., Torr, P., Böhmer, W., & Whiteson, S. (2021). Facmac: Factored multi-agent centralised policy gradients. *In NeurIPS, 2021*.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their efforts and their response has addressed my concerns. I am currently maintaining my scoring. | Rebuttal 1:
Rebuttal: Dear Reviewers,
Thank you for your time and effort in reviewing our manuscript. We are delighted to receive your comments and suggestions, which have been valuable in improving the quality of our research.
As part of the rebuttal process, we have submitted a PDF as an additional auxiliary material, containing supplementary information that we believe is important for your review. We would like to remind you to please take a moment to review the attached file for further details. The PDF contains the following content:
1. The learning curves of vanilla DAVE and DAVE with a monotonic mixing network on the matrix games and SMAC.
2. Performance over time on the multi-step matrix game. As suggested by Reviewer NJQv, we change the aggregation function from *median* to *mean* in Figure 5.
We have also addressed each comment and provided responses inline with the reviewers' comments. We believe that these responses fully address the concerns raised by each reviewer and hope that you find them satisfactory.
We look forward to hearing your feedback and appreciate the time you have taken to review our manuscript. Thank you for your consideration.
Pdf: /pdf/08bd82bcc4d82153933f9f947ae3ec2e5885ead8.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes a novel value decomposition framework for MARL based on the notion of dual self-awareness in psychology. The framework, called DAVE, consists of two neural network models for each agent: the alter ego value function model and the ego policy model. The former participates in the evaluation of the group to solve the global credit assignment problem, and the latter helps the former to find the optimal individual action through an explicit search procedure. The paper claims that DAVE is the first value decomposition method that completely abandons the Individual Global Max (IGM) assumption, which requires consistency between local and global optimal actions. The paper also introduces an anti-ego exploration method to prevent the ego policy from getting stuck in a bad local optimum. The paper evaluates the performance of DAVE in both simple and complex environments, such as StarCraft II and Multi-Agent MuJoCo, and compares it to other popular baselines. The paper shows that DAVE can solve non-monotonic problems that challenge existing value decomposition methods, and can achieve competitive performance despite the absence of IGM. The paper argues that DAVE provides a new perspective on addressing the problems posed by the IGM assumption in value decomposition methods.
Strengths: Improving the efficacy of CTDE and value decomposition-based approaches to cooperative MARL is an important topic. The paper's investigation of the role of the IGM principle in CTDE approaches is novel and should be of interest to the broader MARL and RL communities. The method is supported by empirical results in both small-scale, well-principled environments and larger-scale, complex environments; and is comprehensive in its comparison to baselines. The paper is well-written and clear.
Weaknesses: - The related work in the main paper is a bit sparse given the depth of work in this area. Moving more of the discussion from Appendix E into the Related Work section would improve the contextualization of this work.
- Given the importance of sample size M, it would have been nice to see a greater exploration of the performance/computational cost trade-off. For example, how does M scale in more realistic environments? Are there environments where an unreasonably high value of M is needed? In the MMM2 example in Fig. 7 it looks like the learning signal is just beginning.
- There is limited discussion of limitations, future work, and societal impact.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can you provide more context into what is needed to tune exploration coefficients dynamically?
How well can this method generalize to other variants of the MARL problem (e.g. mixed incentive, social dilemmas)?
Can DAVE provably increase the solution concept reached by agents in social dilemmas / game theoretic settings?
How do you balance the trade-off between exploration and exploitation in the anti-ego exploration method?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The main limitation here is the computational cost of large values of M. This does not prevent strong performance in the environments studied in this work, but may be expensive in more complex and/or real-world environments. There are also other small limitations, e.g. tuning exploration coefficients.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and inspiring comments! We sincerely thank you for taking the time to read our paper and the supplementary material carefully! We hope we can address your concerns below.
**Q1**: Can you provide more context into what is needed to tune exploration coefficients dynamically?
**A1**: We propose an anti-ego exploration mechanism, which is able to select the action least likely to be selected by the ego policy in the current state $s$ (implemented by softmin) and the action least seen before (implemented by autoencoder). Through these two filterings, *relatively* unexplored actions can be effectively selected as long as the number of samples $M$ is large enough. In order to allow the agent to choose these unexplored actions in real interactions, we use NLL loss to increase the probability of selecting these actions, as shown in Equation 8. We use the exploration coefficient $\lambda$ to control the strength of exploration. We emphasize "relative" because almost all actions have been fully explored in the later stages of training, but there will always be actions that are *relatively* unexplored compared to others. Therefore, $\lambda$ must decrease over time and must eventually be 0, otherwise it will be unfavorable for exploitation in the later stage of training. Through some experiments, we found that the annealing period of $\lambda$ should account for 20%-40% of the entire training period. That is to say, anti-ego exploration is only performed during the first 20%-40% of the training period. A comparison of DAVE with different $\lambda$ initial values is shown in Figure 8.
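The annealing described above can be sketched as a simple schedule (a toy illustration, not the authors' exact implementation; the default values here are assumptions consistent with the 20%-40% annealing period mentioned): $\lambda$ decays linearly from its initial value to exactly 0 within the first fraction of training, after which only exploitation remains.

```python
def exploration_coef(step, total_steps, lam0=0.5, anneal_frac=0.3):
    """Linearly anneal the exploration coefficient lambda from lam0 to 0
    over the first `anneal_frac` of training, then hold it at exactly 0
    so the later stage of training is purely exploitative."""
    anneal_steps = anneal_frac * total_steps
    if step >= anneal_steps:
        return 0.0
    return lam0 * (1.0 - step / anneal_steps)

# lambda decays: 0.5 at step 0, 0.25 halfway through annealing, 0 after.
start = exploration_coef(0, 1000)     # 0.5
mid = exploration_coef(150, 1000)     # 0.25
late = exploration_coef(400, 1000)    # 0.0
```

Any monotone decay (cosine, exponential) would serve the same role; the key property per the rebuttal is that $\lambda$ reaches 0 well before training ends.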
**Q2**: How well can this method generalize to other variants of the MARL problem (e.g. mixed incentive, social dilemmas)?
**A2**: We focus on how to completely abandon the IGM principle, which is specific to value decomposition methods. The value decomposition method is only applicable to decentralized partially observable Markov decision process (Dec-POMDP) tasks, in which all agents share a reward function. Therefore DAVE cannot be directly applied to other variants of the MARL problem where agents have their own reward functions. To enable DAVE to generalize quickly to problems with mixed incentives or social dilemmas, a simple workaround is to artificially design a global reward function based on all the individual reward functions. In this way, DAVE is also able to converge to the optimal solution, just as in the matrix game results in our paper.
**Q3**: Can DAVE provably increase the solution concept reached by agents in social dilemmas / game theoretic settings?
**A3**: A global reward game, such as a Dec-POMDP task, can be formulated as a Markov game with a global reward. The aim is to learn a stationary joint policy such that no agent has an incentive to unilaterally change its policy to increase cumulative global rewards, i.e., a Markov equilibrium is reached. For example, for the two single-state matrix games in our paper, the corresponding Markov equilibria are (A,A) and (A,C). As shown in Figure 4, DAVE can always quickly converge to the equilibrium solution while other IGM-based algorithms cannot. For other problems such as social dilemmas, we can try to convert the original problem into a Markov game with a global reward according to the method mentioned in **A2** and then solve it through DAVE.
**Q4**: How do you balance the trade-off between exploration and exploitation in the anti-ego exploration method?
**A4**: First of all, in order to allow the agent to exploit better, the exploration coefficient $\lambda$ is generally set to $\lambda\in[0,1)$ in Equation 8. The two terms in Equation 8 denote exploitation and exploration, respectively. Second, as mentioned in **A1**, the exploration in the later stage of training is less meaningful, so $\lambda$ should anneal over time and eventually become 0.
**Q5**: It would have been nice to see a greater exploration of the performance/computational cost trade-off. For example, how does $M$ scale in more realistic environments? Are there environments where an unreasonably high value of $M$ is needed?
**A5**: Thank you for your suggestion, we will add the experimental results in more realistic environments in the next version of the paper. According to our current version of the paper, we believe that $M=100$ is sufficient and affordable for several currently popular multi-agent benchmarks. It can be seen from Figure 7 that in most environments, when $M$ is $10$ or even $5$, the performance of DAVE does not drop or drops very little compared to $M=100$. Furthermore, the results for *Humanoid* task shown in Figure 16(c) also illustrate that $M=100$ is sufficient even for tasks with a larger number of agents and actions.
**Q6**: Moving more of the discussion from Appendix E into the Related Work section would improve the contextualization of this work. And there is limited discussion of limitations, future work, and societal impact.
**A6**: Thank you for your suggestion! We will correct it in the next version of the paper.
We really appreciate your comments; they have genuinely helped us improve our paper! We would also welcome any further comments.
---
Rebuttal Comment 1.1:
Title: Thank you for the detailed response (no score change)!
Comment: Thank you to the authors for their detailed response -- they have clarified all of my questions and I am satisfied with their explanations. | null | null | null | null | null | null |
Bounding training data reconstruction in DP-SGD | Accept (poster) | Summary: The paper studies the training data reconstruction robustness of DP-SGD. The paper observes that there is a huge gap between the existing lower bound and upper bound (attacks) on reconstruction error. Motivated by this observation, the paper proposes a tighter lower bound and a stronger attack, which empirically has a smaller gap. The paper further demonstrates how the hyper-parameters in DP-SGD affect the training data reconstruction error, and shows that the error can vary a lot even with the same DP guarantee.
Strengths:
* Originality: The paper proposes new theoretical analyses and attacks for training data reconstruction in the context of DP-SGD. The results are new to the best of my knowledge.
* Clarity: The paper is well-written.
* Significance: Given that training data reconstruction is a practical concern, understanding ML models' robustness to it is an important topic. The paper is a solid contribution to this topic.
* Quality: Overall, the paper is of good quality.
Weaknesses: * I do not see any major weakness of the paper.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors:
* Eq 1: The first term is an inner product whereas the second term is l1 error. Intuitively, we want the first term to be large, and the second term to be small. It is unclear why it makes sense to "sum up" these two terms as the loss. Do we want to maximize it or minimize it?
Other minor questions:
* Line 97: should "<=" be "<" or "<<" instead? Randomly releasing one sample gives (0, 1/|D|)-DP. So delta=1/|D| is too weak.
* Line 311. It says Balle et al. give a trivial bound for epsilon=100. But the bound is indeed tight given that the attack achieves a success probability that is very close to the bound.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations:
The authors did not discuss limitations or potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks rZtg, answering your questions below.
> Eq 1: The first term is an inner product whereas the second term is l1 error. Intuitively, we want the first term to be large, and the second term to be small. It is unclear why it makes sense to "sum up" these two terms as the loss. Do we want to maximize it or minimize it?
Thank you very much for catching this. There’s a typo in this equation: there should be a negative sign at the start, i.e., the loss is -inner_prod + L1 distance. In practice we swept over different coefficients for each of these two terms to find how to weight them. We surveyed the relevant literature on gradient-based attacks, which proposes many different losses to optimize: just the inner product, L1, L2, TV loss, summed combinations of them, etc. We found through experimentation that the sum of the (negative) inner product and the L1 distance gave the best results.
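As an illustrative sketch only (not the authors' actual implementation), the corrected loss could look like the following; the function name `attack_loss` and the `l1_weight` knob are hypothetical stand-ins for the coefficient sweep described above:

```python
import numpy as np

def attack_loss(candidate_grad, observed_grad, l1_weight=1.0):
    # Negative inner product: minimizing this rewards gradient alignment.
    inner = -float(np.dot(candidate_grad, observed_grad))
    # L1 distance penalizes coordinate-wise mismatch between gradients.
    l1 = float(np.abs(candidate_grad - observed_grad).sum())
    # The relative weighting was tuned by sweeping coefficients in practice.
    return inner + l1_weight * l1
```

Minimizing this combined objective simultaneously maximizes the inner product and minimizes the L1 distance, which resolves the sign confusion the reviewer raised.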
> Line 97: should "<=" be "<" or "<<" instead? Randomly releasing one sample gives (0, 1/|D|)-DP. So delta=1/|D| is too weak.
Technically yes, but it is common practice in DP research to set $\delta$ to 1/|D|. Almost all papers reporting DPSGD results on CIFAR-10 set $\delta$ to 1/50,000.
> Line 311. It says Balle et al. give a trivial bound for epsilon=100. But the bound is indeed tight given that the attack achieves a success probability that is very close to the bound.
Sure, the bound is technically tight in this case, but we still argue it is trivial. We can always set the upper bound to one. In some cases the attack will succeed (and so the bound will be tight), and in some cases (smaller epsilons) the attack will fail. In either setting, we haven’t learnt anything useful from the upper bound.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response! | Summary: This paper explores reconstruction attacks within the threat model associated with the usage of DP-SGD. The study provides a tighter upper bound on the success rate of the reconstruction attacks than previous works. Additionally, the paper presents an attack that exploits a strong discrete prior and aligns closely with the provided bound.
Strengths: - The investigation into the bounds against reconstruction attacks (under the usage of DP-SGD) is both valid and of practical relevance.
- The proposal of a tighter bound and a strong attack is intriguing.
Weaknesses: - The quality of the presentation could be largely improved. Currently, the main assumptions of the proposed attack, the primary algorithm, the evaluation setup, and the key novelty are vague.
- From the current submission, the bound appears to be a straightforward adaptation of the general DP bound to the reconstruction attack.
- The proposed attack seems to operate only under a strong, possibly unrealistic assumption. For instance, knowing the "discrete prior" (specific points from the dataset already) might be quite a strong assumption, potentially transforming the problem from "reconstruction" to a simpler "matching" task. And the assumption of batch gradient descent (instead of stochastic gradient descent) further introduces additional complications. While this could be useful in certain scenarios (e.g., auditing the privacy cost), explicit argumentation is needed (e.g., why the proposed method is superior to existing techniques for such tasks) to justify its practicality.
- The scope of this submission seems rather narrow, particularly given the attack assumption which restricts the application scenario to be focused only on DP-SGD (may not even be applicable to SGD due to the "batch" GD assumption). This makes the attack itself not stand-alone useful.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - A more structured presentation of the threat model should be provided for each method, including the baselines and the proposed one. It would be better to place such information in separate section(s), rather than interspersed among small, unstructured paragraphs in Section 2.
- What defines a successful attack in the context of previous works aiming for approximate reconstruction instead of exact retrieval?
- Providing more details in Section 3.2 would be beneficial, such as explicitly presenting the definitions of $\mu$ and $\nu$ in Algorithm 1. I would also expect that a relatively large number of samples (e.g., for $N$ and $n$) would be required to make the estimation precise, but such information seems to be missing in the submission.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: - Possibly due to the presentation, I may have underestimated the technical and practical value of this submission. A clear claim about its key advantages over existing work should be provided in the revision.
- I quickly scanned through the proofs and did not spot any obvious errors.
- I didn't notice any discussion regarding the computational cost of the proposed method. I would expect such information to be included in the revision.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review.
> The bound appears to be a straightforward adaptation of the general DP bound.
We respectfully disagree. Our reconstruction bound leverages the notion of a blow-up/trade-off function and uses the concavity/convexity of this function in a novel way. The fact that our bound significantly outperforms all previous bounds in this domain shows that it is not a straightforward adaptation of the general DP bound.
> Attack uses unrealistic assumptions.
Reconstruction for reconstruction's sake is a naive view of the goal of privacy attacks. Successful reconstruction (in the reviewer's sense) implies privacy leakage, but lack thereof does not imply that the data is secure. For example, if the target image contains a car, an attacker might be interested in the digits of the license plate, not the pixels of the image of the car and its background. Thus, the 1-out-of-n formulation is the right approach to frame reconstruction from the point of view of studying privacy leakage (which is exactly what DP is trying to protect): if the license plate has 4 digits, the attacker is interested in which of all 10,000 possible combinations was present in the training dataset. What this formulation does is move away from the classical membership inference approach, where the attacker has so much side information that they only need to infer one bit of information, to a setting where the attacker must infer log(n) bits to perform reconstruction, where n represents how diffuse the adversary's prior is. Finally, we do have experiments outside of this framing (prior = 1/(possible number of pixel configurations) in Appendix I).
> Assumption of batch gradient descent further introduces additional complications.
We have results on DPGD (full batch) and DPSGD (mini-batch) in the experimental section: see Figures 4, 5, and 15. Considering that almost all state-of-the-art implementations of DPSGD (e.g., De et al. (2022)) operate in either full- or mini-batch mode, we think our work has direct practical implications.
> The scope of this submission seems rather narrow.
The entire focus of our work is to analyze how DPSGD protects against reconstruction attacks. That our attack is most applicable to DPSGD seems like an unfair and irrelevant criticism.
> Write a threat model
We will add a threat model section.
> What defines a successful attack in the context of previous works aiming for approximate reconstruction instead of exact retrieval?
There is no well-defined success metric in prior work on approximate reconstruction; it is somewhat fuzzily defined, which is why we chose to study exact reconstruction here. Balle et al. report a success if the reconstruction error is smaller than the average nearest-neighbor distance between any two images in CIFAR-10. Haim et al. refrain from determining what a “success” is and just report structural similarity (SSIM). Although designing criteria for how to judge approximate reconstructions is certainly interesting, we believe this is an orthogonal direction.
> Providing more details in Section 3.2 would be beneficial, such as explicitly presenting the definitions of $\mu$ and $\nu$ in Algorithm 1.
$\mu$ and $\nu$ are arbitrary probability distributions and are defined in line 215. We will explicitly state this in Alg 1. In terms of N, computing $\gamma$ with 1M samples is efficient (<1 s on a GPU); however, N=10,000 samples is sufficient for consistent results. We refer to our answer to Reviewer j4Wo for precise numbers. In terms of n (the size of the prior), we do have experimental results (Fig 3).
> A clear claim about its key advantages over existing work should be provided in the revision.
Please see our response to Reviewer paNh for a detailed list of contributions. Generally, we believe that this work makes a number of important contributions to analyzing the protection DP offers against privacy attacks.
- Showing that reconstruction can be provably defeated by large values of epsilon.
- Providing a framework to translate prior reconstruction risk to posterior reconstruction risk,
- Our bounds are tighter than prior work.
- More generally, it is rare that in privacy attack research one can show that a privacy attack on DP is efficient enough to reach the bounds predicted by theory on state-of-the-art models. There is usually a compromise to be had, either by reporting loose bounds (Guo et al. (2022)) or running attacks on non state-of-the-art models (Balle et al. (2022) attacks only work on DP CIFAR-10 models with < 50% test accuracy). Our attack results do not have this compromise.
> I didn't notice any discussion regarding the computational cost of the proposed method.
Good point, we will add a discussion of computational costs. For all experiments presented, computing the upper bounds is cheap (< 1 s). The prior-aware attack is a bit more expensive, as it requires re-running DPSGD for each point in the prior. The model-based attack in Fig 1 is extremely expensive (see Balle et al. (2022) for a detailed explanation of this attack), while the gradient-based attack is cheaper, as we just need to run gradient descent, and takes < 10 mins on a GPU. We will add specific numbers to the revision.
Balle et al. "Reconstructing training data with informed adversaries". SP 2022
Stock et al. "Defending against Reconstruction Attacks with R'enyi Differential Privacy." arXiv:2202.07623 (2022).
De et al. "Unlocking high-accuracy ..." arXiv:2204.13650 (2022).
Haim et al. "Reconstructing training data from trained neural networks." NeurIPS 2022
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. It has effectively addressed the majority of my concerns regarding the novelty and practicality of the attack. I've raised my score accordingly. | Summary: This paper pertains to a line of work exploring reconstruction attacks (RA) as a complement to the standard membership inference attacks (MIAs) for evaluating the privacy leakage of ML models. The idea is that a model can be made resistant to RAs for large values of the privacy budget epsilon for which it would not be resistant to MIAs. In cases where membership leakage is not considered a privacy issue, the model might be both “safe” as per low RA risk and useful, since less noise would have to be added. Resistance to RAs can be studied theoretically (by upper bounding the RA success in the worst case) and empirically by developing ever stronger attacks. This work proposes contributions on both fronts for models trained using DP-SGD. On the attack front, two new reconstruction attacks are proposed. The first one is based on the gradients of the model after every iteration and the second one tries to infer the correct record among n candidates. On the theoretical front, a new upper bound for RA success is proposed, which is much better than the previous bound. Finally, the attack initially developed for models trained with gradient descent (no mini-batches) is extended to the mini-batch case. The attack success is shown to be sensitive to the mini-batch size.
Strengths: The gradient-based and prior-aware attack methodologies both seem sound, are well explained and motivated, and the authors explored different alternatives to the proposed formulations. The attacks present sufficient novelty.
The empirical evaluation is extensive which is highly appreciated.
Empirically, the new theoretical upper bound on the success of reconstruction significantly improves upon the previous bound of Balle et al. (2022) as shown on Figure 2. Theoretically, the contribution seems significant too, although I did not verify the proof and I found the hypotheses somewhat opaque.
Weaknesses: 1. Prior-aware attack: Conceptually, I am not convinced the 1 out of n classification is the right approach to frame the reconstruction problem. The goal of reconstruction is to extract a target record with limited prior knowledge about the record, i.e., reconstruct the record from scratch. The prior-aware attack changes the task to something quite different, by turning it into a distinguishing game, i.e., finding the right record among 2, 5, 10 etc. possible candidates (specifically, for 2 candidates, the task becomes equivalent to MIA under bounded DP). This means that the adversary already knows the record and the only uncertainty left is which one it is among a limited set of options.
2. One problem with the prior-aware attack is that the prior can be made arbitrarily easy or hard. For instance, if the other records in the prior have a different label than the target record, which the adversary is assumed to already know (Appendix A), it is very easy to guess the right image. In the current experiments, do all of the records in the prior belong to the same class as the target record? Similarly, the authors mention that the prior could be motivated by the adversary narrowing down the list of candidates before running the attack. Have the authors explored a list of candidates that are the most similar to the target itself?
3. The research gap addressed by the paper and the significance of the contributions aren’t clear. The empirical results seem solid, but how the findings translate to the real world is unclear. Perhaps the introduction could explain more the real-world implications of the findings. Eg what does it mean that when training a model with epsilon=10, an extremely strong attack (the prior-aware attack) exists that succeeds 60% of the time, 6x better than the random guess (Figure 2)? In light of the results of this paper, does there exist an epsilon such that the newly proposed, state-of-the-art RAs fail but utility remains very good?
4. Clarity: I found Sec. 3.1 and 3.2 hard to follow. Theorem 2 makes some hypotheses but it’s very difficult for me to understand what these mean and whether they’re validated under real-world scenarios. Hopefully, a reviewer with expertise in DP can check these assumptions as well as the proofs, but regardless of this I think the authors should make these sections clearer and more accessible.
5. Tightness of the upper bound: the authors claim that the upper bound is tight, and indeed Fig.2 shows that for epsilon=1, the upper bound and prior-aware attacks perform similarly. However, the upper bound is also very close to the random guess baseline, such that any better than random attack would appear to be tight. I am not convinced that we can conclude that the upper bound is tight, considering the large gaps shown on Figure 3. At least, some analysis or explanations are needed to better understand when the upper bound is tight and when it isn’t.
6. Minor: Can the authors equalize the color maps on Fig. 5 and 15 so that e.g., yellow on the left becomes comparable with yellow on the right?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. How sensitive is the empirical upper bound to the parameters of Algorithm 1 used to estimate it, e.g, N?
2. Is the gradient-based attack the first one to successfully reconstruct records from models trained using DP-SGD? The authors state that Balle et al (2022)’s attack doesn’t succeed, but it isn’t clear in the text whether other attacks from prior works succeed. Clarifying this aspect would better highlight the significance of the contribution.
3. Fig. 3 suggests that even with a very strong prior (size=2) there is a gap of up to 5% depending on the value of epsilon, and the gap can be much larger for larger prior size. Can the authors give potential reasons why the prior-aware attack is not tight in these scenarios and potential directions for improving the success of attacks? It seems that the threat model cannot be made stronger, what about the method? Or should future work focus on tightening the theoretical upper bound?
4. Prior-aware attack: Do all of the n candidate records belong to the same class as the target record? The attack could be made artificially easier by selecting candidates with a different label. Similarly, the authors mention that the prior could be motivated by the adversary narrowing down the list of candidates before running the attack. Have the authors explored a list of candidates that are the most similar to the target itself?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper doesn’t mention limitations or broader societal impacts but studies the privacy of ML models, a topic with positive societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review!
> Not convinced 1 out of n classification is the right approach to frame the reconstruction problem.
Reconstruction for reconstruction's sake is a naive view of the goal of privacy attacks. Successful reconstruction (in the reviewer's sense) implies privacy leakage, but lack thereof does not imply that the data is secure. For example, if the target image contains a car, an attacker might be interested in the digits of the license plate, not the pixels of the image of the car and its background. Thus, the 1-out-of-n formulation is the right approach to frame reconstruction from the point of view of studying privacy leakage (which is exactly what DP is trying to protect): if the license plate has 4 digits, the attacker is interested in which of all 10,000 possible combinations was present in the training dataset. What this formulation does is move away from the classical membership inference approach, where the attacker has so much side information that they only need to infer one bit of information, to a setting where the attacker must infer log(n) bits to perform reconstruction, where n represents how diffuse the adversary's prior is. Finally, we do have experiments outside of this framing (prior = 1/(possible number of pixel configurations) in Appendix I).
> The prior can be made arbitrarily easy or hard.
We view this as a positive aspect of the prior-aware attack! That we can calibrate our bounds based on the knowledge that the adversary might possess is something that we want in our reconstruction bounds. The adversary might just know that they are reconstructing a photo that is similar to CIFAR distribution (or within a certain class). These are different levels of prior that would lead to potentially different bounds through our framework. Our framework shows that if one can construct a reasonable prior probability of reconstruction, then we can estimate how the posterior probability of reconstruction will increase after training. We believe this flexibility is a core contribution of our work. Deciding if the prior is too easy or too hard is certainly an important question, but will be specific to the application to which one applies DP.
> Do all records in the prior belong to the same class? Have you explored a list of similar candidates?
Yes, and for the experiments on MNIST, the prior set looks almost identical to us. The L2 distance between images in the set is extremely small for e.g. class 1.
> Epsilon where RA's defeated & utility is good?
To be clear, our main contribution is showing that if we can define a prior probability of reconstruction, then we can calibrate epsilon such that the posterior probability of reconstruction is bounded by some value. What a reasonable prior and posterior probability is will be use-case dependent; some applications may be more risk averse than others. In general, unlike analyses of DP via membership inference attacks (where any epsilon > 2 gives vacuous guarantees against the attack), we can show that for reconstruction, larger values of epsilon can give meaningful results.
> Not convinced upper bound is tight.
This is fair. We will edit our wording to “tight in some cases” and “nearly tight in other cases”, unless the reviewer feels “nearly” is an unfair term here. Given the closeness of attack success and bounds in the middle bar of Fig 2, in Fig 3 (for eps < 10), and in Figs 4 and 5, we believe this is fair. We would like to note that it is rare for a privacy attack on DP to achieve such close results to an upper bound, and so we feel these attacks do represent a real contribution.
> Equalize the color maps?
Will do.
> Upper bound sensitivity
For the experimental parameters used in Fig 2 at eps=10, over 1000 different calls to Algorithm 1 we get:
- N : Upper bound (avg +/- std)
- 10,000 : 0.3358 +/- 0.0195
- 100,000 : 0.3325 +/- 0.0062
- 1,000,000 : 0.3304 +/- 0.0017
So anything above N=10,000 will give a reasonably insensitive upper bound. We also refer the reviewer to our answer to Reviewer wroj, where we discuss how we can obtain confidence intervals for our bounds.
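The shrinking spread with larger N reported above can be illustrated with a generic Monte-Carlo quantile estimate. This is a toy sketch, not Algorithm 1 itself; the function `quantile_spread` and the standard-normal sampling distribution are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantile_spread(n_samples, q=0.9, trials=200):
    # Standard deviation of an empirical quantile across repeated Monte-Carlo
    # runs: a generic proxy for the avg +/- std sensitivity numbers above.
    estimates = [np.quantile(rng.standard_normal(n_samples), q)
                 for _ in range(trials)]
    return float(np.std(estimates))
```

Running this shows the estimate's variability decreasing roughly as 1/sqrt(N), matching the pattern in the table above (std dropping about threefold per tenfold increase in N).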
> Is the gradient attack the first to successfully reconstruct records from models trained with DPSGD?
Other works have looked at this problem but we are the first to (as far as we are aware):
- Provide bounds specifically for DP-SGD, the main algorithm used in private deep learning, and show that our bound is much stronger than prior work (see Appendix H).
- Design attacks that come close to the upper bounds on state-of-the-art models. Other works either look at attacks on small convex models (Guo et al. 2022) or restrict their analysis to other domains like text or tabular data (Stock et al. 2022, Guo et al. 2023).
> Reasons why prior-aware attack isn't tight / future directions.
We think the reason for larger gaps with larger priors is that there is a higher chance that gradients from two different candidates are very similar. For example, if we have candidate gradient g_0 from candidate point x_0 and candidate gradient g_1 from candidate point x_1, and x_0 and x_1 are both 1’s from MNIST that look similar, then g_0 and g_1 will be similar, increasing the probability of a mistake in identifying the right candidate from the prior. As the prior size increases, the probability that this happens increases. Looking into how to improve the attack to account for this could be a fruitful area for future work.
Balle et al. "Reconstructing training data with informed adversaries". SP 2022
Carlini et al. "Membership inference attacks from first principles." SP 2022
Guo et al. "Analyzing privacy leakage in machine learning via multiple hypothesis testing: A lesson from fano." ICML 23.
Guo et al. "Bounding training data reconstruction in private (deep) learning." ICML 22.
Stock et al. "Defending against Reconstruction Attacks with R\'enyi Differential Privacy." arXiv:2202.07623 (2022).
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you for your response.
The response brings much-needed clarification on the contribution, and I strongly recommend that the authors incorporate the “That we can calibrate our bounds based on the knowledge…” and “To be clear, our main contribution is showing that…” responses into the text. This will facilitate the understanding of the paper. To be honest, I was disappointed to see that the authors did not offer to make these changes, as these discussions (and overall framing) are not currently in the paper. The applicability of the algorithm beyond uniform priors should be discussed, e.g., can I get an upper bound if my prior is that with a large probability the car plate starts with a 1 and with a small probability it starts with a different digit, or for a tabular dataset if I only know one-way marginals and correlations.
I agree with modifying the tightness claim with nearly tight, and I strongly recommend revising the paper to explain when and why the gap is larger, as well as adding the statement “We would like to note that it is rare for a privacy attack on DP to achieve such close results…” (or a similar statement) to the paper. Similarly, any insights on why the decrease w.r.t. prior size is convex for small epsilon but concave for large epsilon would be interesting.
The remaining issue is the attack definition. The framework presented in this paper studies reconstruction attacks that account for an attacker prior. The key novelty of this paper hinges on using the uniform prior over a discrete set of candidates to analyse the privacy of ML models. However, this framework is very general, and the semantics of the work depend strongly on the prior: for instance, for a binary prior, the “reconstruction” attack is actually a MIA, which conflicts with the usual understanding of a reconstruction attack. In that sense, the results in the main paper should be interpreted as a somewhat generalized MIA (which one of ten records is in the data). Similarly, the example given by the authors (car digits) corresponds to an attribute inference attack. This is not necessarily a problem, but it weakens the novelty of the paper and calling it reconstruction is a bit misleading. The fuzzy boundary between these threat models should at least be discussed in the paper.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer j4Wo
Comment: Thank you for the quick response. We will absolutely incorporate the clarifications into the paper, along with changing the tightness claims. Apologies for not confirming this in our first response. Our framework is flexible enough to handle non-uniform priors. We have a (very brief) comment on this for the attack in Appendix G, but can expand the discussion around non-uniform priors.
> However, this framework is very general, and the semantics of the work depend strongly on the prior: for instance, for a binary prior, the “reconstruction” attack is actually a MIA, which conflicts with the usual understanding of a reconstruction attack. In that sense, the results in the main paper should be interpreted as a somewhat generalized MIA (which one of ten records is in the data). Similarly, the example given by the authors (car digits) corresponds to an attribute inference attack. This is not necessarily a problem, but it weakens the novelty of the paper and calling it reconstruction is a bit misleading. The fuzzy boundary between these threat models should at least be discussed in the paper.
We completely agree the boundary between membership - attribute - reconstruction is fuzzy and deserves comment, which we will add to the paper. Our framework has two key ingredients, a prior probability of reconstruction, and a function that determines if reconstruction is successful. The framework is flexible enough to incorporate different functions (it could be L2 distance as is used in Haim et al and Balle et al, or it could be choosing a point from a set). As we mentioned, we chose to study "reconstruction" in a setting that gave a lot of power to the adversary, as is standard when modelling attackers in DP, although we do have experiments with L2 in the Appendix. We will add a discussion on the fuzzy boundary between reconstruction and attribute inference, as we agree that these discussions can add a lot of value. | Summary: The paper proposes a novel bound on Reconstruction Robustness from DP training using a notion of a blow-up function. In order to use the bound, one needs to estimate the quantities $\kappa$ (prior probability of reconstruction) and $\gamma$ (blow-up between neighboring noise distributions) using Monte-Carlo methods. Compared to the prior bounds based on RDP, the proposed bound is tighter. The paper also introduces a new gradient-based that is shown to perform close to the bound, and investigates the impact of different DP-SGD hyperparameters on the bound and on the attack success.
Strengths: - The paper proposes a significant improvement in bounding reconstruction attack risk in DP training. This is of practical importance, as **improved bounds on relevant privacy risks enable practitioners to calibrate their DP-SGD parameters to mitigate those risks** as opposed to relying on the $\epsilon$ value which is hard to interpret.
- The paper proposes a new reconstruction attack using gradient information.
Weaknesses: *Approximate nature of the bound in practice.* The computation of the bound relies on a Monte Carlo procedure that is only shown to converge to the optimal solution asymptotically. The paper does not provide methods to estimate the accuracy of the resulting numeric value of the bound for a given number of MC algorithm samples.
UPDATE (August 10): I update my score to 8 as the weakness above has been clarified in the response.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How can one evaluate the accuracy of the upper bound given inherent error in the Monte Carlo estimates? Is there a way to provide an upper confidence bound at a given level of certainty as opposed to an asymptotic guarantee? Is 1M good enough?
- What is the computational cost of estimating $\gamma$ with 1M samples?
- Is the notion of blow-up function new? Are there citations? Most bounds in DP rely on information-theoretic divergences --- is it related to a known divergence?
- How exactly can the bound be used to reduce the required level of noise in DP-SGD if one were only concerned with reconstruction risks?
- How can the attack be applied in federated learning scenarios?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Approximate nature of the bound in practice --- see weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks, wroj, for the in-depth review; we’ve tried to answer and clarify your points below.
> How can one evaluate the accuracy of the upper bound given inherent error in the Monte Carlo estimates? Is there a way to provide an upper confidence bound at a given level of certainty as opposed to an asymptotic guarantee? Is 1M good enough?
Of course, the reviewer is correct that the bound currently only holds asymptotically. We can also provide confidence intervals for the bound, using two concentration inequalities. Note that our algorithm has two sources of error. 1) The first is the potential gap between the empirical quantile and the population quantile. To bound this distance we can use the DKW inequality, which provides uniform concentration for all quantiles simultaneously. In particular, the probability that the quantile estimation error exceeds $\eta$ is at most $2e^{-2n\eta^2}$, where $n$ is the number of samples. Specifically, with a million samples, we can ensure that with probability 0.999 the error of quantile estimation is less than 0.002 (and we can make this smaller by increasing the number of samples). 2) We also need to account for the error of the second step, which is the mean estimation. Here, we can use Bennett’s inequality, which leverages the bounded variance of the estimate. In all our bounds, we can show that the error of this part is also less than 0.01 with probability 0.999. We will be happy to add these analyses to the paper.
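The DKW arithmetic quoted above can be sketched in a few lines. This is a minimal illustration of the calculation, not the authors' actual code; the function name is ours:

```python
import math

# Dvoretzky-Kiefer-Wolfowitz: P(sup_x |F_n(x) - F(x)| > eta) <= 2*exp(-2*n*eta^2).
def dkw_eta(n, delta):
    """Uniform CDF/quantile error eta guaranteed with probability >= 1 - delta."""
    return math.sqrt(math.log(2 / delta) / (2 * n))

eta = dkw_eta(n=1_000_000, delta=0.001)
print(f"{eta:.4f}")  # 0.0019, i.e. error < 0.002 with probability 0.999
```

Inverting the bound the other way, $2e^{-2\cdot 10^6\cdot 0.002^2}=2e^{-8}\approx 6.7\times 10^{-4}<0.001$, matching the figures quoted in the rebuttal.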
For the experimental settings reported in the paper (with T=1), the time to compute $\gamma$ on our local CPU is as follows:
- 10,000 samples = 0.002s
- 100,000 samples = 0.064s
- 1,000,000 samples = 0.196s
- 10,000,000 samples = 2.084s
When T=100, with 1,000,000 samples it takes ~5s to compute $\gamma$ on a CPU. On a GPU it takes <<1s.
> Is the notion of blow-up function new? Are there citations? Most bounds in DP rely on information-theoretic divergences --- is it related to a known divergence?
This notion is tightly related to the notion of the trade-off function (see, e.g., Definition 2.1 in the GDP paper [DRS]). Precisely, blow_up(x)=1-trade_off(x). Note that the trade-off function carries more information than a single divergence, because it contains the information of TV_a (or the hockey-stick divergence) for all values of a. This is what enables a tight calculation of the upper bound.
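As an illustration of the stated relation blow_up = 1 − trade_off, here is a hedged sketch using the Gaussian trade-off function of $\mu$-GDP from [DRS], $T(\alpha)=\Phi(\Phi^{-1}(1-\alpha)-\mu)$; the function names are ours, not from the submission:

```python
from statistics import NormalDist

N01 = NormalDist()  # standard normal

def gaussian_tradeoff(alpha, mu):
    # Trade-off function of mu-GDP (Dong, Roth, Su):
    # T(alpha) = Phi(Phi^{-1}(1 - alpha) - mu)
    return N01.cdf(N01.inv_cdf(1 - alpha) - mu)

def blow_up(alpha, mu):
    # Relation stated in the rebuttal: blow_up(x) = 1 - trade_off(x)
    return 1 - gaussian_tradeoff(alpha, mu)

print(round(blow_up(0.05, 1.0), 3))  # 0.26 under 1-GDP at type-I error 0.05
print(round(blow_up(0.3, 0.0), 6))   # 0.3: mu = 0 (perfect privacy) gives blow_up(x) = x
```

Since blow_up is known for every value of $\alpha$, not just one divergence level, the whole curve can be fed into the tight upper-bound computation.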
[DRS] Dong J, Roth A, Su W. Gaussian Differential Privacy. Journal of the Royal Statistical Society. 2021 Jan.
> How exactly can the bound be used to reduce the required level of noise in DP-SGD if one were only concerned with reconstruction risks?
Our work shows that if one is concerned with reconstruction risk, they can calibrate epsilon / level of noise based on the expected risk as long as they can compute a sensible prior. This is likely going to be context and task specific. For example, do we assume the adversary has a lot of side-knowledge (prior will be larger) or completely uninformed (smaller prior)? These questions are difficult to answer in any detail without getting into specifics of the type of data, the threat model the developer is concerned with and the context in which DP will be implemented. We hope that follow up work can dig into these details for specific use cases of DP.
With all else being equal, we can quantify the increase in reconstruction-attack risk from before to after the model has been trained. If one starts with a *small risk*, then DP training with a larger epsilon will not increase this risk *by much*. What a *small risk* is, and what an acceptable *by much* is, will be specific to the application in which DP is applied.
> How can the attack be applied in federated learning scenarios?
Great question. The attack requires access to updates/gradients (permitted under the DP threat model), so as long as this information is available, the attack is applicable. In most FL settings this information is available, so the attack applies there as well.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response.
I strongly recommend re-parameterizing the results in terms of the trade-off function, which is quite standard now. I also strongly recommend incorporating the error bounds at least in the appendix.
---
Reply to Comment 1.1.1:
Comment: We will incorporate your suggestions in the next iteration of the paper. Thank you for the careful reading of the paper, your useful comments, and also for increasing your score! | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper is about the bounds of differentially private training for protection against data reconstruction attacks. This paper provides an upper bound on the success of any reconstruction attack against DP-SGD. Also it includes experiments that match the expected bounds. The experiments also include different settings that lead to various success rates for reconstruction.
Strengths: * This paper is well written. It is easy to follow the paper with the background, main theorems (with proofs in supplementary), and experimental settings.
* This paper provides sounded proofs for the theorems.
* This paper provides enough context on the problems and how they approach the problems differently.
* There are sufficient experiments to support the theorems and bounds.
Weaknesses: The WideResNet in the experiments is pretrained on ImageNet. It would be a stronger paper if the authors also perform experiments on ImageNet. I suppose while the bounds should also apply to ImageNet, the higher resolution images might also affect the reconstruction success rate.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Unfortunately, I'm not the expert in this field. I'm curious to learn from the authors about how important these results (theorems and experimental results) are, as it is not easy to be interpreted from the written contributions.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors are encouraged to discuss how their findings might be used to help develop more sophisticated reconstruction attacks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review, paNh. We’ve answered your comments and questions below.
> Why pretrain on ImageNet.
Excellent question. There are a number of reasons we chose to pretrain on ImageNet. Reporting results directly on ImageNet is technically possible but computationally challenging.
- In the non fine-tuning regime (DPSGD on ImageNet with no pre-training on public data), the state-of-the-art test accuracy at DP epsilon=8 is around 32%. We do not believe instantiating our bounds and attacks on such a poor model is interesting.
- Unlike in standard training, CIFAR-10 is still a very challenging dataset to train with DP and is an active benchmark within this research area (De et al. (2022), Bu et al. (2022), we can provide more references if desired), while ImageNet has been slightly less investigated due to the computational challenges of training on ImageNet with DP.
- Even fine-tuning on ImageNet with DP (after pre-training on public data) is highly expensive (c.f. De et al. (2022)). We could instantiate our attack here, but if we want to report “average success of the attack” – which helps inform us of the average performance of the attack, rather than how well it performs on a specific training run – we would need to re-run this training procedure hundreds of times (as we do in the paper), making this computationally challenging. We feel for this reason operating on state-of-the-art CIFAR-10 models is reasonable.
- Other than the computational aspect, there is no specific challenge that makes training from scratch more interesting. Our upper bounds on reconstruction would be calculated in the exact same way (with different parameters), and our attack would also stay the same. For instance we can train a CIFAR-10 classifier from scratch with the exact same hyperparameters that we use for fine tuning (i.e. number of iterations, sampling rate, clipping threshold, and noise multiplier) and expect to see the exact same upper bound as the one for fine tuning, and very similar lower bound as well. Of course, in this setting the accuracy of the model will not be good, but there is nothing specific about training from scratch that makes the attack or the upper bound more interesting.
> I'm curious to learn from the authors about how important these results (theorems and experimental results) are, as it is not easy to be interpreted from the written contributions.
We believe that this work makes a number of important contributions to analyzing the protection DP offers against privacy attacks.
- At a high level, DP has mainly been analyzed with membership inference attacks (i.e., can we detect if an example was used to train the model?). We know that for epsilon>2, membership attacks should always succeed against models trained with DP (the guarantee of protection becomes vacuous after this), but in practice it seems like models trained with epsilon up to 100 still protect against most privacy attacks. This motivated us to look at why larger epsilons can still protect against privacy attacks. Our work shows that large epsilons can provably protect against reconstruction attacks. The general rule of thumb for practitioners is to use an epsilon < 10, but with our work we have a more formal and granular way of deciding the acceptable epsilon to use. This will help with model utility – if a practitioner wants to train with DPSGD and only cares about protection against reconstruction attacks, our work shows they can use larger epsilons and still provably be robust to this attack. Model utility will benefit because training with larger epsilons means injecting less noise, which in turn, means more accurate models.
- There have been other works that look into reconstruction attack bounds on DP (Guo et al. (2022)), but ours is the first to provide tight bounds for DPSGD (the algorithm that most people use in private deep learning).
- More generally, it is rare that in privacy attack research one can show that a privacy attack on DP is efficient enough to reach the bounds predicted by theory on state-of-the-art models. There is usually a compromise to be had, either by reporting loose bounds (Guo et al. (2022)) or running attacks on non state-of-the-art models (Balle et al. (2022) attacks only work on DP CIFAR-10 models with < 50% test accuracy). A practitioner can either compute our bounds to obtain almost-tight reconstruction upper bounds, or implement our privacy attack, and use the result as evidence for the desired privacy or as proof of its absence. This is reminiscent of the topic of auditing differential privacy, where there is an attack that tries to “audit” the proven upper bounds on privacy. One can look at our work as the first tight upper bound for reconstruction as well as the first auditing attack for reconstruction upper bounds.
De et al. "Unlocking high-accuracy ..." arXiv:2204.13650 (2022).
Bu et al. "Scalable and efficient training ...." NeurIPS (2022): 38305-38318.
Guo et al. "Analyzing privacy leakage in machine learning..." arXiv:2210.13662, 2022b.
Balle et al. "Reconstructing training data.." Security and Privacy, SP 2022
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the response!
I don't have further questions. | null | null | null | null | null | null |
Representation Equivalent Neural Operators: a Framework for Alias-free Operator Learning | Accept (poster) | Summary: The paper proposes a new paradigm to look at operator learning via the lens of frame theory termed as Representation Equivalent Neural Operator (ReNO). It talks about an important missing piece in the literature, the one regarding the balance of continuous nature of operators and the discrete nature of data these are trained on. Aliasing error is used as the quantifier for determining the equivalence.
Strengths: The paper is very well motivated and suitably placed in the literature.
The use of frames as a tool for representation is presented in an interesting way.
The paper addresses a common issue in a very mathematically rigorous manner.
The experimental evidence is convincing.
Weaknesses: The overall presentation of the paper can be improved significantly.
A few pointers: figures appearing in the paper have no captions; Figures 1, 2 and 3 are referred to within the text, the others are not. In a paper like this, visual guides are very helpful; otherwise it is easy to lose the thread. Additionally, there is overuse of inline equations, which are hard to parse. If space is the issue, some details can be moved to the appendix; for example, Section 2 can be compressed: although it reads well, it is somewhat standard.
Notation needs to be made clear when introduced. For example, the fact that $\mathfrak{g}$ is a discretization of $\mathcal{G}$ is only mentioned around line 211, while the symbol is introduced much earlier. This makes it harder to read through.
The experimental section should have more details; in its current form it is not at all clear from the main paper alone. I see a lot of blank space on that page anyway.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Of the three architectures considered in the paper, none performs well on resolutions lower than the one used for training. Can the authors discuss this a bit?
While the paper is written for general frames, can the authors comment on some pros/cons of a specific choice, for example wavelet frames? That would be very interesting.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: I don't see any direct potential negative social impact.
Authors have mentioned technical limitations which is very much appreciated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We start by thanking the reviewer for their appreciation of the merits of our paper and their welcome suggestions to improve it. We address their detailed concerns below.
1. In a CRV, if accepted, we will ensure that all figures have captions and are referenced within the main text.
2. The reviewer's point about *visual guides being important* is excellent and we follow this suggestion. As a visual guide, in the CRV, we will add a cartoon depiction of the ReNO framework that we have created in Figure 1 in the uploaded pdf for this rebuttal.
3. We will also reduce the number of inline equations and move parts of Section 2 to the appendix if necessary.
4. The definition of $\mathfrak{g}$ is stated in line 191, shortly before the mentioned instance in line 211, and its definition is also a numbered equation that we refer to in line 211.
5. Regarding the reviewer's concern about lack of experimental details, we kindly refer to our answer in the general response to all the reviewers. With this, we hope to clarify the questions about the experiments and we would add such explanation in a CRV.
6. When testing below the training resolution in our experiment, indeed, any architecture will yield poor results. The input and output spaces in these experiments are spaces of bandlimited periodic functions of bandwidth $K$. If we then test with resolution $M<K$, we are undersampling the functions: if a function has bandwidth $K$, we need at least $2K+1$ coefficients to represent it, but at resolution $M$ only $2M+1<2K+1$ point samples of the function are taken. This automatically introduces errors, as it amounts to an attempt to represent a $(2K+1)$-dimensional vector space by only $2M+1<2K+1$ elements. We hope that this clarifies the reviewer's question and we will make it a point to add this explanation to a CRV, if accepted.
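The undersampling argument can be checked numerically with a small NumPy sketch; the random Fourier coefficients and the choice $K=8$ are our own illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 8                                           # bandwidth: modes -K..K
c = rng.standard_normal(2 * K + 1) + 1j * rng.standard_normal(2 * K + 1)

def f(x):
    # periodic bandlimited function f(x) = sum_{k=-K}^{K} c_k exp(2*pi*i*k*x)
    ks = np.arange(-K, K + 1)
    return (c[None, :] * np.exp(2j * np.pi * np.outer(x, ks))).sum(axis=1)

def recon_error(N):
    # sample at N equispaced points, trig-interpolate, measure error on a fine grid
    samples = f(np.arange(N) / N)
    coeff = np.fft.fftshift(np.fft.fft(samples)) / N   # recovered (possibly aliased) modes
    ks = np.arange(-(N // 2), N - N // 2)
    xf = np.linspace(0, 1, 512, endpoint=False)
    recon = (coeff[None, :] * np.exp(2j * np.pi * np.outer(xf, ks))).sum(axis=1)
    return float(np.max(np.abs(recon - f(xf))))

print(recon_error(2 * K + 1))  # machine precision: 2K+1 samples suffice for exact recovery
print(recon_error(11))         # O(1): fewer than 2K+1 samples, undersampling error
```

With $N\ge 2K+1$ samples the trigonometric interpolant recovers $f$ exactly; with $N<2K+1$ the high modes fold onto low ones and the error is of the order of the discarded coefficients.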
7. Wavelet systems can be constructed to yield frames for $L^2$ and in the context of ReNOs, this setup would enjoy the property that applications of nonlinear activation functions naturally remain in the space spanned by the frame. In many applications, however, the measurements are given as point evaluations rather than wavelet coefficients, so that some transformation would have to be performed initially. Also, an architecture based on a multiresolution analysis such as wavelets, curvelets, shearlets, etc. is in general more technical (e.g. separate channel for each scale). These difficulties can of course be overcome so that we believe wavelets to be an interesting family of frames for the ReNO framework that we plan to study in future work on designing ReNO architectures.
We sincerely hope to have addressed your concerns, particularly about the presentation of the paper, satisfactorily and would kindly request the reviewer to update their assessment accordingly.
---
Rebuttal Comment 1.1:
Comment: Thank you, authors, for your response. Please try to add the discussion regarding resolution (point 6 above) to the updated version of the paper. My concerns have been addressed and I am increasing my score.
---
Reply to Comment 1.1.1:
Title: Thanking the Reviewer
Comment: We thank the reviewer for your positive feedback and for increasing our score. We will certainly incorporate our reply to pt 6 in a CRV, if accepted. It is a very valid point that needs to be made. | Summary: Many recent studies have emerged in the field of operator learning. However, many models attempt to learn operators using the discretized values of functions rather than sending functions as operators to other functions. In this paper, the authors interpret the relationship between infinite-dimensional functions and their discretized values using the framework of frame theory. The authors analyze various operator learning models to assess whether they effectively capture this relationship in an equivalent manner, and they propose a framework called ReNO for this purpose.
Strengths: Investigating whether models that engage in operator learning, a mapping process between infinite-dimensional function spaces, effectively handle and address this infinite dimensionality is a pivotal and highly relevant topic within the field. The endeavor to analyze and comprehend this specific aspect of operator learning appears to be a novel and significant contribution. Leveraging the definitions associated with frame theory, as established in Hilbert spaces, and extending their application to operators, enables a profound exploration of the extent to which various operator models align with the ReNO framework. This comprehensive analysis sheds light on the capacity of these models to capture and interpret the intricate relationship between infinite-dimensional functions and their discretized counterparts. Such insights provide valuable guidance for the advancement of operator learning methodologies and contribute to the broader understanding of this field.
Weaknesses: The explanations of frame theory discussed in Sections 2 and 3, as well as the summaries of applying frame theory to operators, are written in a manner that is not easily comprehensible. Many readers, including myself, who may lack a strong background in signal processing and functional analysis, find these sections challenging to understand (See Questions). It would be beneficial to provide clearer explanations and examples regarding aspects such as the invertibility of the frame operator S and the well-definedness of the pseudo-inverse, clarifying their significance.
Furthermore, in Section 4, which aims to verify whether the defined operator learning models conform to ReNO, it is intuitive to consider that CNN and FNO may not fall under the purview of ReNO, as they simply discretize functions into image-like forms. On the other hand, SNO, which predefines a basis and solely learns the mapping between the coefficients, appears to naturally align with ReNO. It would be intriguing to explore additional insights derived from these theories based on this intuition. Additionally, there is a well-known operator learning model called DeepONet, which could be described in more detail. While it is briefly mentioned on line 271 that DeepONet also belongs to SNO, further elaboration is necessary to understand how and what differentiates it from other models. Is there a specific reason for focusing solely on CNN, FNO, and SNO among the numerous operator learning models?
Moreover, in Section 5, the experiments conducted, particularly those related to super-resolution discussed in the FNO paper, and the results presented in Figure 4, raise questions regarding whether they truly represent the best experiments to validate the aforementioned theories. Specifically, what are the specific definitions of the frames $\Phi$ and $\Psi$ used in the experiment where the resolution is altered in the predictions (as explained in Section 4 of the FNO paper)? It is unclear whether these definitions are explicitly provided or not. Are there alternative approaches, apart from altering resolution, to demonstrate the equivalence of arbitrary frames $\Phi$ and $\Psi$? It would be valuable to explore additional experimental setups that further support the underlying theories, as relying solely on the resolution experiment appears inadequate.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: * Is there a specific reason why not all the various models mentioned in line 30 were examined under ReNO? Why was the analysis limited to only CNN, FNO, and SNO?
* What are the definitions of $f(nT)$ and $f(n/2\Omega)$ mentioned in line 70? Additionally, why do the sinc functions in line 75 form an orthonormal basis? Understanding these concepts seems crucial to grasp the difference between Definition 2.1's $f$ and $P_B f$.
* I'm curious to understand why the adjoint $T^*$ in line 93 is well-defined uniquely for all $f_i$. Furthermore, what makes the frame operator $S$ invertible and well-defined?
* In line 102, when referring to "point samples of the underlying function $f$," which specific $f_i$ does this pertain to? (Similarly, for line 221, when does $T^\dagger_\Psi$ precisely become a discretization representation given $\Psi$?)
* I'm interested in understanding how Definitions 2.1 and 2.2 naturally lead to the discussion of aliasing error in Definition 3.1. Explaining the differences and similarities between these concepts would be helpful.
* The diagrams between lines 156 and 157, the diagram mentioned below line 265, and the captions and explanations for Figures 1, 2, and 3 are significantly lacking. They need to be added with essential details. What are the differences between the three figures, and how are they interconnected?
* What is the relevance of Equation 4.3 in the analysis of SNO, considering the diagram? While A and B were assumed to be zero in FNO, does this equation account for the bias term b?
* DeepONet is known as a successful operator learning model mentioned in line 271. What does it mean that DeepONet is also equivalent to SNO? Further elaboration is needed.
* In Section 5, it would be beneficial to summarize the experimental setups (Line 506 F.2) , even briefly in the main text, rather than relegating them to the appendix. It would be helpful to explain what type of regression was performed, whether it involved the solution operator of PDE or not, or if it was a different type of experiment.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: * The explanation of frame theory is challenging to understand, and the extension of frame theory to operators is equally difficult to grasp. There are intricate aspects in the theoretical part that make it hard to follow, such as meticulous definitions of notations, the reasons for orthogonality, the existence of inverses, whether $T^*$ is necessary for operator extension, and the well-definedness of $T^\dagger$. It would be beneficial to summarize the essential concepts needed to extend these aspects to operators and provide a slightly more accessible explanation. Ultimately, it would be helpful to clearly articulate why frame theory is crucial for explaining operators, rather than presenting it as a separate theory.
* There seems to be a lack of experimental evidence supporting the theoretical claims. In Section 5, which focuses on experiments and analysis related to resolution, additional experiments that better illustrate the theory could enhance the persuasiveness of the need for this theory.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We start by thanking the reviewer for their welcome suggestions to improve our paper. We address their detailed concerns below.
1. We apologize for the lack of clarity in the explanation of frame theory. To address this shortcoming, and following the excellent suggestion of the reviewer, we propose to add a new section *Frame Theory* in the SM dedicated to the mathematical background on frame theory, including examples of frames for some classical Hilbert spaces and explanations of properties such as the well-definedness and invertibility of the frame operator and the well-definedness of the pseudoinverse. We note that the frame operator $S$ is bounded, self-adjoint and satisfies $A\\,Id \le S \le B\\,Id$ for the frame bounds $0<A\le B$; in particular, $\\|Id-\frac{2}{A+B}S\\|\le\frac{B-A}{B+A}<1$, so $S$ is invertible (e.g. via a Neumann series).
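A small numerical illustration of the frame operator and its invertibility, using the standard Mercedes-Benz frame in $\mathbb{R}^2$ (our own textbook example, not from the paper):

```python
import numpy as np

# Mercedes-Benz frame: three unit vectors in R^2 at 120-degree angles.
# It is a tight frame with frame bounds A = B = 3/2.
angles = np.pi / 2 + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
frame = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # rows are f_i

S = frame.T @ frame   # frame operator: S f = sum_i <f, f_i> f_i
print(S.round(12))    # (3/2) * identity, hence trivially invertible

# Invertibility in general: with A*Id <= S <= B*Id one has
# ||Id - (2/(A+B)) S|| <= (B-A)/(B+A) < 1, so a Neumann series inverts S.
A = B = 1.5
print(np.linalg.norm(np.eye(2) - (2 / (A + B)) * S, 2))  # 0 for a tight frame
```

For a tight frame the contraction factor $(B-A)/(B+A)$ is zero and $S^{-1}$ is just $\frac{2}{A+B}\,Id$; for general frames the Neumann series converges geometrically.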
2. To further improve on our exposition, we will add a cartoon depiction of the ReNO framework as a visual guide in the CRV, if accepted. We have created such a scheme in Fig 1 in the uploaded pdf for this rebuttal.
3. We agree that DeepONet is an important operator learning architecture. It was already discussed and highlighted in Appendix E, lines 480--496.
4. Regarding the reviewer’s concern about lack of experimental details, we kindly refer to our answer in the general response to all reviewers. With this we hope to clarify the questions about the experiments and we would add such an explanation in Section 5 of a CRV. As for experiments with other frames, where coefficients do not correspond to point samples: we thank the reviewer for this useful feedback. Preliminary experiments have confirmed that similar results hold in other settings such as wavelet frames and we would add such experiments in the appendix of a CRV, if accepted.
5. Fig 2 in the uploaded pdf for this rebuttal reports on an additional experiment we have conducted. It highlights that aliasing error can in fact be eliminated without compromising the approximation power of the network. This further supports the necessity of constructing architectures that respect the representation equivalence. Moreover, we have also explored PCANet of Bhattacharya et al and CNO of Raonic et al arXiv:2302.01178 as additional ReNO architectures (please see General Response to all reviewers and Figs 3 and 4 of attached pdf)
6. While our experiments do not involve the solution of a PDE, CNO, which is a ReNO, is demonstrated to be the SOTA architecture for a wide range of operators for benchmark PDEs.
7. The expressions $f(nT)$ and $f(n/2\Omega)$ denote evaluations of the function $f$ at the specified points. For example, if $\Omega=1$ and $T=1/2$, then the expressions become $f(nT)=f(n/2\Omega)=f(n/2)$ for $n\in \mathbb{Z}$, resulting in the sequence $(\dots,f(-1),f(-1/2),f(0),f(1/2),f(1),\dots)$.
8. Intuition behind why the $\mbox{sinc}$ functions form a basis for bandlimited functions: bandlimited functions have Fourier transforms in $L^2([-\Omega,\Omega])$. Consequently, these Fourier transforms admit a Fourier series expansion. The complex exponentials appearing in the Fourier series are related to the translated $\mbox{sinc}$ functions via the Fourier transform. For a rigorous proof, we refer to the WSK sampling theorem, Unser, Proc. IEEE, 2000.
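The orthonormality of the translated sinc functions can be checked with a rough quadrature sketch; truncating the integral to $[-200,200]$ is our own simplification and introduces a small (sub-1%) error:

```python
import numpy as np

# Numerical check that translated sincs are orthonormal in L^2(R):
#   int sinc(x - n) * sinc(x - m) dx = delta_{nm},
# where np.sinc(x) = sin(pi x) / (pi x).
x, dx = np.linspace(-200, 200, 1_000_001, retstep=True)

def inner(n, m):
    # Riemann-sum approximation of the L^2 inner product
    return float(np.sum(np.sinc(x - n) * np.sinc(x - m)) * dx)

print(round(inner(0, 0), 2))  # 1.0 (up to truncation error)
print(round(inner(0, 1), 2))  # 0.0
```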
9. The adjoint $T^*$ is defined as the bounded linear operator such that
$\langle h,T(\\{c_i\\}\_{i \in I}) \rangle_{\mathcal{H}} = \langle T^* h, \\{c_i\\}\_{i \in I} \rangle_{\ell^2(I)},$
for all $h \in \mathcal{H}$ and all $\\{c_i\\}_{i \in I} \in \ell^2(I)$. The fact that $T^*$ takes the form in line 93 follows from Riesz' representation theorem (see Christensen 2008, Lemma 3.1.1).
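In finite dimensions the adjoint relation above reduces to a matrix transpose, which a few lines of NumPy can verify; the random $5\times 3$ system is our illustrative choice (its rows generically form a frame for $\mathbb{R}^3$):

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.standard_normal((5, 3))   # rows f_i: generically a frame for R^3
T = F.T                            # synthesis operator: T c = sum_i c_i f_i
c = rng.standard_normal(5)
h = rng.standard_normal(3)

lhs = h @ (T @ c)      # <h, T c>_H
rhs = (F @ h) @ c      # <T* h, c>_{l^2}, since (T* h)_i = <h, f_i>
print(np.isclose(lhs, rhs))  # True: the adjoint of synthesis is analysis
```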
10. In line 102, the general frame coefficients of a function $f$ for a frame $\\{f_i\\}_{i \in I}$ refer to the expressions $\langle f, f_i \rangle$. In the specific case of $\\{f_i\\}\_{i \in I}$ the orthonormal basis of sinc functions, the frame coefficients become point evaluations of $f$, i.e. $\langle f, f_i \rangle = f(i/2\Omega)$ and $I=\mathbb{Z}$ (cf. also Eqn. (2.2)).
11. In line 221, since $\Psi$ is a frame for Dom $U$ (cf. Eqn (3.2)) and $u_i \in$ Dom $U$, we obtain that $T_\Psi^\dagger u_i$ is a discrete representation of $u_i$. This means that $u_i$ is obtained by applying $T_\Psi$ to $T_\Psi^\dagger u_i$, i.e. $u_i = T_\Psi T_\Psi^\dagger u_i$ (cf. Eqn (2.4)).
12. Definition 2.2 describes the continuous-discrete equivalence between functions in a separable Hilbert space $\mathcal{H}$ and their discrete representations given by frame coefficients w.r.t. a frame sequence $\Psi$ for $\mathcal{H}$. The aliasing error function $\epsilon(f)$ defined therein quantifies the loss of information in representing $f$ by its frame coefficients and can be equivalently expressed as $\epsilon(f)=\operatorname{Id}(f)-T_\Psi\circ \operatorname{id}\circ T_{\Psi}^\dagger(f)$ (cf. the formula in Def. 2.2), where $\operatorname{Id}$ and $\operatorname{id}$ denote respectively the identity operators of $\mathcal{H}$ and $\ell^2(I)$, with $I$ the index set of $\Psi$. It is therefore natural to generalize this formula by replacing the identity operator $\operatorname{Id}$ with an arbitrary operator between separable Hilbert spaces and $\operatorname{id}$ by a discrete mapping, which yields Definition 3.1.
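The aliasing error $\epsilon(f)=f-T_\Psi T_\Psi^\dagger f$ can be illustrated in a toy discretized setting; the specific orthonormal system (low Fourier modes on $[0,1]$) and the grid are our own choices:

```python
import numpy as np

# Toy discretization of the aliasing error eps(f) = f - T_Psi T_Psi^dagger f
# for the orthonormal system Psi = {exp(2*pi*i*k*x) : |k| <= 2} on [0, 1].
x = np.linspace(0, 1, 4096, endpoint=False)
dx = x[1] - x[0]
Psi = np.stack([np.exp(2j * np.pi * k * x) for k in range(-2, 3)])

def aliasing_error(f):
    coeffs = (Psi.conj() * f).sum(axis=1) * dx  # analysis step T_Psi^dagger
    recon = coeffs @ Psi                         # synthesis step T_Psi
    return float(np.max(np.abs(f - recon)))

print(aliasing_error(np.exp(2j * np.pi * 1 * x)))  # ~0: f lies in span(Psi)
print(aliasing_error(np.exp(2j * np.pi * 5 * x)))  # ~1: f orthogonal to span(Psi)
```

A function inside the span of $\Psi$ is represented without loss; a function outside it incurs an aliasing error equal to the norm of its residual component.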
13. In a CRV, we will ensure that all figures and diagrams have captions and references within the main text with the necessary explanations.
14. The relevant aspect of SNO in terms of the representation equivalence is not so much the specific form of the neural network in Eqn 4.3, but rather the fact that SNO first performs an analysis step by extracting the Fourier coefficients, then the neural network acts only on the coefficients and then, the resulting coefficient vector is synthesized back to a signal that is periodic and bandlimited.
15. Yes, Eqn 4.3 accounts for a bias term.
We sincerely hope to have addressed your concerns, particularly about the mathematical details, satisfactorily and would kindly request the reviewer to update their assessment accordingly.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: Thank you for the detailed responses to my questions. I have read through all of your answers, and I now have a better understanding of the points that were initially unclear to me.
Regarding specific points:
* I appreciate your responses regarding the aspects I missed in questions 3, 8, 14, and 15.
* It would be beneficial if the sections addressed in questions 1, 2, 7, 9, 10, 11, 12, and 13 could be revised and incorporated into the paper in the future.
* I have thoroughly reviewed the additional experiments and explanations for questions 4, 5, and 6. I understand that the ultimate conclusion is that not only SNO but also CNO and PCANet all exhibit an aliasing error of 0, resulting in them being representation equivalent and leading to ReNO. As the primary goal of operator learning models is to excel in operator learning for accurate solution predictions, could you further explain the final implications of these three models being ReNOs? While I comprehend the theoretical analysis and experimental demonstrations, the ultimate question remains: do these findings translate to improved solution predictions in practical applications?
* Furthermore, I have examined the CNO paper (https://arxiv.org/pdf/2302.01178.pdf), where the concept of a 'representation equivalent neural operator' is also discussed to showcase CNO's fulfillment of this property. What sets your work apart in this context? I am curious if there have been many prior papers demonstrating and discussing their operator learning models as 'representation equivalent neural operators.'
* Lastly, I came across the paper at https://arxiv.org/pdf/2207.10241.pdf, which also appears to focus on fixing the basis while learning coefficients. Could all models that learn only coefficients be considered as ReNOs?
I truly appreciate your comprehensive responses, and I apologize for any inconvenience caused by my additional inquiries.
---
Reply to Comment 1.1.1:
Title: Reply to the Reviewer
Comment: At the outset, we thank the reviewer for their prompt feedback on our rebuttal. We take this opportunity to address further questions below.
1. We will certainly incorporate the changes that we have outlined in a CRV, if accepted.
2. Regarding the reviewer's question about *final implications of these three models being ReNOs* and *do these findings translate to improved solution predictions in practical applications?*, we start by stating that it is not obvious that possessing the ReNO property suffices to ensure improved approximation of operators, as the aliasing error (which directly corresponds to the ReNO property or lack of it) is just one component of the overall error; other components such as training, approximation and generalization errors might also play a role. That being said, we can make an argument for architectures where having the ReNO property does lead to improved performance over not possessing this property. A good example is provided by the CNO model of Raonic et al., where they present the results of an ablation study in Table 11, Page 56 and compare CNO, which is a ReNO, with CNO w/o filters, in which the filtering operations that make CNO a ReNO are ablated out, resulting in a model that is no longer a ReNO. We see from this table that for every single benchmark, the ReNO version of CNO significantly outperforms its non-ReNO version. Moreover, a *very essential* practical manifestation of the ReNO property is the ability of a ReNO to switch between resolutions without incurring aliasing errors. This feature is of great importance in practical problems where operators need to be evaluated on grids at different resolutions. In this case, just as we show in Figure 4 of our paper and of the uploaded rebuttal pdf, not having the ReNO property can lead to aliasing errors when the operator is evaluated on different grids. This behavior is also observed for real-world datasets in Raonic et al. (Figures 2 and 24), further highlighting the practical importance of possessing the ReNO property.
3. Regarding the reviewer's question about *what distinguishes our paper from Raonic et al. (CNO paper)*, we would like to point out that our paper proposes the general analytical framework of representation equivalence and derives the concept of aliasing error for operators, on which representation equivalence is based. CNO is just one, albeit very powerful, example of a ReNO, as there are others (SNO, PCA-net, DeepONet, etc.). The authors of Raonic et al. do not invent the concept of ReNOs, and they themselves make it very clear in their discussion (last paragraph of page 12 of Raonic et al.) that they only realize one example of a ReNO but do not propose the overall theoretical framework, which is the core contribution of our paper.
4. Regarding the reviewer's question about whether *Choi et al.'s model is a ReNO*, our preliminary reading of this paper suggests that its core model is a slight perturbation of SNO, but with a Legendre basis instead of a Fourier or Chebyshev basis; on top of it, the authors train their operator with a PINN-type loss. Given its close connection with SNO, we believe that this model is also a ReNO. In general, if a model only learns coefficients, then it could be a ReNO provided that the right input and output spaces and their corresponding frames are taken into consideration; please see the discussion about DeepONet in Appendix E of our paper.
We hope that we have addressed the reviewer's questions to their satisfaction and, if so, we would request the reviewer to kindly revise their assessment of our paper accordingly.
---
Summary: In this paper, the authors investigate the concept of neural operators in the context of operator learning architectures. They address the fundamental question of what defines a neural operator and propose the notion of Representation equivalent Neural Operators (ReNOs), which satisfy a systematic consistency between continuous and discrete representations, referred to as continuous-discrete equivalence (CDE).
The authors present a framework for analyzing existing operator learning architectures and determining whether they qualify as ReNOs. They emphasize the importance of enforcing CDE at each layer of the approximating operator to ensure genuine learning of the underlying operator, rather than just a discrete representation. Experimental analysis is conducted to validate the theoretical claims and demonstrate the practical implications of respecting representation equivalence.
Strengths: 1. Originality: The paper introduces a novel perspective on neural operators in the context of operator learning architectures by focusing on the concept of continuous-discrete equivalence. The notion of Representation equivalent Neural Operators (ReNOs) provides a fresh and rigorous definition that emphasizes the importance of maintaining consistency between continuous and discrete representations. This original approach extends the understanding of neural operators and adds a new dimension to the analysis of operator learning architectures.
2. Quality and Clarity: the framework is well-structured, and the definitions and mathematical formulations seem correct and rigorous.
3. Significance: The paper holds significant importance for the theoretical analysis of operator learning architectures.
Weaknesses: 1. Experimental Validation: Although the paper includes experimental analysis to support the theoretical claims, there is room for further elaboration and validation, e.g. incorporating more SOTA architectures and more benchmark datasets.
2. Practical Significance: Lack of a principled architecture instance or design. The paper does not propose a new architecture satisfying the ReNO definition, and does not provide a systematic way to design new instances of ReNO.
3. Readability: The paper lacks intuition and guiding remarks connecting the mathematical definitions and derivations, especially in Section 3.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Is SNO only suitable for periodic signals? If one uses a Chebyshev basis instead of a Fourier basis to implement SNO, does the analysis in the paper still hold?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors provide a thorough and insightful discussion about limitations and extensions in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We start by thanking the reviewer for their appreciation of the merits of our paper and their welcome suggestions to improve it. We address their detailed concerns below.
1. Regarding the reviewer's concerns about *experimental limitations, lack of SOTA architectures, benchmarks*, we start by saying that the main point of our paper was to provide a novel theoretical framework of representation equivalence, based on a non-trivial and novel definition of aliasing errors for general operators. In our view, this is the crucial contribution of this paper, and we apply it to analyze whether and when existing neural operator architectures are ReNOs. In the paper, we have already analyzed CNN and FNO (not ReNOs in general), as well as SNO and DeepONet (in SM Sec. E), which are ReNOs. Following your and the other reviewers' suggestions, we have also analyzed the PCA-net architecture of Bhattacharya et al. as a ReNO (see the general response to all reviewers as well as the attached 1-page pdf). We would also like to mention that recently, CNO (Arxiv:2302.01178) was also shown to be a ReNO. The authors of CNO show that this ReNO is state-of-the-art on learning operators corresponding to a variety of PDEs, already demonstrating the practical utility of the concept of ReNO. We will add further discussion about these architectures in a CRV, if accepted. Moreover, we have reported additional experiments in the 1-page pdf and described them in the general response to all reviewers. In particular, we show how FNO can be made into a ReNO (approximately) by minimizing the differentiable aliasing error (Eqn 5.1 of the main paper).
2. Regarding the reviewer's comment about a *systematic way to design ReNOs*, in addition to the reply in pt 1, we would like to emphasize that one of our key contributions is Eqn (3.6), which specifies how ReNOs act under a change of frame sequences. This is novel and allows for a principled way in which resolution can be varied for a ReNO, and it has been utilized in the design/evaluation of CNO, for instance. Moreover, we have added a possible algorithm for how a non-ReNO can be made into a ReNO by minimizing the aliasing error, without any significant loss of expressive power -- please check the general response to all reviewers as well as Figure 2 of the attached 1-page pdf.
3. Following the reviewer's excellent suggestion, we will include more intuition and hints regarding the mathematical aspects of Sections 2 and 3 in the CRV, if accepted. As an example, we plan to add a cartoon depiction of the ReNO framework that we have created in Figure 1 in the uploaded pdf for this rebuttal. Hopefully, this will make our underlying concepts clearer to a reader.
4. The work on SNO (Fanaskov et al) indeed already considers both choices of basis functions: Chebyshev and trigonometric polynomials. Thus, SNO is a ReNO with respect to a Chebyshev basis too.
We sincerely hope to have addressed your concerns, particularly about the practical utility and realization of ReNOs, satisfactorily and would kindly request the reviewer to update their assessment accordingly.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response, my questions have been well addressed.
---
Reply to Comment 1.1.1:
Title: Thanking the Reviewer
Comment: We thank the reviewer for their reply and are at their disposal if they have any further questions during the discussion period.
---
Summary: This paper investigates the aliasing error in operator learning. The work points out that many existing neural operators have aliasing error, so their error is higher on the super-resolution problem. To deal with the aliasing error, the paper proposes Representation equivalent Neural Operators (ReNO), which have no aliasing error. Numerical experiments show that ReNO models perform better on the super-resolution task.
Strengths: The paper investigates a very interesting problem -- the concept of resolution invariance in operator learning. It introduces frame theory to study the aliasing error in operator learning and it proposes a neural class of model called Representation equivalent Neural Operators (ReNO). The paper helps the community to better understand the key concept of neural operator.
Weaknesses: I think it's very important to distinguish the two different concepts:
### Discretization invariance vs representation equivalence
- Previous work defines a neural operator as an asymptotically discretization-invariant model that converges as the grid refines [1], in the sense that the neural operator model converges to the underlying solution operator in the continuum. In the case of a Fourier representation, the discretization-invariant FNO model has the capacity to learn an infinite number of Fourier modes, as created by the non-linear activation layer, which in the end will cause the aliasing error.
- In this work, the authors propose representation equivalent models that have no aliasing error. This is equivalent to saying that these models are prescribed in a fixed linear subspace of the underlying infinite-dimensional function space. In the case of a Fourier representation, the representation equivalent SNO model has a fixed number of Fourier basis functions, and a consequent irreducible approximation error.
While the newly proposed representation equivalent models have no aliasing error, they have limited expressive power, which prevents the model from extrapolating to unseen modes and higher frequencies. Since the ReNO models cannot use non-linear activation layers after the inverse spectral transform, the output space is restricted to the spectral space, and the model belongs to the class of linear reconstruction operators [2], which may require exponentially more parameters compared to a non-linear reconstruction operator like FNO.
In practice, the ReNO models probably perform better at super-resolution, while previous asymptotic discretization invariant models perform better at fixed resolution or mixed resolution training.
### Writing
That aside, the overall writing is very clear. The reviewer kindly suggests using a more informative title.
[1] Kovachki, Nikola, et al. "Neural operator: Learning maps between function spaces with applications to PDEs." Journal of Machine Learning Research 24.89 (2023): 1-97.
[2] Lanthaler, Samuel, et al. "Nonlinear reconstruction for operator learning of pdes with discontinuities." arXiv preprint arXiv:2210.01074 (2022).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The CNO work [1] describes a method that makes neural operator model using activation layers also aliasing-free. Specifically it apply upsampling and downsampling before and after band-limited activation layers. As claimed in Proposition 2.1, CNO are representation equivalent.
I wonder if other works such as FNO enjoy a similar property, since the IFFT is equivalent to upsampling and the FFT is equivalent to downsampling. FNO can restrict the number of Fourier modes of its representation space too, if equipped with a band-limited activation function. In Fourier space, FNO truncates the latent function to N Fourier modes, and the band-limited activation function introduces at most M new Fourier modes, so the highest number of Fourier modes is N*M. If N*M is less than the resolution, there will be no aliasing error.
If it is the case, does FNO also belong to ReNO?
[1] Raonić, Bogdan, et al. "Convolutional Neural Operators." arXiv preprint arXiv:2302.01178 (2023).
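The mode-counting argument in the question above can be checked numerically. The toy sketch below (an illustration we add here, not code from [1] or the submission) applies a pointwise quadratic nonlinearity to a single Fourier mode: the output acquires a mode at twice the input frequency, and on a grid too coarse to represent that mode, it folds back (aliases) onto a lower one.

```python
import numpy as np

k, fine, coarse = 3, 64, 7  # input mode, fine grid, coarse grid (resolves |n| <= 3)

# On a fine grid, the nonlinearity's new modes are fully resolved:
x_fine = np.arange(fine) / fine
g_fine = np.cos(2 * np.pi * k * x_fine) ** 2   # cos^2(t) = 1/2 + cos(2t)/2
spec = np.abs(np.fft.fft(g_fine)) / fine
print(spec[0], spec[2 * k])  # 0.5 and 0.25: squaring created mode 2k = 6

# On a 7-point grid, mode 6 is not representable and folds to 6 - 7 = -1:
x_c = np.arange(coarse) / coarse
g_c = np.cos(2 * np.pi * k * x_c) ** 2
aliased = 0.5 + 0.5 * np.cos(2 * np.pi * 1 * x_c)  # mode 6 seen as mode 1
print(np.max(np.abs(g_c - aliased)))  # ~1e-16: perfect aliasing onto mode 1
```

This is exactly the reviewer's point: a band-limited activation bounds the mode growth, so choosing the working resolution above the product bound removes the fold-back.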
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: It would be better to discuss the relationship between the aliasing error and the approximation power.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We start by thanking the reviewer for their appreciation of the merits of our paper and their welcome suggestions to improve it. We address their detailed concerns below.
1. Following the reviewer's welcome suggestion, we propose to change the title to the hopefully more informative *Representation equivalent Neural Operators: Frame Theory Meets Operator Learning*.
2. On the reviewer's question about the *link between discretization invariance (DI) and representation equivalence (RE)*, we start by pointing out that these concepts are indeed different and, to some extent, complementary. This difference is precisely what motivated the current paper. We believe that, in practice, the main utility of the notion of representation equivalence is that it precisely describes what is happening when changing resolutions, when compared to discretization invariance, which corresponds to a notion of asymptotic consistency in the infinite resolution limit. Using the notion of RE, we can interpret varying-resolution plots, such as those in Kovachki et al. or even in Fig. 4 of our paper. By contrast, the asymptotic consistency requirements of DI do not explain why the observations of these figures hold true, as one cannot rule out large errors between resolutions when consistency is only required in the infinite resolution limit. Given this context, we will provide a detailed discussion of DI vs. RE, emphasizing their complementary roles, in a CRV, if accepted.
3. Regarding the reviewer's questions about *linear vs non-linear methods in the context of operator learning*, we would like to start by clarifying that the results of Lanthaler et al. were specific to the case of scalar 1-D transport of a single discontinuity and a related 1-D Burgers example, where they claimed that FNO is better than DeepONet, as the approximation error of a linear reconstruction operator can be bounded below by the rate of decay of eigenvalues, which may be slow for some transport equations. This does not imply that methods with a linear reconstruction cannot be efficient, or that they have limited expressive power, for a very large class of PDEs, say elliptic and parabolic PDEs, which have a regularizing effect resulting in fast spectral decay. Moreover, the result of Lanthaler et al. only pertains to size efficiency vis-à-vis approximation error. In the case where model size $\ll$ resolution of the data, the computational bottleneck is the data resolution, which we have no control over, as it is given to us a priori. In this case, using a linear method does not necessarily increase complexity, as the data resolution dominates. Moreover, other sources of error, such as the aliasing error discussed extensively in our paper, can also affect the performance. Finally, as the reviewer has pointed out, the work of Raonic et al. on CNO is relevant here -- Table 1 in Raonic et al. shows that a ReNO like CNO can outperform FNO even on benchmarks with discontinuities (Transport Eqn.) and shocks (Compressible Euler equations), attesting to the abilities of methods based on linear reconstruction for approximating operators with slow spectral decay.
4. In reply to the reviewer's excellent question, it is indeed possible to modify the FNO architecture, in a similar spirit as is done in the CNO of Raonic et al., to respect the continuous-discrete equivalence. We will mention this fact in the CRV, if accepted. Here, we have explored another option for making FNO into a ReNO, which we have described in some detail in our general response to all reviewers above. Our approach consists of training FNO to simultaneously minimize the regression error as well as the discrete aliasing error (Eqn. 5.1 in our paper). The results in Figure 2 of the attached pdf show that the aliasing error is indeed minimized by this approach while the training error remains just as small. This experiment shows that working directly to minimize the aliasing error is also a viable approach for making a neural operator into a ReNO.
5. Regarding your comment about the *link between approximation and aliasing errors*, we would like to point out that there is no reason to believe that there is a link (or even a trade-off) between aliasing error and approximation power. Moreover, the experiment on FNO (see pt. 4 above) reported in Figure 2 of the uploaded pdf for this rebuttal further strengthens the argument that such a link does not exist. It is rather the case that the aliasing error is an artifact that can be eliminated without necessarily losing approximation power. Following your suggestion, we will include a detailed discussion of this issue in the CRV, if accepted.
We sincerely hope to have addressed your concerns, particularly about the differences between discretization invariance and representation equivalence, satisfactorily and would kindly request the reviewer to update their assessment accordingly.
---
Rebuttal Comment 1.1:
Title: Requesting the reviewer for feedback
Comment: Due to the imminent closure of the discussion period, we kindly request the reviewer to provide us with their valuable feedback on our rebuttal, and we are at their disposal to answer any further questions in this regard.
---
Rebuttal 1:
Rebuttal: At the outset, we would like to thank all five reviewers for their thorough and patient reading of our article. Their criticism and constructive suggestions will enable us to improve the quality of our article. If our paper is accepted, we will incorporate all the changes that we outline below in a camera-ready version (CRV) of our article. As many of the reviewers had questions about the experimental analysis (Sec 5), we provide a very detailed clarification below in this response. Individual comments of each reviewer are answered in the following. We have also added a 1-page pdf to supplement our argument.
Yours sincerely,
the Authors
### Clarification of Experimental Analysis Section
We wish to learn an unknown target operator $Q$ using neural operators. In this experiment, all neural operators (CNN, FNO and SNO) take as input pointwise evaluations on a grid, and are able to evaluate inputs at varying grid resolutions. The goal here is to check whether their outputs at the discrete level are consistent when varying the grid resolution, and to verify our theory, which -- informally -- tells us that the outputs are consistent if and only if the operator is a ReNO.
**Construction of the target operator:** $Q: H \to H$ with $H$ being the space of periodic and $K=30$-bandlimited functions on $[-1,1]$. A simple way of constructing such a mapping is by sampling input and output pairs in a random fashion. How does one sample such a function? Since $\Psi_K := \\{ d_K(. - x_k)\\}\_{k=-K, ..., K}$ constitutes a frame for $H$, with $d_K$ the Dirichlet kernel of order $K$ and $x_k=\frac{k}{2K+1}$,
any function $f \in H$ can be written as $f(x) = \sum_{k=-K}^{K} f(x_k) d_K(x-x_k)$. Thus, the discrete representation of $f$ simply corresponds to its point-wise evaluations on a grid, i.e. $\\{f(x_k)\\}_{k=-K, \dots, K}$. Note that for simplicity we have used and will use the same frame sequences for both the input and output space $H$. After sampling a finite number of such input-output function pairs, we wish to learn the mapping between them using our neural operators.
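As a quick numerical sanity check of this frame expansion (our illustrative sketch, using a period-1 convention and a small $K$ rather than the paper's $K=30$ on $[-1,1]$), one can verify that $f(x) = \sum_k f(x_k) d_K(x - x_k)$ exactly reproduces a bandlimited function from its point samples:

```python
import numpy as np

K = 4
N = 2 * K + 1
x_k = np.arange(N) / N  # grid points (period-1 convention)

def d_K(x):
    """Normalized Dirichlet kernel of order K: d_K(0) = 1, d_K(x_k) = 0 for k != 0."""
    x = np.asarray(x, dtype=float)
    den = N * np.sin(np.pi * x)
    safe = np.where(np.abs(den) < 1e-12, 1.0, den)  # guard the removable singularity
    return np.where(np.abs(den) < 1e-12, 1.0, np.sin(N * np.pi * x) / safe)

# A K-bandlimited test function (all modes <= K = 4):
f = lambda x: 0.3 + np.cos(2 * np.pi * 2 * x) - 0.5 * np.sin(2 * np.pi * 4 * x)
x_fine = np.linspace(0.0, 1.0, 333, endpoint=False)

# Synthesis from the point samples: f(x) = sum_k f(x_k) d_K(x - x_k)
f_rec = sum(f(xk) * d_K(x_fine - xk) for xk in x_k)
print(np.max(np.abs(f_rec - f(x_fine))))  # tiny (float precision): exact recovery
```

In other words, on this toy grid the frame coefficients really are just pointwise evaluations, exactly as stated above.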
**Training and Evaluation:** During training, we simply learn the neural operator $u_K: \mathbb{R}^{2K+1} \to \mathbb{R}^{2K+1}$ between frame coefficients associated to $\Psi_K$, which are the point-wise evaluations of the input-target functions that we had sampled in the previous step. In other words, we regress to the frame coefficients of the target function, with coefficients of the input function as input to the neural operator. Once training is over, we evaluate how the different neural operators behave when dealing with changing input and output frame sequences. The testing frame sequences here are $\Psi_M$ for different values of $M$, with associated grids of size $2M+1$ and discrete operators $u_M: \mathbb{R}^{2M+1} \to \mathbb{R}^{2M+1}$.
On the continuous level, SNO maps from $H$ into $H$. Thus, at the discrete level, $u_M$ corresponds to the following: it takes in $2M+1$ point values, synthesizes these to a function in $H$ which is then sampled on the training grid. Then, $u_K$ is applied to this input vector of length $2K+1$. Finally, the output vector is synthesized to a function in $H$ and then sampled on the evaluation grid of $2M+1$ points. On the other hand, evaluation of both CNN and FNO on different grids is straightforward.
Having defined training and evaluation frame sequences, the resulting aliasing errors (See Eqn (5.1)) can now be computed with the discrete aliasing map $$\epsilon(u_K, u_M): \mathbb{R}^{2K+1} \to \mathbb{R}^{2K+1}, \epsilon(u_K, u_M) = u_K - T_{\Phi_K}^\dagger \circ T_{\Phi_M} \circ u_M \circ T_{\Psi_M}^\dagger \circ T_{\Psi_K}$$
Intuitively, $T_{\Psi_M}^\dagger \circ T_{\Psi_K}: \mathbb{R}^{2K+1} \to \mathbb{R}^{2M+1}$ first transforms input data from the training $2K+1$ grid to the evaluation $2M+1$ grid, then the output is computed using the neural operator at resolution $M$, and finally $T_{\Phi_K}^\dagger \circ T_{\Phi_M}: \mathbb{R}^{2M+1} \to \mathbb{R}^{2K+1}$ transforms outputs back from the evaluation grid to the training grid, so that even though $M \neq K$ and the discrete outputs are of different sizes, they can be readily compared. This is precisely the procedure that led to the aliasing error plot of Fig. 4 of the main paper.
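This procedure can be sketched in a few lines of code (our toy illustration, not the paper's actual experiment): the change-of-frame maps $T_{\Psi_M}^\dagger \circ T_{\Psi_K}$ and $T_{\Phi_K}^\dagger \circ T_{\Phi_M}$ are realized as exact FFT-based resampling of bandlimited periodic signals, and the discrete aliasing map is evaluated for two stand-ins for $u$: a spectral multiplier acting only on modes $|n| \le K$ (representation equivalent, so zero aliasing at every resolution) and a pointwise nonlinearity (which creates modes beyond $K$ and therefore aliases).

```python
import numpy as np

K, M = 5, 12
nK, nM = 2 * K + 1, 2 * M + 1

def resample(v, n_out):
    """T_{Psi_out}^dagger T_{Psi_in}: exact regridding of samples of a
    bandlimited periodic signal (odd lengths), via FFT zero-pad/truncation."""
    n_in = len(v)
    F = np.fft.fft(v)
    half = min(n_in, n_out) // 2
    G = np.zeros(n_out, dtype=complex)
    G[:half + 1] = F[:half + 1]
    G[-half:] = F[-half:]
    return np.real(np.fft.ifft(G)) * (n_out / n_in)

h = np.exp(-np.arange(K + 1) / 3.0)  # a fixed toy spectral multiplier

def u_spectral(v):  # acts only on modes |n| <= K at any resolution: no aliasing
    n = len(v)
    F = np.fft.fft(v)
    G = np.zeros(n, dtype=complex)
    G[:K + 1] = F[:K + 1] * h
    G[-K:] = F[-K:] * h[1:][::-1]
    return np.real(np.fft.ifft(G))

def u_pointwise(v):  # pointwise nonlinearity creates modes > K: aliases
    return v ** 2

def aliasing(u, v):  # eps(u_K, u_M) = u_K(v) - downsample(u_M(upsample(v)))
    return u(v) - resample(u(resample(v, nM)), nK)

v = np.random.default_rng(0).normal(size=nK)  # any such vector is K-bandlimited
print(np.max(np.abs(aliasing(u_spectral, v))))   # tiny (zero up to float error)
print(np.max(np.abs(aliasing(u_pointwise, v))))  # O(1): nonzero aliasing
```

The grid sizes, the multiplier $h$, and the choice of squaring as the non-equivalent map are all assumptions made for illustration; the paper's experiment uses trained CNN/FNO/SNO models instead.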
**Additional Experimental Follow-up**: Following the suggestions of some reviewers about adding experiments, as well as questions about whether FNO can be made into a ReNO, we describe a simple method to do so. We repeat the same experiment as above with FNO, but add a term to the loss function that corresponds to the discrete aliasing error $\epsilon(u_K, u_M)$, defined above. Thus, we minimize both the regression error and the aliasing error simultaneously, with a parameter $\lambda$ weighting the aliasing error. To evaluate $\epsilon(u_K, u_M)$ during training, we randomly sample $M \in [K,2K]$, evaluate the differentiable aliasing error as above and backpropagate. The results are shown in Fig. 2 of the attached pdf. They demonstrate that as $\lambda$ increases, the aliasing error is indeed minimized (Fig. 2 (Left)) and is almost negligible for $\lambda =1$, not just in the range $M \in [K,2K]$ used to evaluate the aliasing error during training, but this also generalizes very well to $M > 2K$, not seen during training. Moreover, the training error (Fig. 2 (Right)) continues to be very small even when $\lambda$ is increased, showing that there is no trade-off between approximation error and aliasing error, at least for this experiment, and both can be made small simultaneously for FNO.
**Are any other neural operators ReNOs?**: Following suggestions from reviewers, we investigated PCA-net of Bhattacharya et al. and were able to show that it is a ReNO via the commutative diagram (Fig. 3 of the attached pdf); in Fig. 4, we also verify the ReNO property empirically in the above experiment for PCA-net and the recently introduced CNO of arXiv:2302.01178.
Pdf: /pdf/b47dfa7bc900bfd0dc352f6399695edc1243d377.pdf
(Source: NeurIPS_2023_submissions_huggingface, 2023)
---
Summary: The concept of “representation equivalence” is introduced in the context of operator learning. The definition amounts to requiring zero aliasing error from the model. It is shown whether several popular architectures satisfy the new definition.
Strengths: The paper is well written and easy to follow. The math is made accessible and good intuition is given. Claims are proven rigorously with detailed proofs.
Weaknesses: While I like the general direction of this paper, I do not appreciate catchy titles that obfuscate mathematical ideas for the purposes of a “wow” factor. Are Neural Operators Really Neural Operators? Yes, they are, because that’s how they were defined in [14]. Putting titles aside, I think that two ideas about what should constitute “operator learning” are being mixed. The point of [14] is to define families of architectures whose parameters are not tied to the discretization of the input or output functions. This makes the cost of a model (with cost defined as the size of the model) independent of discretization. The analogy is to explicit vs. implicit methods for solving time-dependent PDEs. For an explicit method the cost (defined here as the inverse size of the time step) increases as the spatial discretization increases due to CFL. On the other hand, for implicit methods, the time step and the spatial discretization are independent. From this point of view, neural operators are analogous to implicit methods while, say, CNNs are analogous to explicit methods. This makes CNNs a perfectly reasonable method for operator learning; however, the challenge which remains is how one increases the size of the CNN with the resolution in order to guarantee consistent results. This can then map into a proper cost-accuracy trade-off analysis between methods just as one can do for explicit vs implicit methods. Note that for this to be done, architectures need to be re-trained at each testing resolution. Indeed, what is shown in [14] is that if one re-trains the same model with higher and higher resolutions, the answer remains consistent. A consequence of the neural operator definition (parameters not tied to grid points) is that a trained architecture can be used to predict at multiple resolutions; however, one cannot, in general, have guarantees that the answer will stay consistent.
Indeed if the true operator transforms all modes of an input function in some way then how can one guarantee that a surrogate trained only on some of the modes will generalize to unseen modes? One can hope that the surrogate extrapolates correctly and that has empirically been shown on some problems (some in [14] but also many others), but it cannot be guaranteed any more than generalization abilities of deep neural networks can be guaranteed. From this point of view, neural operators do precisely what they are meant to do,
and I strongly think that the authors should be much more clear and explicit about the problem that their work tries to address because it is ultimately a different one.
I like the idea of quantifying and mitigating the aliasing error. It is a very important direction that has been almost unexplored in deep learning with many practical consequences. However, I do think that Definition 3.4 is much too strong to be useful. Indeed, in Remark 3.5, the authors show that, as a consequence of this definition, there is only one recipe for constructing ReNO architectures, namely eq. (3.6). Note the similarity of this equation to eq. (2.3), upon combining with eq. (2.2), in https://arxiv.org/pdf/2005.03180.pdf. I point this out because constructions of this kind yield only linear methods of approximation. It is well known in function approximation that linear methods are suboptimal (for example, https://people.math.sc.edu/devore/publications/NLACTA.pdf). While this has certainly not been studied nearly as deeply in the operator setting, similar empirical and theoretical results are beginning to emerge (for example, https://arxiv.org/pdf/2210.01074.pdf). I therefore struggle to see the practicality of such a strong definition. For what kinds of problems would this be useful?
Lastly, I think the numerical experiments are quite insufficient in demonstrating the practical usability of ReNO. Is aliasing error actually prevalent or is the main source of error, the approximation error? If the latter, should architecture design not be focused on reducing approximation as opposed to aliasing error? What is the trade-off between approximation and aliasing, and can an architecture which is not ReNO, still be modified to reduce aliasing error while retaining the same approximation power?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I do not understand the details of the numerical example. After having read the paper and the Appendix, I feel that I am missing something. I will describe the set-up to the best of my knowledge. I will denote by $f$ the input function and by $u$ the output, i.e. $Q(f)=u$. For simplicity, consider only two resolutions and denote by $x_1,\dots,x_m$ the first discretization and by $y_1,\dots,y_n$ the second, with the assumption that they are nested so that $\{x_j\}$ is a proper subset of $\{y_j\}$. Data is generated by randomly (from a Gaussian) sampling the point values $f(y_j)$ and $u(y_j)$. Three neural-network-based methods are then trained to regress on samples $f(x_j)$ and $u(x_j)$. They are all then tested on fitting $u(y_j)$ from $f(y_j)$ (I assume with samples from a test set). It is shown that the SNO achieves the same error on regressing $u(x_j)$ from $f(x_j)$ as it does on regressing $u(y_j)$ from $f(y_j)$, while the other two methods do not. I do not understand how this is possible. The SNO approximates $Q$ in the form $Q(f)(x) = \sum_{k=-d/2}^{d/2-1} g_k\big(\langle f, e^{i\cdot}\rangle, \dots, \langle f, e^{id\cdot}\rangle\big)\, e^{ikx}$, where $g$ is a neural network mapping $\mathbb{C}^d$ to $\mathbb{C}^d$, with $d$ some fixed positive integer. By passing from $f(x_j)$ to $f(y_j)$, the input to $g$ changes since the inner products now see $f(y_j)$, but the periodic bases stay the same. The values of $u(y_j)$ were picked at random and independently from those of $f(y_j)$, so how can this new information about $f$ possibly have an effect on the prediction of $u$? In general, this does not seem possible, since whenever $Q$ transforms all modes of $f$, any method that regresses $Q$ must extrapolate this transformation to new modes. While this may happen sometimes, it cannot be guaranteed. But this looks completely impossible in the current experiment, since the SNO must extrapolate randomness. There must be something in this experiment that I am missing; it would be helpful for the authors to clarify.
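As a concrete sketch of the parametrization I describe above (my own illustration; the function g below is an arbitrary stand-in, not the actual SNO network): whatever the input resolution, the output is synthesized from the same d modes, so no content in unseen modes can appear.

```python
import numpy as np

d = 8                                    # number of retained Fourier modes
rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))          # stand-in parameters for g

def sno_like(f_samples):
    """Analysis -> finite-dimensional nonlinear map -> synthesis with the SAME d modes."""
    n = len(f_samples)
    coeffs = np.fft.fft(f_samples)[:d] / n                 # first d Fourier coefficients
    g_out = np.tanh(W @ coeffs.real) + 1j * np.tanh(W @ coeffs.imag)  # stand-in for g
    x = np.arange(n) / n
    k = np.arange(d)
    # synthesis on the evaluation grid using the same d periodic basis functions
    return (g_out[None, :] * np.exp(2j * np.pi * np.outer(x, k))).sum(axis=1)

fine_out = sno_like(rng.standard_normal(64))   # evaluate on a finer grid
spec = np.abs(np.fft.fft(fine_out)) / 64       # all energy sits in the first d modes
```

The finer input changes the coefficients fed to g, but the output spectrum is always supported on the fixed d modes, which is why extrapolation to independently sampled high-resolution targets seems impossible.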
While the authors do not propose the SNO, it is the only method which they show satisfies ReNO, so I do wonder about it. It is well known that, for approximation in $L^2$, an optimal linear subspace is given by PCA. This is the motivation behind PCA-Net [3], which is precisely like the SNO but, instead of picking Fourier bases, computes bases for the inputs and outputs by doing PCA on the data. What is the motivation behind sticking with Fourier bases? In fact, since any method which is ReNO is a linear method of approximation, is PCA-Net not the optimal ReNO method (barring optimality of the finite-dimensional neural network)? Generally, if the ReNO definition is to be adopted as a standard way of doing operator learning, how do the authors envision new architectural designs? Remark 3.5 fully characterizes ReNO architectures, so it is hard for me to see a way forward.
The first example of Section 4 shows that discrete convolution layers are not ReNO, yet the work https://arxiv.org/pdf/2302.01178.pdf (CNO, Proposition 2.1) claims that they are. As far as I can tell, the way that ReNO is proved for CNO is by showing that discrete convolution maps w-bandlimited functions to w-bandlimited functions and then constructing a pointwise nonlinearity that does the same. I am not convinced by the CNO proof since nonlinearities used in practice (like ReLU or, more generally, learned MLPs) will not map bandlimited functions to bandlimited functions of any order and, so, even going through the upsampling-nonlinearity-downsampling procedure, will not give an aliasing error that is exactly zero (but it will mitigate the aliasing error as was first proposed in https://arxiv.org/pdf/2106.12423.pdf). Putting nonlinearities aside, however, one still needs that the discrete convolution is ReNO, so where do the differences lie?
How do the authors envision a similar theory for Banach spaces when the $\ell_2$ isomorphism breaks down?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately address potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive comments and remarks, which we address below:
1. We strongly acknowledge the significance of neural operators as defined in [14], and this is exactly why we intend to draw inspiration from the concept and develop it further. Although the reviewer may have interpreted our approach as a challenge to the Neural Operator (NO) concept, this was never our intention. On the contrary, our research aims to advance a better understanding of various aspects of NOs. For this reason, and to avoid any possible misunderstanding by intended readers, we propose to change the title to the more informative "Representation equivalent Neural Operators: Frame Theory Meets Operator Learning".
2. With this context, one of our goals was to analyze under which conditions NOs actually correspond to the mathematical notion of operators. Indeed, NOs satisfying just the discretization-invariance property may not necessarily correspond to operators, with possible negative implications such as the loss of *structures* like symmetries and locality that are only defined at the continuous level, which in turn can lead to poor generalization as well as inconsistencies across resolutions. Our aim here was to develop a theoretical framework that effectively addresses these aspects. We demonstrate that when seeking to establish a relationship between continuous and discrete, or between different resolutions, the concept of aliasing is the crux. As acknowledged by the reviewer, one key and highly non-trivial contribution of our paper is a rigorous definition of aliasing errors for operators (Eqn (3.1)), which is the first instance of this formula in the literature. Moreover, we believe that aliasing complements the consistency errors for NOs in [14] very well. We will highlight this contribution more prominently in the CRV, if accepted.
3. Regarding the *rigidity and utility of Def. 3.4*: as the reviewer suggests, it automatically leads to the possibility of not necessarily setting the aliasing error to zero, but of controlling it and making it as small as desired. This is a natural corollary of Def. 3.4, and we will discuss it at greater length in the CRV (see also the point below). At the same time, we politely disagree with the reviewer's contention that Def. 3.4 (Rem. 3.5) is too restrictive. It leaves room to choose frames at each layer, as well as the neural net model itself, i.e., the nonlinear layer $G_\ell$. Choosing the frames $\Psi_\ell$ and $\Psi_{\ell+1}$ for the input and output spaces only specifies how the corresponding discrete operator $g_\ell(\Psi_\ell,\Psi_{\ell+1})$ has to be chosen. This just states that moving from continuous to discrete representations has to happen in a precise and controlled manner, but it does not specify the architecture. The neural operators $G_\ell$ are to be learned and are not fixed by the ReNO framework, and there is considerable room for innovative architectural designs here.
4. Regarding *linear vs. nonlinear models in operator learning*, we start with the contention that frames based on multiresolution, such as wavelets, in fact allow one to go beyond linear approximation of signals to *nonlinear approximation* (in the sense of DeVore, Acta Num. 1998). With such frames, one can extract signal-adapted approximations that zoom in where necessary, instead of using a uniform discretization. Explicitly implementing such constructions is a topic for future research, but our framework of considering general frames is, in our opinion, the right setup in which to look for such architectures. Moreover, in this context, the reviewer defines the cost as model size. We believe that the overall computational complexity is more pertinent. This is a subtle but important point. In the case where model size $\ll$ resolution of the data, the computational bottleneck is the data resolution, which we have no control over, as it is given to us a priori. In this case, using a linear method does not necessarily increase complexity, putting into question the utility of the so-called *non-linear reconstruction* operators of Lanthaler et al. Clearly, more discussion on this point is necessary and will be included in a CRV, if accepted.
5. We apologize for the possible lack of clarity in our description of the numerical experiment in Sec. 5. We have added a detailed description in the general response to all reviewers above and request the reviewer to examine it and ask us further questions, if needed.
6. The reviewer's excellent questions about *trade-off between approximation and aliasing errors* and *can a non-ReNO NO be made to reduce aliasing* inspired us to perform the follow-up experiment described in the general response. As shown in figure 2 of the attached pdf, we were able to reduce the aliasing error of FNO significantly by minimizing the discrete aliasing error (Eqn 5.1) (Fig.2 Left) while keeping training errors small (Fig. 2 Right). In this example, there appears to be no trade-off between aliasing and approximation errors!
7. SNO is simply one example of a ReNO, and we have analyzed when DeepONet is a ReNO (SM E, lines 480-489). Following the reviewer's suggestion, we can show that PCA-Net is also a ReNO; see Fig. 3 (attached pdf) for the commutative diagram and Fig. 4 for experimental verification. However, we also observe that the resolution invariance of PCA-Net is very sensitive to slight perturbations, such as randomly removing even one data sample while evaluating at a different resolution (see the plot for PCAnetJitter in Fig. 4). In the CRV, we will investigate this issue carefully, particularly in the context of the question about the *optimality* of PCA-Net.
8. It is indeed possible, although technical, to derive a ReNO framework in Banach spaces using Banach frames (Casazza et al., 1999). We will add a discussion of this topic in the CRV.
We sincerely hope to have addressed your concerns satisfactorily and would kindly request the reviewer to update their assessment accordingly.
---
Rebuttal Comment 1.1:
Title: Points 1-4
Comment: Thank you to authors for the detailed response and added clarifications. I'll respond to each point separately:
1. Thank you. I like the new title.
2. I agree that there can be a loss of structures when moving from the continuous to the discrete. This can, and does, happen in numerical methods as well; for example, numerical dissipation can occur for operators that are known to be non-dissipative in the continuum. I am far from convinced that aliasing error is at the "crux" of this issue, however. I think this should be addressed in a problem-dependent way based on what is known about the operator of interest; there is no one-size-fits-all solution. In fact, the solution presented here preserves no structure of the operator to be learned but rather imposes consistency with respect to a predefined frame that is independent of the solution operator. Consider an example. Since basically everything done in practice deals with pointwise evaluations, we can take as the "right" frame the one defined by the sinc functions. So any ReNO architecture should essentially upsample by a sinc filter. Take Navier-Stokes (NS) as the operator of interest. If I introduce more fine scales in the initial condition (even just by sinc interpolation), I will end up with a solution which has a finer-scale turbulent structure. Surely if I take a low-resolution NS simulation and upsample it by a sinc filter, I will not end up with the high-resolution simulation solution. Yet a ReNO is constrained to do so. In fact, this remains true of the input data as well. Take a microstructure problem where the material interface is discontinuous. A sinc interpolation will not yield the right boundary structure. Even if I take as input samples from a Gaussian field, then higher-resolution samples correspond to a particular scaling of the high-frequency modes based on the eigenvalues of the covariance. These have nothing to do with the sinc-interpolation consistency being imposed here. In practice, why should I try to do "super-resolution" with a ReNO operator when I can just as well take my low-resolution solutions and upsample with a sinc filter?
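To make the sinc-upsampling point concrete, here is a small sketch (my own, for illustration) of band-limited interpolation via Fourier zero-padding: the fine-grid result passes exactly through the coarse samples but contains no frequency content beyond the coarse band, i.e. no new fine-scale structure appears.

```python
import numpy as np

def sinc_upsample(f_coarse, n_fine):
    """Band-limited (sinc) interpolation via spectral zero-padding."""
    m = len(f_coarse)
    h = m // 2
    F = np.fft.fft(f_coarse)
    G = np.zeros(n_fine, dtype=complex)
    G[:h] = F[:h]
    G[h] = F[h] / 2          # split the shared Nyquist mode symmetrically
    G[-h] = F[h] / 2
    G[-(h - 1):] = F[-(h - 1):]
    return np.real(np.fft.ifft(G)) * (n_fine / m)

rng = np.random.default_rng(1)
coarse = rng.standard_normal(16)
fine = sinc_upsample(coarse, 64)
spec = np.abs(np.fft.fft(fine)) / 64   # zero beyond the coarse band
```

The upsampled signal interpolates the coarse samples, but its spectrum is supported only on the original coarse modes, which is the sense in which such "super-resolution" cannot recover genuinely new scales.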
3. Right, but this is not currently in the paper. As written, a model is a ReNO if and only if the aliasing error is zero, and what is shown is that some models satisfy this while others do not. This can be quite misleading for readers, as it seems to imply that some models do "true" operator learning while others do not. Furthermore, I am generally confused about why it is imposed that every layer in an architecture needs to have zero aliasing error. At the end of the day, if you want zero aliasing error, then it is enough to simply make the last layer have the form (3.6). Am I missing something?
4. Linear vs. non-linear has nothing to do with wavelets or any particular choice of frame/dictionary. The question is what the structure of the range space of the learned operator is. If the range space is a linear subspace of the Hilbert space, then it is a linear method, and results concerning the Kolmogorov n-width apply (as pointed out in Lanthaler et al.). Based on Definition 3.4, the range space of any ReNO operator is the span of a fixed, finite number of functions belonging to the frame $\Psi_L$, which is therefore a linear subspace. A possible way of making a non-linear architecture is to adaptively pick, based on the input function to the operator, which functions from $\Psi_L$ are used in the final projection. However, as I understand it, Definition 3.4 does not allow for this, as the synthesis operator is fixed. Furthermore, I am not sure how this will interact with Definition 3.1, which defines aliasing relative to an operator rather than a function. There are other ways of making non-linear architectures, for example, applying a learned (or fixed) Nemitskii operator to the output of the last layer. This is how the FNO is made non-linear. But, again, Definition 3.4 does not allow for this. Another possibility is to make the last layer a non-linear integral kernel transform, for example the attention layer in transformers, but again, the definition rules this possibility out. I am not sure I understand the point about model size. While I agree that computational complexity is more important than the number of parameters, the two are correlated, and one can derive exact formulas as done in https://arxiv.org/abs/2203.13181. The number of parameters in most models used in practice is very large. Even the smallest models usually have at least a million parameters; this corresponds to data, in 2d, given at a 1000x1000 resolution, which seems quite high. I am not sure what is meant by "putting in question the utility" of the methods from Lanthaler et al.
Is the implication that if data in the experiments performed there was given at a much much higher resolution, the linear methods would be competitive with the non-linear ones? I have a hard time believing this. | null | null | null | null | null | null |
Adaptive Contextual Perception: How To Generalize To New Backgrounds and Ambiguous Objects | Accept (poster) | Summary: This paper presents an empirical study on out-of-distribution (OOD) generalization in two different settings: contexts that are either beneficial or irrelevant. The authors highlight an interesting finding that models performing well in one setting tend to struggle in the other. They analyze a population of models and demonstrate that those with more factorized representations and appropriate feature weighting achieve improved performance in handling both OOD settings. Additionally, the paper introduces novel data augmentation methods aimed at enhancing model generalization.
Strengths: One of the major strengths of this paper is its detailed analysis of generalization and the identification of factors that influence generalization performance.
Weaknesses: While the paper proposes a data augmentation method, the motivation behind its introduction lacks strong support.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In section 7, the authors describe the proposed data augmentation method without adequately establishing its connection to the motivation of “adaptively using context to recognize objects in new settings” due to the following reasons: (1) The weight of the training loss is set as fixed hyperparameters and lacks the adaptive nature required to effectively utilize context. (2) The data augmentation method does not provide clear instructions on how context can be effectively employed for object recognition.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The method requires additional annotation of the background classes and the segmentation of the foreground objects.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > “the authors describe the proposed data augmentation method without adequately establishing its connection to the motivation of “adaptively using context to recognize objects in new settings””
We establish the connection between our analysis and our proposed method in the introduction: “In order to encourage model factorization, we augment the data with random-background and background-only images, and we weight the corresponding objective terms to encourage appropriate feature weighting of the foreground and background information” (Lines 89-91). Our proposed method does not directly manipulate the representation geometry, but we reason that it encourages the models to represent the foreground and background in the right way (Lines 367-369).
> “The weight of the training loss is set as fixed hyperparameters and lacks the adaptive nature required to effectively utilize context.”
We want to clarify what “adaptive” means here. Adaptive means the model uses context when the object features are ambiguous but ignores the context when the object is easily recognizable (Lines 35-38), which is achievable with a fixed weight. This is evidenced by the improved performance in the two OOD settings, which is what our method achieves. It is not the case that making the weight variable would make the method "adaptive". The weight only determines how the evidence is combined, and we argue there is an optimal weight, depending on the domains, for optimal generalization (see Lines 338-340 & Figure 4b).
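To make explicit what a fixed-weight objective of this kind looks like, here is an illustrative sketch (not our actual training code; the weight names `w_rand` and `w_bg` are hypothetical): the total loss combines the standard term with terms on random-background and background-only augmented images.

```python
import numpy as np

def cross_entropy(logits, label):
    """Softmax cross-entropy for a single example."""
    z = logits - logits.max()                    # shift for numerical stability
    return -(z[label] - np.log(np.exp(z).sum()))

def augmented_loss(logits_orig, logits_randbg, logits_bgonly,
                   obj_label, bg_label, w_rand=0.5, w_bg=0.5):
    """Original image + random-background copy (same object label) +
    background-only image (background label), with fixed weights."""
    return (cross_entropy(logits_orig, obj_label)
            + w_rand * cross_entropy(logits_randbg, obj_label)
            + w_bg * cross_entropy(logits_bgonly, bg_label))

loss = augmented_loss(np.array([2.0, 0.1, -1.0]),   # logits on original image
                      np.array([1.0, 0.5, 0.0]),    # logits on random-background copy
                      np.array([0.2, 1.5, 0.3]),    # logits on background-only image
                      obj_label=0, bg_label=1)
```

Adjusting the fixed weights biases the model toward background-invariance or object-disambiguation; the adaptivity lies in the learned model's behavior, not in the weights themselves.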
> “The method requires additional annotation of the background classes and the segmentation of the foreground objects.”
We appreciate the concern, and we agree that it would be ideal for the method to not require any additional human annotations. Please refer to the General Response and pdf Table R1, where we add results using SegmentAnything pseudo-masks rather than our ground-truth masks on ColorObject and SceneObject. We are pleased to see performance improvements with SegmentAnything masks that are similar to our original results (though note we require heuristic cross-checking with the ground-truth masks, which we believe could be alleviated with more controllable segmentation methods like https://arxiv.org/abs/2308.00692). This result should provide a pathway for future work to obtain good foreground/background separation masks without the need for additional human annotation, enabling the application of our data augmentation method to improve generalization to background-invariance and object-disambiguation tasks.
---
Rebuttal Comment 1.1:
Title: Engage in the discussion with the authors
Comment: Dear Reviewer,
The author has provided responses to your questions and concerns. Could you please read their responses and ask any follow-up questions, if any?
Thank you! | Summary: This paper examines the influence of context on visual recognition capabilities. The authors distinguish two regimes, one where background context can help disambiguate objects and another one where the object information is orthogonal to the background information. The study shows that models can thrive in one regime but fail to extrapolate to the other regime. The paper then suggests that factorizing object and background information can help in out-of-distribution generalization and proposed data augmentation methods to help models generalize.
Strengths: The work is clearly written and presents a nice systematic examination of different aspects of how context can impact visual recognition.
The work nicely puts the emphasis on generalization to OOD, which is not always tested in other context-aware models.
Figure 3 is nice and clearly demonstrates the trade-off between invariance and the ability to use contextual features.
Weaknesses: No major weaknesses noted, but see questions below.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: The authors distinguish situations where context can help recognition from those where context is irrelevant and the desired output is invariance to contextual noise. Context can also hurt performance (see, e.g., Bomatter et al., ICCV 2021, and multiple studies on incongruent context conditions, summarized in Oliva and Torralba, TICS 2007).
The addition of feature weights and especially the transition to All features lead to small improvements in Table 1. All the other entries are clearly justified and critical for the model. Similar comments apply to Fig. 4b. The enhancement is small.
How is the Pareto frontier computed in Fig. 3, and why is it called a frontier if there are points that cross this line?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: There is minimal discussion of limitations in the current version.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > “Context can also hurt performance.”
We thank the reviewer for pointing out the relevant works. We believe that the referred works are consistent with ours and will add them to our references. It is true that context can also hurt performance for both models (as observed in our experiments too) and humans. Although the ideal behavior in the Background-Invariance setting is to ignore the background completely, we argue that there needs to be some assumption about what the test distributions will look like, in order to decide how sensitive the model should be to the backgrounds. This is because models and humans need to deal with both Background-Invariance and Object-Disambiguation settings. Our Pareto plots show that models should not be completely invariant to the background, because this would drastically lower performance in an object-disambiguation setting.
> “The addition of feature weights and especially the transition to All features lead to small improvements in Table 1… Similar comments apply to Fig. 4b. The enhancement is small.”
We want to clarify that in Table 1, the row for Feature Weighting metrics does not include Factorization metrics. That is, the Factor. row does not include Feature Weighting in the regression, nor does the Feature Weighting row include the Factorization metric in the regression. In other words, feature factorization and weighting are each strong predictors on their own. We apologize for any confusion in the table formatting and will clarify this further. This said, the transition to the All row (final row) does show that these two variables are fairly collinear, with each providing a relatively small amount of information that the other does not contain. However, the first takeaway from this table is the very large improvement over the ID baseline, resulting in high R^2 values above 0.85. Past work has not been able to obtain _any improvement_ over ID, while we obtain quite large improvements in R^2.
In Figure 4, we wanted to show causal evidence for our hypotheses about OOD generalization, which has not been done before, to our knowledge. We believe the changes in accuracy of up to 3 points are meaningful, especially when considered alongside the high R^2 values of our regressions. The results are also all statistically significant (Lines 333-337), so we believe this experiment supports the overall argument for the causal importance of these features.
> “How is the pareto frontier computed in Fig. 3 and why is it called a frontier if there are points that can trespass this line?”
We fit a polynomial function to the outer edge of all the data points to obtain the Pareto frontier. Since the data is noisy, some points cross this line. This is not the ground-truth frontier, which no points should pass through. Considering the large size of the model population, we believe the ground-truth frontier would have a similar shape to our approximate frontier, showing a similar kind of trade-off. We can add these details to the final paper.
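A minimal sketch of the fitting procedure we describe (illustrative synthetic data, not our actual model population): take, within bins of one accuracy, the best value of the other accuracy, and fit a polynomial to those edge points.

```python
import numpy as np

rng = np.random.default_rng(0)
dis = rng.uniform(0.4, 0.9, 500)                           # Object-Disambiguation accuracy
inv = np.clip(1.2 - dis + rng.normal(0, 0.05, 500), 0, 1)  # noisy trade-off (synthetic)

# Outer edge: best invariance accuracy within each disambiguation-accuracy bin.
bins = np.linspace(0.4, 0.9, 11)
idx = np.digitize(dis, bins)
edge_x, edge_y = [], []
for b in range(1, len(bins)):
    mask = idx == b
    if mask.any():
        j = np.argmax(inv[mask])
        edge_x.append(dis[mask][j])
        edge_y.append(inv[mask][j])

coef = np.polyfit(edge_x, edge_y, deg=2)   # fitted approximate Pareto frontier
```

Because the edge points themselves are noisy maxima, some models can land slightly above the fitted curve, which is why the plotted frontier is an approximation rather than a hard bound.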
---
Rebuttal Comment 1.1:
Title: Engage in the discussion with the authors
Comment: Dear Reviewer,
The author has provided responses to your questions and concerns. Could you please read their responses and ask any follow-up questions, if any?
Thank you! | Summary: This work investigates how visual models leverage background and foreground information for out-of-distribution (OOD) generalization. The authors trained a large number of models and evaluate their performance in two OOD settings. They find that there is a tradeoff for the models in these two OOD settings as they need to balance how much foreground and background are separately used during the recognition task. Multiple analyses are further conducted to better understand this tradeoff. The authors find that “factorized representations and appropriate feature weighting are more successful in handling” the OOD tests. They also present experiments supporting the causal influence of these factors. Finally, they propose a new method to train the models through adding more background and foreground related augmentations. This method yields a better model compared to existing methods.
Strengths: The paper is well written. The logic is clear. The results are clearly presented. I enjoy reading the paper.
The main idea in this paper is analyzing the models’ performance in two separate but relevant OOD tests: OBJECT-DISAMBIGUATION and BACKGROUND-INVARIANCE, where the background information is either beneficial or irrelevant. This idea is also innovative. To thoroughly investigate the performance of different algorithms, the authors train a lot of models, which makes the results general.
The regression and probing analysis is also neat. After leveraging the results from this analysis, the authors also create a new method that yields a better model.
Weaknesses: My biggest worry is that the results reported in this work just reflect models trained on a small amount of data and cannot generalize to models trained with much more data. The training set of the models contains only 16,000 images, not to mention that many of these images are generated from the same object. So these images contain very limited diversity across objects and natural backgrounds. State-of-the-art recognition models are trained with at least two orders of magnitude more images, not to mention very recent models trained on almost billion-scale image sets (like Segment-Anything). Is the trade-off observed in this work just a result of not enough training data? With more data, maybe the model can achieve optimal performance on both benchmarks. Then how would the findings in this paper generalize to more realistic settings that are used in actual applications?
In addition to this worry, the proposed new method is also over-engineered for the constructed benchmarks. The new method simply augments the data with background and foreground operations, with clear implications for benefiting the constructed benchmarks. How would this augmentation benefit other OOD settings? Is this augmentation useful when training on more images?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See comments in the weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > “Is the tradeoff observed in this work just a result due to not enough training data? When there are more data, maybe the model can achieve the optimal performance in both benchmarks.”
The two standard settings (ColorObject and SceneObject) we test on for fair comparison with prior works are indeed small. While we aren’t able to conduct experiments with data of the SAM size (1B images) due to academic resource constraints, we are able to produce “scaling laws” results in the form of training size ablations. Shown in the General Response and pdf Fig. R1, we find a clear trade-off between the two OOD settings regardless of how much training data is available to the model. Note the frontier moves “outward” as the training size increases, since models become more accurate on both tests. We expect the same trade-off to hold with more training data with the frontiers moving further outwards.
> “the proposed new method is also over-engineered for the constructed benchmarks… How would this augmentation benefit other OOD settings?”
We test our method on two standard datasets with metrics in three domains: in-distribution, Background-Invariance, and Object-Disambiguation. Past published work has proposed new methods with only these datasets, using only one of our two OOD tests (including the strongest baseline, Fish [39]). We believe this is a good proof of concept for the new data augmentation method. It would be interesting to test other domain shifts, but we consider that out of scope for our paper.
> “Is this augmentation useful for training on more images?”
While we are not able to conduct large scale experiments on larger sizes than 16000 training points, we provide an ablation of our data augmentation method across different training sizes in the General Response and pdf Table R1. We do observe that the boost to OOD performance from the data augmentation method is larger when there is less training data available, although for now we cannot say for certain that the effect would entirely dissipate with larger training sizes, especially given the fundamental mismatch between the in-distribution training data and the Background-Invariance and Object-Disambiguation tests. Moreover, we believe one additional benefit of the data augmentation method is that it allows one to bias the model toward higher performance on either the Background-Invariance _or_ Object-Disambiguation tests, by adjusting the weight on the objective terms in the loss. This is a new benefit of the method that would enable one to tailor their model to perform better in one setting or the other, based on an expectation of how the test-time distribution will look for a model in deployment.
---
Rebuttal Comment 1.1:
Comment: Thank you for your comment! PDFs for all papers were temporarily unavailable, presumably due to some OpenReview system issues. Our PDF is now accessible again and can be found at the bottom of our general response (https://openreview.net/forum?id=7JuReDmGSL&noteId=pLEPyZJhLW). Please let us know if you have any further questions. | Summary: This work investigates how vision models use background information in various contexts and finds that models that are more invariant to backgrounds are less able to use the background to disambiguate. A new objective function and augmentations are proposed to control the balance between ignoring the background and using it for disambiguation. It is hypothesized that in order to make adaptive use of background as in biological vision, computer vision models must have factorized (orthogonal) object vs. background representations.
Strengths: 1. Clear presentation and motivation.
2. While many works have investigated the role of background vs foreground, much of the literature has focused on over-reliance on the background leading to spurious correlations, though background can also disambiguate. While this is still "known to the field", this systematic study can still be helpful to the community.
Weaknesses: 1. There is some re-treading of scientific points from Xiao et al, 2020.
2. The architecture is limited to Wide ResNet-28 and MLP for synthetic datasets. Transformers are known to make adaptive use of context (and are an important, broadly applicable architecture, arguably more so than ResNets). How do these findings change when using vision transformers is an important unanswered question and a limitation. Also unknown is how the findings change with respect to model scale.
3. The augmentations proposed are possible on toy or synthetic datasets, but the need for dense annotations like segmentation masks for foreground objects makes the approach not really practical - especially for more complex scenes, with many objects and scales. If the authors really want to argue that the augmentation they propose is a new *methodological contribution*, then they will need to demonstrate its feasibility on more practical datasets with more complex scenes. If the authors could please elaborate on this in the response, that would be great.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > “There is some re-treading of scientific points from Xiao et al, 2020.”
Xiao et al. 2020 focus on showing that models over-rely on background. In fact, they argue that models rely on background too much, writing that models “exploit background correlations” and hoping for models to exhibit “robustness to misleading background signals.” We argue instead that models need both foreground and background by proposing a novel mechanism (feature factorization + appropriate weighting), which is supported by our regression and causal analysis and augmentation method.
> “How do these findings change when using vision transformers is an important unanswered question and a limitation.”
Thanks for the suggestion. We have extended our results to include Vision Transformers (Swin), focusing first on the tradeoff between generalization to background-invariance and object-disambiguation tests. As shown in the General Response and pdf Fig. R1, we are pleased to find that Transformers show the same tradeoff in generalization as ResNets. We will add these additional results in the final version.
> “need to demonstrate its feasibility on more practical datasets with more complex scenes.”
We appreciate the suggestion, and we agree that it would be ideal to showcase the data augmentation method on more practical datasets. Please refer to the General Response and pdf Table R1, where we add results using Segment Anything masks rather than our ground-truth masks on ColorObject and SceneObject. We are pleased to see performance improvements with SegmentAnything masks that are similar to our original results (though note we require heuristic cross-checking with the ground-truth masks, which we believe could be alleviated with more controllable segmentation methods like https://arxiv.org/abs/2308.00692). Since the Segment Anything Model obtains impressive segmentation results with complex naturalistic images, we are hopeful that future work can build on our method to improve how SOTA models generalize across background-invariance and object-disambiguation tests with naturalistic data.
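To make the mask-based augmentation concrete, here is a minimal sketch of background swapping with a binary foreground mask (the array shapes and function name are hypothetical; the actual augmentation pipeline and SAM mask post-processing differ in detail):

```python
import numpy as np

def swap_background(image, mask, new_background):
    """Composite the masked foreground onto a different background.

    image, new_background: (H, W, C) float arrays
    mask: (H, W) binary array, 1 = foreground pixel
    (Sketch only; a real pipeline would use predicted masks,
    e.g. from Segment Anything, and handle soft mask edges.)
    """
    m = mask[..., None].astype(image.dtype)  # broadcast over channels
    return m * image + (1.0 - m) * new_background
```

A usage note: pairing each foreground with backgrounds drawn from other classes (or from the same class) is what lets the objective weighting steer the model toward one OOD test or the other.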
---
Rebuttal Comment 1.1:
Title: Engage in the discussion with the authors
Comment: Dear Reviewer,
The author has provided responses to your questions and concerns. Could you please read their responses and ask any follow-up questions, if any?
Thank you! | Rebuttal 1:
Rebuttal: We thank the reviewers for their time and effort in reading our paper. We are glad to see that reviewers view the paper to be “helpful to the community” (D3vS) and “innovative” (WSbu), with a “systematic examination” (FMxu) and “detailed analysis…of factors that influence generalization performance” (Qyp3). In our individual responses to each review, we try to address any noted questions about the paper.
Here, we wish to highlight a few extensions of our experiments with additional training sizes, model architectures, and SAM-generated image masks, all of which are consistent with our original results and demonstrate robustness across additional experiment setups.
**Training Size Ablations and Transformer Results.** First, to strengthen our analysis results, we provide OOD Pareto frontier plots for (a) models with varying amounts of training data and (b) Vision Transformers (Swin) in addition to ResNets. See Fig. R1 in the response pdf for these results. Our goal here is to demonstrate that models must trade off between background-invariance and object-disambiguation performance in general, regardless of training data scale or model architecture. Our results are consistent with our original experiments using ResNet and 16k training points: all models show a clear tradeoff between generalization to background-invariance and object-disambiguation tests.
**Segment Anything Model Experiments.** Second, to provide a stronger proof of concept for our data augmentation method, we present a new heuristic masking method based on the Segment-Anything model. We show that this model can automatically segment images into foreground and background, which enables our data augmentation method to apply generically without the need for ground-truth foreground/background masks. Using pseudo-masks generated by Segment-Anything, we achieve comparable results to using the ground-truth masks (see Table R1). In addition, new segmentation datasets like SA-1B provide billions of segmentation masks for tens of millions of naturalistic images. Thus we believe our method can provide fertile ground for future work to explore with Segment-Anything and more complex scenes.
Pdf: /pdf/552bc70145db720d2b081685b4077e2058cbd8ab.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Long-Term Fairness with Unknown Dynamics | Accept (poster) | Summary: This paper examines the issue of enduring fairness within a dynamically responsive population, framing it as a reinforcement learning (RL) problem with certain constraints. In this scenario, the state distribution is subject to change dynamically based on the actions of the deployed agent. To address this problem, the authors introduce L-UCBFair and R-TD3. L-UCBFair assumes a simplified linear Markov Decision Process, while R-TD3 incorporates existing deep reinforcement learning methods for a more general setting. According to the numerical experiments conducted, the proposed method consistently surpasses the performance of baseline methods. The RL approach enables an agent to learn how to forego short-term utility to steer the system towards more desirable equilibrium states.
Strengths: **Originality**:
The authors uniquely approach the fairness problem in machine learning within a context where the underlying state distribution dynamically evolves based on the agent's decision. This novel setup extends the static fairness issue found in the existing literature. The dynamic viewpoint introduces the concept of cumulative reward and distortion (cumulative fairness disparity), aptly fitting the method of reinforcement learning. The experiments carried out distinctly highlight the benefits of incorporating a reinforcement learning method, particularly in cases of changing underlying environments.
**Quality and Clarity**:
The motivation behind this work is cogently presented in the introduction, while the related works section explores a wide range of existing literature, discussing both fairness issues in non-stationary settings and safe reinforcement learning. The demonstration of numerical experiments is lucidly laid out.
**Significance**:
The suggested L-UCBFair and R-TD3 methods surpass the performance of baseline methods. The concept of foregoing short-term utility to guide the system towards more optimal equilibrium states is effectively demonstrated. Furthermore, the authors present a thorough theoretical analysis for the proposed L-UCBFair under the linear Markov Decision Process assumption.
Weaknesses: **Weakness 1**:
The depiction of Algorithm 1 could be clearer. There are several terms defined in the paper that aren't elaborated upon, making comprehension more challenging. Enhancing the clarity in Section 3.1.1 could greatly improve overall readability.
**Weakness 2**:
The authors introduce R-TD3 as a broader reinforcement learning method, not reliant on the linear MDP assumption. Could the authors shed more light on the advantages or enhancements that R-TD3 offers in comparison to the foundational method, L-UCBFair?
**Weakness 3**:
In the numerical experiments, the authors have designed synthetic simulation environments based on straightforward datasets. However, in the existing literature, public simulators dedicated to long-term simulation, such as "Fairness is not static: Deeper understanding of long term fairness via simulation studies", are readily available. Given that this has been mentioned in the related works, could the authors assess their proposed L-UCBFair and R-TD3 in these more complex simulation environments? For instance, situations involving lending, attention allocation, or college admissions could provide further insights.
**Weakness 4**:
Reinforcement learning utilizes long-term planning to maximize rewards and diminish distortion, whereas myopic-fair methods – the basis of all baseline methods used in the paper – only address a one-step optimization issue. Given this, the result presented in Figure 1 seems rather self-evident as one would expect a long-term planning-based RL method to surpass more short-sighted methods. Could the authors compare their proposed method with more robust baseline methods that also consider underlying dynamics?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: The issue of long-term fairness can become quite resource-intensive when the agent interacts with the real-world scenarios. Within the existing body of reinforcement learning literature, model-based reinforcement learning methods have been devised to enhance sample efficiency. Could the authors discuss the potential of developing a model-based reinforcement learning approach to tackle the long-term fairness problem addressed in this paper?
Moreover, several queries were raised in the previously mentioned weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors followed the PaperChecklist by providing a discussion on the current limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Depiction of Algorithm 1 could be clearer. Same for Section 3.1.1.
We express our gratitude for this advice. In the event of acceptance, we intend to enhance the presentation of Algorithm 1 and Section 3.1.1 in the final version. To provide greater clarity, our planned revisions include:
1. Relocating relatively less crucial parameters, such as $\eta$ and $\chi$, from the algorithm itself. These parameters will be moved to a new section in the Appendix, where we will elaborate on their specifics.
2. Segregating the *policy search* details from the primary structure of Algorithm 1. We will introduce a new sub-function dedicated to this aspect. This restructuring aims to maintain the clarity and comprehensibility of the main algorithm. (A potential structure can be provided in .md format.)
3. Enhancing the alignment between Section 3.1.1 and Algorithm 1 by incorporating annotations in the algorithm using markers such as #LSVI-UCB, #Adaptive Search Policy, and #Dual Update, similar to the bolded text in Section 3.1.1. Furthermore, when introducing concepts like #LSVI-UCB, #Adaptive Search Policy, and #Dual Update in Section 3.1.1, we will explicitly reference the relevant lines in the algorithm to facilitate readers' better understanding of the content.
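To make the #Dual Update step referenced above concrete, a generic projected dual-ascent update might look like the following (an illustrative sketch only; the parameter names echo the paper's $\eta$ and $\chi$, but the code is not the algorithm's exact implementation):

```python
def dual_update(lam, est_disparity, c, eta, chi):
    """Projected gradient ascent on the dual variable.

    lam:  current dual variable (penalty on fairness violation)
    est_disparity: estimated disparity for the episode
    c:    disparity budget; eta: dual step size
    chi:  projection bound keeping lam in [0, chi]
    (Illustrative sketch, not the exact L-UCBFair update.)
    """
    lam = lam + eta * (est_disparity - c)
    return min(max(lam, 0.0), chi)  # project onto [0, chi]
```

The projection bound is what keeps the Lagrangian well-behaved when the constraint is temporarily violated during exploration.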
> The authors introduce R-TD3 as a broader reinforcement learning method, not reliant on the linear MDP assumption. Could the authors shed more light on the advantages or enhancements that R-TD3 offers in comparison to the foundational method, L-UCBFair?
The major advantage of R-TD3 over L-UCBFair is that state-of-the-art deep learning techniques may be used much like R-TD3 (i.e., with a Lagrangian objective subject to scheduled dual variable), potentially providing superior performance, albeit without strong theoretical guarantees. Unlike L-UCBFair, which relies on the linear MDP assumption and thus only trains the weights of the fully connected layer, R-TD3 does not have a requirement for a specific network structure. This makes it suitable for larger and more efficient neural networks, leading to powerful real-world applications involving complicated environments. However, unlike L-UCBFair, R-TD3, as a deep learning framework, does not offer regret guarantees.
> … public simulators dedicated to long-term simulation such as “Fairness is not static: Deeper understanding of long term fairness via simulation studies”, are readily available…, could the authors assess their proposed L-UCBFair and R-TD3 in these more complex simulation environments? For instance, situations involving lending, attention allocation, or college admissions could provide further insights.
While the cited package implements the model proposed by Liu’s “Delayed Impact” paper, it uses discrete action spaces (e.g., only seven possible credit score thresholds), in keeping with the capabilities of more established algorithms. To more appropriately demonstrate our work in addressing continuous action spaces, we required a different experimental environment. We agree that additional experimental evaluation may provide further insights, and hope that future work furthers both algorithmic development and experimental settings for our formulation of long-term fairness with continuous state and action spaces.
> Reinforcement learning utilizes long-term planning to maximize rewards and diminish distortion, whereas myopic-fair methods – the basis of all baseline methods used in the paper – only address a one-step optimization issue. Given this, the result presented in Figure 1 seems rather self-evident as one would expect a long-term planning-based RL method to surpass more short-sighted methods.
We agree that a long-term planning-based RL method should surpass more short-sighted methods in this problem domain (and confirm through experiment that this is so!), and this is why a formulation of long-term fairness, allowing such algorithms to be brought to bear in this domain, is so critical. The state of the art in algorithmic fairness is, sadly, based on myopic classifiers subject to fairness constraints (as listed in Section 1.1). The use of learning techniques to anticipate dynamic population responses to deployed policy and guarantee fairness is nascent.
It may be presumed that the difficulty of treating continuous action spaces, which are common in real-world settings dealing with fairness constraints affecting human populations, has been a primary reason for the delayed treatment of online or RL learning methods in algorithmic fairness. Our primary contributions are thus an important step for bringing long-term fairness into focus within this research community.
> Could the authors compare their proposed method with more robust baseline methods that also consider underlying dynamics?
We take as a primary assumption that the dynamics with which the population reacts to deployed policy is unknown a priori. When dynamics are known, the field of optimal control may be brought to bear, but different dynamics can require different control policies, and this would have to be analyzed on a case-by-case basis. In general, it is true that knowledge of the dynamics can improve performance in the setting of long-term fairness, but at this early stage, we seek generality.
> Model-based RL can enhance sample efficiency. What is the potential of model-based methods?
Absolutely, model-based RL can indeed improve performance in specific settings. In particular, when knowledge of the dynamics is known a priori, the class of models can be restricted by this knowledge to improve sample efficiency and performance. Our purpose in this paper has been to provide a general treatment of long-term fairness with minimal assumptions about the specific transition dynamics (i.e., using a linear-MDP assumption), but there is ample room for future work (especially when specialized to specific settings) to adopt model-based RL or approaches deriving from control (e.g., data-driven control). | Summary: This paper considers a binary classification task with long-term fairness. The authors formulate the problem as a constrained MDP where the classifier is an action, the distribution is the state, and the distribution shifts in reaction to an action. The problem aims to optimize the expected long-term reward under the constraints for long-term fairness in expectation. They develop L-UCBFair, which is a model-free algorithm to guarantee long-term fairness with continuous state and action spaces. Under the linear MDP assumption, they prove sublinear regret and disparity bounds with high probability. Finally, experimental results are given to demonstrate the performance of the proposed algorithm.
Strengths: I believe the paper has the following strong points.
+ The paper formulates fairness constrained classification problem as a constrained MDP, which is new to me.
+ The paper applies linear MDP algorithms and TD3 to the considered problem and prove the regret bound with linear MDP assumption.
Weaknesses: The paper can still be improved in the following aspects.
- The paper seems to directly apply LSVI-UCB and TD3 algorithms to solve the considered constrained MDP problem. It would be much better if the authors could discuss how the algorithm design and proof techniques differ from the original works of Jin et al. (2020) and Fujimoto et al. (2018).
- The paper formulates the fairness-constrained classification problem as a MDP problem, but does not give enough motivation.
- It usually requires a lot of exploration for the constrained RL to achieve low disparity, so I am afraid that the fairness violation can be very high at early stages.
- The paper is limited in addressing the complexity problem of using constrained RL for this problem. The action space (the classifier) can be very large, which can cause very high complexity.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. The paper considers a dynamic where the new distribution is affected by the previous distribution and classifier. Can the authors give some concrete motivation examples for such distribution shift?
2. Can the authors discuss the challenges of solving the MDP with fairness constraints? What are the differences comparing with standard constrained MDP problems?
3. How does the complexity of proposed algorithm rely on the space of state and action?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations are listed by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The paper seems to directly apply LSVI-UCB and TD3 algorithms... It would be much better if the authors can discuss how the algorithm design and proof techniques are different from original works in (Jin et al. (2020)) and Fujimoto et al. (2018).
The L-UCBFair differs from LSVI-UCB (Jin et al. 2020) in important respects:
1. LSVI-UCB constitutes an RL framework devoid of constraints. The objective in Jin et al.’s paper is to maximize cumulative utility, whereas ours is to optimize utility subject to a fairness constraint. Due to this constraint, we must monitor the value function associated with cumulative fairness violations (i.e., “distortion” in our paper) and derive bounds on the associated regret. Deviating from the cited work, we employ a primal-dual approach and formulate a policy that takes both the objective and fairness constraint into account (refer to Algorithm 1, SM(...)). The resulting proof establishes regret and distortion bounds of $O(H^2\sqrt{d^3K})$, both with high probability.
2. LSVI-UCB exclusively applies to discrete action spaces. Without the ability to utilize continuously varying policies, it may be impossible to satisfy fairness conditions, just as it may be impossible to find the roots of an arbitrary polynomial when restricted to integer-valued inputs. For dealing with fairness, our extension of LSVI-UCB with an adaptive search policy is necessary.
Similarly, R-TD3 differs in an important way from direct application of TD3 (Fujimoto et al. 2018):
1. The objective on which we use TD3 is a Lagrangian relaxation of the constrained optimization problem with a schedule for the dual variable, where the schedule introduces an additional hyperparameter.
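A hedged sketch of what such a scheduled Lagrangian relaxation could look like in reward shaping (the geometric schedule and all names here are illustrative assumptions, not R-TD3's exact implementation):

```python
def shaped_reward(reward, fairness_violation, step, lam0=0.1, growth=1.001):
    """Lagrangian relaxation: subtract a scheduled penalty on the
    fairness violation from the utility reward. The dual variable
    follows a fixed schedule (here, geometric growth) rather than
    being learned, which is the extra hyperparameter noted above.
    (Illustrative sketch only.)
    """
    lam = lam0 * (growth ** step)
    return reward - lam * fairness_violation
```

The shaped scalar can then be handed to an off-the-shelf TD3 implementation unchanged, which is what makes the deep-RL variant easy to pair with state-of-the-art architectures.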
> The paper formulates the fairness-constrained classification problem as a MDP problem, but does not give enough motivation… The paper considers a dynamic where the new distribution is affected by the previous distribution and classifier. Can the authors give some concrete motivation examples for such a distribution shift?
*Regarding motivation and examples:* Deployment bias is a frequent occurrence in machine learning systems where the model interacts frequently with users and the model's outputs have a high impact on users' well-being (e.g., financial decision-making) and preferences (e.g., recommendation system).
Imagine two distinct user groups in the loan approval setting, namely Group $A$ and Group $B$. Let's assume that the deployed model has a much higher approval rate for applicants from Group $A$, and that Group $A$ is in a better position to put loans to productive use through investment. Over time, as users from $A$ receive more financial support, they can further improve their financial position (e.g., by receiving education, by starting their own business) and are likely to become even more qualified for loans in the future. On the other hand, users from Group $B$ lack the opportunity to grow financially and become trapped in a low socioeconomic status, hurting their chances for future loan applications. In this example, the policy (higher approval rates for Group $A$) and the state (Group $A$’s ability to use loans productively) jointly lead to a new state where Group $A$ is even more able to use loans productively than Group $B$, which has become trapped in a state with few prospects and little opportunity for advancement.
*Regarding limitations:* The MDP formulation is quite general; the only restriction it imposes in our use case is an assumption of observability (i.e., that no state variables exist other than the distribution itself). While this assumption is not strictly necessary, it does simplify the formulation and eliminates the need for considering additional state information.
> It usually requires a lot of exploration for the constrained RL to achieve low disparity, so I am afraid that the fairness violation can be very high at early stages.
This fear is not unfounded; however, the violation of fairness and increased loss at early stages may in fact be necessary when short-term incentives and long-term benefit are misaligned in practice (such that myopic policies may steer the system toward undesirable equilibria, as in Fig. 1 (b)). This aspect of our work is highlighted in the introduction and further demonstrated in experiments: to guide the system towards more desirable equilibria, an RL formulation of long-term fairness is imperative, and an online setting is unavoidable (see the response to the first weakness pointed out by Reviewer oXaL).
> Can the authors discuss the challenges of solving the MDP with fairness constraints? What are the differences compared with standard constrained MDP problems?
For verisimilitude, the key complications introduced by our consideration of fairness, relative to a standard constrained MDP problem, are:
1. An assumption that the MDP transitions are unknown, necessitating the use of the RL method to derive the policy, given the usual lack of knowledge about real-world dynamics.
2. The use of continuous action spaces, which are often necessary to satisfy fairness constraints and are ubiquitous in the real-world (e.g., when actions correspond to policy parameters).
No prior model-free constrained MDP for continuous action spaces has been theoretically equipped with regret and distortion bounds. Our work breaks new ground by introducing the first algorithm of its kind, providing such guarantees.
> The action space (classifier) can be very large, which can cause large complexity. …. How does the complexity of the proposed algorithm rely on the space of state and action?
State: For state space, our memory requirements are linear in the dimensionality of the learned embedding space ($\phi$ in the paper).
Action: For action space, our memory requirements are linear in the number of action space regions ($M$ in the paper). As a trade-off, $M$ affects $\epsilon_I$ (Theorem B.4), and thus regret bounds for the algorithm (Equation 13 and line 526).
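As a hypothetical 1-D illustration of how partitioning the action space into $M$ regions trades memory for approximation error (not the paper's exact construction, which is detailed in the appendix):

```python
def nearest_region_center(a, M):
    """Map a continuous action a in [0, 1] to the center of one of
    M uniform regions. The induced approximation error is at most
    1/(2M), so finer partitions (larger M) tighten the bound at the
    cost of memory linear in M. (Illustrative 1-D sketch only.)
    """
    i = min(int(a * M), M - 1)  # region index
    return (i + 0.5) / M        # region center, i.e., I(a)
```

Increasing $M$ shrinks the discretization error term while the memory and per-step computation grow linearly, which is the trade-off governing the regret bound.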
---
Rebuttal Comment 1.1:
Title: Thank you for rebuttals
Comment: I thank the authors for answering my questions. I acknowledge reading the rebuttal and have further questions.
**Difference from LSVI_UCB and TD3.** I am convinced that this work is different from LSVI_UCB and TD3 since they do not consider constraints.
**Motivation of RL formulation.** The example of loan approval is a good motivation for this problem, but the limitations about the observability should be discussed in the paper.
**Fairness violation.** I understand that the constant violation is not avoidable for the online setting.
**Difference from constrained MDP.** I understand that the analysis generalizes the constrained MDP with finite action space (Ghosh et al. (2022)) to continuous action space. The issue introduced by continuous action is solved by discretizing the action space which is shown in the proofs of Lemma B.10, B.11, B.12. Please response to me if I miss other technical challenges.
**Complexity** I understand that the regret bound depends on the dimensionality of the embedding instead of directly on the size of the action-state space. However, the dimensionality of the embedding may also implicitly increase with the size of the action-state space. The authors may need to discuss this scalability issue, especially for this setting where the state includes the full distribution information.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's engagement with our rebuttal. Below, we provide the answers to the remaining questions.
> **Motivation of RL formulation.** The example of loan approval is a good motivation for this problem, but the limitations about the observability should be discussed in the paper.
Thank you for additional feedback. At present, we state on lines 123-125 that “We assume $s_\tau$ is fully observable at time $\tau$. In practice, $s_\tau$ must be approximated from finitely many empirical samples, though this caveat introduces well-understood errors that vanish in the limit of infinitely many samples.”
We will be sure to mention this limitation again in the “Limitation” paragraph at line 309.
> **Difference from constrained MDP.** I understand that the analysis generalizes the constrained MDP with finite action space (Ghosh et al. (2022)) to continuous action space. The issue introduced by continuous action is solved by discretizing the action space which is shown in the proofs of Lemma B.10, B.11, B.12. Please respond to me if I miss other technical challenges.
Indeed, this is our solution. However, besides the proofs of Lemma B.10, B.11, B.12,
1. We also analyze the distinction between $\left|Q^k_{j, h}(s, a)-Q^k_{j, h}(s, I(a))\right|$, imposing $\epsilon_I$ in Lemma B.14. With this Lemma we can then derive the first inequality of proof of Lemma B.10.
2. Lemma B.3 and Theorem B.4 are also needed for the adaptive searching policy.
The proofs of Lemma B.5, B.7, and Theorem 3.3, likewise, exhibit technical novelty.
> **Complexity.** I understand that the regret bound depends on the dimensionality of the embedding instead of directly on the size of the action-state space. However, dimensionality of the embedding may also implicitly increase with the size of the action-state space. The authors may need to discuss this scalability issue, especially for this setting where the state includes the full distribution information.
In general, it is true that larger joint state-action spaces admit more complexity, but this is not necessarily so: Koopman operator theory (which ensures the existence of a linear operator that describes the evolution of observables in a dynamical system) in general can require infinite dimensions independent of the dimension of the underlying state of a dynamical system. This is not an issue of scalability with regard to dimension, but with respect to complexity. The best way to measure this complexity is by the minimal necessary dimension of $\phi$, which can be independent of the dimensionality of the joint state-action space. | Summary: This paper introduces a new method to ensure long-term fairness in reinforcement learning. The method is compatible with different utility measures (e.g., the accuracy of the classifier when the goal is to predict a label) and different fairness constraints (e.g., demographic parity or equal opportunity). The method is an online reinforcement learning method, so the agent has to interact with its environment in order to improve the quality of the policy, either in the real world or in simulation. The authors introduce two measures of regret, one related to the primary utility goal and one related to the fairness measure, and present an upper bound for both regrets achieved by the proposed algorithm. The paper presents empirical results using a mixture of synthetic and real data.
Strengths: - This paper tackles the very important problem of ensuring fairness in a sequential decision-making setting. It uses insights from the reinforcement learning literature to solve a supervised learning fairness problem.
- The proposed method extends existing ones to the continuous setting (i.e., continuous states—e.g., attributes—and continuous actions).
- The paper is very thorough with the mathematical formulation, assumptions, and definitions necessary to fully describe the method.
Weaknesses: - The proposed method is an online RL method, where the learning process occurs while the policy is applied. This is often not realistic in scenarios where we would like to ensure fairness, as intermediate policies could lead to unfair behavior.
- My understanding is that the proposed method is an extension of the method presented by Ghosh et al. (2022) for the continuous setting, where the method itself is fairly similar, and the main new contribution is showing that it is possible to achieve the same bound in this new setting. I believe that this could be better clarified in the paper to highlight the novelty of this work more clearly.
- In the introduction, the authors state that the primary contribution of this work is to consider the problem of long-term fairness as a reinforcement learning problem subject to a constraint. However, previous works have already considered this setting (e.g., the method on which the present work is based).
- I believe the experiments with R-TD3 are not strongly supporting the argument that an algorithm without theoretical guarantees can still be used for long-term fairness. Applying the method in just two scenarios does not present enough empirical evidence to support the claim that the final policy is still fair, and without any theoretical guarantees, this is essential.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Could the authors please provide some motivating examples for when an online RL method would be useful/applicable, considering the fact that the policy is updated while being deployed and the regret bounds are only for the final distortion and not the disparity at each timestep (possibly causing intermediate policies to be unfair)?
- The color scheme in Figure 1 is slightly unclear. The range for the demographic disparity in the first two plots is between zero and 0.005 or zero and 0.07 (a very small interval), but for the proposed method results it is between zero and 1. This makes it harder to compare the fairness measures. Could the authors please clarify this?
- Why did the authors select only the Myopic-fair baseline for the second experiment? From the first experiment, it seemed like this wasn’t one of the most competitive baselines, so it would be interesting to see how the other baselines perform in this second scenario as well.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors discuss the limitations of the work. No major concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive review. We hope that our rebuttal satisfactorily addresses the perceived weaknesses of the submission and questions you have.
### Weaknesses
> The proposed method is an online RL method… intermediate policies could lead to unfair behavior.
Unfortunately, there is likely a greater cost to attempting to learn, offline (and off-policy), a model of how human populations behave in response to currently deployed (and potentially naive) algorithmic policies, while those deployed policies do not adaptively attempt to control for fairness. In addition, such a model might differ when we change the hypothesis space of the models, or the application contexts, and probably will change when the external environment for how human users interact with the models changes.
Ultimately, if we simultaneously wish to
* learn to model how human populations respond to deployed policies,
* respect fairness,
and do so in a context-free and data-driven way, then we fundamentally have an online reinforcement learning problem.
The real-world use of ML is already online: live experiments (intentional or not) are being run on human populations using current ML policies. It is difficult to maintain the position that we should not be seeking to learn from these live experiments and systematically adapting our algorithms.
In addition, we point out that short-term violations of “fairness” may be necessary, in certain settings, to drive the system towards desirable equilibria. Our formulation of long-term fairness takes the approach of constraining cumulative violations of fairness.
> My understanding is that the proposed method is an extension of the method presented by Ghosh et al. (2022) for the continuous setting… the main new contribution is showing that it is possible to achieve the same bound in this new setting. I believe that this could be better clarified…
This understanding is correct; we will adopt this recommendation. While the proof sketch inherits from prior work, the details are novel. Specifically, as Ghosh et al. treat a discrete action space, their proof techniques (such as their value function bound, action-value function bound, softmax policy bound, etc.) are not applicable in a continuous setting, and neither is their regret bound (Lemmas 3, 8, 9, 11, 13, 15, 16 of Ghosh et al.). We solve these issues with the following results:
1. In Lemma B.10, we derive a distinct bound for the difference between $\bar{V}_{h}^{k}(s)$ and $V_{h}^{k}(s)$.
2. In Lemma B.11, we present novel bounds for softmax policies based on different action-value functions and dual variables.
3. Lemma B.12 characterizes the disparity $\left|V_{j}^{k} - \widetilde{V}_{j}\right|$.
4. We bound the difference $|Q^{k}_{j,h}(s, a) - Q^{k}_{j,h}(s, I(a))|$ in Lemma B.16.
The proofs of Lemma B.5, B.7, and Theorem 3.3, likewise, exhibit technical novelty.
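As background for item 2 above, a softmax policy that combines reward and constraint action-value estimates through a dual variable can be sketched as follows. This is a generic primal-dual construction with names of our choosing, not L-UCBFair's exact definition, and the sign convention (treating `q_g` as a violation cost) is an assumption:

```python
import numpy as np

def softmax_policy(q_r, q_g, lam, alpha=1.0):
    """Distribution over a (discretized) action set from composite Q-values.

    q_r: estimated reward action-values; q_g: estimated constraint-violation
    action-values (higher = more unfair); lam: dual variable weighting the
    constraint; alpha: inverse temperature. Illustrative sketch only.
    """
    logits = alpha * (q_r - lam * q_g)
    logits -= logits.max()  # shift for numerical stability before exponentiating
    p = np.exp(logits)
    return p / p.sum()
```

Raising the dual variable shifts probability mass away from actions with large estimated constraint violation; controlling such policies when the action set is a discretization of a continuous space is what the lemmas above address.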
> …the authors state that the primary contribution of this work is to consider the problem of long-term fairness as a reinforcement learning problem subject to a constraint. However, previous works have already considered this setting (e.g., the method on which the present work is based).
1. No prior work, to our knowledge, has applied reinforcement learning subject to constraint to the domain of algorithmic fairness, as discussed in Section 1.1 (Dynamics of Fairness in Machine Learning). One of our primary contributions is this formulation and framing of algorithmic fairness.
2. Formulating realistic settings for algorithmic fairness presents unique challenges (e.g., continuous action spaces) which we address for the first time. No prior model-free constrained MDP for continuous action spaces has been theoretically equipped with regret and distortion bounds.
> I believe the experiments with R-TD3 are not strongly supporting the argument that an algorithm without theoretical guarantees can still be used for long-term fairness. … just two scenarios does not present enough empirical evidence to support the claim that the final policy is still fair, and without any theoretical guarantees, this is essential.
This belief is welcome, and we agree. We are not making the argument that this *specific* algorithm should be used for long-term fairness. Rather, we are claiming that the possibility still exists for a tradeoff between tractability and theoretical guarantees to be exploited, and anticipate additional future work in this direction. In particular, as pointed out in response to Reviewer kVGy, there is ample opportunity for future work to use model-based reinforcement learning or methods from data-driven control when knowledge of transition dynamics is given.
### Questions
> Could the authors please provide some motivating examples for when an online RL method would be useful/applicable, considering … [the approach may cause] intermediate policies to be unfair?
Our regret bounds apply to cumulative suboptimality. As such, the violation of fairness and increased loss at early stages may in fact be necessary, as short-term incentives and long-term benefit may be misaligned in practice (such that myopic policies may steer the system toward undesirable equilibria, as in Fig 1 (b)). This aspect of our work is highlighted in the introduction and further demonstrated in experiments: To guide the system towards more desirable equilibria, an RL formulation of long-term fairness is imperative.
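In generic notation (ours, not necessarily the paper's), cumulative guarantees of this kind take the form

```latex
\mathrm{Regret}(K) = \sum_{k=1}^{K} \Big( V^{*}(s_1) - V^{\pi_k}(s_1) \Big),
\qquad
\mathrm{Violation}(K) = \Big[ \sum_{k=1}^{K} \big( c - V_g^{\pi_k}(s_1) \big) \Big]_{+},
```

so both quantities can grow sublinearly in the number of episodes $K$ even when individual early episodes are suboptimal or unfair; only the aggregate behavior must converge.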
> The color scheme in Figure 1 is slightly unclear…. Could the authors please clarify this?
We agree that normalizing each figure’s color scale for maximum dynamic range would lead to greater clarity. We have regenerated these figures, providing an example in the rebuttal pdf.
> … it would be interesting to see how the other baselines perform in this second scenario as well.
We give full comparisons to all baselines in the supplementary material. For this setting, see Column 1 of Table 2. | Summary: The paper demonstrates that reinforcement learning algorithms can be used to satisfy long term fairness constraints in dynamic environments in which a classifier and population interact while finding high-value equilibria.
The main approach is to treat the problem as a constrained optimization and optimize the Lagrangian function. The authors use a linear UCB-style algorithm to obtain provable regret bounds on both the cumulative value and constraint violations, and demonstrate the effectiveness of that algorithm and of a less-principled-but-still-practical RL alternative using small simulation studies.
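For concreteness, the constrained optimization summarized here has the generic shape (notation ours):

```latex
\max_{\pi} \; V_r(\pi) \quad \text{s.t.} \quad V_g(\pi) \ge c,
\qquad
\mathcal{L}(\pi, \lambda) = V_r(\pi) + \lambda \big( V_g(\pi) - c \big),
```

with the policy player ascending $\mathcal{L}$ in $\pi$ and the dual player descending in $\lambda \ge 0$.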
Adapting previous work to the interacting classifier-population setting requires handling a continuous action space (setting per-group classifier thresholds), which the authors handle by showing that the continuous action space in this setting can be searched in a discretized manner without losing guarantees.
The simulation experiments demonstrate that the RL approaches can find good operating points where more myopic policies can end up exacerbating qualification rate disparities.
Strengths: Significance & Originality: The authors address an important question: How to design decision-making algorithms for dynamic settings where the underlying dynamics are not well known and the designers want to be both fair and encourage positive societal outcomes overall.
Adding fairness considerations to an RL algorithm with a suitably specified reward function is not a groundbreaking idea, but this work offers a concrete algorithm and practical model-free approach for this question that (as far as I can tell) thus far have not been proposed.
Clarity: The paper is very clear and easy to read up to the experiments; the setting and notation are clear, as are the paper’s goals and contributions.
Weaknesses: I found the experiments to be a little bit unclear in terms of what was being demonstrated with each figure / experiment. What is the role of the fully synthetic vs semi-synthetic experiments? I think the paper could be improved with a clearer exposition of the takeaways from each experiment.
The authors note that real-world experiments would be necessary to really assess impact of this algorithm. I strongly agree with this sentiment and while I understand that truly real-world experiments can be hard to find, the authors could potentially test across a range of dynamic synthetic environments to give a better sense of the generalizability of the proposed approach.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The shading in figures 1a-d suggests that the demographic parity violation can be mapped to the same state space as qualification parity? Is that really true? For any pair x,y on those plots is there a unique value for the demographic parity violation?
2. How would a purely RL-based long-term reward maximizing agent perform as a baseline (with no fairness constraints)? I would think it might also drive the qualification rates up to the top-right of figures 1a-d, in which case can we be sure it’s the fairness constraints that are actually doing the work here and not just the long-term reward optimization in general?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Authors addressed the limitations of their experimental setup with synthetic environments. They might also consider discussing what types of societal inequalities can and cannot be addressed by fairness-in-decision-making style constraints.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review, your constructive feedback, and your comments.
### Weaknesses
> I found the experiments to be a little bit unclear…
We will incorporate more succinct takeaways from the experiments in revision.
*What was demonstrated:* Our experiments seek to confirm our central hypothesis: algorithms for long-term fairness may sacrifice short-term incentives for long-term benefits by driving the system to desirable equilibria. More specifically, *in contrast to myopic algorithms, our algorithms achieve desirable states more consistently, incur fewer fairness violations in equilibrium, and sustain lower mean loss.* We show this with three types of graphical results:
1. The first, exemplified by Fig 1 (a-d) show the state transitions (i.e., between group qualification rates, which are the x and y axes) driven by different algorithms. Streamlines indicate the next state the corresponding algorithm drives the qualification rates of the two groups to. In these figures, the upper-right corner of the depicted state space, corresponding to higher qualification rates for both groups, is more desirable. Color indicates the degree to which the policy violates fairness in each state, such that lighter color is more desirable (lower fairness violations).
2. The second type of graph, exemplified by Fig 1 (e), compares the mean loss sustained by different algorithms during deployment. In general, lower mean loss is possible only when the algorithm succeeds in first achieving a more desirable state.
3. The third, exemplified by Fig 3, shows how mean episodic loss and disparity evolve over time; that these quantities decrease validates our claim that L-UCBFair achieves sublinear regrets.
We provide many additional experiments in the supplementary material.
*Synthetic vs semi-synthetic settings:* In general, it is difficult to model population response and distribution shift, and models must strive for verisimilitude while remaining interpretable. The synthetic setting provides an idealized setting in which to evaluate our algorithms, while the semi-synthetic setting adds additional real-world complications and serves as a more robust test of our claims, given that we are unable to ethically or realistically deploy our algorithms on real-world populations.
> …the authors could potentially test across a range of dynamic synthetic environments to give a better sense of the generalizability of the proposed approach.
This is non-trivial and an active area of ongoing research: We are unaware of existing implementations of synthetic environments that model policy-induced distribution shifts and combine continuous state and action spaces. While off-the-shelf synthetic environments (e.g., for continuous control) exist for reinforcement learning in general, for our setting, which combines fairness and distribution shift, it remains an open research question how to appropriately model how human populations respond to algorithmic policies.
### Questions
> …the demographic parity violation can be mapped to the same state space as qualification parity? ...For any pair x,y on those plots is there a unique value for the demographic parity violation?
This mapping from state to disparity is induced by the policy: because the policy maps state to action, and disparity is a function of state and action jointly, each policy induces a mapping from state to disparity. The mapping is one-way, however: it is not invertible in general, since multiple states may yield the same value of disparity.
> How would a purely RL-based long-term reward maximizing agent perform as a baseline (with no fairness constraints)? … can we be sure it’s the fairness constraints that are actually doing the work here and not just the long-term reward optimization in general?
Please find the results of running plain TD3 (i.e., R-TD3 with $\lambda$ fixed to 0) in the rebuttal PDF. In general, it is true that long-term utility may be aligned with fairness, but it is possible that long-term utility alone may work against long-term fairness, in which case the schedule of ever-increasing $\lambda$ is important. For a high-level example, consider college admissions, for which only a finite number of applicants can be admitted. When one “elite” group is privileged to be objectively better prepared for a college’s target curriculum, it is easy to imagine how long-term utility measured only by post-admission success rates would exclude disadvantaged groups, absent fairness interventions.
In either case, the point remains that myopically “fair” baselines can do active harm, while our formulation of long-term fairness addresses both cumulative utility *and* cumulative fairness violations. Moreover, ours is the first work we are aware of that addresses long-term utility subject to policy-driven distribution shift in human populations.
### Limitations
> Authors … might also consider discussing what types of societal inequalities can and cannot be addressed by fairness-in-decision-making style constraints.
An inherent strength of our formulation lies in its broad generality: any real-valued function of state (i.e., distribution) and action (i.e., policy), jointly, is a valid measure of fairness in our work, though L-UCBFair assumes that there exists some (possibly high- but finite-dimensional) space in which the measure can be represented as a linear map (Assumption 3.1), and R-TD3 similarly requires that the map is (implicitly) “learnable” by the chosen architecture.
Nonetheless, we share the concern that some socially consequential settings are too sensitive to risk to online learning algorithms: Stochastic sampling and exploration may create irreparable or intolerable harm. This is what we mean when we say in the paragraph “Limitations” (line 309) that “violations of fairness or decreased utility may be difficult to justify to affected populations and stakeholders”.
---
Rebuttal Comment 1.1:
Comment: I acknowledge reading the rebuttal. Thank you for addressing my comments.
I appreciate the additional result with TD3 (lambda=0). It seems from that new result that in the simulated settings used for the experiments here, simply optimizing for long-term reward is sufficient to move towards high-fairness results, but in practice the unconstrained approach has a higher cumulative disparity than the constrained setting.
I think the synthetic demonstrations would be a little bit stronger if they included a setting where fairness and long-term reward were mis-aligned (as described by the authors in the rebuttal, but not used in the experimental demonstrations), but I don't think this is critical.
---
Reply to Comment 1.1.1:
Comment: We appreciate your time and attention in reviewing and addressing our rebuttal.
> I think the synthetic demonstrations would be a little bit stronger if they included a setting where fairness and long-term reward were mis-aligned (as described by the authors in the rebuttal, but not used in the experimental demonstrations), but I don't think this is critical.
We agree that additional synthetic environments with natural misalignment between long term fairness and utility would allow further useful characterization of the potential consequences of the algorithms we explore. We hope to target such environments in future work.
We would greatly value your reconsideration of the rating, if critical concerns have been alleviated. | Rebuttal 1:
Rebuttal: We thank our reviewers for their constructive feedback and questions, as well as pointing out ways in which we could improve our submission (e.g., by highlighting the novelty of our proofs and improving our figures in subtle ways, which we have done).
With the attached PDF, we have regenerated Figure 1 c) as well as Figure 2 a1) and a2) to highlight that the instantaneous violation of Demographic Parity achieved by L-UCBFair in these experiments was bounded below 0.32 (Table 1).
To address common concerns among our reviewers, we would like to point out that our experimental setup was necessarily non-trivial, as we sought to model continuous state and action spaces. To our knowledge, no existing experimental settings implement continuous state and action spaces specific to the domain of algorithmic fairness, and our evaluation cannot easily be extended to other settings without significant effort. We have no doubt that additional synthetic environments will be proposed and implemented in the course of ongoing investigations into the line of inquiry we open with our formulation of long-term fairness.
Additionally, we would like to reiterate that short-term violations of utility (where utility encompasses both decreasing loss and increasing fairness) are expected and often necessary in the settings we explore. As stated in the paper, a central motivation of our work is the necessity of sacrificing short-term incentives in order to drive the classifier-population system to more desirable outcomes. We show in our experiments (e.g., Figure 1 a), b), and Figure 2 a)) that myopic fairness constraints, which prioritize short-term fairness, can lead to worse outcomes in the long run.
Finally, we emphasize that our formulation of long-term fairness as an online reinforcement learning problem is in many ways unavoidable. In particular, while there are risks associated with online learning, the alternative in reality, where interacting with human populations is a slow and costly proposition, is not “offline” learning, but online non-learning, typified by myopic policies.
We appreciate your attention to our submission and the chance to respond to each of our reviewers individually.
Pdf: /pdf/3eccfec9fd14f3f8b041d521cfc423e92a7f4964.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
AbDiffuser: full-atom generation of in-vitro functioning antibodies | Accept (spotlight) | Summary: This paper introduces AbDiffuser, a new equivariant diffusion model, designed to generate full antibody structures efficiently and with high quality. By incorporating family-specific priors and utilizing a novel neural network architecture called APMixer, AbDiffuser demonstrates improved performance in modeling the sequence and structural properties of natural antibodies. Experimental validation shows that the generated antibody binders have high affinity, improving the affinity of a cancer drug. These findings highlight the potential of generative models in creating high-affinity antibody binders without the need for post-selection by binding predictors.
Strengths: * The proposed approach outperforms other baseline methods across various metrics related to antibody designing. This could indicate potential effectiveness in generating high-quality antibody structures.
* The novel architecture introduced in this paper exhibits a lower memory footprint during training, which is a significant advantage compared to the baseline architectures. This not only reduces resource requirements but also enhances the efficiency of sample generation, making it a more practical and scalable solution.
* The design choices made in this work are backed by theoretical results and are further supported by in silico experiments. The alignment between theoretical motivations and experimental outcomes adds credibility to the overall approach.
Weaknesses: * The core components of the proposed model, including the diffusion model and the mixer model, bear a strong resemblance to corresponding reference models. While there are some minor additions, such as the physics-informed residue representation, AHo-specific residue frequencies, and encoding conditional atom dependencies, these modifications may be viewed as relatively insignificant. This could raise questions about the extent of novelty and originality introduced by the proposed model.
* In lines 138-139, it would be beneficial for the authors to provide further insights or explanations regarding why the second choice is expected to improve the performance. This would enhance the clarity and understanding of the motivations behind the model design decisions.
* A minor typographical error is present at line 173, where a missing space between "\AA" and "while" is observed. While this is a minor issue, attention to detail and thorough proofreading are important aspects of a scientific publication.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Table 2 demonstrates that having more parameters is beneficial, contrary to the general notion that fewer parameters are desirable. It raises the question of why an increased number of parameters is advantageous in this specific context and how it leads to improved performance. Could the authors elaborate on the reasons behind the observed benefit of having a larger parameter count in their model?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The choice of HER2 binder design presented in the paper could potentially be limited to being just one successful case study. It is important to acknowledge that the success achieved in this particular case study does not guarantee similar outcomes in other design scenarios. Providing a clear remark about this limitation would be valuable for the scientific community, considering that the drug discovery process is known to be expensive and challenging. It would help manage expectations and promote a realistic understanding of the potential generalizability and applicability of the proposed methodology beyond the specific case study presented.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We address your comments in turn.
> The core components of the proposed model, including the diffusion model and the mixer model, bear a strong resemblance to corresponding reference models. While there are some minor additions, such as the physics-informed residue representation, AHo-specific residue frequencies, and encoding conditional atom dependencies, these modifications may be viewed as relatively insignificant. This could raise questions about the extent of novelty and originality introduced by the proposed model.
From a neural network architecture perspective, our main contribution lies in the “table” representation we propose and not in the specific combination of layers we employ. Our approach stands in contrast to current relational approaches (GNNs and transformers), is proven to be universal (whereas for usual relational models such as equivariant GNNs universality is not guaranteed; see https://arxiv.org/pdf/2301.09308.pdf), and yields demonstrable practical benefits (an order of magnitude less memory, improved generative model quality, seamless handling of sequence length changes).
At the same time, it is crucial to stress that our work goes beyond the proposal of a neural network and innovates both in the generation aspect (with the introduction of the informative priors and the proof of the benefit they bring in Theorem E.1) and in the physics-informed residue representation (with the projection layer). The former is shown to carry strong empirical benefits, and the latter makes it possible, for the first time, to respect the known geometric constraints of protein residues while operating in an external coordinate frame (i.e., on coordinates rather than angles), thus allowing us to use Gaussian diffusion while respecting those constraints and avoiding the less accurate IGSO3 diffusion (https://openreview.net/pdf?id=BY88eBbkpe5).
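For reference, the Gaussian diffusion referred to here is the standard DDPM-style forward process on atom coordinates,

```latex
q(x_t \mid x_0) = \mathcal{N}\big( x_t ;\; \sqrt{\bar{\alpha}_t}\, x_0,\; (1 - \bar{\alpha}_t) I \big),
\qquad
\bar{\alpha}_t = \prod_{s=1}^{t} (1 - \beta_s),
```

which is only well-defined on Euclidean coordinates; the projection layer is what allows denoised coordinates to be mapped back onto residue-level rigid-body geometry, so that this simple process can be used in place of diffusion on SO(3).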
In our view, the current antibody generative models are more direct adaptations of graph or sequence generative models, while in this work we aimed to design an approach where each component was specifically made with antibodies in mind.
> In lines 138-139, it would be beneficial for the authors to provide further insights or explanations regarding why the second choice is expected to improve the performance. This would enhance the clarity and understanding of the motivations behind the model design decisions.
The main reason for applying the frame averaging on every layer separately was that it performed better in our experiments. We don’t have a mathematical reason to explain this.
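For readers unfamiliar with frame averaging (Puny et al.), below is a minimal 2D sketch of the idea: a set of input-dependent frames is built from PCA of the point cloud, and an arbitrary function of the cloud becomes E(2)-invariant by averaging it over the canonicalized copies. All names and the toy setting are ours, not APMixer's:

```python
import numpy as np

def frames_2d(X):
    """PCA frames of a 2D point cloud: centroid t plus a basis R whose columns
    are covariance eigenvectors. All four sign combinations are enumerated to
    resolve the eigenvector sign ambiguity (toy version of frame averaging)."""
    t = X.mean(axis=0)
    _, V = np.linalg.eigh(np.cov((X - t).T))  # columns = eigenvectors
    for s1 in (1.0, -1.0):
        for s2 in (1.0, -1.0):
            yield t, np.stack([s1 * V[:, 0], s2 * V[:, 1]], axis=1)

def frame_average(f, X):
    """E(2)-invariant wrapper around an arbitrary function f of a point cloud:
    average f over all canonicalized (centered and rotated) copies of X."""
    return np.mean([f((X - t) @ R) for t, R in frames_2d(X)], axis=0)
```

An equivariant variant maps each frame's output back through the inverse transform before averaging; whether such a wrapper is applied once around the whole network or separately at every layer is exactly the design choice discussed above.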
> A minor typographical error is present at line 173, where a missing space between "\AA" and "while" is observed. While this is a minor issue, attention to detail and thorough proofreading are important aspects of a scientific publication.
Thank you for pointing this out. We will make sure to proofread the paper a few more times for the final version.
> Table 2 demonstrates that having more parameters is beneficial, contrary to the general notion that fewer parameters are desirable. It raises the question of why an increased number of parameters is advantageous in this specific context and how it leads to improved performance. Could the authors elaborate on the reasons behind the observed benefit of having a larger parameter count in their model?
As far as we are aware, the good performance of deep neural networks is attributed to overparameterization, as it is presumed to smooth the loss landscape (e.g. see http://proceedings.mlr.press/v119/buhai20a/buhai20a.pdf).
In addition to this, the biggest risk of having too many parameters is a higher potential for overfitting, but this is less of an issue for generative/diffusion models.
It is also the case that, for example, large language generative model performance increases with size (e.g. Table 3 in https://arxiv.org/pdf/2307.09288.pdf).
> The choice of HER2 binder design presented in the paper could potentially be limited to being just one successful case study. It is important to acknowledge that the success achieved in this particular case study does not guarantee similar outcomes in other design scenarios. Providing a clear remark about this limitation would be valuable for the scientific community, considering that the drug discovery process is known to be expensive and challenging. It would help manage expectations and promote a realistic understanding of the potential generalizability and applicability of the proposed methodology beyond the specific case study presented.
It is true that drug design is a complicated endeavor, that there are many variables that can impact the outcome, and that success is not guaranteed. Our model, like any other, is not exempt from this: naturally, the achieved binding rates and the required dataset sizes could differ quite a bit depending on the problem at hand. We believed that this was already clear, but we will be happy to make it explicit.
The fact that our model does well on general antibody generation (paired OAS) gives some hope that it is able to properly capture many different aspects of antibodies and that the results would transfer to some extent.
To also evaluate our model in different scenarios, we tested it on antigen-conditioned CDR inpainting on the SAbDab split used in DiffAb. You can find the results in the general response. It can be seen that our model does quite well on this task, which further increases our hope of it generalizing to other antigens and problems.
---
Rebuttal Comment 1.1:
Title: Thanks for the clarification
Comment: I extend my gratitude to the authors for their comprehensive response that effectively addresses my previous concerns. The additional details provided have clarified the points of contention, ensuring a more robust understanding of the manuscript. Consequently, I find it appropriate to raise the score for this submission. | Summary: The paper proposes AbDiffuser, a new diffusion model for antibody structure and sequence generation. AbDiffuser proposes several ideas to improve generation: (i) Euclidean diffusion with frame averaging to achieve SE(3) equivariant diffusion, (ii) antibody sequence alignment to standardize sequence length across antibodies, (iii) atom to rigid body projections to maintain inter-residue physical constraints, and (iv) informative priors when sampling sequence and structure. The method is evaluated on the Observed Antibody Space and HER2 binder dataset against a selection of prior works but not state-of-the-art baselines. Performance is measured by how closely the generated sequences match statistics of the test set, as well as with some physics-based metrics. The authors ablate most components of the method and find that each improves performance. Lastly, they demonstrate 16 of their designs fold into antibodies and have high binding affinity to HER2.
Strengths: - AbDiffuser explores many ideas that are different to current state-of-the-art protein generation diffusion models. (1) Frame averaging for conferring SE(3) equivariance to MLPMixer over positions. (2) Aligning antibodies to a reference sequence for handling variable lengths. (3) Projecting coordinates to rigid body constraints on input and output of the denoising model. (4) Informative priors over the coordinates and sequence based on training set statistics. Each component of the method seems to help based on the task and metrics defined in the paper.
- I appreciate the wet-lab validation of their designs which is almost never present in protein papers at machine learning conferences.
Weaknesses: - **Biggest issue: poor presentation**. There are simply too many new ideas that comprise AbDiffuser and all the details are left to the appendix. The reader should not have to go through 28 pages (main text and appendix) to understand the method. Even after looking through the appendix, I am unsure what the full algorithm is. In fact, there is no algorithm or figure of how AbDiffuser works from training to sampling.
- **Missing motivation for frame averaging**. Frame averaging (FA) is introduced without motivation. On line 37-38, the authors state previous methods that confer SE(3) equivariance and physical constraints come with "increased complexity". These complexities are never spelled out. How is AbDiffuser simpler? The paper proposes frame averaging with informative priors *and* the rigid body projection step. This seems quite complicated to me. Additional points on FA:
- **Missing related work**.
- FA for proteins. the authors fail to cite [1] which already proposed FA for proteins.
- The projection step seems related to the projection step, "Equivariant Consensus Structure from Weighted Inter-residue Geometries", in Appendix L of Chroma but this connection is not mentioned.
- **Gaussian diffusion plus projection step vs. SE(3) diffusion**. The projection step seems to me as a way to circumvent working with rigid body frames by always projecting the gaussian diffusion over coordinates onto the rigid body constraints. But this seems close to SE(3) diffusion [2] over frames where the C-a coordinates diffuse along with the backbone atoms which always have the rigid body constraints. Why is the gaussian diffusion and projection step necessary to develop if we have SE(3) diffusion? Furthermore, SE(3) diffusion is not compared with so we do not know which is better.
- **Unjustified claims**. I have concerns with several claims in the paper.
- Line 3 claims AbDiffuser uses a new representation of protein structure which as I understand is the FA component. But as stated this has already been done [1].
- Line 5-6 "Our approach improves protein diffusion ... handles sequence-length changes". This is not true. The authors achieve this by aligning all antibodies to a reference alignment (the AHo numbering) that has been specifically developed for antibodies. This does not exist for nearly all proteins so their method cannot be applied. Even if it can be applied, the authors cannot claim it works for other proteins until it is demonstrated.
- Line 67 - 69 "These results provide the first experimental evidence that a generative model trained on mutagenesis data can create new antibody binders...". This is not true. [4] is just one paper that showed new antibody binders can be created with generative models. In addition, RFdiffusion [5] which can generate de novo binders is not mentioned.
- Title "AbDiffuser: full-atom generation of in-vitro functioning antibodies". This is a bold claim that is **not justified** by the results of generating 16 functioning antibodies for a single target HER2. Furthermore, it is unclear how novel these antibodies are.
- Line 87. The authors claim APMixer is a novel neural network for processing proteins. The claimed novelty seems very incremental. MLPMixer is not new, FA is not new. The only component which I believe is novel is the sequence alignment of each antibody such that they all have the same sequence length as input. But this does not warrant technical novelty. Furthermore, I don't understand why SE(3) universality isn't already provided by FA.
- **Unclear experimental set-up**. The experimental set-up is poorly written. The metrics are not clearly defined in the main text. The training dataset is unclear: OAS does not have structures so how are the structures obtained for training? Do the authors train RefineGNN and MEAN baselines only on the CDR which they were designed for or also give them the full sequence? How are the splits performed? The Wasserstein distance metric has no equation so I am unclear what it is exactly measuring. Why is there no metric on diversity and novelty of the sequences? Perhaps this information is buried in the appendix but it should be up front in the main text.
- Line 250-252. Why would we expect IgLM to have the best accuracy just because it was trained on more data? IgLM is not trained on the structure like AbDiffuser or the other baselines.
- **Experimental results**. The authors claim they have designed binders which have been tested in the wet-lab but this result must be accompanied by experimental results in the supplementary or plots in the supplement. This is standard in bio journals. In addition it is unclear how novel each design is. If only a few mutations were necessary to obtain a tight binder then this task is not difficult.
- **Lack of baselines**. The authors miss out on state-of-the-art. On top of the ones already discussed there have been diffusion models for antibody generation [6, 7].
[1] https://arxiv.org/pdf/2301.10814.pdf
[2] https://arxiv.org/abs/2302.02277
[3] https://www.biorxiv.org/content/10.1101/2022.12.01.518682v1.full.pdf
[4] https://www.biorxiv.org/content/biorxiv/early/2023/03/29/2023.01.08.523187.full.pdf
[5] https://www.biorxiv.org/content/10.1101/2022.12.09.519842v1
[6] https://www.biorxiv.org/content/10.1101/2022.07.10.499510v2
[7] https://arxiv.org/abs/2302.00203
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: My questions have been written above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 3 good
Limitations: Limitations are not discussed though I believe there are many issues with the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your in-depth questions. We urge you to carefully consider our reply, as it addresses the issues you raised and used to motivate your low score.
> Biggest issue: poor presentation. There are simply too many new ideas that comprise AbDiffuser and all the details are left to the appendix. The reader should not have to go through 28 pages (main text and appendix) to understand the method. Even after looking through the appendix, I am unsure what the full algorithm is. In fact, there is no algorithm or figure of how AbDiffuser works from training to sampling.
Thank you for appreciating the number of contributions the paper makes. Due to the limited space, we naturally focus on our main contributions in the main paper and delegate less innovative parts, such as the definition of the metrics and the analysis of the results, to the appendix. To improve the at-a-glance understanding of the method, we have included training and sampling pseudo-algorithms in the PDF attached to the main response. We will also include them in the final version.
> Frame averaging (FA) is introduced without motivation.
The motivation for frame averaging is twofold: 1) the original paper has shown it to compare favorably to alternatives, and 2) it allows us to conveniently deal with SE(3) equivariance and frees us to utilize a non-relational equivariant model built from well-understood and easy-to-train blocks (MLPs).
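For illustration, here is a minimal NumPy sketch of the frame-averaging idea (not our actual implementation): frames are obtained from PCA of the centered coordinates, the wrapped function is evaluated in every frame, and the outputs are averaged. The wrapped function `phi` is an arbitrary placeholder for the inner network; this simple variant is also equivariant to reflections (E(3)), and restricting the frame set recovers SE(3).

```python
import numpy as np
from itertools import product

def frames(X):
    # X: (n, 3) point cloud; frames come from PCA of the centered coordinates
    t = X.mean(axis=0)
    C = (X - t).T @ (X - t)
    _, U = np.linalg.eigh(C)  # eigenvectors as columns
    # the sign ambiguity of each eigenvector gives up to 8 frames
    for signs in product([-1.0, 1.0], repeat=3):
        yield U * np.array(signs), t

def frame_average(phi, X):
    # Equivariant wrapper: evaluate phi in every frame, map back, average
    outs = []
    for R, t in frames(X):
        Y = phi((X - t) @ R)       # express the points in the frame
        outs.append(Y @ R.T + t)   # map the output back
    return np.mean(outs, axis=0)
```

Applied to a generic point cloud, `frame_average(phi, X)` commutes with any orthogonal transform and translation of `X`, regardless of what `phi` is.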
> On line 37-38, the authors state previous methods that confer SE(3) equivariance and physical constraints come with "increased complexity". These complexities are never spelled out. How is AbDiffuser simpler? The paper proposes frame averaging with informative priors and the rigid body projection step. This seems quite complicated to me.
We refer to the fact that building neural networks that work on rotation manifolds is complicated by the non-Euclidean structure of the manifold. There are many options for how the rotations can be represented and what losses can be used. These choices have big implications for model performance and usually rely on approximations. We elaborate on these approximations for the SO(3) diffusion in our answer to one of your follow-up comments.
Another point is that SE(3)-equivariant GNNs are limited in their ability to capture global geometry (https://arxiv.org/pdf/2301.09308.pdf). Further, these GNN- and transformer-based models are well known to require a lot of memory, which hinders them from scaling to the full-atom generation we do here. We also witnessed the expressivity issues of GNNs in our numerical experiments, where the GNN models consistently achieved worse RMSD than AbDiffuser variants.
Admittedly, “complexity” was too general a term here. In the final version we will write that “... they can require various approximations and architectural considerations that tend to lead to increased memory complexity.”
On the other hand, informative priors do not have anything to do with the model complexity or the complexity of the overall approach. They are add-ons that could potentially be added to any diffusion model. We just show (theoretically and practically) that it makes sense to add this and how to build such data-driven priors. As we show in Table 1, models with such priors removed still function well, albeit with reduced performance.
> FA for proteins. the authors fail to cite [1] which already proposed FA for proteins.
Thank you for the reference, we will include it. Note that the authors of this work apply FA to the usual relational self-attention model. We use FA to build a non-relational model, which, as shown in Table 2, has significantly smaller memory/time consumption.
> The projection step seems related to the projection step, "Equivariant Consensus Structure from Weighted Inter-residue Geometries", in Appendix L of Chroma but this connection is not mentioned.
As the title of the referenced chapter suggests, the paper you referenced concerns interactions between residues. This is different from our projection step, which handles the constraints within each residue. Specifically, their approach changes the model to output a specification for an optimization problem that is then solved to build the protein structure (similar to how AlphaFold1 outputs a distance matrix that is later used to find atom positions by gradient descent). In our case the model directly outputs the large-scale protein structure, similarly to for example AlphaFold2. So we don’t really see any similarity between our projection layer and the chapter you reference: the two approaches solve different problems in different ways.
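To make the contrast concrete, a generic intra-residue projection can be sketched as a Kabsch fit: snap each residue's predicted atoms onto the nearest rigid placement of an idealized template. This is our own simplified sketch under generic assumptions, not necessarily the paper's exact layer.

```python
import numpy as np

def project_to_rigid(X, template):
    # X, template: (k, 3) atom coordinates; returns the rigid transform of
    # `template` closest to X in RMSD (Kabsch algorithm, reflections excluded)
    xc, tc = X.mean(axis=0), template.mean(axis=0)
    H = (template - tc).T @ (X - xc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return (template - tc) @ R.T + xc
```

If the input already satisfies the rigid-body constraint, the projection leaves it unchanged; otherwise it returns the closest configuration that does.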
> Line 3 claims AbDiffuser uses a new representation of protein structure which as I understand is the FA component. But as stated this has already been done [1].
The representation we propose goes beyond frame averaging (which can also be employed with GNN & transformers as we and others have demonstrated).
We are proposing to represent members of an aligned protein family by a fixed size “table”, which holds amino acid types and all atom positions and can accommodate any antibody and any side-chains. To achieve this, we rely on a global family alignment scheme and the 4-atom sidechain representation we introduce in Figure 1 and lines 176-200. A key distinction of our representation is that it goes against the common practice of thinking of residues as a set (attention / GNNs) or a variable-length sequence (when adding positional encodings). As we explain in the paper, our representation simplifies learning by allowing the model to associate specific structural roles to particular table rows as well as to easily change the length of a protein mid-generation. The practical advantages of our representation are consistently illustrated in our experiments.
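As a simplified illustration of the fixed-size table (a hypothetical encoding; the actual representation also stores atom positions in each row and uses the full AHo alignment), chains of different real lengths map to arrays of identical shape, with a gap token marking absent positions:

```python
import numpy as np

ALPHABET = "ACDEFGHIKLMNPQRSTVWY-"   # 20 amino acid types plus a gap token

def to_table(aligned_seq):
    # aligned_seq: a chain already aligned to a fixed numbering scheme,
    # with '-' wherever this particular antibody has no residue
    table = np.zeros((len(aligned_seq), len(ALPHABET)))
    for i, aa in enumerate(aligned_seq):
        table[i, ALPHABET.index(aa)] = 1.0
    return table

# two chains with different numbers of real residues, same table shape
t1 = to_table("QV-QLV")
t2 = to_table("QVKQLV")
```

Because the shape never changes, the model can associate structural roles with specific rows, and "length changes" reduce to toggling gap tokens mid-generation.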
As you have raised a lot of questions, there is not enough space in the rebuttal to answer them all. We will answer the remaining ones in the comments below, as soon as they are enabled.
---
Rebuttal Comment 1.1:
Title: Additional Answers
Comment: > Gaussian diffusion plus projection step vs. SE(3) diffusion. The projection step seems to me as a way to circumvent working with rigid body frames by always projecting the gaussian diffusion over coordinates onto the rigid body constraints. But this seems close to SE(3) diffusion [2] over frames where the C-a coordinates diffuse along with the backbone atoms which always have the rigid body constraints. Why is the gaussian diffusion and projection step necessary to develop if we have SE(3) diffusion? Furthermore, SE(3) diffusion is not compared with so we do not know which is better.
Indeed, the projection layer provides a way to avoid using SE(3) diffusion over the frame rotation. We want this as diffusion on SE(3)/SO(3) is imprecise (see below), which contrasts with the usual Gaussian diffusion we are using.
Note that technically the diffusion should be SO(3) not SE(3) as chirality of amino acids is fixed in the body (reflections are not allowed).
The proper distribution for diffusion on SO(3) is the IGSO(3) (https://openreview.net/pdf?id=BY88eBbkpe5). Unfortunately, the latter does not come with an analytical expression for the density, which is expressed as an infinite series. Thus, to employ diffusion on IGSO(3) one is practically forced to truncate the infinite series, rendering the computation cumbersome and imprecise (RFDiffusion retains 2000 terms of that infinite series at every step, hinting at a slowly converging series). Note further that RFDiffusion uses an MSE training loss that, by their own admission, is not the mathematically correct choice for the reverse process (possibly because MSE, which is the correct loss for Gaussian diffusion, works better). Also, their formulation depends on an approximation of q(r^(0) | r^(t)) by a point mass to recover an approximate gradient on IGSO(3). So SO(3) diffusion in practice comes with multiple approximations and inaccuracies.
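For illustration, the truncated series in question looks as follows. This is a generic heat-kernel-on-SO(3) sketch over the rotation angle omega; the exact parameterization and normalization differ between papers.

```python
import math

def igso3_angle_density(omega, eps, n_terms=2000):
    # Truncated heat-kernel series on SO(3); larger eps means more diffusion.
    # Term l contributes (2l+1) exp(-l(l+1) eps) sin((l+1/2) omega) / sin(omega/2),
    # and practical implementations must cut the sum off at some n_terms.
    total = 0.0
    for l in range(n_terms):
        total += (2 * l + 1) * math.exp(-l * (l + 1) * eps) \
                 * math.sin((l + 0.5) * omega) / math.sin(omega / 2)
    return total
```

For small eps (early diffusion steps) many terms contribute, which is exactly the regime where the truncation becomes expensive and imprecise; for large eps only the l = 0 term survives and the density flattens out.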
DiffAb uses SO(3) diffusion and MEAN also uses the Ca + rotation representation of the frame. As can be seen in Table 3 in the paper and the main response here, we do outperform those approaches.
> Line 5-6 "Our approach improves protein diffusion ... handles sequence-length changes". This is not true. The authors achieve this by aligning all antibodies to a reference alignment (the AHo numbering) that has been specifically developed for antibodies. This does not exist for nearly all proteins so their method cannot be applied. Even if it can be applied, the authors cannot claim it works for other proteins until it is demonstrated.
Thank you for pointing this out. We meant to say antibodies in this line. We will correct it.
As mentioned in the conclusions, although we focused here on antibodies, there is no fundamental obstacle to applying our method to any sufficiently large protein domain family, as organized in CATH and Pfam. Some of the more ubiquitous enzyme families have similar folds and catalytic functions but substantial substrate diversity (e.g., SAM-dependent methyltransferases, long chain alcohol oxidases, zinc-dependent alcohol dehydrogenases, amine dehydrogenases, and many more enzyme families each have several hundred thousand to millions of annotated sequences). The TIM-barrel fold, a fold with vast biosynthetic potential, has >20 distinct families in CATH, most sharing a stereotypical anatomy with an 8 parallel strand alpha-beta (n=8, S=8) fold; our model could also be applied to this family. We intend to explore these possibilities in future work.
> Line 67 - 69 "These results provide the first experimental evidence that a generative model trained on mutagenesis data can create new antibody binders...". This is not true. [4] is just one paper that showed new antibody binders can be created with generative models. In addition, RFdiffusion [5] which can generate de novo binders is not mentioned.
Our precise claim is more nuanced than what was depicted in the comment:
“These results provide the first experimental evidence that a generative model trained on mutagenesis data can create new antibody binders of high affinity without post-selection by learned or physics-based binding predictors”
Paper [4] provides very little information about how the generation works but admits to filtering based on an ensemble of models. Other papers have already achieved functional antibodies with sequence-based methods and using filtering (e.g., Mason et al., DOI: 10.1038/s41551-021-00699-9). RFdiffusion has designed binders that were verified in the lab, but those were based on helical bundles, not antibodies. We will elaborate on this in the manuscript.
---
Rebuttal Comment 1.2:
Title: Additional Answers
Comment: > Title "AbDiffuser: full-atom generation of in-vitro functioning antibodies". This a bold claim that is not justified by the results of generating 16 functioning antibodies for a single target HER2. Furthermore, it is unclear how novel these antibodies are.
We would like to draw attention to the fact that we rely on the gold-standard SPR assay, which provides high-confidence measurements and is orders of magnitude more expensive than high-throughput experiments. So the binding results we report are reliable and reproducible. The high cost of SPR experiments makes higher-throughput experimentation prohibitive. Even bio journal papers often perform SPR on only tens of their best designs (e.g., Mason et al., DOI: 10.1038/s41551-021-00699-9 tested 30 sequences this way).
All of the binders we found were novel. You can see the corresponding analysis in Appendix K. There, it is shown that the best binder found is equidistant to both the closest binder and the closest non-binder. Finding a better or even a competitive binder to trastuzumab is no trivial task, as a lot of engineering and optimization went into the latter’s development. Please also note that the best binder reported by [4], which also focused on the HER2 target, has a pKD of 9.03, half a log-unit below our best binder with a pKD of 9.5.
> Line 87. The authors claim APMixer is a novel neural network for processing proteins. The claimed novelty seems very incremental. MLPMixer is not new, FA is not new. The only component which I believe is novel is the sequence alignment of each antibody such that they all have the same sequence length as input. But this does not warrant technical novelty.
It is true that the model is largely built from known components. However, almost all new models used for proteins (and even in other domains) also rely mostly on existing components with some improvements. In our eyes, the key distinction of APMixer is that it is not a relational model (GNN or Transformer), which contrasts with existing structure-based antibody and protein models. As we show in our work, there is merit to using such a non-relational model, as it offers better computational efficiency (Table 2), better memory complexity of O(nd + d^2) compared to O(n^2 + nd + d^2) for a traditional transformer or O(Δnd + d^2) for GNNs with message functions (here Δ is the degree, n the number of nodes/residues, and d the embedding dimension), and better results (Tables 1 and 3). APMixer also avoids the potential limitations of expressive power (see our next comment) that, as discussed before, can be an issue for GNNs. Furthermore, due to the antibody representation we introduced, APMixer does not need to be conditioned on sequence length. Our aim here is to provide the community with a valid alternative to the usual GNN or Transformer models, which so far has not really been explored.
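For intuition, a minimal Mixer-style block over a fixed-size residue table (a simplified sketch, not the actual APMixer code) shows where the memory advantage comes from: both mixing steps are plain MLPs, so activations stay O(nd) and no n-by-n attention map is ever materialized.

```python
import numpy as np

def mlp(x, W1, W2):
    # two-layer MLP with a ReLU nonlinearity
    return np.maximum(x @ W1, 0.0) @ W2

def mixer_block(X, Wt1, Wt2, Wc1, Wc2):
    # X: (n, d) residue table with n rows (positions) and d channels
    X = X + mlp(X.T, Wt1, Wt2).T   # token mixing: MLP across the n rows
    X = X + mlp(X, Wc1, Wc2)       # channel mixing: MLP across the d channels
    return X

# tiny example with hypothetical sizes
n, d, h = 6, 8, 16
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))
out = mixer_block(X,
                  rng.normal(size=(n, h)), rng.normal(size=(h, n)),
                  rng.normal(size=(d, h)), rng.normal(size=(h, d)))
```

Because the table has a fixed number of rows, the token-mixing weights can be tied to specific alignment positions, which is what lets the model assign structural roles to rows.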
Nonetheless, as you have also stated, the paper introduces many new concepts. So the overall novelty of the work is high and transcends the APMixer architecture alone.
> I don't understand why SE(3) universality isn't already provided by FA.
FA only provides universality if the model it wraps is already universal. Relational models, such as GNNs, are known to have limitations in terms of their expressive power (see e.g., https://arxiv.org/pdf/1810.00826.pdf, https://arxiv.org/pdf/2301.09308.pdf). As we prove in Appendix G, APMixer does not suffer from these limitations and is actually universal. To the best of our knowledge, this was not previously known for MLP-Mixer and it was not discussed in the original paper. As you can also see in Appendix G, the proof is not trivial.
> Line 250-252. Why would we expect IgLM to have the best accuracy just because it was trained on more data? IgLM is not trained on the structure like AbDiffuser or the other baselines.
We only claim that IgLM should have close to the best accuracy achievable by the *sequence-only* models (line 254), especially because it was trained on more data and on data that include our test set. Note that the transformer we use in our benchmarks is also a sequence-only transformer, but trained in our framework and with the usual training set.
> The authors miss out on state-of-the-art. On top of the ones already discussed there have been diffusion models for antibody generation [6, 7] (dyMEAN and DiffAb)
The code for dyMEAN was not available at the time of the submission, but it is true that it's a worthy baseline to include (even though dyMEAN is not a diffusion model). We have included results for dyMEAN and DiffAb in the main response which showcase the superiority of our approach.
---
Rebuttal Comment 1.3:
Title: Additional Answers
Comment: > Unclear experimental set-up. The experimental set-up is poorly written.
>> The metrics are not clearly defined in the main text.
You can find a comprehensive description of the metrics in Appendix I. We will expand the description in the main text (lines 256-263) in the final version, subject to space limitations.
>> The training dataset is unclear: OAS does not have structures so how are the structures obtained for training?
As stated in line 271, they are folded with IgFold and optimized with Rosetta.
>> Do the authors train RefineGNN and MEAN baselines only on the CDR which they were designed for or also give them the full sequence?
As RefineGNN and MEAN were built to do CDR inpainting, in this case we train them only on CDR H3 redesign. So, at evaluation time, they receive the test antibody structure and sequence and then fill in the removed CDR H3 (its sequence and structure). In contrast, the other models are also tasked with re-designing the framework.
>> How are the splits performed?
The experiments on OAS and HER2 binders rely on i.i.d. splits, which is the mathematically correct choice when aiming to test the ability of a generative model to sample from the same distribution. (A perfect generative model should produce samples that are indistinguishable from true held-out i.i.d. samples.) For the SAbDab experiment (see general reply), where we aim to test generalization, we adopt the DiffAb split, which clusters all antibodies in SAbDab at up to 50% CDR H3 identity and selects 5 clusters containing 19 distinct antibodies for testing. This ensures that the training set does not overlap with the test set, as SAbDab has many repeating or highly similar entries.
>> The Wasserstein distance metric has no equation so I am unclear what it is exactly measuring.
The Wasserstein distance, also known as Earth Mover's Distance, is a standard metric to compare distributions (https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.wasserstein_distance.html) and is generally considered as one of the best ways to evaluate generative models. For example, the popular FID score used to evaluate image generators is effectively a Wasserstein distance under an extra Gaussian assumption. We will elaborate on the definition in the appendix.
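For illustration, with two equal-size 1-D samples the Wasserstein-1 distance reduces to the mean absolute difference between the sorted values; a minimal pure-Python sketch (scipy.stats.wasserstein_distance handles the general weighted case):

```python
def wasserstein_1d(a, b):
    # W1 between two equal-size 1-D empirical distributions:
    # the average absolute gap between the sorted samples
    a, b = sorted(a), sorted(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

wasserstein_1d([0.1, 0.4, 0.5, 0.9], [0.2, 0.4, 0.6, 0.8])  # → 0.075
```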
>> Why is there no metric on diversity and novelty of the sequences? Perhaps this information is buried in the appendix but it should be up front in the main text.
All the generated sequences were 100% novel. As we state in line 327, MEAN was the only tested model which did not generate perfectly unique sequences. In its case, only 38.9% of the generated sequences were unique. We will highlight that further in the main text.
We also measure novelty through the “closeness” metric, which compares the edit distances of the generated sequences to any of the training sequences with the edit distances of the test sequences to any of the training sequences. Put simply, we compute the edit-distance histogram (to the training set) for generated data and for test data, and then check how close the two histograms are (using the standard Wasserstein distance). Intuitively, when we are aiming to fit some distribution, what our model produces should be indistinguishable from an i.i.d. hold-out set: the generated sequences should neither be too close nor too far from the training data, but their distances should follow the same statistics as the i.i.d. hold-out set, corresponding to a small closeness value.
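A minimal sketch of such a closeness computation (hypothetical helper names, simplified to equal-size sets; not our exact evaluation code): take each sequence's edit distance to its nearest training sequence, then compare the two resulting distance samples with the 1-D Wasserstein distance.

```python
def edit_distance(a, b):
    # classic Levenshtein dynamic program
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,             # deletion
                           cur[-1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def nearest_train_distances(seqs, train):
    return [min(edit_distance(s, t) for t in train) for s in seqs]

def closeness(generated, test, train):
    # W1 between the two nearest-distance samples (equal sizes assumed)
    g = sorted(nearest_train_distances(generated, train))
    h = sorted(nearest_train_distances(test, train))
    return sum(abs(x - y) for x, y in zip(g, h)) / len(g)
```

A closeness of zero means the generated sequences sit at the same distances from the training set as genuine held-out data: neither copied nor implausibly remote.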
> The authors claim they have designed binders which have been tested in the wet-lab but this result must be accompanied by experimental results in the supplementary or plots in the supplement. This is standard in bio journals. In addition it is unclear how novel each design is. If only a few mutations were necessary to obtain a tight binder then this task is not difficult.
We include the generated binder sequences as well as the experimentally measured binding affinity values (expressed as Kd - dissociation constants) and experimental result statistics in Figures 3 and 2. We also provide additional information in Appendix K, such as the edit distances to closest known binders and non-binders in the HER2 dataset we have used. For example our best discovered binder has CDR H3 edit distance of 2 from both the binder and non-binder sets. So it's not a trivial mutation, because it is easy to ‘ruin’ the binding. Also, none of our generated 1k binder candidates appeared in the training set.
We believe this covers what is usually presented in comparable experimental papers in bio journals (see e.g., [4]). At the same time, please consider that you are reviewing for a ML conference not a bio journal, and that no other comparable ML conference paper has presented in vitro experimental results as we do.
---
Rebuttal Comment 1.4:
Title: Additional Answers
Comment: > Limitations are not discussed though I believe there are many issues with the paper.
As we stated in the paper (e.g., Chapter 3, lines 87-88), the model we introduce can only work on large homologous protein families, and while there are potentially quite a few such families, we have only demonstrated its utility for antibodies. We see this as the main limitation of our work. Our reply provided thorough answers to the issues you have raised.
Strengths: - The APMixer architecture significantly reduces GPU memory, which is an interesting improvement since many related works rely on a triangular update mechanism that has O(N^3) complexity. It would be interesting to see whether such an approach can be adopted for more general tasks, such as protein folding and design.
- The projection layer looks interesting, as it helps maintain the intermediate diffusion state on the "manifold" of valid structures. Similar to the above, it remains unknown whether this may benefit general protein design.
- The authors validate the method in-vitro, increasing the solidness and usefulness.
Weaknesses: - AbDiffuser does not directly use epitope information, which may harm generalization. For example, given a newly seen antigen, how can this method be applied to design an antibody that has a high binding affinity?
- Since the sequence dataset is folded with IgFold (line 271), it may indicate that AbDiffuser's "folding" performance is bounded by IgFold. The model is ranked with another classifier trained on binding probability (line 316). It appears that IgFold and the classifier act as the model's "teacher." How does the model outperform its "teacher"? If we have such a "teacher", why not try to use it directly, with MCMC, Bayesian optimization, etc.?
- In line 314, a random mutation approach may generate 9k binders and 25k non-binders, indicating that random mutation yields a success rate of 9/(9+25) ≈ 26%. In line 66, AbDiffuser has a success rate of 22.2%. Although the number may increase to 57.1% with post-processing, does this mean that "AbDiffuser performs worse than a random mutation baseline"?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - During forward diffusion, is the residue type continuous or discrete? If it is continuous, how do you perform "backbone/side-chain projection" (section 3.4) since the type of residue is undecided? If it is discrete, during the forward process (by interpolating between a prior distribution and the final one-hot distribution), will the residue type remain unchanged? For example, assuming there are 5 amino acid types, the true state is (0, 0, 0, 0, 1) and the prior distribution is (0.2, 0.2, 0.2, 0.2, 0.2), the residue type will always be the 5th state.
- Are the priors computed over the entire dataset, instead of some antibody family? What would be the performance if directly sampling from the sequence and structure prior?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments, we answer them below.
> AbDiffuser does not directly use epitope information, which may harm generalization. For example, given a newly seen antigen, how can this method be applied to design an antibody that has a high binding affinity?
This is a pertinent question. As explained in lines 359-361, to create diversified and improved binders for a new antigen, we would need a starting dataset of potential binders. Thankfully, this can be acquired relatively easily via high-throughput experiments (yeast & phage display, immunization campaigns). Due to their speed and low cost, these experiments are the first step in drug discovery, taken to roughly scope the Ab landscape when looking for a new drug. Starting with this collection of potential binders, the aim of the HER2 experiment is to select diversified high-binding molecules that, down the line, can be tested for good developability properties. On the other hand, co-crystal structure determination (needed by current CDR-redesign methods such as DiffAb or MEAN) is usually done only for valuable identified leads.
Nevertheless, to prove that AbDiffuser can also bring benefits to the conditional generation task, we performed a new experiment on SAbDab CDR redesign, where we compare against DiffAb and Rosetta (see general reply). We extended the method by adding a GNN that exchanges information between the antigen (Ag) and antibody (Ab), as done in prior work.
> ... AbDiffuser's "folding" performance is bounded by IgFold. The model is ranked with another classifier trained with binding probability (line 316). It appears that the IgFold and classifier act as the model's "teacher." How does the model outperform its "teacher"? Why not try to use such a "teacher" directly, with MCMC, Bayesian optimization, etc?
Indeed, the model's folding capacity is upper-bounded by IgFold's folding performance when trained only on folded structures, though as we also show in the new experiments, it is possible to train on crystal structures as well. Also, the classifier is not used as a teacher but strictly for evaluating how well the generative models capture the binder distribution (note that the generative models are trained only on binders, while the classifier sees both binders and non-binders).
Nevertheless, it is interesting to consider the spirit of your question. In general, there are two main arguments why employing a generative model to model the distribution is a better choice than using search:
First, there is an intimate interaction between sequence and structure which motivates building a model that generates sequence and structure jointly: structural features are known to be quite informative of which amino acid types are acceptable in a given structural position (e.g., due to charge complementarity of nearby amino acid side chains). So the hypothesis is that generating structure as well should make the model generate better sequences. This is confirmed by our experiments, since the structure+sequence models outperformed sequence-only models, even when the latter were trained on much more data.
Second, a key pitfall of directly searching (MCMC/Bayesian optimization) for Abs that satisfy some properties according to some classifiers is that the classifier predictions p(y|x) are only trustworthy close to their training data distribution and do not reveal the all-important data likelihood p(x). Unfortunately, without controlling p(x), design is doomed to fail! That is because we can only trust what the classifier predicts if we have the ability to select likely samples according to the distribution it was trained on. The latter is what a good generative model (like a denoising diffusion model) excels at, but it is not achieved if we search over the space of Abs that the folding model and the classifier predict as positives.
> During forward diffusion, is the residue type continuous or discrete? (...).
The residue type is always discrete (both in forward and reverse processes) and is selected by sampling. In your example, if at t=500 the categorical probability distribution was (0.1, 0.1, 0.1, 0.1, 0.6), we would sample from it to produce the noised discrete sample which could result in a residue in any state (it would end up at state 1 with probability 0.1 and so on).
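To make the noising step concrete, here is a minimal illustrative sketch (not the paper's code; the linear interpolation schedule and step count are our assumptions) of sampling a discrete noised residue type from the interpolated categorical distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def noised_residue(x_onehot, prior, t, T):
    """Sample a discrete residue type at diffusion step t by drawing from
    a categorical distribution interpolated between the one-hot data
    distribution (t=0) and the prior distribution (t=T)."""
    alpha = t / T
    probs = (1 - alpha) * x_onehot + alpha * prior
    return int(rng.choice(len(probs), p=probs))

x = np.array([0.0, 0.0, 0.0, 0.0, 1.0])   # true residue: the 5th state
prior = np.full(5, 0.2)                   # uniform prior over 5 types
sample = noised_residue(x, prior, t=500, T=1000)
# at t=500 the categorical probabilities are (0.1, 0.1, 0.1, 0.1, 0.6):
# the 5th state is drawn with probability 0.6, each other with 0.1
```

The sample thus remains discrete at every step, and any of the five states can occur in the noised sequence.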
> Are the priors computed over the dataset? What would be the performance if directly sampling from the sequence and structure prior?
We compute the priors over the training set, but it could also make sense to, say, compute the priors over all known antibody structures and sequences.
Sampling the structure prior still gives you a random (Gaussian) point cloud. Atom positions in it are of course correlated, but as a whole it looks nothing like a true antibody. So the performance of any structural metrics is very poor.
In sequences, the positions in the antibody framework regions can be highly conserved (e.g., we can have just a couple of different amino acid types observed in a particular AHo position). So randomly sampling the prior would give you a highly statistically similar sequence in the framework region (though it might not achieve high expression in vitro). CDR regions are much more diverse, especially CDR H3 and CDR L3. So there the sampling is more random. When we use the priors to sample sequence and structure for paired OAS, we get WD(Naturalness): 0.4004, WD(Closeness): 0.2969, WD(Stability): 0.3871, RMSD: 17.3944. These results are an order of magnitude worse than what we can achieve with AbDiffuser: WD(Naturalness): 0.0916 , WD(Closeness): 0.0520, WD(Stability): 0.0186, RMSD: 0.4962.
> Random mutation approach success rate
In short, this is not true as the dataset considered does not contain random mutations but combinations of carefully selected mutations determined by experts using a deep mutagenesis experiment (see Mason et al, DOI: 10.1038/s41551-021-00699-9). Due to the space limit of the rebuttal we will expand on this in the discussion period.
---
Rebuttal Comment 1.1:
Title: Random mutation success rate
Comment: > In line 314, a random mutation approach may generate 9k binders and 25k non-binders, indicating that random mutation causes a success rate of 9/(9+25)=26%. In line 66, AbDiffuser has a success rate of 22.2%. Although the number may increase to 57.1% by post processing, does this mean that "AbDiffuser performs worse than a random mutation baseline"?
Details on why this is incorrect:
1. Random CDRH3 mutations taken from OAS are known to have binding rates close to 2.68% as reported by Shanehsazzadeh et al (DOI: 10.1101/2023.01.08.523187).
2. In contrast, the dataset considered does not contain random mutations but combinations of carefully selected mutations determined by experts using a deep mutagenesis experiment (see Mason et al, DOI: 10.1038/s41551-021-00699-9). The dataset was also built using noisy high throughput yeast display experiments and our analysis suggests that ~21% of the reported labels are wrong (deduced from sequences with both positive and negative labels, which we drop in pre-processing). So, if an unconditional generative model is trained on both binders and non-binders and exactly captures the training distribution, it is expected to achieve a binding rate of ~26% +/- 5%. (The SPR wet-lab experiments we performed are precise and are not susceptible to such issues.) One can use guidance or in silico filtering to increase the binding rate, but we cannot generally expect to do better without them.
3. Our analysis in Appendix K demonstrates that many of our binders validated in SPR experiments, including the best one, are the same number of mutations away from both the binder set and the non-binder set. This highlights that it is very easy to make wrong mutations, especially if the negative examples are not known (as is the case in our experiments).
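For transparency, the arithmetic behind the dataset's apparent binder fraction can be checked directly (illustrative sketch; the counts are the approximate figures discussed above):

```python
# Approximate binder/non-binder counts from the HER2 dataset discussed above.
binders, non_binders = 9_000, 25_000
dataset_rate = binders / (binders + non_binders)  # ~0.26
# An unconditional generative model that exactly captured this training
# distribution of binders and non-binders would therefore be expected to
# achieve a binding rate of roughly 26%, before any guidance or
# in silico filtering is applied.
```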
---
Rebuttal 2:
Comment: Dear Reviewer is6j, since the discussion period is coming to an end soon we wanted to ask if there are any remaining questions that we can address?
---
Rebuttal Comment 2.1:
Comment: I have read all the discussions and most of my concerns have been addressed. Thank you for your effort to make things clearer and I am willing to raise my score. | Summary: Antibody design is of extreme importance for both fundamental and applied biological science. The submission introduces an equivariant and physics-informed diffusion model called AbDiffuser.
Strengths: Originality: A few tools have already addressed the same problem (for example, DiffAb, https://github.com/luost26/diffab). However, due to the lack of systematic evaluation and comparison between the approaches, AbDiffuser remains a useful and important new tool.
Quality: The submission provides both in silico and in vitro validation, which is crucial for antibody design. The authors provide a detailed supplementary with a clear explanation of the methods and approaches used. However, the paper lacks a proper discussion of the potential weaknesses and limitations of AbDiffuser.
Clarity: The text is clear and all the sections are presented in a structured manner.
Significance: The results are important and useful for biology and drug discovery in particular. AbDiffuser can be used by researchers for a fast and efficient design of antibodies to particular antigen.
Weaknesses: Even though the paper provides a comparison of AbDiffuser with other models, it lacks a comparison to diffusion models specific to antibody-antigen design (for example, https://github.com/luost26/diffab). It could be useful to discuss AbDiffuser in comparison to these tools. Also, there are several new protein-focused diffusion models, such as RFdiffusion (https://github.com/RosettaCommons/RFdiffusion) or DiffDock-PP (https://github.com/ketatam/DiffDock-PP); a discussion of or comparison to these could be valuable. I suggest extending the "Related Work" section to address the issues mentioned above.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: It could be useful to discuss AbDiffuser in comparison to other diffusion tools focused on proteins and antibodies, such as RFdiffusion (https://github.com/RosettaCommons/RFdiffusion), DiffDock-PP (https://github.com/ketatam/DiffDock-PP), and DiffAb (https://github.com/luost26/diffab).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors need to include a section about the limitations and weaknesses of AbDiffuser, as well as potential improvements of the method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and recognising the potential of our work. We answer your comments below.
> There are few tools that have already addressed the same problem (for example, DiffAb, https://github.com/luost26/diffab). (..) Even though the paper provides comparison of AbDiffuser with other models, it lacks comparison to specific to antibody-antigen diffusion models (for example, https://github.com/luost26/diffab). It can be useful to discuss AbDiffuser compared to these tools.
From a method standpoint, DiffAb and similar tools (e.g., MEAN) aim to re-design the CDR loops in a known co-crystal structure, and not the full antibody as we do. Often, as in the case of DiffAb, they are only tested at re-generating one CDR loop at a time. We discuss the works using such approaches in more detail in the related work (Appendix A, line 650 and briefly in lines 41-42 of the main text).
To test how well these methods work in comparison to AbDiffuser, we retrained DiffAb and dyMEAN for the HER2 binder design task (see our general response above). For completeness, we also added experiments on conditional CDR redesign using the SAbDab split from DiffAb. The results showcase that in conditional CDR redesign, AbDiffuser performs much better than DiffAb in terms of CDR sequence recovery, with an impressive 1.58x improvement in CDR H3 amino acid recovery.
> However, the paper lacks a proper discussion about potential weaknesses and limitations of AbDiffuser.
The two main limitations that we see are that 1) our method as is can only be applied to proteins that belong to a conserved fold (such as the Abs, TCRs, TIM-barrels, etc), and 2) that the original experiments focused on unconditional generation, so each new target will require a new dataset (obtained e.g., by a high throughput display experiment). We mention these limitations in the paper, but we will make sure to highlight them further in the final version.
We also note that the second limitation was mitigated by the new experiment we performed on SAbDab CDR inpainting task – the same experiment performed by DiffAb (see general response).
> Also, there are several new protein-focused diffusion models such as RFdiffusion (https://github.com/RosettaCommons/RFdiffusion) or DiffDock-PP (https://github.com/ketatam/DiffDock-PP), the discussion of or comparison to can be valuable. I suggest extending the "Related Work" section to highlight the mentioned above issues.
The paper discusses RFdiffusion and explains that the latter's utility for Abs is limited, as the method is not built to generate the sequence but only the protein backbone (which is less challenging for Abs). We will extend the discussion accordingly and include DiffDock-PP in it, explaining that there are interesting advances in protein design that address problems different from the generation task we focus on.
---
Rebuttal Comment 1.1:
Comment: Thank you for the changes and additions to the paper. I am satisfied with the rebuttal and have no further comments to add. | Rebuttal 1:
Rebuttal: We thank all reviewers for their thoughtful comments and suggestions. In response, we performed additional experiments, adding two state-of-the-art baselines to our current comparison and showcasing how AbDiffuser significantly outperforms previous methods in SAbDab CDR inpainting.
### HER2 binder generation task – DiffAb and dyMEAN (Table 3).
We found that DiffAb trained on HER2 binders focused on high-likelihood modes of the distribution and did not generate perfectly unique samples, but achieved good binding probabilities. To produce a head-to-head comparison, we also generated samples from AbDiffuser by removing the additional noise added during every step of the reverse process, similarly to what was proposed in RFDiffusion. As shown in the table below, the binding probability was greatly increased (surpassing DiffAb) at the cost of focusing more on high-likelihood modes.
| Model | \\(W_1(\text{Nat.})\downarrow\\) | \\(W_1(\text{Clo.})\downarrow\\) | \\(W_1(\text{Sta.})\downarrow\\) | \\(W_1(\text{PSH})\downarrow\\) | \\(W_1(\text{PPC})\downarrow\\) | \\(W_1(\text{PNC})\downarrow\\) | \\(W_1(\text{CSP})\downarrow\\) | \\(W_1(\Delta G)\downarrow\\) | \\(\text{RMSD}\downarrow\\) | \\(p_{\text{bind}}\uparrow\\) | \\(\text{Uniq.}\uparrow \\) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| DiffAb | 0.0074 | 0.0014 | 0.0498 | 0.5481 | 0.0097 | 0.0067 | 3.4647 | 6.7419 | 0.4151 | 0.8876 | 99.7% |
| AbDiffuser | 0.0013 | 0.0018 | 0.0028 | 0.4968 | 0.0205 | 0.0113 | 0.1588 | 6.4301 | 0.3822 | 0.5761 | 100% |
| AbDiffuser (side chains) | 0.0010 | 0.0005 | 0.0062 | 1.2909 | 0.0115 | 0.0029 | 0.0948 | 32.0464 | 0.4046 | 0.6848 | 100% |
| AbDiffuser (no noise) | 0.0008 | 0.0014 | 0.0265 | 2.5944 | 0.0206 | 0.0053 | 0.2378 | 15.2200 | 0.3345 | 0.9115 | 99.7% |
| AbDiffuser (side chains, no noise) | 0.0015 | 0.0024 | 0.0159 | 1.5043 | 0.0210 | 0.0126 | 0.5173 | 114.4841 | 0.6795 | 0.9436 | 91.4% |
We also trained dyMEAN on the HER2 dataset (code became available after the NeurIPS submission deadline). Unfortunately, we found that dyMEAN was very prone to collapse. Over 4 distinct runs, the model with the best validation loss always generated only a single example.
### Full-Ab generation on paired OAS - dyMEAN (Table 1).
In contrast to MEAN and DiffAb, the latest version of dyMEAN has been shown to be able to generate full antibodies. We thus also trained dyMEAN on the paired OAS generation (Table 1) and got the following results:
| Model | \\(W_1(\text{Nat.})\downarrow\\) | \\(W_1(\text{Clo.})\,\downarrow\\) | \\(W_1(\text{Sta.})\downarrow\\) | \\(W_1(\text{PSH})\downarrow\\) | \\(W_1(\text{PPC})\downarrow\\) | \\(W_1(\text{PNC})\downarrow\\) | \\(W_1(\text{CSP})\downarrow\\) | \\(W_1(\Delta G)\downarrow\\) | \\(\text{RMSD}\downarrow\\) | \\(\text{Uniq.}\uparrow \\) |
|---|---|---|---|---|---|---|---|---|---|---|
| dyMEAN | 0.1319 | 0.1600 | 0.0423 | 3.9145 | 0.1566 | 0.2929 | 2.3711 | 601.1153 | 3.8157 | 58.2% |
| AbDiffuser | 0.1979 | 0.0921 | 0.0662 | 2.3219 | 0.0314 | 0.0285 | 0.6662 | 13.3051 | 0.5230 | 100% |
| AbDiffuser (side chains) | 0.0916 | 0.0520 | 0.0186 | 6.3166 | 0.0209 | 0.0754 | 0.8676 | 16.6117 | 0.4962 | 100% |
Only 58.2% of the 1k generated sequences were unique, which is problematic considering that paired OAS has 100k antibodies in the training set. Our results indicate that further advances would be needed to avoid mode collapse and to accurately model paired OAS with dyMEAN.
### SAbDab CDR inpainting
We evaluated AbDiffuser on the SAbDab co-crystal structure CDR inpainting task from DiffAb, using their splits and experimental setup (Section 4.1 in https://www.biorxiv.org/content/10.1101/2022.07.10.499510v5.full.pdf). We conditioned AbDiffuser on the antigen by replacing the initial embedding MLP in APMixer with the output of a GNN layer that passes messages from the closest 10 antigen residues to the CDR residues. We will add further details in the supplement.
The table below shows that AbDiffuser outperformed Rosetta and DiffAb on amino-acid recovery by a wide margin. The CDR RMSD results are quite comparable between DiffAb and AbDiffuser, but importantly, AbDiffuser does better than DiffAb on CDR H3 and CDR L3, which have the most variability in their structure and can influence binding the most. The good performance of AbDiffuser in amino-acid recovery could be attributed to the usefulness of AHo numbering and to APMixer likely being better at modeling sequences than a GNN. Our ablation experiments on paired OAS generation (Table 1) corroborate this hypothesis.
| Model | AA CDR H1 | AA CDR H2 | AA CDR H3 | RMSD CDR H1 | RMSD CDR H2 | RMSD CDR H3 | AA CDR L1 | AA CDR L2 | AA CDR L3 | RMSD CDR L1 | RMSD CDR L2 | RMSD CDR L3 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Rosetta (RAbD) | 22.85% | 25.5% | 22.14% | 2.261 | 1.641 | 2.9 | 34.27% | 26.3% | 20.73% | 1.204 | 1.767 | 1.624 |
| DiffAb | 65.75% | 49.31% | 26.78% | 1.188 | 1.076 | 3.597 | 55.67% | 59.32% | 46.47% | 1.388 | 1.373 | 1.627 |
| AbDiffuser | 78.48% | 63.68% | 42.19% | 1.507 | 1.296 | 3.431 | 83.84% | 90.35% | 70.98% | 1.268 | 1.433 | 1.312 |
AA stands for amino acid recovery; RMSD measures the difference between the generated CA positions and the CDR CA positions in the original co-crystal structure.
DiffAb also included the estimated improvement in binding score (Rosetta ddG) in their metrics. Since Rosetta ddG has been proven to be an unreliable metric for determining binding affinity (see e.g., Figure 2b in https://pubs.acs.org/doi/10.1021/acs.jpcb.7b11367) we avoid using it for model comparison.
Note that we used the DiffAb split as it properly ensures that antibodies similar to those in the test set are removed from the training set (sequences are clustered at up to 50% CDR H3 identity). The dyMEAN split does not ensure this non-overlap, so their reported results are not comparable.
Additionally, we attach a PDF with training and sampling pseudocode.
Pdf: /pdf/36dee63f82cd001565f8d554803572cf82d59d64.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Learning Directed Graphical Models with Optimal Transport | Reject | Summary: This paper uses optimal transport for learning the parameters of a DAG structure that represents a Bayesian network given samples drawn from it (data). In the proposed algorithm the data can be incomplete (i.e. some random variables may be latent).
The algorithm can be seen as a generalization of the existing "Optimal transport-based divergence minimization" on target distributions that are factorized as a product of conditional univariate densities.
Strengths: 1. The generalization of the Optimal Transport to Bayesian networks is interesting and as far as I can say novel (though, more or less straightforward).
2. The paper is well-written (In particular the first two sections).
3. Several experimental models are presented.
Weaknesses: 1. This method is only suitable for learning the parameters of Bayesian network DAGs. It cannot be used for learning the DAG structure.
2. The description of the proposed algorithm (section 3 of the draft) is quite compact and many details are missing.
More details are presented in the supplementary material (Section B). Still, even there, the cost function and push-forward divergence measure that are used in the paper's experiments are not spelled out.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I would suggest moving section B of the supplementary material to the main text and moving the details of the experimental models to the supplementary material.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the novelty of our work. We address the reviewer's concerns as follows:
**1. This method is only suitable for learning the parameters of Bayesian network DAGs. It can not be used for learning the DAG structure.**
The primary goal of this work is to introduce a new approach to learning graphical models based on optimal transport, with parameter estimation (given the DAG structure) as the foundation stone. Parameter estimation in graphical models is a general, long-standing, and challenging problem in machine learning, with numerous specialized methods developed to address it. However, we note that our framework is sufficiently general to serve as a powerful backbone to tackle other graph structures and learning problems.
Although we leave the discussion on structural learning for future works, as mentioned in Section 6, extending OTP-DAG for this task is feasible. A starting point is to parameterize the DAG structure with a learnable (weighted) adjacency matrix and consider it as part of the model parameters to be learned. However, one needs to deal with further constraints specific to structural learning tasks, including the acyclicity of the DAG and important model assumptions such as faithfulness or sufficiency. Handling these aspects altogether deserves a separate contribution.
**2. The description of the proposed algorithm is quite compact and many details are missing. Still, even there, the cost function and push-forward divergence measure that are used in the paper's experiments are not spelled out.**
We have reported the formulation and other experimental technicalities for each application in Appendix C, including the choices of cost function and push-forward divergence. We recap these details in the following:
- In the topic modeling task (see Appendix C.1, page 5), we use the cross-entropy loss as the cost function and exact Wasserstein distance as the push-forward divergence measure.
- For hidden Markov models (see Appendix C.2, page 7), we use the KL divergence to optimize the push-forward constraint. As for the cost metric, the smooth $L_1$ loss and cross-entropy loss functions are used respectively for the Poisson time-series data segmentation task and the Polyphonic music modeling task.
- In the discrete representation learning task (see Appendix C.3, page 9), the cost function is chosen to be the mean squared error. The push-forward constraints are realized as three additional loss terms, for which we use a combination of the Wasserstein distance and the KL divergence.
We hope such diversity in the experimental setup can demonstrate the versatility of our method. Subject to space constraints, we will attempt to bring some of these details to the main paper.
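Across these tasks, the objective shares one common shape: a reconstruction cost between observed and reconstructed samples plus a weighted push-forward divergence. A minimal illustrative sketch of that shape (our simplification, using an MSE cost and a diagonal Gaussian KL to a standard normal prior as stand-ins for the task-specific choices listed above):

```python
import numpy as np

def gaussian_kl(mu, sigma):
    # Mean per-dimension KL( N(mu, sigma^2) || N(0, 1) )
    return float(np.mean(0.5 * (mu**2 + sigma**2 - 2.0 * np.log(sigma) - 1.0)))

def otp_dag_loss(x, x_recon, z_mu, z_sigma, lam=1.0):
    """Reconstruction cost plus weighted push-forward divergence."""
    recon = float(np.mean((x - x_recon) ** 2))   # cost function c
    push_forward = gaussian_kl(z_mu, z_sigma)    # divergence to the prior
    return recon + lam * push_forward
```

In each application, only the cost `c` and the divergence term change; the overall objective structure stays the same.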
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thank you for this feedback authors. This will be taken into account. | Summary: In this submission, the authors propose an optimal transport-based method to infer the parameters of probabilistic directed graphical models from partial observations.
In particular, given a DAG associated with the target model, the proposed method reparameterizes the probability of a node conditioned on its parents by an encoder with external perturbation.
Accordingly, a stochastic decoder is applied to map each node to the conditional density of its parents.
The above two modules lead to a model with an auto-encoding architecture, which can be learned like a Wasserstein autoencoder (WAE), as shown in Eqs. (2, 3).
Experiments on the inference of LDA, HMM, and discrete representation models demonstrate the potentials of the proposed method.
--- After rebuttal ---
Thanks for the authors' efforts in the rebuttal phase. After reading other reviewers' comments, my main concern about this work is still its similarity to WAE, especially the theoretical part. Although the authors claimed that WAE can be viewed as a special case of this work, in my opinion, it is more likely that this work is a special case of WAE. I am satisfied with the other part of this work, so my final score is kept as "borderline accept".
Strengths: 1. The paper is well-written and easy to follow.
2. The proposed method is reasonable --- the objective function is based on the theoretical part of WAE, whose rationality has been guaranteed. In addition, the implementation of the proposed method is simple.
3. The authors consider various applications, demonstrating the universality of the proposed method.
4. The limitations of the proposed method are discussed, and the potential solutions are provided at the same time.
Weaknesses: 1. If my understanding is correct, Theorem 1 in this submission is a special case of Theorem 1 in the WAE work [a]. The final objective (Eq.(3)) approximates the Wasserstein distance by relaxing the constraint of phi_i to a regularizer, which is also similar to the strategy of WAE. The authors should discuss the connections and the differences between the proposed method (and theory) and that in [a].
[a] Tolstikhin, Ilya, Olivier Bousquet, Sylvain Gelly, and Bernhard Schoelkopf. "Wasserstein Auto-Encoders." In International Conference on Learning Representations. 2018.
2. As the authors mentioned, the proposed method leverages the amortization strategy to reparameterize the conditional distributions. Therefore, in the experimental part, the authors should consider some amortization methods as baselines, e.g., those in [b, c, d].
[b] Kim, Yoon, Sam Wiseman, Andrew Miller, David Sontag, and Alexander Rush. "Semi-amortized variational autoencoders." In International Conference on Machine Learning, pp. 2678-2687. PMLR, 2018.
[c] Agrawal, Abhinav, and Justin Domke. "Amortized variational inference for simple hierarchical models." Advances in Neural Information Processing Systems 34 (2021): 21388-21399.
[d] Huang, Chin-Wei, Shawn Tan, Alexandre Lacoste, and Aaron C. Courville. "Improving explorability in variational inference with annealed variational objectives." Advances in neural information processing systems 31 (2018).
3. The datasets used in the experimental part are relatively simple and small. Especially in the experiments of discrete representation learning, I wonder 1) whether the proposed method can deal with images with larger sizes, e.g., face images in CelebA, and 2) besides reconstruction, whether the proposed method can generate images with tolerable qualities.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the comments in the section on weaknesses.
To demonstrate the novelty of the proposed method, the authors should clarify the connections and differences between the proposed method and WAE.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors discussed the limitations of using the amortization strategy at the end of the submission.
Some potential solutions are proposed at the same time.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the novelty of our work. We address the reviewer's concerns in the following.
**1. Connection with WAE:** WAE can be viewed as an application of OTP-DAG to a simple graphical model with only $2$ (sets of) nodes: the observed node $X$ and latent variables $Z$ (see Figure 1 in Appendix B). In this case, the backward mapping $\phi$ and forward mapping $\psi$ respectively play the roles of the encoder and decoder. Likewise, both functions are jointly learned by minimizing the reconstruction loss, and the push-forward divergence reduces to the prior matching term, where $P_Z$ is part of the model's generative process. OTP-DAG remains applicable when more parameters and hidden variables interplay in a more complex structure. In terms of theoretical contribution, we offer an alternative, yet more straightforward, approach to proving Theorem 1. This is achieved through the gluing lemma, which conclusively demonstrates the equivalence between the OT objective and the minimization of the reconstruction loss on observed samples.
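As a sketch of this connection (our paraphrase, not the paper's exact notation), in the two-node case the OTP-DAG objective reduces to the WAE form

```latex
\min_{\psi,\phi}\;
  \mathbb{E}_{x\sim P_X}\,
  \mathbb{E}_{z\sim \phi(\cdot\mid x)}
  \big[c\big(x,\,\psi(z)\big)\big]
  \;+\;\lambda\, D\big(\phi_{\#}P_X \,\Vert\, P_Z\big)
```

where $c$ is the cost (reconstruction loss), $D$ is the push-forward divergence matching the encoded distribution to the prior $P_Z$, and $\lambda$ weighs the relaxed constraint.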
**2. Amortization baselines**
We investigated two amortized variational inference (VI) methods: (1) Auto-encoding VI for topic modelling [e] and (2) Semi-amortized VAE [b]. We have published the codes for this part in our anonymous repository.
**2.1. Auto-encoding VI for LDA:** Prod LDA is the proposed method in [e] that replaces the mixture model in LDA with a product of experts. We ran Prod LDA on 2 tasks: parameter estimation and topic inference. We also considered Neural LDA, which is standard LDA within the same variational auto-encoding approach. Implementation of these models is provided in the OCTIS library (https://github.com/MIND-Lab/OCTIS). We also use OCTIS to standardize evaluation for all models on the topic inference task.
***Parameter Estimation:*** To ensure a fair comparison, we retrain our model using the architecture of the Neural/Prod LDA encoder for our backward map, while keeping the other settings the same as reported in Appendix C.1. Table 3 reports the fidelity of the estimates. Prod LDA clearly outperforms Neural LDA, as the former is an improvement of the latter. Our method is generally on par with Prod LDA, and we achieve this level of performance with a single learning procedure and without fine-tuning the hyper-parameters. Note that Prod LDA directly optimizes the ELBO, implicitly minimizing the KL divergence between the data and model distributions. This explains why it performs significantly better on this particular measure. For tractability, Prod LDA introduces a closed-form optimization objective that is specific to the choice of prior distributions, which is not a requirement for our method.
Furthermore, although Prod LDA can achieve a numerically better estimation of the parameters, it often fails to recover the correct distributions of words to topics. We refer to Figure 2 in the attached PDF file for qualitative evidence, where we illustrate topic-word distribution patterns of randomly selected topics from all models. While our method may exhibit some inconsistencies in recovering accurate word distributions for each topic, these discrepancies are less pronounced than those of Prod LDA and Neural LDA. This observation indicates a certain level of robustness in our approach.
***Topic Inference:*** This task assesses the quality of the inferred topics on real-world datasets. Note that the computation of the topic coherence score using normalized pointwise mutual information in OCTIS differs from that in the baseline paper. Concurring with what was reported in that paper, Prod LDA is generally superior to Neural LDA. However, compared to our OTP-DAG, the topics from Prod LDA achieve better Diversity, yet at a much higher cost in Coherence on this task. Table 4 in the PDF file provides the empirical evidence for this task.
[e] A. Srivastava and C. Sutton. Autoencoding Variational Inference For Topic Models. ICLR'17.
**2.2. Semi-amortized VAE for Parameter Estimation:** We now study the capability of semi-amortized VAE (SA-VAE) to recover the true parameters, in comparison with our OTP-DAG. Borrowing the first setting in the paper (see Appendix B.1 in [b]), we create a synthetic dataset of discrete sequences according to the following oracle generative process:
$$z \sim \mathcal{N}(0,\mathbf{I}), \quad h_{t} = \text{LSTM} (h_{t-1}, x_{t} ), \quad x_{t+1} \sim \text{softmax}(\text{MLP}([h_t, z]))$$
We here assume the architecture of the oracle is known and the task is simply to learn the parameters. Table 5 reports how well the estimated parameters approximate the ground truth in terms of $L_1$ and $L_2$ distances, along with the negative log-likelihood ($NLL$) of the reconstructed samples from the generative model. We also report the performance of a randomly initialized SA-VAE model to highlight the effect of learning. This empirical evidence again substantiates our competitiveness with popular amortization methods, while exhibiting more desirable properties than VI-based methods, as already discussed in the main paper.
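As a rough illustration of this oracle generative process (our own stand-in: a plain tanh RNN replaces the LSTM, a single linear readout replaces the MLP, and the random weights stand in for the unknown ground-truth parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
V, H, Zd, T = 10, 8, 4, 12   # vocab size, hidden dim, latent dim, seq length

# Hypothetical oracle parameters (stand-ins for the unknown ground truth).
W_h = rng.normal(0, 0.3, (H, H))
W_x = rng.normal(0, 0.3, (H, V))
W_o = rng.normal(0, 0.3, (V, H + Zd))

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def sample_sequence():
    z = rng.normal(size=Zd)                       # z ~ N(0, I)
    h = np.zeros(H)
    x, seq = 0, []                                # start token = 0
    for _ in range(T):
        onehot = np.eye(V)[x]
        h = np.tanh(W_h @ h + W_x @ onehot)       # tanh RNN in place of LSTM
        probs = softmax(W_o @ np.concatenate([h, z]))
        x = rng.choice(V, p=probs)                # x_{t+1} ~ softmax(MLP([h_t, z]))
        seq.append(int(x))
    return seq

data = [sample_sequence() for _ in range(5)]
```

The parameter-estimation task then amounts to fitting the weights of this generator from sampled sequences alone.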
**2.3. Experiments on CelebA**
We further conducted an additional experiment on CelebA to showcase the capacity of OTP-DAG on large datasets. The quantitative results are reported in Table 2. We here again show that our approach achieves a remarkable improvement in codebook utilization, surpassing the baseline VQ-VAE by a substantial margin. Our better rFID scores in particular indicate that our reconstructed images have higher quality at the dataset level. In Figure 3 in the PDF file, we present the generated samples from the CelebA dataset using the Image Transformer [f] as the generative model. These samples demonstrate that the discrete representations from our method can be effectively utilized for image generation with acceptable quality.
[f] Parmar, Niki, et al. Image transformer. ICML'18.
---
Rebuttal Comment 1.1:
Comment: I keep my score unchanged after reading the authors' rebuttal and other reviewers' comments.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: We sincerely appreciate the reviewer's time to engage in this interesting discussion. We will update our paper to include these valuable insights. | Summary: The authors propose a method for learning parameters $\theta$ of a DAG within an optimal transport (OT) framework, minimizing the Wasserstein distance between the data distribution $P_d$ and the model distribution $P_\theta$ in $\theta$. The Kantorovich formulation of this problem is a minimization of an expected cost $c(X,Y)$ over all joint distributions on $X,Y$ such that marginally $X \sim P_d, Y\sim P_\theta$; this is implemented in practice by empirically drawing $X_i$ from the data, and then $PA_{X_i}$ conditionally on $X_i$ to satisfy $PA_{X_i} \mid X_i \sim P_\theta(\textrm{PA}_{X_i})$ through the use of stochastic “backward” mappings. This makes optimization tractable over the space of backward mappings $\phi$ and model parameters $\theta$, so long as the constraint above is relaxed to a regularization term.
Strengths: * DAGs represent an extremely rich family of models, and the work also generalizes to settings with unobserved variables, making this method easy to apply in a variety of settings.
* Unlike variational methods, evidence bounds need not be computed; the proposed method is more “direct” in this sense.
* The final optimization objective is easy to compute and computationally cheap.
* The framing of the problem from the lens of OT is novel to my knowledge, and provides an interesting formulation for optimization.
* The most significant strength of the paper is the evaluation of the method on a rich test suite of interesting problems such as LDA and Poisson time series segmentation. Comparisons show that the proposed method outperforms existing methods such as Batch EM and SVI in a variety of scenarios and metrics.
Weaknesses: * The method requires that the random variables be reparameterizable in the sense of the equation at line 122 (this equation should be numbered); this may limit the family of joint distributions that can be considered. Though the authors say on line 167 that discrete variables can be used, reparameterization is tricky in these cases (as the authors acknowledge in the limitations section).
* The backward maps $\phi_i$ are a confusing quantity (and potentially hard to fit); see the Questions section. This could be due to a lack of understanding on my part. As I understand it, however, it raises questions about the efficacy of the proposed method.
* The formality of the OT framing is appealing, yet this formality is ultimately dropped for a regularized analogue that does not solve exactly the same problem that is posed.
* Some notation is confusing; for example, does $PA_{X_i}$ include the exogenous variable $U_i$? The discussion around lines 120 and 147 suggests so, but notationally the equations at lines 122 and 136 suggest otherwise.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * Is the data distribution $P_d(X_O)$ (line 125) a mixture of point masses, or a continuous (but unknown) distribution? This might be worth mentioning explicitly. In the case of the former, it could motivate use of the Kantorovich formulation instead of the Monge formulation of the OT problem.
* Line 128 can be expressed more precisely; it is meant to define $\mathcal{P}(X \sim P_d, Y \sim P_\theta)$ as the set of joint distributions on $X, Y$ with $P_d$, $P_\theta$ as marginals.
* In line 137, is this meant to say “backward” as it’s a backward map? The # notation should be defined somewhere.
* The constraint set on $\phi_i$ requires the following: that for $X_i \sim P_d$, the random variable $\phi_i(X_i) \sim P_\theta(\textrm{PA}_{X_i}, U_i)$. This ensures that $(X_i, \phi_i(X_i))$ are a joint distribution that marginally follow the requirements of the Kantorovich formulation. Is this all correct so far?
* If so, the constraint on the backward maps seems potentially ill-posed: by construction the distribution $P_\theta(PA_{X_i}, U_i)$ does not depend on the *value* of $X_i$ (it does depend on $X_i$ structurally, as we must know which nodes are its parents). For any given $\theta$, we are able to directly sample from $P_\theta(PA_{X_i}, U_i)$ via ancestral sampling. It seems to me then that training a neural network decoder such that $\phi_i(X_i) \sim P_\theta(PA_{X_i}, U_i)$ could result in a collapse to the “prior” defined by the model; in other words, the decoder could learn to ignore the input $X_i$ and simply mimic ancestral samples from $P_\theta(PA_{X_i}, U_i)$, which presumably is what the stochastic backward maps are trained with. In such a case, the optimization over the space $\mathcal{P}(X \sim P_d, Y \sim P_\theta)$ would only occur over a very restricted subset of joint distributions, namely those where $X, Y$ are independent or nearly so. This seems like a possibility to me that is not addressed in the paper.
* In this extreme case, the final optimization problem becomes very simplistic: the regularization term disappears, and one simply draws ancestral samples from the model $P_\theta$, computes the cost function against a draw from the data, and minimizes the result in $\theta$. For simple models (or low-dimensional $\theta$) this still might perform well. To ensure the situation above is not occurring, maybe the quantity $KL(P_d P_\theta \mid \mid P_d \phi)$ could be evaluated as a metric, or included via an additional regularization term, to ensure that the backward maps $\phi$ indeed result in distributions with dependence. This quantity should be significantly different from zero in that case.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The settings where the work can be applied are discussed by the authors. A limitations section is included that proactively assesses some limitations of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the richness of our experiments. Our responses to the reviewer's questions are as follows:
*Part 1: Questions*
**1. Is the data distribution $P_d(X_{O})$ a mixture of point masses, or a continuous (but unknown) distribution?**
The data distribution is the empirical one over the observed dataset. To develop the theory, we indeed start from the Kantorovich formulation in Eq. (1), where we consider the set of all couplings $\mathcal{P}(X \sim P_d(X_O), Y \sim P_{\theta}(X_O))$ instead of the deterministic couplings of the Monge formulation.
**2. The constraint set on $\phi_i$ requires the following that for $X_i \sim P_d$, the random variable $\phi_i(X_i) \sim P_\theta(PA_{X_i}, U_i)$. This ensures that $(X_i, \phi_i(X_i))$ are a joint distribution that marginally follow the requirements of the Kantorovich formulation. Is this all correct so far?**
$\phi_i(X_i)$ might contain hidden variables, while the couplings $\mathcal{P}(X \sim P_d(X_O), Y \sim P_{\theta}(X_O))$ only contain observed variables, from the data side $P_d(X_O)$ and the model side $P_{\theta}(X_O)$. Therefore, $(X_i, \phi_i(X_i))$ is not a coupling in $\mathcal{P}(X \sim P_d(X_O), Y \sim P_{\theta}(X_O))$.
**3. About the backward mapping:**
Elaborating on the previous questions, we here aim to minimize the Wasserstein distance between $P_d(X_O)$ and $P_{\theta}(X_O)$, where $P_d(X_O)$ is the empirical data distribution and $P_{\theta}(X_O)$ is the model distribution over the set of observed nodes $O$. Therefore, $\mathcal{P}(X \sim P_d(X_O), Y \sim P_{\theta}(X_O))$ is the set of joint distributions of $(P_d, P_{\theta})$ defined over the observed set $O$ that marginally follow the requirements of the Kantorovich formulation.
The constraint set on $\phi_i$ maps $X_i$ (an observed variable with $i \in O$) to the marginal distribution (defined by the model) over *all* of its parent nodes, which can include both hidden and observed nodes. Therefore, the couple $(X_i,\phi_i(X_i))$ does not necessarily follow the marginal constraints. If the parent set includes observed nodes, the constraint set should also involve push-forwarding to the data distribution over such nodes. The stochastic backward maps are then trained to respect the dependencies in the data between $X_i$ and its parents.
It is worth noting that our optimization objective in Eq. (3) includes a reconstruction term where we aim to find each backward map $\phi_i$ such that we can reconstruct the observed input $X_i$ from the forward aka the "model" direction effectively. In this sense, $X_i$ and its parents must obey the dependencies induced by the model.
In the extreme case where all variables in $PA_{X_i}$ are latent, to the best of our understanding, the reviewer refers to a situation that resembles posterior collapse in training variational auto-encoders (VAEs). In the standard setting of VAEs, where the underlying graphical model contains only two nodes, the parent variables are equivalent to the latent variables $Z$. In this case, our objective can reduce to the objective of Wasserstein auto-encoders (WAE) [1]. While VAE forces $Q(Z|X=x)$ to match the prior $P_Z$ for all the different input examples $x \sim P_X$, WAE aims to match $P_Z$ with the continuous mixture $Q_Z := \int Q(Z | X)dP_X$, thus encouraging the latent representations of examples to stay apart and preventing them from collapsing ($Q$ denotes the variational distribution). Our push-forward divergence term serves the same purpose, and in the general case $D$ is chosen to be the Wasserstein distance. This additionally allows us to inherit desirable properties of the Wasserstein distance, specifically in avoiding the mode collapse problem that commonly affects f-divergences such as KL or JS [2].
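A toy numeric illustration of this difference (our own construction, not from the paper): with a small per-sample variance $s$, an informative Gaussian encoder can satisfy the aggregate (WAE-style) constraint almost exactly while paying a large per-sample KL under the VAE-style objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian encoders q(z|x) = N(mu(x), s^2) with small s: each posterior
# is informative about x. Closed-form KL(N(m, s^2) || N(0, 1)):
def kl_to_std_normal(m, s):
    return 0.5 * (m**2 + s**2 - 1.0) - np.log(s)

s = 0.1
mus = rng.normal(0.0, np.sqrt(1 - s**2), size=5000)  # means spread with the inputs

# VAE-style penalty: every q(z|x) must individually match N(0, 1).
vae_term = np.mean(kl_to_std_normal(mus, s))
# WAE-style penalty: only the aggregate q_Z = E_x[q(z|x)] must match N(0, 1).
z = mus + s * rng.normal(size=5000)                  # samples from the aggregate q_Z
agg_gap = z.mean() ** 2 + (z.std() - 1.0) ** 2       # moment-matching proxy
```

Here the aggregate marginal of `z` is (approximately) standard normal, so `agg_gap` is near zero even though each per-sample posterior is far from the prior and `vae_term` is large: the per-sample objective pressures the means toward zero (collapse), the aggregate one does not.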
**4. Does $PA_{X_i}$ include exogenous variable $U_i$?**:
We apologize for the confusion. $PA_{X_i}$ is taken to include the exogenous variable $U_i$ only in the discussion at line 147, to simplify the notation. The discussion above that line considers $PA_{X_i}$ and $U_i$ separately.
**5. Line 128 can be expressed more precisely, and is meant to define $\mathcal{P}(X \sim P_d, Y \sim P_{\theta})$ as the set of joint distributions on $X,Y$ with $P_d, P_{\theta}$ as marginals.**
Thank you for pointing this out. We will update it in the revised paper.
**6. Line 137, is this meant to say “backward” as it’s a backward map?**:
It is meant to be "forward", as in the notion of "push-forward" w.r.t. the operator #. We will add a brief background introduction to this operator in the main paper.
*Part 2: Weaknesses*
**7. Reparametrization constraint:**
In terms of the requirement for reparametrization, related approaches experience similar inflexibility, e.g., advanced VI-based methods such as reparameterized VI and amortized VI. The proposal by Ruiz et al. (2016) was shown to accommodate a wider class of variational distributions, and can be used in our framework straightforwardly to alleviate the above issue.
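For concreteness, a minimal sketch of the Gaussian case of reparameterization (illustrative only; the function and parameter names are ours): the sample is expressed as a deterministic, parameter-differentiable transform of parameter-free noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian reparameterization: X = g(eps; theta) = mu + exp(log_sigma) * eps,
# with eps ~ N(0, 1) drawn independently of the parameters theta = (mu, log_sigma).
def reparam_gaussian(mu, log_sigma, n, rng):
    eps = rng.normal(size=n)                 # noise carries no parameters
    return mu + np.exp(log_sigma) * eps      # differentiable in (mu, log_sigma)

samples = reparam_gaussian(2.0, 0.0, 100_000, rng)
```

Because the randomness is isolated in `eps`, gradients with respect to `mu` and `log_sigma` can flow through the sampled values, which is exactly what makes the Monte Carlo objectives above trainable; distributions without such a transform (e.g., many discrete ones) are the tricky cases.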
**8. The formality of the OT framing is appealing, yet this formality is ultimately dropped for a regularized analogue that does not solve exactly the same problem that is posed.**
Relaxing the push-forward constraints into a regularized objective allows us to solve the learning problem efficiently with amortized optimization and to utilize the universal approximation capability of deep neural networks. This is common practice in deep learning and in OT formulations, and has been shown to work effectively in practice, e.g., in [1].
[1] Tolstikhin et al. Wasserstein autoencoders. ICLR'18.
[2] Arjovsky et al. Wasserstein generative adversarial networks. ICML'17.
---
Rebuttal Comment 1.1:
Title: Rebuttal reply
Comment: Thanks for the detailed response. I’m not sure the authors have addressed my main concern about the backward maps $\phi$, which I’ll try to restate more clearly here. Let’s assume for ease there are no hidden variables and so all variables are observed. Further, let’s assume that there are only two nodes $X_c$, a child, and $X_p$ its sole parent. $X_p$ itself has no parents and is drawn from some prior in the generative model. My main concern is essentially:
It is possible to satisfy the constraint below line 136 trivially if the decoder/backward map $\phi$ learns to imitate the prior, ignoring the numeric value of the input $X_c$ given to it (which $\phi$ might do, depending on how it is trained). In this case, the remainder of the minimization, carried out by optimizing eq. (3), will essentially use the model distribution $P_\theta(X_c, X_p)$ and data distribution $P_d(X_c, X_p)$, which are independent of each other. The family of distributions used is then severely diminished compared to that suggested by eq. (1): rather than using all joint distributions which admit $P_d, P_\theta$ marginally, only the product of these two distributions is used. The problem would collapse to the following:
1) Draw data pairs $(X_c, X_p)$ from the dataset $P_d$.
2) Draw $(X_c, X_p)$ independently from the model $P_\theta$ for current value of $\theta$.
3) Evaluate some cost between these draws; update $\theta$.
The procedure becomes somewhat akin to approximate Bayesian computation (ABC) methods, which simulate data from the model and compare to real data, except that in ABC parameter values are sampled from a distribution, while in this case $\theta$ would be updated by gradient steps. This procedure could still work well on some problems; but nevertheless it is totally different from the optimal transport problem posed. So a primary question to the authors is:
**In construction of the backward maps $\phi$, how do the authors ensure optimization occurs over a *rich* family of joint distributions as used in eq. (1)? In particular, how does the training procedure of $\phi$ prevent the situation described above from occurring?**
---
Reply to Comment 1.1.1:
Title: Our responses (1/2) - Push-forward Divergence
Comment: We thank the reviewer for an insightful question.
Let us first examine the reviewer's example. If the graphical model contains these two nodes and $X_p$ is also **observed**, then for every sample we know precisely which value of $X_c$ corresponds to which value of $X_p$. In this case, the backward map is used to transport $P_d(X_c)$ to $P_d(X_p)$ (i.e., not to the prior on $X_p$).
The reconstruction of $X_c$ can be evaluated by sampling $X_p$ directly from the data distribution $P_d(X_p)$. Moreover, in this case, if we want to parameterize and learn $P_d(X_p)$, we can further define $P_\theta(X_p \mid U_p)$ where $U_p$ is an exogenous variable sampled from its prior. We then have two backward maps from $X_c$ and $X_p$ to their parents respectively.
If $X_p$ is **hidden**, we define a backward map $\phi$ over $X_c$ such that $\phi\\#P_d(X_c) = P(X_p)$ where $P(X_p)$ is the prior over $X_p$. The process described by the reviewer does not entirely agree with our algorithm. Our training procedure in this case is precisely as follows:
(i) Draw $X_c \sim P_d(X_c)$.
(ii) Draw $X_p \sim \phi(X_p | X_c)$.
(iii) Draw $\tilde{X_c} \sim P_{\theta}(X_c | X_p)$.
(iv) Evaluate the costs according to Eq. (3); update $\theta$.
Our cost function specifically minimizes two terms:
* the push-forward divergence $D[P_{\phi}, P(X_p)]$, where $D$ is an arbitrary divergence; we use the Wasserstein distance for $D$ in our experiments.
* the reconstruction loss between $X_c$ and $\widetilde{X}_c$.
We now share some thoughts on why this works as we expect. Since $\phi\\# P_d(X_c) = P_\phi = P(X_p)$, any $X_p \sim P(X_p)$ (i.e., drawn from the prior) can be viewed as a sample from $\phi\\# P_d(X_c) = P_\phi$. This further means that $X_p \sim \phi(X_p | X_c)$ for some $X_c \sim P_d(X_c)$, and minimizing the reconstruction term drives $\tilde{X_c} \sim P_\theta(. | X_p)$ close to this $X_c$. Therefore, $\tilde{X}_c$ is learned to follow the data distribution $P_d(X_c)$.
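Steps (i)-(iv) can be sketched for the two-node chain as follows (a toy stand-in: the linear forms of $\phi$ and $P_\theta$, the specific coefficients, and the moment-matching proxy for $D$ are our illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# One round of the four steps for X_p -> X_c with X_p hidden.
# phi(X_p | X_c) is a stochastic backward map; P_theta(X_c | X_p) the forward model.
theta = 1.5                                           # model: X_c = theta * X_p + noise
X_c = rng.normal(0.0, 1.5, size=512)                  # (i)   X_c ~ P_d(X_c)
X_p = 0.6 * X_c + 0.1 * rng.normal(size=512)          # (ii)  X_p ~ phi(X_p | X_c)
X_c_tilde = theta * X_p + 0.1 * rng.normal(size=512)  # (iii) X_c~ ~ P_theta(. | X_p)

# (iv) evaluate the two cost terms of Eq. (3) and combine them
recon = np.mean((X_c - X_c_tilde) ** 2)               # reconstruction cost
div = X_p.mean() ** 2 + (X_p.std() - 1.0) ** 2        # proxy for D[P_phi, P(X_p)]
loss = recon + 0.1 * div
```

Crucially, step (ii) conditions on the observed $X_c$, so the reconstruction term in step (iv) is small only when the backward map keeps information about its input rather than mimicking the prior.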
When the backward map $\phi$ mimics the prior, to our best understanding, the reviewer in fact refers to the posterior collapse problem that notoriously affects VAEs. We here explain why VAE is prone to posterior collapse and how our OT-based objective mitigates this issue.
**1. The push-forward divergence**
In the case of two nodes $X_p$ and $X_c$ where $X_p$ is latent, our framework OTP-DAG reduces to WAE [1] and the backward map corresponds to the variational encoder. Both WAE and VAE objectives entail the prior matching term. However, the two formulations are different in nature.
Let $Q$ denote the set of variational distributions. The VAE objective is given as
$$\inf_{\phi(X_p | X_c) \in Q} \mathbb{E}_{P(X_c)} [ \text{KL}(\phi(X_p | X_c), P(X_p))] - \mathbb{E}_{\phi(X_p | X_c)} [\log P_{\theta}(X_c | X_p)]$$
By minimizing the above KL divergence term, VAE essentially tries to match the prior $P(X_p)$ for every individual example drawn from $P_d(X_c)$. Under the VAE objective, it is thus easier for $\phi$ to collapse into a distribution independent of $P_d(X_c)$, where the latent codes are close to each other and reconstructed samples are concentrated around only a few values.
Meanwhile, for OTP-DAG/WAE, the regularizer in fact penalizes the discrepancy between $P(X_p)$ and $P_{\phi} := \mathbb{E}_{P(X_c)}[\phi(X_p | X_c)]$, which can be optimized using a GAN-based or MMD-based divergence or the Wasserstein distance. It is not each $\phi(X_p | X_c)$ individually that must match the prior, which encourages the map to maintain the dependency between the latent codes and the input. Therefore, it is more difficult for $\phi$ to mimic the prior and trivially satisfy the push-forward constraint. | Summary: The authors propose a framework for learning the parameters of directed graphical models based on the idea of fitting by selecting the parameter values that minimize the Wasserstein distance (WD) between the data and model distributions. They prove (Thm. 1) that these distances can be characterized as the result of minimizing a cost functional over a family of constrained stochastic 'backwards mappings', which yields a solvable objective when the constraint is relaxed and a regularization term is added to the objective.
The approach is illustrated via a series of experiments, including 3 real-world datasets, in which the proposed OTP-DAG method outperforms various baselines across disparate tasks.
Strengths: I am not particularly familiar with the literature on DAG learning, nor on learning with 'optimal transport' objectives, so the extent of the originality of this paper is difficult for me to gauge. Operating under the assumption that this is the first work to introduce an OT objective in this context, I would feel confident in stating that this is a fairly significant innovation.
The presentation of the paper is, in my opinion, of a high standard (modulo one or two concerns that I will voice in the next section). The appendices provide a considerable depth of exposition of their procedure, and I am satisfied that the most pressing immediate concerns are addressed, either there or in the main body.
Weaknesses: I believe that the experimental section as well as the discussion could be improved. Again, I am unaware of what constitutes the benchmarking standard for DAG learning methods, but it seems to me as though the following are not adequately addressed:
a) The authors mention that their goal is not to achieve state-of-the-art performance, but rather to demonstrate the inherent versatility of their method. Situations in which their method might not be feasible are alluded to in Section 5 (Limitations), but the discussion here is extremely terse. Do these situations pose problems for competing alternatives as well? The chosen examples strike me as being somewhat simple. Do the authors claim that these examples are roughly representative of DAG learning problems generally?
b) For the topic evaluation example (239-250), OTP-DAG is outperformed by the baselines on the 'diversity' metric in all three datasets, and outperformed on 'coherence' on the DBLP data. These results are reported with precisely no discussion or explanation.
c) Section 4.2, first part (251-266) could use some clarification. For example, the authors say "we generate a synthetic dataset D with 200 observations at rates {12, 87, 60, 33} with change points occurring at times (40, 60, 55)." If the model is as they say (with a change of state happening with probability $1-p$, of which the lowest value is when $p = 0.95$), then why are there so few change points? At $p = 0.95$ would we not expect 10? Perhaps I am confused about what the authors are doing. Have they fixed the dataset, and computed the estimates of the rate parameters assuming that $p$ is fixed at the values indicated in the table (i.e. they fit the model 6 times, each time with a different assumed $p$)? Also, averaged over just the values $p = 0.75, 0.95$, OTP-DAG is actually inferior to MAP. Again, no explanation or discussion is forthcoming.
d) Generally, it is not clear to me if the baselines being compared against are the most appropriate. For instance, in the final example, VQ-VAE is used as a baseline, and its poor performance is linked to a phenomenon called 'codebook collapse'. The paper provided as a reference in fact proposes an extension to the vanilla version (of VQ-VAE) that they seem to be comparing against, which seems to indicate that not only is their baseline not state-of-the-art, it is in fact very well known to not be so...
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are discussed, but as per my comment above, it does not seem to me as though this discussion is adequate. The authors seem to have a variety of situations in mind in which their method will either not work or not be competitive, and a more explicit discussion here would be welcome.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the originality of our proposed method. We respond to the reviewer's questions as follows:
**1. Limitations:**
In terms of the requirement for reparametrization, related approaches experience similar inflexibility, e.g., advanced VI-based methods such as reparameterized VI and amortized VI. The proposal by Ruiz et al. (2016) was shown to accommodate a wider class of variational distributions, and can be used in our framework straightforwardly to alleviate the above issue.
**2. Simplicity of applications:**
Our paper focuses on the development of a fundamental and general framework for learning the parameters of DAGs with latent variables. As agreed by Reviewer s48C, our evaluation has been conducted on *a rich test suite of interesting problems*, more concretely a wide range of models with different types of latent variables (continuous and discrete) and different types of data (texts, images, and time series). Some of them might look simple, but all of them are fundamentally important, widely used, yet still challenging. For example, learning HMMs remains fairly challenging, with known optimization/inference algorithms (e.g., the Baum-Welch algorithm) often too computationally costly to be used in practice. For learning discrete representations, despite the simple graph, the true generative function is unknown and is often approximated with a deep neural network whose number of parameters can scale up to millions. Solving those problems with EM or MAP would be both expensive and generally intractable.
We believe that the models used in our experiments are representative and demonstrate our frameworks' applicability well. With the versatility of our framework, it has great potential to be applied to more complex models.
**3. Topic Evaluation Results:**
Diversity and Coherence metrics are described in lines 244-248. These are popular metrics for assessing the performance of topic models in the unsupervised setting. There exists a trade-off between Diversity and Coherence: words that are excessively diverse greatly reduce coherence, while a set of many duplicated words yields higher coherence yet harms diversity. A well-performing topic model would strike a good balance between these metrics. If we consider the two metrics together, our method achieves comparable or better performance than the other learning algorithms. The datasets used for topic modeling in our experiments are quite diverse in terms of average document length: 20 News Group: 48, BBC News: 120, DBLP: 5 (all numbers rounded). The documents in DBLP are significantly shorter than those in the other two datasets, and short texts are known to be challenging for both modeling and evaluation. Our method has comparable performance with the others, but we agree that more study needs to be done to better adapt our method to short texts, which we leave to future work.
**4. Change point probabilities:**
We first clarify the setting of time-series data segmentation. The HMM models a time series of integer counts associated with $K$ states, each of which follows a Poisson distribution of rate $\lambda_k$. The true transition probabilities are unknown and the value $p$ is treated as a hyper-parameter. The dataset is fixed and we fit HMM with 6 choices of $p$, for each of which we report the average results over 5 initialisations. Figure 5a reports the median of the most probable states inferred from 30 models at each step.
The question now is how to choose $p$. Observing the data, one can assume $p$ to be relatively high; 0.75-0.95 seems most reasonable. This explains why the MAP estimation at $p = 0.05$ is terrible. Meanwhile, for our OTP-DAG, the effect of $p$ is controlled by the trade-off coefficient $\eta$ (which regularizes the effect of the push-forward constraint). To make the comparison fair, in the experiments we avoid tuning the parameters for our method and fix $\eta = 0.1$. The effect of $p$ on our performance is fairly minor, which explains why the OTP-DAG estimates vary less across $p$. However, if we increase the $\eta$ weight and strongly force the model to fit $p = 0.05$, the model fits the data poorly and the performance degrades, in terms of both estimation and reconstruction quality. Table 1 in the PDF file provides empirical evidence for this claim, where we report OTP-DAG estimates and reconstruction losses at each $\eta$ value under the original settings. Figure 1 therein illustrates the predictions at each $\eta$, which also validates this claim.
**5. Choice of baselines:**
To clarify the rationale behind the chosen baselines, we must stress that our work focuses on parameter learning, that is, point-estimating the model parameters given its known structure. Learning in the presence of latent variables often resorts to EM or VI, which are fundamentally based on likelihood maximization. Consequently, our experiments naturally entail a comparison with these approaches. In relation to VQ-VAE, the underlying technique is amortized inference, which is in fact a sub-class of VI. Note that discrete representation learning cannot be solved with EM or MAP. We are aware that VQ-VAE is not the state of the art (SOTA); more importantly, we do not aim for SOTA either. The goal here is thus **not** to propose a SOTA model for discrete representation learning, but rather to demonstrate the applicability of OTP-DAG to a problem that traditional methods such as EM, MAP or mean-field VI simply cannot tackle. Yet, motivated by the reviewer's suggestion, we additionally investigated a recent model called SQ-VAE [1], proposed to tackle the issue of codebook collapse. Table 2 in the PDF file reports the performance of SQ-VAE compared with our OTP-DAG on the same task, where we again demonstrate our competitiveness with this SOTA model.
[1] Takida et al. SQ-VAE: Variational Bayes on discrete representation with self-annealed stochastic quantization. ICML'22.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your rebuttal. I am satisfied that my concerns have largely been addressed; after reading all the rebuttals, I leave my score unchanged.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: We sincerely appreciate the reviewer's time to engage in this interesting discussion. We will update our paper to include these valuable insights. | Rebuttal 1:
Rebuttal:
We thank the reviewers for acknowledging the novelty of our method and the richness of our experimentation. We immensely appreciate your support for the acceptance of our paper. We here summarize the key points of our discussion with the reviewers.
Diverging from existing approaches, we propose a new line of thinking for learning directed graphical models through the lens of optimal transport. Our method, OTP-DAG, is a versatile framework capable of addressing various problems with a single learning procedure that, importantly, can be automated. We demonstrate these merits across applications while maintaining competitive performance with prominent related approaches.
In this rebuttal, we reaffirm this message through supplementary clarifications and additional experimental investigations.
1. *To Reviewer uiRc and Reviewer HQH3:* we elaborate on our experimental results with further insights about the flexibility of our model through an ablation study.
2. *To Reviewer uiRc:* we investigated SQ-VAE, a recent model proposed to tackle the codebook collapse issue; we show competitive quality of learned representations and reconstructed images against this SOTA model.
3. *To Reviewer UL8t:* we additionally compare our OTP-DAG with popular amortization baselines on parameter estimation and LDA topic modeling tasks; OTP-DAG competes on par with these models while bypassing the inconvenience of analytical derivation of the ELBO and its derivative.
4. *To Reviewer s48C:* we clarify our OT formulations, which sheds light on the merits of our method compared to competing approaches and on the intuition behind its capability to recover true parameters that respect the dependencies among variables in the graphical structure.
**Attached here is the PDF file including the figures and result tables from our experiments.**
Pdf: /pdf/d66cfcf1246c6d7db1db975be7a733817dd4e24b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Change point detection and inference in multivariate non-parametric models under mixing conditions | Accept (poster) | Summary: This paper studies the problem of localization of multiple change points for offline multivariate time series. A non-parametric kernel-based CUSUM statistic is used together with the SBS algorithm. Moreover, a two-step estimation procedure is proposed, where the initial estimate is further refined into the final estimate. The consistency result and the limiting distribution of the change point estimators are proved.
Strengths: The main strength of the paper is its theoretical findings. Although the SBS and the kernel CUSUM statistics are both well-known, the theoretical results on the consistency and especially the limiting distribution under the setting of multivariate time series seem to be novel.
Weaknesses: The presentation can still be improved, and the numerical results are a bit limited.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In the current theoretical results, the consistency result is proved for the initial estimate, while the limiting distribution is provided for the final estimate. It would be better to also present the consistency result for the final estimate $\tilde{\eta}$, and to elaborate more on whether or not we still have such a limiting distribution for the initial estimate.
And it would be better if the author could illustrate in the numerical results the advantage of the second step (refined estimators), i.e., to show the improvement in estimation accuracy after using the refined estimator.
In the numerical results, the standard deviation of the misestimation rate (Fig. 1) should be provided. Reporting the std would be very helpful here for comparing different methods, since the average misestimation rate of MNSBS (blue) seems to be larger than, or only slightly smaller than, that of the other four baseline methods in various scenarios.
In the final estimate, line 126 on page 4, there is a typo in the subscript of $F_{s_k,\eta_k}$: $\eta_k$ should not appear here since it is the unknown true change point.
It would be better if the author could provide a pictorial illustration of the SBS algorithm.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your constructive comments. We reply to your comments below, with corresponding edits in the revision.
**Weakness**
Thank you for your suggestions. We will endeavor to improve our presentation and enhance the numerical results in the revision. For this response, extra simulations were conducted. We examined Scenario 1 with $p = 3$, $n \in \{150, 300\}$, and $X_t$ i.i.d. $N(0_p, I_p)$. In Scenario 3, we added Random Forest for Change Points (RFCP) as a competitor. Runtime comparisons for all methods with $p = 3$, $n \in \{150, 300\}$ are also included. All results are in Tables 1 and 2 (extra page).
**Initial estimators**
As for the initial estimators, the consistency results are presented in Theorem 2, while the limiting distributions are not obtainable based on the current proof techniques. The current proof relies on a uniform tightness property and the uniqueness of the minimizer in loss functions' population counterpart. Neither of these two points holds for the initial estimator.
As for the numerical results, in Scenario 1 (additional simulation), we compared our method with and without refinement. Results indicate MNSBS excels in change point localization, and refinement enhances performance, see Table 2 (extra page).
**Report of standard deviations**
We have included stds for all experiments we performed in our paper. See the extra page for more details about the stds in the extra simulations.
**$F_{s, \eta_k}$**
Thank you for pointing out this typo, which we will edit in the revision.
**Pictorial illustration of SBS**
Thank you for your suggestion. We will refer to the plot in the original SBS paper in the revision. | Summary: This paper proposes an algorithm to localize multiple change points in nonparametric, short-term dependent time series. The assumptions required on the time series are certain mixing conditions and smoothness of the densities. The core idea of the algorithm is based on CUSUM and seeded binary segmentation. A two-stage estimator is proposed, both stages of which are consistent, with the refined estimator in the second step achieving the minimax rate in certain smoothness regimes. Limiting distributions of the estimators are also given, which allows inference on the change points. Numerical results demonstrate the potential of the proposed method.
Strengths: This is a solid work with nice theoretical results. The problem tackled is a difficult one: localization of multiple change points in a nonparametric, locally dependent time series. The paper not only establishes consistency results, but also gives limiting distributions, which can be beneficial for inference.
Weaknesses: Although this is a solid work with nice theoretical results, I feel the related literature is not sufficiently reviewed, which is crucial for determining the significance and novelty of this work. I only found in the last paragraph of page 1 a short summary of the existing literature, which I have pasted below along with some of my questions.
"Firstly, to the best of our knowledge, temporal dependence, which commonly appears in time series, has not been considered."
-- I actually don't think this is true. As far as I know, there is a lot of existing work considering temporal dependence in time series (although I agree most of the existing change point literature still assumes the i.i.d. setting). For example, I randomly searched for "change point dependent process" and there seems to be a body of related work, e.g., [1][2]. I believe the authors probably know these papers better than I do, and I wonder whether they could add some clarification on the difference between their setup and the one in this work.
"Secondly, there is no localization consistency result for data with the underlying densities being Hölder smooth with arbitrary degree of smoothness." -- it seems this is comparing to the work of Padilla et al. (2021) which only focuses on Lipschitz smooth densities, is my understanding correct?
"Lastly and most importantly, the limiting distributions of change point estimators and the asymptotic inference for change points have not been well studied." -- Do the authors specifically here refer to the papers on nonparametric, multiple change point literature? I believe the limiting distribution of change point estimators is one of the core questions in change point problems, and I am a bit surprised that the existing papers do not study them. Do the authors mean that most papers have been focusing on consistency results instead of the limiting distributions of change point estimators?
[1] Ray, Bonnie K., and Ruey S. Tsay. "Bayesian methods for change‐point detection in long‐range dependent processes." Journal of Time Series Analysis 23.6 (2002): 687-705.
[2] Dehling, Herold, Aeneas Rooch, and Murad S. Taqqu. "Non‐parametric change‐point tests for long‐range dependent data." Scandinavian Journal of Statistics 40.1 (2013): 153-173.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: My questions have been raised in the "weakness" section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: While this work has nice and solid theoretical results, I wonder if practitioners can truly find the method useful, both due to the high computational complexity, and the somewhat complicated limiting distributions (which might prevent the full deployment of this methods' inference capability).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your constructive comments. We reply to your comments below, with corresponding edits in the revision.
**On temporal dependence**
Thank you for bringing these references to our attention. We indeed overlooked them in the literature review stage and will modify our claim accordingly. We nevertheless believe the problems studied are substantially different, especially since our focus is on change point inference.
**On the smoothness**
You are indeed right that we are comparing with Padilla et al.~(2021).
**On the inference literature**
You are indeed right that we mean specifically nonparametric change point inference works.
**More literature review**
We provide some more literature review here. More will be included in the final version in due course.
There exist works such as [1] and [2], which estimate the location and size of structural breaks in nonparametric time series regression models. However, these studies do not address inference, time dependence or varying degrees of smoothness. Our proposed method is distinct in its adaptability to temporal dependence and its capability to handle a broad range of smoothness levels, providing a limiting distribution under these conditions.
There has been prior research on limiting distributions for nonparametric estimators but these mainly focus on single change point detection within independent data. For instance, [3] proposed a nonparametric change point detection and estimation model under the assumption of independent designs and the presence of a single change point. They also present a limiting distribution of the estimators. In contrast, our proposed method permits dependence with an arbitrary degree of smoothness and does not require prior knowledge of the number of change points. Furthermore, unlike [3], we analyse the optimality of our approach.
[4] considers the estimation of change points in a mean detection problem where errors exhibit long-range dependence. Although they derive a limiting distribution of the estimator like our method, their model requires the knowledge of the number of change points, assuming a single change point exists. In contrast, our model does not make such assumptions.
The recent work by [5] investigates the limiting distributions of multiple change point estimators. However, their analysis differs from ours primarily in two areas. First, they assume the number of change points to be known for deriving their theoretical results and propose a data-driven choice for it. Unfortunately, there are no guarantees provided for the estimation of the number of change points when a sequential test is proposed. Their model only ensures the estimation of the number of change points when this quantity is upper-bounded by a finite integer, and the dependence of the estimators on the estimated break fractions is suppressed. The second significant difference is our method's achievement of an optimal rate for estimating the change points, eliminating the log term, whereas [5] does not present such a rate.
[6] also derives a limiting distribution for the CUSUM estimator based on $\alpha$-mixing sequences. This work is different from ours, as their analysis is for univariate non-negative $\alpha$-mixing random variables.
Past research has studied the limiting distributions of change point estimates in the parametric setting. A key difference here is our extension of these previous works from finite to infinite dimensions, i.e. in nonparametric problems.
**Practical limitations**
We will make our code publicly available in due course. Limiting distributions, and the confidence intervals and $p$-values they yield, are in high demand among practitioners. In the more challenging case where the jump size vanishes, the limiting distributions are based on standard Brownian motions, echoing universality.
**References**
[1] Mohr, M., & Selk, L. (2020). Estimating change points in nonparametric time series regression models. Statistical Papers, 61(4), 1437-1463.
[2] Delgado, M. A., & Hidalgo, J. (2000). Nonparametric inference on structural breaks. Journal of Econometrics, 96(1), 113-144.
[3] Dumbgen, L. (1991). The asymptotic behavior of some nonparametric change-point estimators. The Annals of Statistics, 1471-1495.
[4] Horváth, L., & Kokoszka, P. (1997). The effect of long-range dependence on change-point estimators. Journal of Statistical Planning and Inference, 64(1), 57-81.
[5] Fu, Z., Hong, Y., & Wang, X. (2023). On multiple structural breaks in distribution: An empirical characteristic function approach. Econometric Theory, 39(3), 534-581.
[6] Gao, M., Ding, S., Wu, S., & Yang, W. (2022). The asymptotic distribution of CUSUM estimator based on α-mixing sequences. Communications in Statistics-Simulation and Computation, 51(10), 6101-6113. | Summary: The submission studies offline multivariate non-parametric change point detection.
The submission proposes a method for this task by combining 1) CUSUM estimator/statistic with 2) seeded intervals and 3) a refining procedure. The proposed estimator has 1) an improved error bound with weaker assumptions and 2) a limiting distribution for inference.
Both empirical and theoretical results are provided supporting the advantages of the proposed method.
Strengths: The submission extends the existing theoretical results from the setting with Lipschitz smooth densities and temporal independence to the case with Hölder smooth densities with $\alpha$ mixing, and also with a limiting distribution for the non-parametric case.
The writing is clear, with notations well defined.
Weaknesses: The submission is not presented in a very motivating way. For example, SBS appears abruptly without the background or reasoning. The methodology is also presented without explaining the motivations.
The methodological contribution of the proposed method is not clear. Combining CUSUM and SBS is nothing new and is also mentioned in the original SBS paper. As a result, the refining step in the proposed method is important for the methodological novelty of the submission. However, unfortunately, there is not enough discussion motivating this refining step or emphasizing its novelty.
The experiments can also be improved. The figures are hard to read since it is difficult to match the competing methods to the different colors. The advantages of the proposed method are not clearly demonstrated: some bars are very close to each other without showing significant advantages of the proposed method.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: The theoretical results seem to be a good improvement to the existing theory. However, I did not dig deep enough into the proof to evaluate the technical novelty. It is also unclear how much novelty and significance there is in the methodology.
1. Would it be possible for the authors to compare with an ablation method which just combines CUSUM with SBS without the refining step? Or is this ablation just the SBS method compared in the experiments? If so, would it be possible for the authors to explain why this method (which is quite close to the proposed one, just without the refining step) performs so badly?
2. Would it be possible for the authors to provide stds of the experiment results, to make sure that the proposed method really provides a significant performance improvement?
As a result, this submission is really on the borderline to me. I tend to accept the submission for the theoretical contributions.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your constructive comments. We reply to your comments below, with corresponding edits in the revision.
**Intuition on algorithms**
Thank you for your valuable comments. We will include all in the camera ready version where an additional page is allowed. For this revision, we refrain from the edits and just respond to your comments below.
Our method consists of a two-step procedure that incorporates SBS, an approach for efficient change point detection in large-scale data. SBS pre-computes a deterministic set of search intervals and determines the best split point within each interval in a greedy manner. This deterministic construction allows for computational efficiency: the total length of the search intervals is linear, up to a logarithmic factor, irrespective of the number of change points.
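To make the seeded-interval idea concrete, here is a minimal, illustrative Python sketch of a deterministic seeded-interval construction in the spirit of Kovács et al.; the function name, the decay parameter, and the exact layer and shift choices are our own assumptions, not the paper's implementation:

```python
import math

def seeded_intervals(T, decay=math.sqrt(2)):
    """Deterministic seeded intervals for a series of length T.

    Layer k uses intervals of length about T / decay**(k - 1),
    evenly shifted so that neighbouring intervals overlap.
    The total length of all intervals is O(T log T), irrespective
    of the number of change points.
    """
    intervals = set()
    n_layers = max(1, math.ceil(math.log(T, decay)))
    for k in range(1, n_layers + 1):
        length = math.ceil(T / decay ** (k - 1))
        if length < 2:
            break
        n_k = 2 * math.ceil(decay ** (k - 1)) - 1  # intervals in layer k
        shift = (T - length) / (n_k - 1) if n_k > 1 else 0.0
        for i in range(n_k):
            start = math.floor(i * shift)
            intervals.add((start, min(T, start + length)))
    return sorted(intervals)

ivs = seeded_intervals(16)  # includes the full interval (0, 16)
```

For `T = 16` this yields layers such as `(0, 16)` at the top, then `(0, 12)`, `(2, 14)`, `(4, 16)`, and progressively shorter, overlapping intervals below, so every candidate change point is well covered by some short interval.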
Algorithm 1 is designed to yield consistent change point estimates with high probability. To further refine the precision of our final estimators, we apply a local refinement step. Notably, the interval $(\hat{\eta}_{k-1}, \hat{\eta}_{k+1})$ is anticipated to contain a single true change point, $\eta_k$. To ensure that we can exclude any other change points within $(\hat{\eta}_{k-1}, \hat{\eta}_{k+1})$, we trim this interval to $(s_k, e_k)$ as outlined in Equation (4).
**Novelty**
We do not intend to claim much novelty on the algorithmic side. The contributions of our paper lie in the general framework and the theoretical results. To reiterate, nonparametric change point inference with temporal dependence is studied for the first time in the literature. On top of this, our refining step leads to optimal localisation errors.
**Numerical results**
We will enhance the clarity of figures in the revision. We would like to highlight that the strengths of our approach not only lie in the estimation precision, but also in its capability to produce statistically valid confidence intervals for these change points, a feature notably absent in many existing methodologies.
**Methods without refining**
The refining step is necessary in two aspects.
Firstly, the first step leads us to an adaptive bandwidth, tailored to each individual change point $\eta_k$. The optimal bandwidth for each change point $\eta_k$ is of order $O(\kappa_k^{1/r})$, where $\kappa_k$ represents the difference in density functions measured in the $L_2$ norm. In a multiscale setting where the size of the changes can significantly vary across different change points, a non-adaptive choice of bandwidth parameter could lead to sub-optimal estimation errors for the change points.
Secondly, the proof techniques we derive for the limiting distribution relies on the loss function to contain one and only one change point. This is unachievable without a two-step method.
**Question 1**
Using only SBS and CUSUM in the first stage does not adaptively set a bandwidth for each change point $\eta_k$, risking sub-optimal localization errors; hence, a two-step procedure is advisable for non-parametric time series change point problems.
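For intuition on the first-stage statistic, below is a minimal sketch of the classical mean CUSUM that the kernel-based statistic generalises; this is illustrative only — the paper's version replaces sample means with kernel density estimates evaluated over the seeded intervals:

```python
import math

def mean_cusum(x, s, e, t):
    """Classical mean CUSUM statistic of x over (s, e] at split t (s < t < e).

    Large values indicate a mean change near t; the square-root factor
    balances the sizes of the two sub-samples.
    """
    n1, n2 = t - s, e - t
    m1 = sum(x[s:t]) / n1
    m2 = sum(x[t:e]) / n2
    return math.sqrt(n1 * n2 / (e - s)) * abs(m1 - m2)

# A toy series with a single mean shift at index 10.
x = [0.0] * 10 + [5.0] * 10
best_t = max(range(1, 20), key=lambda t: mean_cusum(x, 0, 20, t))  # -> 10
```

The maximiser of the statistic recovers the true break location in this toy example; in the nonparametric setting the same argmax is taken over a CUSUM of estimated densities.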
**Question 2**
We have included stds for all experiments we performed in our paper. See the extra page for more details about the stds in the extra simulations. | Summary: The authors consider offline change point detection for multi-variate data where there could be multiple change points (specifically changes in marginal distributions from one time step to the next). The marginal densities of the underlying generative model are assumed to be smooth (specifically Hölder continuous, which includes Lipschitz continuous functions as a special case). Building on a procedure using binary-segmentation search with kernel density estimation and consistency (Padilla et al (2021)), the authors analogously show consistency of change-point estimators for time-dependent data. Furthermore, the authors derive limiting distributions of the change point estimators, both for the situation where the minimal jump size vanishes and for the situation where it remains constant.
Strengths: - The authors study a challenging problem, non-parametric estimation of multiple changepoints. Many prior works considered independent, univariate data, as well as stronger assumptions on the generative distributions.
- The authors prove consistency and analyze limiting distributions of the estimated change points. (also this is with less knowledge (such as of optimal bandwidths) than Padilla et al (2021))
- The authors include experiments and show its performance against baselines (designed for independent data).
Weaknesses: My main concerns regard writing/presentation and discussion of related works. I list a number of specific points below (many of which, by themselves, are fairly minor; list is not exhaustive).
#### Writing/presentation
- There is no discussion motivating the time-series model class considered ($\alpha$-mixing sequences of random vectors with unknown marginal distributions), nor even a mention of which common parametric classes belong to it.
- The property of $\alpha$-mixing is never explained formally, let alone discussed in terms of what it implies at a high level for the time series. There is a large literature on parametric time-series models – it would be valuable to mention example model classes that satisfy the assumptions here.
- What types of potential applications could this benefit (the time-series of which could plausibly be modeled with this class of models)? Lines 20-24 mention application areas with time-series data, which is fine, but there should be more of a connection made between (some of) those applications and the specific problem considered.
- Algorithm 1 and Section 3
- Provide discussion in the main text about the steps of Alg. 1 and intuition behind the design.
- SBS is mentioned but is not described (a brief high-level description should suffice)
- There is notation used in the algorithm that is not mentioned in Section 3, explain what those are.
- Include discussion about kernel properties in Section 3. Alg 1 uses kernel estimators implicitly in the line with $\tilde{F}$, but up to this point no $\mathcal{K}$ is defined or taken as input, or described in Section 3.
- (minor) Def 1 text $\mathcal{K}$ has not been described (or listed in the def. set up)
- (4) and (5) give some verbal description of what this is doing (i.e. what is the intuition behind how you are improving the estimates, possibly with a line or two about the way(s) in which the MNSBS might give poor(er) estimates that leads to this design). Presumably it has to do with that the choice of interval locations and sizing in MNSBS was using a simple pattern (oblivious to where the estimated change points would be). Now you have ‘naturally defined’ intervals between the estimated change points (so size and location of the intervals are adapted to the initial estimates).
- line 124-125 talk about the choice of weights 9/10 and 1/10.
- line 125 ‘an estimator of $\kappa_k$’ I would suggest to add verbal descriptors for notation to remind the reader, such as ‘an estimator of the jump size $\kappa_k$ at the $k$th change point’ and similarly for other notation. Also, several lines later (to explain differences with prior work) there is mention that it corresponds to the ‘optimal bandwidth’ and ‘ we use $\hat{\kappa}_k$ as bandwidth for the kernel density estimator’ though in line 127 the bandwidth used is $h_1$ which depends on, but is not set equal to, $\hat{\kappa}_k$. A clarification of that part, and a couple sentences shortly after Definition 1 about the bandwidth $h$ to lead the reader to anticipate why $h_1$ in line 127 is good would help.
- while bandwidths are mentioned earlier (Def. 1 for instance) there is no discussion on bandwidth optimality. Padilla et al (2021) have nice discussions regarding the choice of the bandwidth; I would suggest summarizing the key points to give intuition to the reader.
- line 132 ‘smaller interval’ – provide brief intuition about why the intervals here would be smaller than the seeded search intervals
#### Experiments
- Include plots of the realizations of the time-series (to give intuition for how hard/easy the task is for those generative models)
- Report the run-times for the proposed method and baselines. With that also briefly describe the platform.
- It would also be valuable to run experiments with independent data, to see whether the proposed method suffers compared to baselines designed for that setting specifically.
#### Related works
- “Random Forests for Change Point Detection” by Londschien et al. https://arxiv.org/abs/2205.04997 is a multivariate nonparametric change point detection method for independent data (with an R package available) – in their experiments, their method was generally similar to or better than ECP (the best baseline in your experiments) and had subquadratic time complexity
- There are works non-parametrically estimating the location and size of structural breaks in non-parametric time-series regression models. “Estimating change points in nonparametric time series regression models” by Mohr and Selk (2020) https://doi.org/10.1007/s00362-020-01162-8, “ Nonparametric inference on structural breaks “ by Delgado and Hidalgo (2000) https://doi.org/10.1016/S0304-4076(99)00052-4 among others. Given some (at least superficial) relation in the problem considered and some methodological components collectively in that body of work and the present work, which extends methods and results for non-parametric changepoint detection for independent data (esp. Padilla et al (2021)) to allowing for dependencies, I think some discussion on the how the problems, methods, and results of changepoint detection for non-parametric time-series regression relate would be helpful. Xu et al (2022) is mentioned in lines 181-182, though what the paper actually studied and how it related was not described.
- A (brief) discussion of similarities and differences between the problem considered here and that of (non-parametric) online change point detection would be good.
- One of the main contributions is characterizing the limiting distributions for multiple change point estimates in this non-parametric setting. To my knowledge, that is novel for the specific problem considered. However, similarities/differences in analyses and results (class of distributions that the limiting distribution belongs to) for past works that have investigated limiting distributions of change point estimates in different by related settings are not well-discussed. Unless I overlooked it, the only other work mentioned in the context of identifying limiting distribution of change point estimators is Xu et al (2022) in lines 181-182 and that was brief and implicit.
- There are prior works that studied limit distributions for non-parametric estimators for single change point detection with independent data (as well as works in the parametric setting), although from what I can tell under simpler changes (such as changes in the mean). For example, “The Asymptotic Behavior of Some Nonparametric Change-Point Estimators” by Dumbgen (1991) and Horváth and Kokoszka (1997) "The effect of long-range dependence on change-point estimators." Also “Optimal change-point estimation in time series” by Chan et al. 2021 for limiting distribution of single time-series change point (Bayes) estimators. There is a recent work “On multiple structural breaks in distribution: An empirical characteristic function approach” by Fu et al (2022) that analyzes the limiting distributions for estimates of multiple change point. Also potentially relevant is “The asymptotic distribution of CUSUM estimator based on $\alpha$-mixing sequences” by Gao et al (2022), though the analysis is for univariate non-negative $\alpha$-mixing non-negative random variables.
- Limiting distributions of change point estimates have been studied in the parametric setting. While those results do not diminish the significance of the results in this paper, there should be some mention and preferably a discussion on similarities/differences between the derived distributions.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: #### Questions
- Assumption 3 and Theorem 1. The $\gamma_T$ does not explicitly show up in the Theorem 1 statement – which constants depend on $\gamma_T$? Given the assumption statement is only “arbitrarily slow diverging sequence” that seems to me impressive though almost too mild for anything other than statements about the limit $T \to \infty$
- For the theorems, is $\Delta$ *required* to be growing as a function of $T$? From the assumptions it looks like as long as the jump size is increasing fast enough, we could fix the location of $K$ changepoints and the assumptions would hold. $\kappa$ growing quickly should make detecting change points easier, but would there still need to be some minimum value of $\Delta$?
- Contribution 1 (lines 84-87) – The first major contribution claimed is the development of a novel algorithm and statement “To the best of our knowledge, we are the first to innovatively adapt SBS to a multivariate non-parametric change point model”. From a (perhaps superficial) understanding, the Algorithm 1 proposed here seems to be an incremental adaptation of the procedure proposed in Padilla et al (2021), with the random binary segmentation used in the latter replaced with the recently proposed deterministic binary segmentation method SBS (Kovács et al. (2020)). Perhaps the authors can add more discussion on why that change is not straightforward.
#### Spelling, Grammar, etc.
- Line 98 ‘A … estimators’
- Theorem 2 statement – The formal statement could be shortened and, I think, made easier to read if the notation (eg 188-189, 192- (7)) were introduced and discussed (add a simple description of $P_k$’s formula when it is introduced) before the formal theorem statement.
- For theorem 3, maybe $\max_{1\leq k \leq K} \dots$
- line 291 ‘additional additional’
- lines 291-293 – it is great that you have further experiments; mention them earlier, both in the introduction and early in Section 5.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: It is fine (theoretical contribution)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your constructive comments. We reply to your comments below, with corresponding edits in the revision.
**Motivation on $\alpha$-mixing sequences.**
Thank you for pointing this out. We have included the following in the revision.
The $\alpha$-mixing condition with exponential decay as specified in Assumption 1.e is a commonly held assumption in time series analysis. A broad spectrum of multivariate time series satisfies this condition, including linear/nonlinear VAR models [e.g. 1], a comprehensive class of GARCH models [e.g. 2], and various Markov processes [e.g. 3].
**Algorithm 1 and Section 3**
Thank you for all your valuable comments. We will include them all in the camera-ready version, where an additional page is allowed. For this revision, we refrain from these edits and respond to your comments below.
*Design and intuition of Algorithm 1*
Algorithm 1 has SBS as its skeleton and a nonparametric version of the CUSUM statistic as its organs. This design is tailored to the nonparametric nature of the model and to the presence of potentially multiple change points.
*High-level description of SBS*
SBS is a multiscale version of a moving-window scanning method. To handle potentially multiple change points with unknown spacing, instead of using a fixed window width, SBS uses a collection of window widths, each of which is applied in a moving-window scan.
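The multiscale idea above can be sketched in a few lines. This is our illustrative reconstruction of the seeded-interval construction from Kovács et al. (2020) with decay parameter $a = 1/2$ (the function name and defaults are ours), not the exact routine inside Algorithm 1:

```python
import math

def seeded_intervals(T, a=0.5, min_len=2):
    """Illustrative seeded-interval construction (after Kovacs et al., 2020).

    Layer k uses intervals of length roughly T * a^(k-1); within a layer the
    intervals are evenly shifted so that they overlap, giving a deterministic
    multiscale collection of windows, each of which is then scanned.
    """
    intervals = [(0, T)]  # layer 1: the full interval
    k = 2
    while True:
        length = math.ceil(T * a ** (k - 1))
        if length < min_len:
            break
        n_k = 2 * math.ceil((1 / a) ** (k - 1)) - 1  # intervals in layer k
        shift = (T - length) / (n_k - 1)             # even spacing of starts
        for i in range(n_k):
            start = int(round(i * shift))
            intervals.append((start, start + length))
        k += 1
    return intervals

ivs = seeded_intervals(32)
```

For $T = 32$ this produces the full interval plus overlapping windows of lengths 16, 8, 4 and 2, so every candidate change point is covered at several scales.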
*Notation not used in Section 3*
Thanks for pointing it out. These pieces of notation are local to the algorithm and are irrelevant to the rest of the paper.
*Kernel function*
For Algorithm 1 to execute, no further theoretical assumptions are needed on the kernel function. The kernel function is already mentioned in Definition 1, at the beginning of Section 3. In the revision, we have specified that $\mathcal{K}$ is a kernel function.
*Equations (4) and (5), the choice of the weights and ``smaller interval''*
Given the consistency of the initial change point estimators procured from Algorithm 1, the interval $(\widehat \eta_{k-1},\widehat \eta_{k+1})$ is anticipated to contain only one undetected change point. By conservatively trimming this interval to $(s_k, e_k)$, we can safely exclude any change points previously detected within $(\widehat \eta_{k-1},\widehat \eta_{k+1})$. Consequently, the trimmed interval $(s_k, e_k)$ contains only the true change point $\eta_k$ with high probability. For the same reason, the weight 1/10 in (4) is merely a convenient choice; any constant weight between 0 and 1/2 would suffice.
*The choice of the bandwidth and its optimality*
Inspired by Padilla et al. (2021), who proposed using $O(\kappa_k)$ as an optimal bandwidth in the context of Lipschitz densities, we adopt $h_1 = O(\hat{\kappa}_k^{1/r})$ as the bandwidth for our kernel density estimator. This choice reflects the broader scope of our work, which studies a more general degree of smoothness. Notably, if the underlying density functions strictly adhere to the Lipschitz criterion and $r = 1$, our bandwidth selection aligns with that recommended by Padilla et al. (2021).
**Experiments and Random forests**
See general rebuttal.
**Related works**
*Literature in nonparametric CP, online CP, and CP inference*
Thank you very much for bringing up these valuable suggestions. Due to the limited time and space, we will defer the detailed comparisons along with corresponding edits to the revision.
In this response, we briefly comment on the comparisons.
In the nonparametric change point literature, different kernel-based methods are adopted for change point localisation and testing. Compared to the existing work, we follow suit in using kernel-based CUSUM statistics but incorporate temporal dependence, which is rarely seen in the literature. Most importantly, we are unaware of existing work on nonparametric change point inference, which is the main selling point of our paper.
In terms of online and offline CP comparisons, the core methodology is largely shared, but with different goals and performance measurements. It is also unclear how to conduct inference in the online CP context.
Most CP inference work focuses on fixed-dimensional parameters and does not track a growing number of model parameters. Xu et al. (2022) is indeed the most closely related in style, but it tackles high-dimensional linear regression, which is fundamentally distinct from our nonparametric density estimation.
**$\gamma_T$**
In Assumption 3, we require $\kappa^2 \Delta \log^{-1}(T) T^{-p/(2r+p)} > \gamma_T$, with $\gamma_T$ diverging arbitrarily slow. Although $\gamma_T$ does not appear in the theorem statements explicitly, its impact can be seen. For instance, Theorem 1 shows that, with large probability
$$
|\widehat{\eta}_k - \eta_k| \lesssim \kappa_k^{-2}T^{p/(2r+p)} \log(T),
$$
which implies that
$$
|\widehat{\eta}_k - \eta_k|/\Delta \lesssim \gamma_T^{-1} \to 0,
$$
as $T$ diverges. This guarantees that each change point estimator is close to one and only one true change point, which is the key to deriving the limiting distributions.
**Conditions on $\Delta$**
For $\Delta$, we only require that the signal-to-noise ratio, a function of $\kappa$ and $\Delta$, satisfies Assumptions 3 and 4. In the nonparametric literature, the $L_2$-norms of densities are assumed to be bounded, which leads to a bounded $\kappa$.
**References**
[1] Eckhard Liebscher. Towards a unified approach for proving geometric ergodicity and mixing properties of nonlinear autoregressive processes. Journal of Time Series Analysis, 26(5):669–689, 2005.
[2] Farid Boussama, Florian Fuchs, and Robert Stelzer. Stationarity and geometric ergodicity of bekk multivariate garch models. Stochastic Processes and their Applications, 121(10):2331–2360, 2011.
[3] Kung-Sik Chan and Howell Tong. Chaos: a statistical perspective. Springer Science & Business Media, 2001. | Rebuttal 1:
Rebuttal: We are grateful for your constructive comments. Taking advantage of the extra-page submission, we include the following extra simulations to help address each of the individual reviews.
**Independent data**
We examined Scenario 1 with $p = 3$, $n \in \{150, 300\}$, and $X_t$ i.i.d.~$N(0_p, I_p)$. The results indicate that MNSBS excels in change point localization and that the refinement enhances performance; see Table 2.
**New competitor (Random Forest for change point)**
We compare the random forest change point (RFCP) method in [1] with our proposed method and the rest of the competitors, using Scenario 3 of our simulation studies. The results are summarized in Table 1, which shows that our MNSBS is generally outperformed.
**Runtime comparison**
We compared our method's runtime with others on a machine equipped with an Apple M2 chip (8-core CPU) for $p=3$ and $n\in\{150, 300\}$ in the independent setting. Our method is comparable at $n=150$ but slower at $n=300$ due to CUSUM's computational demands. See Table 2 for more details.
**Plots**
We include a plot of realizations of the time series under the independent setting, in Figure 1. See the supplementary materials for a plot illustrating our considered real data.
**References**
[1] Malte Londschien, Peter Bühlmann, and Solt Kovács. Random forests for change point detection. arXiv preprint arXiv:2205.04997, 2022.
Pdf: /pdf/d53171a4c46bb5fe136abcb43a3281cc284f89b3.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies non-parametric offline change-point detection under the assumption that the probability density functions are Hölder continuous on some compact support and the time series is $\alpha$-mixing. The authors propose a two-stage algorithm: first, roughly divide the time series into different segments; then, refine the change-point locations. The asymptotic distributions of the estimated change-points are derived, and as a third step, the authors also propose an algorithm to compute a confidence interval for each change-point. Numerical experiments show the effectiveness of the proposed methods.
Strengths: 1. The non-parametric model used in this paper is the Hölder continuous class, which is general.
2. The asymptotic distribution of the estimated change-point location is derived for the vanishing and non-vanishing regime respectively.
3. The time series can have temporal dependencies under mixing conditions.
4. The writing and the structure of the paper is clear.
Weaknesses: 1. The authors claim their model allows mixing conditions, but the theory's dependence on the $\alpha$-mixing coefficient $c$ in Assumption 1.e is hardly discussed in the main text.
Minor issue:
1. The equation under line 126: should remove $\arg\min$.
2. The equation under line 126: why is there $\eta_k$? Isn't $\eta_k$ unknown?
3. Line 131: Why is $\widetilde \eta_k$ referred to as the `kernel density estimator'? From previous context it is the change-point estimator.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In Assumption 2.a, is it for any $h>0$?
2. In Assumption 2.b, the end of line 155, what is $v$ on the exponent? Do you mean $\nu$?
3. In Theorem 2.a the non-vanishing regime, do you need $f_{\eta_k},f_{\eta_{k+1}}$ to converge in distribution as well or they can be arbitrary as long as the jump size $\kappa_k$ converges to a constant? How does the $\alpha$-mixing works here? What is $F_{t,h_2}$ for $t<0$? Why the limiting distribution of $\widetilde \eta_k$ depends on $f_0,f_1,f_2$ when $\eta_k$ is far from 0?
4. In equation 7, should it be $\kappa_k^{p/r+2}$ on the numerator or just $\kappa_k^{p/r-2}$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As stated in the submitted paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your constructive comments. We reply to your comments below, with corresponding edits in the revision.
**$\alpha$-mixing coefficients**
We appreciate the reviewer's comments regarding the need for a more detailed discussion on $\alpha$-mixing coefficient $c$. We will include the following discussion in our revised manuscript.
Throughout this paper, we focus on multivariate time series that exhibit $\alpha$-mixing behavior with exponential decay coefficients. This condition is denoted as Assumption 1.e. While the constant $2c$ is present in the exponent of the exponential function, it plays a non-essential role in our theoretical framework. We include it solely for the sake of convenience during verification.
The $\alpha$-mixing condition with exponential decay as specified in Assumption 1.e is a commonly held assumption in time series analysis. A broad spectrum of multivariate time series satisfies this condition, including linear/nonlinear VAR models [e.g. 1], a comprehensive class of GARCH models [e.g. 2], and various Markov processes [e.g. 3]. To further elaborate, consider the
$p$-dimensional stationary VAR(1) model:
$$X_t = A X_{t-1} + \epsilon_t,$$
where $A$ is the $p \times p$ transition matrix whose spectral norm satisfies $\|A\| \in (0,1)$ and the innovations $\epsilon_t$ are i.i.d. Gaussian vectors. Denote $\Sigma = \mathrm{cov}(X_1)$, and let $\lambda_{\max}$ and $\lambda_{\min}$ be the largest and smallest eigenvalues of $\Sigma$. Then, by Theorem 3.1 in [4], we have that for any $k \geq 0$, the $\alpha$-mixing coefficient of the time series $X_t$ satisfies
\begin{align*}
\alpha_k \leq \sqrt{\frac{\lambda_{\max}}{\lambda_{\min}}}\|A\|^k \leq e^{-C\log(1/\|A\|)k},
\end{align*}
where $C > 0$ is some constant depending only on $\sqrt{\lambda_{\max}/\lambda_{\min}}$. In this example, the constant $C\log(1/\|A\|)$ corresponds to the constant $2c$ in Assumption 1.e.
Essentially, Assumption 1.e unlocks several technical tools under temporal dependence, including a Bernstein inequality [5], a moment inequality [see Proposition 2.5 in 6], maximal inequalities (see Section G.1) and a central limit theorem (see Section G.2). For instance, we utilize the moment inequality to bound the autocovariances of a dependent process at all lags by the $\alpha$-mixing coefficients, thereby demonstrating the existence of the long-run variance, which is the sum of all the autocovariances.
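As a hedged numerical illustration (ours, not from the paper), the VAR(1) mixing bound above can be evaluated in closed form for a diagonal transition matrix, where the stationary covariance and its eigenvalues are explicit:

```python
import math

# Diagonal VAR(1): X_t = A X_{t-1} + eps_t, eps_t ~ N(0, I).
# For A = diag(a_1, ..., a_p), the stationary covariance is
# Sigma = diag(1 / (1 - a_i^2)), so all quantities below are explicit.
a_diag = [0.6, 0.3]                                  # spectral norm ||A|| = 0.6
sigma_diag = [1.0 / (1.0 - a * a) for a in a_diag]   # eigenvalues of Sigma
lam_max, lam_min = max(sigma_diag), min(sigma_diag)
op_norm = max(abs(a) for a in a_diag)

def mixing_bound(k):
    """Bound sqrt(lam_max/lam_min) * ||A||^k on alpha_k (Theorem 3.1 in [4])."""
    return math.sqrt(lam_max / lam_min) * op_norm ** k

# The bound decays exponentially: alpha_k <= exp(-c' k), c' = log(1/||A||).
bounds = [mixing_bound(k) for k in range(1, 6)]
```

Consecutive bounds shrink by the factor $\|A\|$, which is exactly the exponential decay required by Assumption 1.e with $2c = \log(1/\|A\|)$ up to the multiplicative constant.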
**Minor issues**
Thank you for pointing these out. The quantity $\eta_k$ should be $\hat{\eta}_k$, and we have changed it in our revision. Regarding Line 131, we have edited it to "*we use $\widehat{\kappa}_k$ as the bandwidth for the kernel density estimator in deriving $\widetilde \eta_k$*".
**Questions 1 & 2**
Thank you for pointing these out. We have corrected them correspondingly.
**Question 3**
In Theorem 2.a, you are right that we do need the density function $f_{\eta_k}$ to remain constant or to converge asymptotically; we will include this condition in our revision.
The $\alpha$-mixing condition is imposed on the data sequence, while $f_{\eta_k}$ and $f_{\eta_{k+1}}$ are population marginal density functions.
For $F_{t, h_2}$, you are right that it is a typo. For each change point $\eta_k$, it should be $ F_{\eta_k+t,h_2}$ and $ t<0$ corresponds to the time series before the change point $\eta_k$. With this correction, the distribution of $ \widetilde \eta_k$ does not depend on $f_0, f_1, f_2$ when $\eta_k$ is far from 0. To be more precise, as a byproduct of the limiting distribution in Theorem 2, $|\widetilde \eta_k -\eta_k| =O_p(\kappa_k^{-r/p-2})=o(\Delta) $. With high probability, the distribution of $ \widetilde \eta_k$, therefore, only depends on data in the interval $(\eta_{k-1},\eta_{k+1})$.
**Question 4**
We appreciate the reviewer's observation. In equation (7), the correct expression is indeed $\kappa_k^{p/r-2}$.
**References**
[1] Eckhard Liebscher. Towards a unified approach for proving geometric ergodicity and mixing properties of nonlinear autoregressive processes. Journal of Time Series Analysis, 26(5):669–689, 2005.
[2] Farid Boussama, Florian Fuchs, and Robert Stelzer. Stationarity and geometric ergodicity of bekk multivariate garch models. Stochastic Processes and their Applications, 121(10):2331–2360, 2011.
[3] Kung-Sik Chan and Howell Tong. Chaos: a statistical perspective. Springer Science & Business Media, 2001.
[4] Fang Han and Wei Biao Wu. Probability inequalities for high-dimensional time series under a triangular array framework. In Springer Handbook of Engineering Statistics, pages 849–863. Springer, 2023.
[5] Florence Merlevède, Magda Peligrad, Emmanuel Rio, et al. Bernstein inequality and moderate deviations under strong mixing conditions. High dimensional probability V: the Luminy volume, 5:273–292, 2009.
[6] Jianqing Fan and Qiwei Yao. Nonlinear time series: nonparametric and parametric methods. Springer Science & Business Media, 2008. | null | null | null | null | null | null |
Exact Optimality of Communication-Privacy-Utility Tradeoffs in Distributed Mean Estimation | Accept (poster) | Summary: The authors consider the problem of high-dimensional mean estimation with communication and privacy constraints. For federated learning or distributed SGD, model updates must be communicated to a central server, but as models become much larger this can be a bottleneck within the computation. As a result, previous work has considered the setting in which there is a restriction on the number of bits that can be communicated, which the authors also follow. Furthermore, the authors consider the differentially private setting, adding another constraint. This setting was also considered in previous work, some of which achieved optimality up to constant factors. The authors improve upon that work and achieve exact optimality, and this theoretical improvement is backed by their empirical experiments.
Strengths: Improves upon previous work for a reasonably well-studied problem and achieve optimal tradeoff between communication-privacy-utility and further show how the previous work are special cases in their method.
Weaknesses: A minor gripe: some of the notation could have been explained more clearly (for example, $P_U$-almost), but I understand that page limits add difficulty.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: None
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for carefully reading our manuscript and for providing positive and constructive feedback. We are glad that the reviewer acknowledged the improvements we provided over the prior order-optimal solutions and the importance of unifying the previous schemes under our exact-optimal solution. We also agree with the reviewer that some notations might need further clarification for some readers. We will provide more notational comments, e.g., $p$-almost means that it happens with probability 1 under measure $p$. We thank the reviewer for pointing out this.
---
Rebuttal Comment 1.1:
Comment: Thanks for clarifying some of the notation and adding more detail in future versions of your work! The theorem statements in another rebuttal also added further clarity for me, and I appreciate the authors adding these details to future versions.
---
Reply to Comment 1.1.1:
Title: Thank you for your response to our rebuttal.
Comment: We thank the reviewer for their time reading our rebuttal. We will include the suggested details in the final manuscript. | Summary: The paper studies the distributed mean estimation (DME) problem with communication & privacy constraints. The goal is to construct an unbiased estimate of a unit vector $v$ using $b$-bits that minimizes the mean squared error and provides $\epsilon$-LDP. It is well known that any scheme achieving the above communication will have to quantize the unit sphere using $M=2^b$ points. Further, to achieve $\epsilon$-LDP, a particular quantization point is chosen and returned according to an appropriate distribution.
The main contribution of this work is characterizing DME schemes that achieve optimal error in the presence of a communication budget and privacy constraints. The authors show that a random set of points generated using a rotationally symmetric distribution will achieve the optimal error. The intuition is that such a point set will be maximally separated and will most efficiently cover the sphere. $\epsilon$-LDP is achieved using a randomized response mechanism.
Strengths: - Presents a DME scheme under exact communication constraints instead of asymptotic error bounds that are optimal.
- The idea of treating the k-nearest codewords equally instead of just the closest seems to provide improvements over the prior works. This clean trick can be of independent interest.
Weaknesses: - The work provides sufficient conditions for a DME scheme to be optimal, i.e., an optimal scheme has a particular canonical setup.
These conditions are not necessary, and not all schemes satisfying the canonical setup achieve optimal error.
- Either some proofs have (probably fixable) errors, or I have misunderstood. So at least rephrasing or an elaborate explanation is required. Details are provided in the next section (Questions)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) As mentioned in the paper, the prior works of SQKR, MMRC, and FT21 can be seen as specific instantiations of random codebooks. Could it be possible that their schemes are exact-optimal as well and satisfy the conditions mentioned in this work? While a discussion is provided, can you comment on what condition is not met by each of these that confirms that they are not exact-optimal?
2) Line 217-220 leading to Eq 29 needs further justification. Intuitively, this holds if the k points are equidistant, but I am not sure why the k-nearest neighbors will give an unbiased estimate. In the current manuscript, this is just a statement and not a formal proof of the fact that "k-closest encoding consistently yields an unbiased scheme for any rotationally symmetric codebook." Further, it would be good to mention that, for a rotationally symmetric codebook, $\sum_m U_m = 0$ in expectation.
3) Proof of Lemma 3.7 - What is top-k set? I am assuming it is the set of k-nearest neighbors to random a in s^M
Should the summation have only $s_m$ and not $a^Ts_m $?
Typo in eq 94 - the second summation should be over s_i.
Eq 96 shows that you get an unbiased estimate of $e_1$ and not $a$. Also, other coordinates cannot be ignored.
$r_k$ value is roughly $M$, so the error of the RRSC scheme will be $\sim M^2$. So for $M = d$, there is an error of $d^2$ which is higher than the order optimal ones $(O(d/\log d))$ for the same amount of allowed communication. A brief clarification on this would be great.
4) For the shared randomness setup, will the server/nodes have to regenerate the codebook for each invocation of the algorithm?
5) Some suggestions to improve the readability in my opinion would be:
- Fix errors in proofs or provide better justifications
- In Equation 26 the probability of choosing the k+1-th closest codeword is zero if k is an integer. This seems to be inconsistent with Algorithm 1, and the analysis that follows in Section 3.4
- Clarify what the `=' in Def 3.3 means. It can be confusing to think that the two codebooks are equal (permutations of each other) rather than their distributions being identical.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for carefully reading our manuscript and providing positive and constructive feedback. The reviewer appreciated going beyond the asymptotic error bounds and found our k-nearest algorithm worthy of independent interest. The reviewer also asked clarifying questions, which we try to address below.
### **Exact optimality conditions**
The reviewer is right that we show a sufficient condition as follows
"There exists an exact-optimal canonical scheme with rotationally symmetric codebook." We will revisit our abstract and discussion to reflect this more accurately.
### **SQKR,FT21,MMRC**
While it is clear from our experimental results that SQKR and MMRC fail to achieve exact optimality as RRSC significantly outperforms them, we now provide some intuitive reasoning why this is the case.
**SQKR** produces a subset of indices $\mathcal{I}$ from $\lbrace1,2, \dots, d \rbrace$ with a size of $|\mathcal{I}|=k$
utilizing the shared randomness. Then, the codebook defined as $\lbrace\sum_{i\in \mathcal{I}} a_i u_i: a_i=c/\sqrt{d} \mbox{ or } -c/\sqrt{d} \rbrace$ represents a tight frame for Kashin's representation. It is essential to recognize that the distribution generating the codebook is not rotationally symmetric; it is only permutation symmetric.
Moreover, the codewords are not maximally separated. Thus, we strongly believe that the reason why SQKR does not reach exact optimality is that it does not generate maximally separated codewords.
**FT21** aims to simulate the continuous PrivUnit scheme via pseudo-random generator (PRG) and rejection sampling. As FT21 operates without employing shared randomness, it requires a minimum communication cost of $\Omega\left(\log d\right)$ bits to attain an unbiased estimator. In addition, as we noted in the paper, the codebook associated with FT21 depends on the PRG it uses. Although the specific PRGs in FT21 are not explicitly outlined, we believe our simplex code would outperform codebooks that correspond to standard PRGs. For high communication budget (large b), codewords based on PRGs may offer computational efficiency, but our focus in this paper centers on scenarios with smaller or moderate values of b, and hence our approach can yield a better (non-asymptotical) MSE at a manageable computational cost.
**MMRC** introduces an importance sampling approach where samples are drawn based on a uniform distribution across the sphere,
supplemented with a truncation technique. This is equivalent to producing i.i.d. codewords, which does not necessarily ensure maximal separation. We strongly believe that maximal separation of codewords, and hence the most effective coverage of the sphere, is important for exact optimality.
### **Eq (29)**
We apologize for the confusion around lines 217-220. We try to clarify the step leading to Eq (29) here and will also revise it in the final manuscript. We note that in Eq (28), $\mathbb{E}[\sum_{m\in T_k}U_m]$ can only have $e_1$ as the remaining component since we pick the k-closest codewords $U_m$ (i.e., $m\in T_k$) and all directions other than $e_1$ are canceled out due to the rotational symmetry of the codewords. We hope this explanation clarifies the transition from Eq (28) to Eq (29). As suggested by the reviewer, we will clearly state in the final manuscript that $\mathbb{E}[\sum_m U_m]=0$ for all rotationally symmetric codebooks.
### **Proof of Lemma 3.7**
We apologize for the confusion due to some typos. We will update the proof as follows. From (28), we want to find $r_k$ that satisfies $r_k \times \mathbb{E}[\frac{e^\epsilon-1}{ke^\epsilon+M-k}\sum_{m\in T_k(e_1,U^M)}U_m]=e_1$, where $U_m=As_m$. The condition $m\in T_k(e_1,U^M)$ implies that $\langle e_1,As_m\rangle$ is one of the k largest. Let $a_1^\intercal$ be the first row of the matrix $A$. Since $s_m$ is a vertex of the simplex, $\langle e_1,As_m\rangle=a_1^\intercal s_m=\sum_{i=1}^M a_{1,i} s_{m,i}=a_{1,m} \sqrt{\frac{M}{M-1}}-\sum_{i=1}^M\sqrt{\frac{1}{M(M-1)}}a_{1,i}$. Since $\sum_{i=1}^M \sqrt{\frac{1}{M(M-1)}}a_{1,i}$ is not a function of $m$, the condition $m \in T_k(e_1,U^M)$ is equivalent to "$a_{1,m}$ is top-k among $\lbrace a_{1,i}\rbrace_{i=1}^M$". Also, due to the rotational symmetry, $\mathbb{E}[\sum_{i=1}^M \sqrt{\frac{1}{M(M-1)}}a_{1,i}]=0$. Thus, $r_k$ should satisfy $r_k \times \mathbb{E}[\frac{e^\epsilon-1}{ke^\epsilon+M-k} \sum_{m\in top-k}a_{1,m}]=1$, where $m\in$ top-k means that "$a_{1,m}$ is top-k among $\lbrace a_{1,i}\rbrace_{i=1}^M$".
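The algebra above can be verified numerically. The sketch below is our own illustration, assuming the standard unit-norm embedding of the $M$ simplex vertices in $\mathbb{R}^M$, $s_m = \sqrt{M/(M-1)}\,(e_m - \tfrac{1}{M}\mathbf{1})$; it checks the closed form for $\langle a, s_m\rangle$ and that ranking the inner products coincides with ranking the coordinates $a_m$:

```python
import math
import random

random.seed(0)
M = 6

def simplex_vertex(m, M):
    # s_m = sqrt(M/(M-1)) * (e_m - (1/M) * 1): a unit-norm simplex vertex
    c = math.sqrt(M / (M - 1))
    return [c * ((1.0 if i == m else 0.0) - 1.0 / M) for i in range(M)]

vertices = [simplex_vertex(m, M) for m in range(M)]
a = [random.gauss(0, 1) for _ in range(M)]   # stands in for the first row of A

inner = [sum(ai * si for ai, si in zip(a, s)) for s in vertices]

# Closed form from the derivation:
#   <a, s_m> = a_m * sqrt(M/(M-1)) - sqrt(1/(M(M-1))) * sum_i a_i
sa = sum(a)
closed = [am * math.sqrt(M / (M - 1)) - math.sqrt(1.0 / (M * (M - 1))) * sa
          for am in a]

# The second term does not depend on m, so sorting <a, s_m> is the same as
# sorting the coordinates a_m, i.e. "m in top-k" <=> "a_m is among the top-k".
rank_inner = sorted(range(M), key=lambda m: inner[m])
rank_coord = sorted(range(M), key=lambda m: a[m])
```

Both orderings agree because the map $a_m \mapsto \langle a, s_m\rangle$ is increasing and affine in $a_m$.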
### **Order of $r_k$**
Since the role of k is similar to that of the threshold in PrivUnit, the optimal choice of k grows as O(M). Please also see Figure 1 in the attached pdf, which demonstrates how the optimal k (found by Algorithm 1 in the attached pdf) changes with M. On the other hand, $C_k$ is an expected sum of the top-k coordinates of a unit vector, which is roughly $O(k \sqrt{1/d})$ for $k=O(M)$. Thus, contrary to the reviewer's suspicion, $r_k$ **does not** scale with M; instead, it scales with $\sqrt{d}$. We will emphasize this better in the final version.
### **Non-Integer k**
We note that in Eq (26), if k is an integer, the probability of selecting k+1-th closest codeword is simply $1/(ke^\varepsilon+M-k)$, which corresponds to the probability of the case "otherwise". The reason why we allowed k to be non-integer as well in Section 3.3 is to simplify the proof. In practice, however, both in Algorithm1 and in the experiments, we always pick an integer k. We also note that RRSC achieves impressive performance that matches PrivUnit even with an integer k.
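For integer $k$, the selection probabilities described above can be sketched as follows (our illustrative reading of Eq (26); the function and variable names are ours): each of the $k$ closest codewords receives weight $e^\varepsilon$ and each of the remaining $M-k$ receives weight $1$, so the probabilities sum to one and the worst-case output-probability ratio is $e^\varepsilon$:

```python
import math

def rr_probs(M, k, eps):
    """Per-codeword selection probabilities for k-closest randomized response.

    The k closest codewords each get weight e^eps, the other M - k get
    weight 1; normalizing yields the selection rule of Eq (26).
    """
    denom = k * math.exp(eps) + (M - k)
    p_close = math.exp(eps) / denom   # each of the k closest codewords
    p_far = 1.0 / denom               # each remaining codeword ("otherwise")
    return p_close, p_far

M, k, eps = 16, 4, 1.0
p_close, p_far = rr_probs(M, k, eps)
total = k * p_close + (M - k) * p_far   # probabilities sum to 1
ratio = p_close / p_far                 # worst-case output-probability ratio
```

The ratio equals $e^\varepsilon$ for every pair of outputs, which is the randomized-response mechanism underlying the $\varepsilon$-LDP guarantee.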
### **Clarification on the Notation**
The reviewer is indeed right that $\stackrel{(d)}{=}$ implies that the distributions are identical, which we will clearly state in the final manuscript.
---
We again thank the reviewer for carefully reading our manuscript and for providing positive feedback. We hope we have addressed all the questions. We are happy to engage in further discussion with the reviewer to resolve any remaining concerns.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response.
MMRC:
"MMRC introduces an importance sampling approach where samples are drawn based on a uniform distribution across the sphere, supplemented with a truncation technique. This is equivalent to producing i.i.d. codewords, which does not necessarily ensure maximal separation. We strongly believe that maximal separation of codewords, and hence the most effective coverage of the sphere, is important for exact optimality."
iid codewords sampled from unit sphere = normalized high-dimensional Gaussians are with high probability good covers for the unit sphere.
Eq 29:
For Eq29, my concern remains. It is still not clear to me without proof why the sum of the top $k$ closest vectors to $e_1$ in a rotationally symmetric codebook will cancel out in all other directions. Even if trivial, please include it for people like me.
For instance, in 2 dimensions, if I choose the codebook to be $\lbrace (0,1), (0,-1), (1,0), (-1,0)\rbrace$, and for $k=2$, the top $k$ closest to $e_1 = (1,0)$ will consist of $\lbrace (1,0), (0,1)\rbrace$ (or $(0,-1)$, depending on how you break ties), which do not cancel out. For $k=3$, you will get this property.
So I am guessing you mean to say that there exists a $k$, where the top-k will work. But even this will need proof and a bound on the value of $k$. Moreover, as stated in Algorithm 1, $k$ cannot be a part of the input.
Order of $r_k$: I apologize for being technically challenged, but I could not find any attached pdf. However, I understand that the scaling of $r_k$ is roughly $(k + M)/C_k = O(M/C_k)$ ignoring the $\epsilon$ terms. While for large $k$, i.e., $k=O(M)$, $r_k$ scales as $\sqrt{d}$, for small $k =O(1)$, the scaling of $r_k$ seems to be roughly $O(M \sqrt{d})$.
Non-integer $k$: Equation 26 will translate to choosing the top $\lfloor k\rfloor+1$, and the probabilities will not all sum to 1. There is a tiny typo here that needs to be fixed.
---
Reply to Comment 1.1.1:
Title: Thank you for your response to our rebuttal.
Comment: We thank the reviewer for reading our rebuttal in detail and following up with further questions. We respond to each question below. The **pdf** with a new algorithm (to find the optimal $k$) and a new figure (that shows how $k$ changes with increasing $M$) is **attached to our general response titled "Author Rebuttal by Authors" at the top, before the reviewers' comments.**
-----
##### **MMRC: "iid codewords sampled from unit sphere = normalized high-dimensional Gaussians are with high probability good covers for the unit sphere."**
This holds true when sampling a large number of Gaussian vectors. However, when sampling a relatively small number of these vectors (as is our interest with a small $M$), it is essential to select the vectors judiciously to effectively cover the sphere.
---
##### **Eq 29:**
The codebook is derived from a rotationally symmetric distribution through shared randomness,
and any expectations are taken with respect to this distribution. Consider the codebook $\lbrace e_1, e_2, -e_1, -e_2\rbrace$ where $e_1 = (1,0)$ and $e_2 = (0, 1)$. This codebook is not rotationally symmetric. However, if $A$ is a uniformly sampled $2\times 2$ random rotation matrix, the codebook $\lbrace A e_1, A e_2, -A e_1, -A e_2\rbrace$ is rotationally symmetric. Given that $Ae_1$ has a uniform distribution on the sphere, the conditional expectation $\mathbb{E}[A e_1 |\mbox{$Ae_1$ is the closest to $e_1$}]$ can only contain the $e_1$ component. This is because the $e_2$ components cancel out due to symmetry.
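To make the symmetry argument concrete, here is a small numerical sketch (our illustration, not part of the rebuttal): we rotate the 2-D codebook $\{e_1, e_2, -e_1, -e_2\}$ by a uniformly random angle, pick the rotated codeword closest to $e_1$, and average over many rotations; the $e_2$ component vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)           # uniform random 2-D rotation angles
# Rotated codebook angles: {theta, theta + pi/2, theta + pi, theta + 3pi/2}
angles = theta[:, None] + np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
cands = np.stack([np.cos(angles), np.sin(angles)], axis=-1)   # shape (n, 4, 2)
# Per rotation, pick the codeword with the largest inner product with e1 = (1, 0)
best = cands[np.arange(n), np.argmax(cands[..., 0], axis=1)]
mean = best.mean(axis=0)
# By symmetry the e2 component cancels; only the e1 component survives
# (mean is approximately (2*sqrt(2)/pi, 0) ~ (0.90, 0.00))
```

The same cancellation holds in higher dimensions for any rotationally symmetric codebook distribution.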
The reviewer is right that $k$ is not part of the input. As we show in Algorithm 1 in the attached pdf in our general response, the optimal $k$ is determined by $\varepsilon$, $d$, and $M$ by minimizing $r_k$.
----
#### **Order of $r_k$:**
For every $M$, we determine the best value of $k$ that yields the smallest $r_k$. We contend that the optimal selection of $k$ increases linearly with $M$, as evidenced in the pdf provided in our general response. This leads to $r_k = O(\sqrt{d})$.
---
#### **Non-integer $k$:**
We apologize for the confusing notation. For the top $\lfloor k\rfloor$ candidates, we assign probability $e^\epsilon / (ke^\epsilon+M-k)$.
For the $(\lfloor k\rfloor +1)$-th candidate, we assign probability $\frac{(k-\lfloor k\rfloor)(e^\epsilon-1)+1}{ke^\epsilon+M-k}$.
For the remaining $M-\lfloor k \rfloor -1$ candidates, we assign probability $1 / (ke^\epsilon+M-k)$.
The sum of these probabilities is 1:
\begin{align*}
\lfloor k \rfloor \times \frac{e^\epsilon}{ke^\epsilon+M-k}
+ \frac{(k-\lfloor k\rfloor)(e^\epsilon-1)+1}{ke^\epsilon+M-k}
+ (M-\lfloor k\rfloor -1) \times\frac{1}{ke^\epsilon+M-k} = 1
\end{align*}
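As a quick numerical check of this normalization (an illustrative sketch; the function name `rrsc_probs` is ours), the three probability levels indeed sum to 1 for any non-integer $k$:

```python
import math

def rrsc_probs(k: float, M: int, eps: float):
    """Selection probabilities for the M candidates under possibly non-integer k:
    the top floor(k) candidates, the (floor(k)+1)-th candidate, and the rest."""
    fk = math.floor(k)
    denom = k * math.exp(eps) + M - k
    top = [math.exp(eps) / denom] * fk
    mid = [((k - fk) * (math.exp(eps) - 1) + 1) / denom]
    rest = [1.0 / denom] * (M - fk - 1)
    return top + mid + rest

p = rrsc_probs(k=2.7, M=16, eps=1.0)
assert abs(sum(p) - 1.0) < 1e-12   # probabilities sum to 1
```

The same expression also covers integer $k$, in which case the middle term reduces to $1/(ke^\epsilon+M-k)$, matching the "otherwise" case of Eq (26).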
---
We thank the reviewer for their time in reading the manuscript and the rebuttal in detail. We hope our response addresses the reviewer's points. If there is any other concern or confusion remaining, we are more than happy to discuss it. If our response is satisfactory to the reviewer, we kindly ask them to consider revisiting their score. | Summary: This paper studies the mean estimation problem under communication and local differential privacy constraints in the non-asymptotic (exact-optimal) setting, and proposes a randomization mechanism that satisfies the identified necessary condition for exact optimality.
Strengths: 1. The authors proved a necessary condition that the codebook-generating distribution needs to be rotationally symmetric.
2. The authors further proposed the first exact-optimal algorithm, Random Rotating Simplex Coding (RRSC), that matches the necessary condition.
3. Empirical results in the paper showed that the proposed RRSC outperforms the state-of-the-art (order-optimal) benchmarks for the task.
4. The authors proposed interesting conjectures and clear future directions based on the design and properties of the algorithm.
Weaknesses: 1. line 177, "an random" to "a random"
2. The design of k-closest encoding and Theorem 3.6 (along with its proof in the Appendix) lacks the intuition on k, which seems to be valuable to explore both theoretically and empirically (as stated in the conjecture).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: For the choice of $k$: in the experiments, $k$ is chosen to minimize $C_k$ over different $k$'s. How can such a choice of $k$ guarantee the requirement in Theorem 3.6 (and its proof)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Lack of intuition on k as described in weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for carefully reading our manuscript and providing positive feedback on our contributions. Specifically, we are happy to see that the reviewer recognized our proposed algorithm, Random Rotating Simplex Coding (RRSC), as the first exact-optimal scheme for the distributed mean estimation problem under privacy and communication constraints; appreciated the significant empirical improvements over the existing order-optimal schemes; and found the future directions and conjectures valuable to the community. The reviewer also raised some important questions that we would like to clarify below.
### **The optimal choice of $k^\star$**
We understand that it was not very clear how to select the right value of $k$ that satisfies the conditions in Theorem 3.6. We elaborate on this here and will add this discussion to the final version of the manuscript as well. The optimal $k^\star$ is the minimizer of Equation (30), i.e., of $r_k = \frac{ke^\epsilon + M-k}{e^\epsilon-1} \sqrt{\frac{M-1}{M}}\frac{1}{C_k},$
since the distortion is $r_k^2-1$ (as given in Eq (90) in the proof of Theorem 3.6 in Appendix E) -- i.e., the distortion is minimized by minimizing $r_k$.
In practice, solving the optimization problem above requires an approximation of $C_k$, defined as the expected sum of the top-$k$ coordinates of a uniformly random vector $a\in\mathbb{S}^{d-1}$. For every value of $k$, we can effectively estimate $C_k$ by drawing a sufficient number of uniform random vectors $\{a_i\}$ and averaging the sum of the top-$k$ components of each vector $a_i$. Given this efficient approximation of $C_k$ for each $k$, the algorithm we provide in the pdf rebuttal file (Algorithm 1) finds the $k^\star$ that minimizes $r_k$. We also provide a plot of the optimal $k^\star$ (found by Algorithm 1 in the attached pdf) as a function of $M$ for a combination of parameters in Figure 1 of the attached pdf.
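As an illustration of the Monte Carlo procedure described above, here is a hypothetical sketch (function names `estimate_C` and `best_k` are ours; this is not the Algorithm 1 from the attached pdf):

```python
import numpy as np

def estimate_C(d: int, kmax: int, n_samples: int = 20_000, seed: int = 0):
    """Monte Carlo estimate of C_k = E[sum of top-k coordinates of a
    uniform random vector on the unit sphere S^{d-1}], for k = 1..kmax."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n_samples, d))
    a /= np.linalg.norm(a, axis=1, keepdims=True)        # uniform on the sphere
    top = -np.sort(-a, axis=1)                            # coordinates, descending
    return np.cumsum(top[:, :kmax], axis=1).mean(axis=0)  # C_1 .. C_kmax

def best_k(d: int, M: int, eps: float):
    """Pick the k in {1, ..., M} that minimizes r_k from Eq. (30)."""
    C = estimate_C(d, M)
    k = np.arange(1, M + 1)
    r = (k * np.exp(eps) + M - k) / (np.exp(eps) - 1) * np.sqrt((M - 1) / M) / C
    return int(k[np.argmin(r)]), float(r.min())
```

The search over $k$ is a one-dimensional scan over at most $M$ values, which is why the selection is cheap once $C_k$ has been estimated.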
In the final version of the manuscript, we will add this algorithm to show the precise procedure to find the optimal $k^\star$ value. We thank the reviewer for pointing out the unclarity around the choice of $k^\star$.
### **Intuition on $k$**
The intuition on the choice of $k$ can be developed from PrivUnit [1] -- the exact-optimal scheme for the same problem without the communication constraint. For a specific private vector $v\in\mathbb{S}^{d-1}$, PrivUnit assigns a higher probability to vector $w$
that is near $v$, i.e., to the vector $w$ that satisfies $\langle v, w\rangle > \tau$ for some chosen threshold $\tau$. In a similar fashion, our suggested scheme, Random Rotating Simplex Coding (RRSC), assigns a higher probability to the $k$-nearest codewords. The parameter $k$ in our scheme essentially corresponds to a threshold $\tau$ in PrivUnit.
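A minimal sketch of the $k$-closest encoding idea for integer $k$ (our own illustrative code; `k_closest_encode` and its signature are assumptions, not the paper's implementation):

```python
import numpy as np

def k_closest_encode(v, codebook, k: int, eps: float, rng):
    """Sample a codeword index: the k codewords closest to v (largest inner
    product) each get probability e^eps / (k*e^eps + M - k); the remaining
    M - k codewords each get 1 / (k*e^eps + M - k)."""
    M = len(codebook)
    scores = codebook @ v
    topk = np.argsort(-scores)[:k]
    denom = k * np.exp(eps) + M - k
    p = np.full(M, 1.0 / denom)
    p[topk] = np.exp(eps) / denom        # ratio of any two probabilities <= e^eps
    return rng.choice(M, p=p)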
In our previous response above, we have explicitly stated that we find the optimal value $k^\star$ by choosing $k$ that minimizes $r_k$,
as described in equation (30). To identify such a $k^\star$, we must be capable of calculating $C_k$ for each $k$, a task that can be carried out simply and efficiently through Monte Carlo methods as described above and in Algorithm 1 in the attached rebuttal in pdf format. We also would like to note that estimating the optimal $k^\star$ in RRSC is significantly more straightforward and efficient
than estimating the optimal parameters in PrivUnit.
[1] A. Bhowmick, J. Duchi, J. Freudiger, G. Kapoor, and R. Rogers. Protection against reconstruction and its applications in private federated learning. arXiv preprint arXiv:1812.00984, 2018.
### **A typo on line 177**
We sincerely thank the reviewer for the careful reading of our manuscript. We will fix the typo ("an random") on line 177 to "a random".
---
We thank the reviewer for the careful reading of our manuscript and for pointing us to the unclear parts of the proposed scheme. We hope we have addressed the reviewer's comments, and we will add the necessary discussions in the final version of the manuscript, which we believe will improve the manuscript. We are happy to discuss any remaining questions or concerns the reviewer may have.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and addressing my concerns, I will keep my score.
---
Reply to Comment 1.1.1:
Title: Thank you for your response to our rebuttal.
Comment: We thank the reviewer for their time reading our rebuttal. | Summary: This paper focuses on the problem of distributed mean estimation under local differential privacy (DP) and communication constraints, with shared randomness between users and the server. Previous works either achieve exact optimal mean squared error (MSE) using $O(d)$ bits or achieve order-optimal MSE with a large constant using $O(\varepsilon)$ bits. In this paper, the authors aim to tackle the same problem under shared randomness, aiming for both exact optimal MSE and communication efficiency. The proposed solution approaches the problem as a lossy compression problem. The authors demonstrate that the optimal scheme can be represented by a codebook through random coding. Additionally, they establish that the exact-optimal codebook-generating distribution must be rotationally symmetric. Empirically, the authors demonstrate that the proposed methods outperform existing approaches.
Strengths: 1. This paper is very technical. The paper is clearly written and lays out its contributions succinctly.
2. The proposed framework achieves exact optimality in terms of both MSE and communication efficiency.
Weaknesses: It would be better if the authors could discuss why shared randomness is necessary to achieve exact optimality.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See weaknesses
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations are discussed. This paper does not have a negative societal impact
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time in reading our manuscript carefully and for providing positive feedback. We are glad that the reviewer found our contributions valuable and liked the organization and writing of the manuscript. We discuss their point on shared randomness below.
### **Shared randomness**
In our Randomly Rotating Simplex Coding (RRSC) scheme, the shared randomness is used to generate random rotation matrices that will be necessary for both encoding and decoding. We note that shared randomness is actually necessary for *any unbiased mean estimation scheme* that compresses the $d$-dimensional vectors to less than $b=\log d-2$ bits (or equivalently any unbiased mean estimation scheme that uses less than $M=2^b=O(d)$ codewords) as shown in [1, Corollary~5.1]. We will add this clarification, together with the relevant reference, in the final version of the manuscript.
[1] Chen, Kairouz, and Ozgur, "Breaking the Communication-Privacy-Accuracy Trilemma", IEEE Transactions on Information Theory, 2023.
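One common way to realize such shared randomness in practice (an illustrative sketch, not necessarily the paper's implementation) is for the client and server to derive the same random rotation from a shared seed, so that only the codeword index needs to be communicated:

```python
import numpy as np

def shared_rotation(seed: int, d: int):
    """Both client and server derive the same random orthogonal matrix
    from a shared seed (e.g., agreed per round), realizing the shared
    randomness without any extra communication."""
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((d, d))
    Q, R = np.linalg.qr(G)
    # Sign correction makes the distribution Haar (uniform) over the
    # orthogonal group; the result may be a reflection (det = -1), which
    # is harmless for rotating a sign-symmetric codebook.
    return Q * np.sign(np.diag(R))
```

Because the seed determines the matrix, separate calls on the client and server with the same seed yield identical rotations.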
---
We again thank the reviewer for the careful reading of our manuscript and for asking an important clarifying question. We will include the above discussion on shared randomness in the final manuscript. We hope that we addressed the reviewer's point sufficiently. We are happy to discuss any remaining questions the reviewer may have.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and addressing my questions.
---
Reply to Comment 1.1.1:
Title: Thank you for your response to our rebuttal.
Comment: We thank the reviewer for their time evaluating our manuscript and reading our rebuttal. | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for carefully reading our paper, and providing positive and constructive feedback to further improve it. All the reviewers seem to appreciate the technical contributions of the paper in studying the exact optimality of the distributed mean estimation problem under privacy and communication constraints. Specifically, all the reviewers found theoretical and empirical improvements over the existing order-optimal schemes worth praising. Moreover, Reviewer 69ED found one of our main results, namely, the exact optimality of the rotationally symmetric codebook, worthy of independent interest; Reviewer FeU2 liked the clear and fluent presentation of the results and the organization of the manuscript; Reviewer eu7R highlighted that the proposed algorithm RRSC outperforms the state-of-the-art schemes and stressed the importance of the conjectures and future directions in the manuscript to the community; Reviewer jPit found the k-closest encoding algorithm, which we prove to be the exact-optimal approach for a simplex codebook, impressive, clean, and worthy of independent interest in general; and lastly Reviewer JsmV underlined the fact that previous order-optimal solutions can be viewed as special cases of the general setup we propose in this work -- namely the canonical protocol with rotationally symmetric codebook.
The reviewers also shared valuable suggestions that we believed improved the overall quality of our manuscript. We explain how we address them and how we will revise the manuscript in our separate responses to each reviewer below. We also submit another rebuttal file in pdf format to share an additional algorithm and a figure as part of our response to some comments.
Pdf: /pdf/81b2c426e665914a3ea8ae0cb79218ed3a12c60c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work considers the problem of distributed mean estimation and aims to obtain "exact optimal" estimators under communication, (local) differential privacy and utility constraints. Exact optimality here means that instead of focusing on the order of complexity, the focus is also on the constants as well. Prior work either focused on "order optimality" or were exactly optimal for privacy and utility but only order optimal [Feldman-Talwar'11], [Shah-Chen-Balle-Kairouz-Theis'22]. This work achieves exact optimality through a k-closest encoding and a randomly generated codebook shared between the server and users.
Strengths: They obtain exact optimality for privacy-utility-communication tradeoffs of mean estimation under l_2 norm error. This improves upon prior work that could only obtain order optimality or exact optimality but only for some parameters.
Their work unifies the framework that existed in prior work [Feldman-Talwar'11], [Shah-Chen-Balle-Kairouz-Theis'22].
Their experiments show significant improvement in communication budget required compared to the previous work, especially in the setting where the number of bits is small.
The theorem regarding optimality of rotationally symmetric shared random codebooks could be of independent interest.
Weaknesses: There is no "main theorem" that sums up the results in the main results section for mean estimation, and I think it's necessary to include such a theorem.
I think exact optimality compared to order optimality is a more niche setting. That being said, that's not necessarily a weakness, and it could be impactful in practice.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I think part of the paragraph **unified framework** in the discussion section could be mentioned in the related work section as well.
As mentioned in the previous section a main theorem that clearly states the trade offs this work obtain for mean estimation should be included. The score currently given is assuming that such a theorem will be included.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for carefully reading our manuscript and for providing constructive feedback to further improve it. Specifically, we are glad that the reviewer found the theoretical and empirical improvements over the existing order-optimal schemes impressive and highlighted the significance of the rotational symmetry condition for the optimal codebook. Upon the reviewer's suggestions, we will add the following theorems to the final manuscript, which we think will improve the clarity of the results. We thank the reviewer for these valuable suggestions.
### **A main theorem that sums up the main results**
In the theorem below, we put together the main results in Section 3 that summarize the main theoretical contributions of our work. We will provide this theorem in the final manuscript.
**Theorem 1:** *There exists a canonical protocol with a rotationally symmetric random codebook
that achieves the exact-optimal worst-case error among all unbiased $\ell_2$-mean estimation schemes
under $\varepsilon$-local differential privacy (LDP) and a $b$-bit communication constraint simultaneously. Moreover, there exists a $k$ such that the $k$-closest encoding algorithm in Section 3.1 of the manuscript is the exact-optimal unbiased scheme for a rotationally symmetric simplex codebook under $\varepsilon$-LDP and a $b$-bit constraint. The optimal $k$ that achieves this exact optimality is found by minimizing $r_k$ in Eq. (30), since the error is $r_k^2-1$ (as shown in Eq. (90) in Appendix E) --- see Algorithm 1 in the attached pdf rebuttal for the precise procedure to find the optimal $k$.*
### **A theorem that clearly states the communication-privacy-utility trade-offs**
In this work, we show compelling evidence that our proposed scheme, Randomly Rotating Simplex Coding (RRSC), achieves exact optimality. Upon the reviewer's suggestion, we also provide the theorem and its proof below that shows the order optimality (which is actually a weaker statement than the exact optimality result we show in the manuscript) of RRSC with a clear statement of the error.
**Theorem 2:** *In the region of interest where $1 \leq \varepsilon \leq b \leq d$, RRSC($k^\star$), i.e., RRSC with the optimal choice of k, achieves an error of $\frac{1}{n} \left ( (k^\star)^2 (\frac{e^{\varepsilon} + 1 / q -1}{e^{\varepsilon} -1} )^2 \frac{M-1}{M} \frac{1}{C_{k^\star}^2} - 1 \right)$, where $q=k^\star/M<1$. This corresponds to an error that scales with $\frac{d}{n}$ -- satisfying the order optimality.*
The proof sketch is as follows. As stated in Eq. (90) in Appendix E, the $\ell_2$ error is $r_k^2-1$, where $r_k$ is precisely defined in Eq. (30). The rest of the proof follows from the facts that (1) the optimal $k^\star$, which is found by minimizing $r_k$ (hence the error), satisfies $k^\star = O(M)$, and (2) $C_{k^\star}=O(k^\star/\sqrt{d})$. The algorithm to find the optimal $k$ is given in Algorithm 1 in the attached pdf, where we also provide a figure that justifies $k^\star = O(M)$. We will include this theorem and its full proof in the final manuscript.
### **The motivation to study exact optimality**
As the reviewer mentioned, there are already existing schemes that achieve order optimality, but it was unclear how much these schemes could be improved in the non-asymptotic regime and how far from optimal the pre-constant of the estimation error is. It is worth noting that in many practical scenarios (such as federated learning and analytics), the constant factor of the estimation error does significantly affect the end-to-end performance. As a result, in this work, our goal is to bridge this gap by (1) specifying exact-optimality conditions, (2) proving the exact optimality of a novel scheme, and (3) demonstrating that there is indeed a non-trivial gap between this exact-optimal scheme and the previous order-optimal approaches. Therefore, we hope to convey to the community the fundamental limits of the distributed mean estimation problem under both privacy and communication constraints and provide a strong baseline for further studies. We are glad that the reviewer also finds this perspective valuable.
### **Moving the unified framework discussion to the related work section**
We thank the reviewer for this suggestion. We agree that this discussion fits well in the related work section. In the final version of the manuscript, we will provide a high-level idea of how we unify the existing schemes in the related work section and give further details in the discussion after we cover the main results.
---
We again thank the reviewer for the careful reading of our manuscript and for providing valuable suggestions to improve it further. We will include the additional theorems in the final manuscript and will carry the part of the discussion on unifying the framework to the related work section as suggested by the reviewer. We hope that we addressed the reviewer's comments sufficiently. We are happy to discuss any remaining questions or concerns the reviewer may have.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. I will keep my score.
For the final version of the paper, I also suggest including a discussion paragraph that compares the exact optimal error rate provided here with the guarantees of the previous work about order optimal schemes. The authors mention that in practice the constant factor difference makes a difference, and have demonstrated this through experiments. However, it would be interesting to see how much they differ theoretically.
---
Reply to Comment 1.1.1:
Title: Thank you for your response to our rebuttal.
Comment: We thank the reviewer for the suggestion. We will include the suggested paragraph in the final manuscript. | null | null | null | null | null | null |
AVIS: Autonomous Visual Information Seeking with Large Language Model Agent | Accept (poster) | Summary: 1. This paper proposes a novel VQA framework that uses LLMs to select external tools in multiple stages to extract the necessary information to answer the question.
2. The authors gather human decision data to develop a decision graph and construct in-context examples to guide the LLM to perform API selection and query construction.
3. The experiments validate the effectiveness of the proposed method on two knowledge-based VQA benchmarks.
Strengths: 1. The experiment result is solid and shows the advantage of the framework and the improvement of each sub-module.
2. The guidance for tool selection from human demonstration is novel, and the huge improvement also gives us some insight into the weakness of LLMs.
Weaknesses: 1. The LLM used is PALM, while the compared methods used GPT series models. It's better to have some results based on GPT models to demonstrate the effectiveness of the proposed framework.
2. The proposed human guidance requires a certain amount of human labor and may be hard and costly to generalize to other tasks using LLMs (manipulation, navigation, etc.).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. It will be interesting to see how the framework performs on zero-shot setting and related discussions, especially the related work ViperGPT is zero-shot.
[1] ViperGPT: Visual Inference via Python Execution for Reasoning
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitation and potential solutions are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q. Use GPT**
We changed the backbone LLM to GPT-4 and report the results below:
| Model Configuration | Result (%) |
|--------------------------------------|------------|
| AVIS w/ GPT-4 | 61.9 |
| GPT-4 w/ PALI* | 13.1 |
| GPT-4 w/ PALI* + Object | 36.4 |
| GPT-4 w/ PALI* + Object + Search | 43.8 |
As shown, AVIS's decision framework still benefits a powerful LLM such as GPT-4, demonstrating its generalizability across different LLMs.
**Q: The proposed human guidance requires a certain amount of human labor and may be hard and costly to generalize to other tasks using LLMs**
A: For AVIS, we carried out a user study that engaged 10 participants, who collaboratively responded to a total of 644 visual inquiries. On average, an individual took approximately 1 minute to address a visual question using our user interface. As a result, we spent roughly 10 hours of human effort on AVIS. The LLM-based planner and reasoner generalize well from only a few in-context examples, which substantially diminishes the demand for human labor.
A future work is to combine human supervision with reinforcement learning to further reduce the need for human annotations.
**Q: Could AVIS with the same prompts generalize to other tasks?**
We appreciate the emphasis on the generalizability and practicality of our model. To address this concern, we have conducted experiments on the A-OKVQA dataset, which shares similar question types with OK-VQA and Infoseek but has unique properties (as discussed in its paper). Importantly, we adopted the same prompts used for Infoseek and OK-VQA in our AVIS evaluation on A-OKVQA, without the need for additional human annotations.
| Model Configuration | Result (%) |
|------------------------------------------|------------|
| AVIS | 56.7 |
| PALM + PALI* | 41.6 |
| PALM + PALI* + Object | 47.6 |
| PALM + PALI* + Object + Search | 50.8 |
| GPV-2 | 48.6 |
| KRISP | 33.7 |
This indicates the generalizability of our framework: with the strong reasoning capability of existing LLMs, it can transfer its decision-making capability across datasets and reduce the need to annotate additional prompts for new VQA datasets.
For the other types of tasks the reviewer mentions (e.g., manipulation, navigation), we would certainly need another set of prompts, as their underlying logic is very different from VQA. But similar to what we observe in VQA, we believe that AVIS with a designed transition graph for a given task would have some degree of transferability and generalization across different environments and distributions.
**Q: Zero(one)-shot setup**
A: We'd like to emphasize that ViperGPT is not zero-shot: it provides one example per API in its documentation. To make a fair comparison, we also keep one prompt for each decision action and do not use any in-context examples (zero-shot) for reasoning; the resulting accuracy on OK-VQA is 53.2, slightly higher than ViperGPT's 51.9. Note that the two frameworks do not use the same set of APIs, so they are not directly comparable, but this shows that our framework AVIS can also work without many prompts.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the response!
For GPT models, I intended to see how the framework compares with ViperGPT (using GPT-3). Therefore, an experiment with GPT-3 (or 3.5) would reflect how the proposed framework generalizes to weaker LLMs (as GPT-4 and PaLM 540B could be seen as the most powerful LLMs by far) and compare with ViperGPT. From my understanding, the prompt and dialog history in the proposed framework are long. Therefore, I'm not sure if weaker LLMs can follow them.
---
Reply to Comment 1.1.1:
Title: Respond
Comment: Thanks so much for the clarification. We do not have ViperGPT's results on the Infoseek dataset, so we ran our framework with ChatGPT (GPT-3.5) on the OK-VQA dataset:
| Model Configuration | Accuracy on OK-VQA (%) |
|--------------------------------------|------------|
| AVIS w/ GPT-3.5 | 61.1 |
| AVIS w/ GPT-3.5 (one-shot) | 53.9 |
| GPT-3.5 w/ PALI* | 47.2 |
| GPT-3.5 w/ PALI* + Object | 42.6 |
| GPT-3.5 w/ PALI* + Object + Search | 50.3 |
The results show that AVIS also benefits GPT-3.5, and the final result (both one-shot and full) is higher than the 51.9 reported for ViperGPT.
Note that we do not aim to do a rigorous comparison to ViperGPT, as the tools used are different. As LLM agents are still a vibrant field, how to unify different existing methods under the same setup remains an ongoing challenge. We are also reproducing ViperGPT in our setting and plan to add its results with our tools in a later version.
We are also interested in seeing whether our dynamic decision making with a human-defined transition graph (which could also be viewed as a domain-specific language, DSL) could benefit program synthesis approaches like ViperGPT; we leave this for future work.
Another point we would like to clarify: although we store the entire past dialog history in the working memory, not all of it is used as the prompt to the LLM. Instead, only the results related to the current state are used to fill in the prompt defined by the human examples. The purpose is to reduce the inference cost of decision making: since we need to run the LLM multiple times to solve a query, accumulating the full history in the last few decisions would make the computational cost very large, and the context length could even exceed the largest window size current LLMs can take. In contrast, each decision at an arbitrary step currently has a similar cost.
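A hypothetical sketch of this mechanism (all names are ours, not AVIS's actual API): the working memory records every tool result, but only entries tagged with the current state are folded into the prompt.

```python
from dataclasses import dataclass, field

@dataclass
class WorkingMemory:
    """Stores every tool result as (state, tool, output); prompts are built
    only from the entries relevant to the current state."""
    entries: list = field(default_factory=list)

    def record(self, state: str, tool: str, output: str):
        self.entries.append((state, tool, output))

    def prompt_context(self, current_state: str) -> str:
        # Only current-state results are filled into the LLM prompt,
        # keeping per-step inference cost roughly constant.
        relevant = [f"{t}: {o}" for s, t, o in self.entries if s == current_state]
        return "\n".join(relevant)
```

With this design, the prompt length at each decision step depends only on the current state's results, not on the total number of steps taken so far.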
You could also look at our discussion with Reviewer MSpf about a similar question on "The inclusion of uninformative outputs in working memory may potentially impact the obtained results". Hope this could clarify our framework better. | Summary: This paper proposes a VQA system that mimics the human decision-making process and leverages LLM and web searches to perform multimodal reasoning. The system consists of three main components: a transition graph, an LLM planner, and an LLM Reasoner. The transition graph is manually designed based on the human decision-making process for knowledge-intensive VQA. It guides the system to infer the answer from different states of information. The transition graph enhances the interpretability of the inference process and allows the system to focus on the current state only. The transition graph also introduces uncertainty to the decision procedure, making it more dynamic than previous methods that use static plans for tool usage. The LLM planner decides the next action (API calls and query content) based on the current state. The LLM Reasoner produces the answers from the collected information. Experiments demonstrate the effectiveness of this method on Visual-Question answering tasks OK-VQA and Infoseek.
Strengths: Leveraging the user study to build a state transition graph is a reasonable way to construct LLM applications, especially for well-understood applications such as VQA or specific expert domains.
The motivation is clear and the writing is easy to follow.
Experiments are well-designed to show the effectiveness of the method.
Weaknesses: Lack of error analysis.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: How does the system handle the situation when the output is not informative and the pipeline gets stuck in an infinite loop?
Regarding the weakness: The model improved the performance on the dataset, but it is still far from perfect. It would be helpful to analyze the sources of the errors (Prompts? LLM? transition graph?) which will benefit the community.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: This paper acknowledges the limitations of the model: 1. it is computationally intensive, and 2. it is currently only suitable for the VQA task.
negative societal impact: not found
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q: Error Analysis**
A: We looked through the error samples produced by AVIS and categorized them into three major types of error:
- LLM Planning Module Errors: Cases where the LLM planning component failed to discern crucial information, leading to inaccurate decision-making.
- LLM Reasoning (QA) Module Errors: Instances where the LLM's reasoning and question-answering capabilities identified incorrect evidence or derived flawed inferences.
- Tool-Induced Errors: Situations where the external tools, invoked by the AVIS framework, furnished misleading or incorrect information.
To provide a more tangible understanding of these errors, we've detailed specific instances of each type in an attached one-page PDF file, offering a granular view of the issues encountered.
To further contextualize, we carried out an analysis on 30 randomly chosen misclassified samples from the Infoseek dataset. Our findings are summarized as follows:
- Reasoning & QA Errors: 11 instances (36.7%)
- Tool-Induced Errors: 7 instances (23.3%)
- LLM Planning Module Errors: 6 instances (20%)
- Ambiguous or Imperfect Matches: For the remaining 6 instances (20%), our assessment suggests that AVIS's prediction was accurate. The classification as erroneous likely arises due to slight mismatches or semantic nuances.
So far we haven't found examples caused by incompleteness of the transition graph, as we believe it already covers most of the possible and promising actions; the mistakes are mainly caused by the LLM not making correct decisions. When the number of tools increases, errors caused by the transition graph will probably become more prominent; we leave such analysis for future work.
**Q. Will AVIS be stuck into an infinite-loop as AutoGPT?**
A: Unlike models such as AutoGPT, AVIS has been designed with specific safeguards against infinite-loop scenarios:
Transition graph: At the heart of our model lies a transition graph defining all possible actions for executing tools. This ensures that our search space remains finite (every action our AVIS planner generates is a <tool_id, input_arg> pair). This is very different from other autonomous agents such as AutoGPT that use the LLM's generated language as actions, which yields an infinite search space.
Traversal history & repetition removal: As AVIS navigates over a given transition graph with finite paths, we can easily record all previously traversed states (similar to a standard DFS algorithm). In this way, every time we plan, we remove those traversed states and only ask the model to predict actions that have not been traversed before, which avoids redundancy and potential infinite loops.
Terminating Decision: Based on the design, if AVIS tries to traverse all possible paths and still cannot find the answer, AVIS will terminate and output “we cannot find the answer”, instead of falling into an infinite-loop.
Through these design strategies, we've ensured that AVIS remains efficient in its operations without getting ensnared in unproductive loops.
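The three safeguards above can be sketched as a loop-free traversal over a finite action graph. The code below is a minimal illustration, not the AVIS implementation: the toy graph, the `execute_action` callback, and the random choice are stand-ins for the real transition graph, tool execution, and LLM planner.

```python
import random

def plan_and_execute(transition_graph, start_state, execute_action, found_answer,
                     max_steps=100):
    """Loop-free planning over a finite transition graph (illustrative sketch).

    transition_graph: dict mapping state -> list of (tool_id, input_arg) actions
    execute_action:   callable (state, action) -> next_state
    found_answer:     callable (state) -> answer string, or None if undecided
    """
    visited = set()  # traversal history, as in a standard DFS
    state = start_state
    for _ in range(max_steps):
        answer = found_answer(state)
        if answer is not None:
            return answer
        # repetition removal: only offer actions not yet traversed from this state
        candidates = [a for a in transition_graph.get(state, [])
                      if (state, a) not in visited]
        if not candidates:
            # terminating decision: the finite graph is exhausted
            return "we cannot find the answer"
        action = random.choice(candidates)  # stand-in for the LLM planner
        visited.add((state, action))
        state = execute_action(state, action)
    return "we cannot find the answer"
```

Because every (state, action) pair is marked as visited, even a graph with cycles is exhausted in finitely many steps, after which the terminating decision fires.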
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I have raised the "soundness" score to 4, and I would like to maintain my overall rating i.e., I still recommend acceptance for this work.
---
Reply to Comment 1.1.1:
Title: Thanks for your reviews and responses
Comment: Thanks so much for your recognition of our work. We will definitely include the error analysis and the discussion about how we handle infinite-loop into the paper. | Summary: This paper introduces an autonomous framework for visual question answering framework named AVIS. AVIS utilizes a Large Language Model (LLM) as its core component to dynamically strategize the utilization of external tools. The framework comprises three key components: the planner, reasoner, and working memory. The planner determines the appropriate actions, such as selecting relevant APIs, in each step. The working memory stores the results obtained from executing the APIs, while the reasoner processes the outputs derived from the API calls. Experimental evaluations conducted on the OK-VQA and Infoseek datasets validate the effectiveness of the AVIS framework.
Strengths: 1. The authors present a novel framework that leverages external tools to overcome the challenge of external knowledge dependency in Visual Question Answering (VQA) tasks, thereby enhancing the practicality of VQA in real-world scenarios.
2. The authors introduce a dynamic decision process for selecting the most suitable external APIs to address specific sub-problems at each step.
3. The authors utilize human decision-making data to construct a transition graph, which is helpful for narrowing down the action spaces in the decision-making process.
4. Extensive experiments on the OK-VQA and Infoseek datasets demonstrate the effectiveness of AVIS.
Weaknesses: 1. AVIS necessitates the utilization of costly human-annotated data to guide the (LLM). To further substantiate the generalizability and practicality of AVIS, it would be advantageous to conduct experiments on the A-OKVQA[1] dataset, which incorporates the OK-VQA and Infoseek human-annotated data as in-context samples.
2. As indicated in Line 5 of Algorithm 2, the authors directly incorporate the outputs into the working memory. However, the inclusion of uninformative outputs may potentially impact the obtained results.
3. For the three baseline methods, it would be better to demonstrate the few-shot COT prompts.
4. In Line 250, some human-annotated samples may be “Could not find the answer”. How do the authors leverage this type of sample? When using this type of sample as an in-context sample, does it lead to the models rejecting the question when generating a response?
5. It would be advantageous to provide an average step count that AVIS typically requires to address a single VQA sample.
6. In Line 300, “Table 4” should be “Table 1”.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Refer to Weakness
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
**Q: it would be advantageous to conduct experiments on the A-OKVQA[1] dataset, which incorporates the OK-VQA and Infoseek human-annotated data as in-context samples.**
A: We appreciate the emphasis on the generalizability and practicality of our model. To address this concern, we've conducted experiments with the A-OKVQA dataset, which shares similar question types with OK-VQA and Infoseek but has unique properties (as discussed in its paper). Importantly, we adopted the same prompts used for Infoseek and OK-VQA in our AVIS evaluation on A-OKVQA, without the need for additional human annotations.
| Model Configuration | Result (%) |
|------------------------------------------|------------|
| AVIS | 56.7 |
| PALM + PALI* | 41.6 |
| PALM + PALI* + Object | 47.6 |
| PALM + PALI* + Object + Search | 50.8 |
| GPV-2 | 48.6 |
| KRISP | 33.7 |
This indicates the generalizability of our framework: with the strong reasoning capability of existing LLMs, it can transfer decision-making capability across datasets, reducing the need to annotate additional prompts for new VQA datasets.
**Q. The inclusion of uninformative outputs in working memory may potentially impact the obtained results**
A. Our approach to handling failure cases within the working memory is two-pronged:
Differentiation between working memory and LLM prompt:
Firstly, it's essential to clarify that the working memory of AVIS is distinct from the prompt provided to the LLM. Within the working memory, we do maintain a record of both successful and unsuccessful queries. However, when interfacing with the LLM, only the most pertinent or top-ranked information is presented, ensuring that the LLM isn't unduly influenced by extraneous or unproductive details.
Purpose of retaining uninformative records:
Our goal in recording uninformative paths (failures) is to avoid repeating previously traversed paths. When planning, we only offer the model actions that it has not traversed before, removing all uninformative actions stored in working memory. This keeps AVIS from revisiting the same uninformative path and prevents repetitive and even infinite loops.
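This two-pronged design can be illustrated with a minimal, hypothetical sketch of a working memory that records every query but exposes only informative, state-relevant results when building the LLM prompt, while failed actions are used solely to prune repeated paths (all names are assumptions, not the AVIS API):

```python
class WorkingMemory:
    """Records every executed action; exposes only informative, state-relevant
    results for prompting (illustrative sketch, not the AVIS implementation)."""

    def __init__(self):
        self.records = []  # dicts: state, action, result, informative flag

    def add(self, state, action, result, informative):
        self.records.append({"state": state, "action": action,
                             "result": result, "informative": informative})

    def prompt_context(self, current_state):
        # only pertinent results are shown to the LLM
        return [r["result"] for r in self.records
                if r["informative"] and r["state"] == current_state]

    def untraversed(self, state, candidate_actions):
        # failures are retained purely to prune previously traversed actions
        tried = {r["action"] for r in self.records if r["state"] == state}
        return [a for a in candidate_actions if a not in tried]
```

The key property is the asymmetry: uninformative records never enter `prompt_context`, but they still shrink the candidate set returned by `untraversed`.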
**Q: How are the examples where the human indicates “cannot find answer” used to guide AVIS?**
A: The inclusion of "cannot find answer" human annotations serves a strategic purpose in AVIS's decision-making mechanism:
Incorporation into the prompt: We integrate these specific annotations into the model's prompt, ensuring that the model is aware of scenarios where human annotators couldn't discern a clear answer. This awareness is pivotal in providing a holistic context to the LLM during its reasoning process.
Avoid hallucination: Should AVIS exhaust all action spaces and still be unable to generate a result with high confidence, these "cannot find answer" annotations guide it to echo a similar response. In practice, the frequency of such outcomes is within an acceptable range. By mirroring human annotator behavior in these challenging scenarios, we aim to provide a realistic and candid response to users, rather than forcing a potentially inaccurate answer (i.e., hallucination).
**Q: Average step count**
A: The average number of steps AVIS takes is 5.2, and the full distribution is shown in Fig. 8 of the Appendix.
**Q: In Line 300, “Table 4” should be “Table 1”.**
A: Thank you for pointing out this typo. We will fix it.
**Q: Show COT prompts for other baselines**
A: Sure, we will definitely include all the details in the appendix, as well as open-sourced code. Below we show a few examples:
pali_prompt = """
Please based on the "Caption" to answer the question:
Question: What type of fruit are they holding?
Caption: a couple of men standing next to each other holding oranges . There are two persons standing and holding oranges in their hands and there are few people beside them and there is a building in the background.
reason: from caption, it says the men are holding oranges.
Answer: orange
......
"""
pali_object_prompt = """
Please based on the "Caption" and "Entity" to answer the question:
Question: What does the train carry?
Caption: a train traveling down train tracks next to a forest . There are four trains on the railway track. In the background there are trees,poles and sky.
Entity: [
BNSF Railway: BNSF Railway is one of the largest freight railroads in North America (score=89.3)
Extracted Text: BNSF (score=100.0)
]
Reason: from caption, there are no enough information about what the train is carrying. From entity, it says the train is BNSF railway that is freight railroads. So the train carry freight, which is good.
Answer: good
......
"""
For the PALI+Object+Search baseline, we adopt a two-level procedure: first use pali_object_prompt to get a visual answer, then feed it into the search API to get documents, and use the same prompt shown in the appendix to get the final answer.
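A hedged sketch of this two-level procedure follows; the `llm` and `search_api` callables are stand-ins for the actual language-model and web-search interfaces, and the prompt wording is abbreviated from the examples above:

```python
def pali_object_search_baseline(question, caption, entities, llm, search_api):
    """Two-level PALI+Object+Search baseline (illustrative sketch; `llm` and
    `search_api` are hypothetical stand-ins, not the paper's exact interfaces)."""
    # level 1: derive a visual answer from the caption and detected entities
    visual_prompt = (
        'Please based on the "Caption" and "Entity" to answer the question:\n'
        f"Question: {question}\nCaption: {caption}\nEntity: {entities}\nAnswer:"
    )
    visual_answer = llm(visual_prompt)
    # level 2: search on the visual answer, then answer from the documents
    documents = search_api(visual_answer)
    final_prompt = f"Question: {question}\nDocuments: {documents}\nAnswer:"
    return llm(final_prompt)
```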
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you for the response. I have raised the "rating" score to 6.
---
Reply to Comment 1.1.1:
Title: Thanks for the response
Comment: Thanks so much for the review and response. The generalization study is definitely very important for LLM Agent, and we will include all the updated experiments in the paper. | Summary: This work aims to more general VQA task (often needs external knowledge) via LLM-based information seeking. First, the info-seeking system is built with three components: planner, reasoner, and memory, seeking useful information with external tools/APIs. Second, they build a dataset with human decision-action samples via user study. Then, they build the transition graph from the dataset, for depicting the user state and action, which can be used as contextual instances serving the info-seeking system.
Strengths: The info-seeking system is designed in the reasonable and mainstream VQA/QA fashion.
Weaknesses: How does this work compare with end-to-end frameworks? As LLMs gradually grow, and visual knowledge becomes better and better grounded in language knowledge, external knowledge will become thinner and increasingly a subset of internal knowledge.
In general, this is a good paper with much groundwork. I look forward to the author feedback and a better version to persuade me.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How do you avoid potential infinite loops like those that emerge in AutoGPT? Consider that the capacity bottleneck is often caused by external tools rather than by powerful LLMs.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and remarks.
**Q: Compare AVIS with end-to-end models? Will external knowledge still be useful?**
A: The key difference between AVIS and a single end-to-end model is that AVIS separates knowledge memorization from reasoning. Within this architecture:
- The external tools take the responsibility of memorizing diverse and flexible knowledge.
- The neural network, freed from the necessity to store vast amounts of information, focuses its full capacity on planning and reasoning.
Based on the modular nature of AVIS, adding or modifying knowledge doesn't necessitate finetuning any model parameters (which is costly especially for LLMs). Instead, one only needs to update the knowledge base in different tools (e.g., Google Search index). This provides distinct advantages in situations where end-to-end models might fail:
- Handling addition of new knowledge (e.g., news & updates): While it's true that LLMs are able to memorize more and more facts within the model weights, it's essential to highlight that their knowledge is static and limited to the last training set. AVIS, through its dynamic utilization of APIs, can access real-time information and news updates that an LLM could not inherently possess. This ensures our model stays up-to-date with the latest information, which is crucial for many visual questions rooted in real-time contexts.
- Domain-specific Knowledge: Despite the broad spectrum of information LLMs might encompass, specialized and long-tail domains often call for specific tools and databases. AVIS is primed for such integrations, making it a good candidate for domain-centric visual question-answering tasks.
- Personal & private knowledge: Given privacy constraints, in many cases private data (such as photos & message history in personal phone) cannot be included into the training corpus of LLM, making it hard to become a personalized model. AVIS, on the other hand, can be customized to incorporate personal and private knowledge sources, should users consent, allowing for a more tailored and precise response mechanism.
In summary, while the capabilities of LLMs are indeed advancing, there remains a distinct advantage in combining their reasoning prowess with the dynamic, real-time, and specialized knowledge retrieval capabilities of external tools, such as Google Search Engine and LENS.
**Q: Will AVIS be stuck into an infinite-loop as AutoGPT?**
A: Unlike models such as AutoGPT, AVIS has been designed with specific safeguards against infinite-loop scenarios:
Transition graph for defined action spaces: At the heart of our model lies a transition graph defining all possible actions for executing tools. This ensures that our search space remains finite (every action our AVIS planner generates is a <tool_id, input_arg> pair). This is very different from other autonomous agents such as AutoGPT that use the LLM's generated language as actions, which yields an infinite search space.
Traversal history & repetition removal: As AVIS navigates over a given transition graph with finite paths, we can easily record all previously traversed states (similar to a standard DFS algorithm). In this way, every time we call the planner to perform an action, we remove the traversed states and only ask the model to predict actions that have not been traversed before, which avoids redundancy and potential infinite loops.
Decision to stop: Based on the design, if AVIS tries to traverse all possible paths and still cannot find the answer, AVIS will terminate and output “we cannot find the answer”, instead of falling into an infinite-loop.
Through these design strategies, we've ensured that AVIS remains efficient in its operations without getting ensnared in unproductive loops.
**Q: Is Tool the bottleneck or LLM?**
A: In our general response, we show detailed error analysis statistics (examples of each type are shown in the one-page PDF file). Among the three major error types, tool errors account for 23.3%, while the other two types caused by LLM planning and reasoning add up to 56.7%. We admit that these existing tools are definitely not perfect (otherwise we could directly use them to solve the task), but a good LLM agent should learn to use only the important information from imperfect tools to make correct decisions. This shares insights with the classical boosting idea, with the LLM serving as a controller that performs the ensemble.
Based on this and what we describe above, we believe the key bottlenecks for building better AI agents are: 1) the basic capability of the LLM itself; 2) a better algorithm and framework that enables the LLM to utilize external tools to provide the required knowledge so that the LLM can focus on reasoning. We believe the planning & reasoning framework of AVIS over a human-defined transition graph is a good starting point for such a framework.
---
Rebuttal Comment 1.1:
Title: Any further question for discussion?
Comment: Dear Reviewer:
As the author-reviewer discussion period is coming to an end, we wonder whether our response (especially the usefulness of external knowledge) has addressed your concerns? Looking forward to the further discussion regarding any more questions. | Rebuttal 1:
Rebuttal: # Response to Reviewers
We thank the reviewers for their valuable comments and remarks.
In the rebuttal, we mainly add:
**1. Experimental results with GPT4 on Infoseek dataset**
| Model Configuration | Result (%) |
|--------------------------------------|------------|
| AVIS w/ GPT-4 | 61.9 |
| GPT-4 w/ PALI* | 13.1 |
| GPT-4 w/ PALI* + Object | 36.4 |
| GPT-4 w/ PALI* + Object + Search | 43.8 |
**2. Generalization Analysis**
- Showing that our model, with the same prompts written for Infoseek and OK-VQA, also generalizes to A-OKVQA without additional human annotation.
| Model Configuration | Result (%) |
|------------------------------------------|------------|
| AVIS | 56.7 |
| PALM + PALI* | 41.6 |
| PALM + PALI* + Object | 47.6 |
| PALM + PALI* + Object + Search | 50.8 |
| GPV-2 | 48.6 |
| KRISP | 33.7 |
**3. Compare with ViperGPT over one-shot setup**
- We'd like to emphasize that ViperGPT is not zero-shot; it provides one example per API in its documentation. To make a fair comparison, we also keep one prompt for each decision action and don't use any in-context example (zero-shot) for reasoning; the result on OK-VQA is 53.2, slightly higher than ViperGPT's 51.9. Note that the two frameworks don't use the same set of APIs, so they are not directly comparable, but this shows that our framework AVIS can also work in settings without many prompts.
**4. Error Analysis**
- We conducted a thorough error analysis with three major types of error: 1) the LLM planning module misses important information and makes a wrong decision; 2) the LLM reasoning (QA) module extracts wrong evidence; 3) a tool provides incorrect information.
- **Note:** We put the detailed examples of error in the one-page PDF file, please check it.
- We randomly selected 30 misclassified samples from Infoseek and found that reasoning & QA errors account for 11 instances (36.7%); tool errors for 7 instances (23.3%); and LLM planning module errors for 6 instances (20%). For the remaining 6 instances, we believe the model made correct predictions that were counted as wrong only because of imperfect answer matching.
In the following, we respond to each reviewer's concerns.
Pdf: /pdf/b9945f592981e682a1a9051afecce27761d48fb9.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Defending against Data-Free Model Extraction by Distributionally Robust Defensive Training | Accept (poster) | Summary: This paper presents a new method to defend against data-free model extraction (DFME) attacks. The method proposed by the authors, called MeCo, adds data-dependent random perturbations to the input data, making it difficult for attackers to extract useful information from the black-box model. The authors claim that MeCo effectively reduces the accuracy of the cloned model while maintaining the utility of the target model. The approach was evaluated on several datasets and compared to existing defense strategies, showing excellent performance in terms of effectiveness, computational efficiency, and memory usage.
Strengths: 1.The quality of the work is evident in the extensive experiments conducted and the clear presentation of results.
2.The main advantage of the paper is its novel approach to defend against DFME attacks. The proposed method, MeCo, is both effective and efficient, which is demonstrated by experimental results. However, these advantages may be affected by a lack of theoretical support.
Weaknesses: 1. The paper only conducted experiments on the CIFAR-10, CIFAR-100, and MNIST datasets; the lack of experiments on large datasets raises doubts about the scalability of the model.
2. The model may need more testing and evidence regarding its generalization.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can you provide more details on the theoretical basis of the proposed approach?
How does the proposed method perform under different attack scenarios?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors acknowledge the lack of theoretical analysis as a limitation of the current work and take it as a direction for future work. They also mention the potential benefits of their proposed defensive training for protecting large-scale pre-trained models used in public APIs. However, a more detailed discussion of the possible negative societal impacts of the proposed approach would be useful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude for your constructive feedback.
**Q1**: lack of relevant experiments on large data sets led to some doubts about the extensibility of the model.
**A**: Thanks for your suggestions. As requested, we included the results obtained on the MiniImageNet [1] dataset with 100 classes, which is a subset of the ImageNet dataset. The results are presented in Table 8 of the Appendix in our submission. For convenience, we also present the results on MiniImageNet in the following table.
In addition, we took into consideration the dataset evaluated in the current state-of-the-art model extraction defense methods, as referenced in [2]. As a result, we extended our experiments to include the CUB200 dataset [3] with 200 image classes, which, as mentioned in [2], represents the largest dataset utilized for model extraction defense. The detailed outcomes of our experiments on the CUB200 dataset are provided in the following tables. We hope that these additional results further contribute to the comprehensive evaluation of our proposed model and its defense against model extraction techniques.
*Clone Model Accuracy with Different Defense Methods on **MiniImageNet** with Different Clone Model Architecture*
| Attack | Defense | ResNet18-8X | MobileNetV2 | DenseNet121 |
|--------|----------|------------|-------------|-------------|
| | Undefended | 35.89% ± 3.97% | 28.71% ± 3.25% | 25.05% ± 3.68% |
| DFME | RandP | 30.76% ± 4.09% | 22.06% ± 3.83% | 20.23% ± 3.97% |
| (Soft Label) | P-poison | 29.36% ± 4.23% | 21.83% ± 3.77% | 20.01% ± 3.89% |
| | GRAD | 29.87% ± 3.76% | 21.65% ± 3.75% | 19.82% ± 3.77% |
| | MeCo | **23.29% ± 3.83%** | **17.83% ± 3.67%** | **16.73% ± 3.88%** |
| | | | | |
| | Undefended | 46.72% ± 4.86% | 40.35% ± 4.97% | 38.71% ± 3.85% |
| DFMS-HL | RandP | 45.09% ± 4.93% | 39.51% ± 4.83% | 38.08% ± 3.95% |
| (Hard Label) | P-poison | 45.16% ± 5.03% | 39.06% ± 4.72% | 37.78% ± 4.26% |
| | GRAD | 45.32% ± 5.21% | 39.17% ± 4.85% | 37.85% ± 4.32% |
| | MeCo | **39.23% ± 4.83%** | **35.81% ± 4.69%** | **32.30% ± 4.56%** |
*Clone Model Accuracy with Different Defense Methods on **CUB200** with Different Clone Model Architecture*
| Attack | Defense | ResNet18-8X | MobileNetV2 | DenseNet121 |
|--------|----------|------------|-------------|-------------|
| | Undefended | 49.75% ± 3.82% | 32.83% ± 3.53% | 29.08% ± 3.92% |
| DFME | RandP | 38.89% ± 4.16% | 26.85% ± 3.96% | 23.81% ± 4.05% |
| | P-poison | 36.45% ± 4.38% | 25.31% ± 3.91% | 22.09% ± 4.03% |
| (Soft Label) | GRAD | 31.70% ± 3.61% | 24.78% ± 3.89% | 21.91% ± 3.86% |
| | MeCo | **20.78% ± 4.67%** | **18.65% ± 4.90%** | **15.86% ± 4.51%** |
| | | | | |
| | Undefended | 58.89% ± 5.03% | 49.35% ± 5.18% | 46.71% ± 4.37% |
| DFMS-HL | RandP | 55.28% ± 5.39% | 46.67% ± 5.65% | 45.19% ± 4.32% |
| (Hard Label) | P-poison | 52.71% ± 5.36% | 45.28% ± 5.31% | 43.76% ± 4.91% |
| | GRAD | 51.43% ± 5.73% | 46.51% ± 5.50% | 42.39% ± 4.82% |
| | MeCo | **43.31% ± 5.16%** | **38.27% ± 5.03%** | **33.42% ± 4.95%** |
Reference:
[1] Matching Networks for One Shot Learning. NeurIPS 2016.
[2] How to Steer Your Adversary: Targeted and Efficient Model Stealing Defenses with Gradient Redirection. ICML 2022.
[3] Caltech-UCSD Birds 200. 2010
**Q2**: The model may need more testing and proof in generalization.
**A**: Thank you for your suggestions. In response to your concerns, we made significant enhancements to our work based on your feedback. Specifically, we have included additional experimental results and in-depth theoretical explanations for the proposed methods.
In response to your first question **Q1**, we have incorporated supplementary experimental results that shed further light on the performance and generalization of our approach. These results can be found in **Q1**.
Furthermore, we have provided a comprehensive theoretical basis for our proposed methods. Our aim was to elucidate the underlying principles and mechanisms that support the efficacy of our approach. These theoretical explanations are also elaborated upon in the global response. We believe that these additional elements substantially strengthen the quality and comprehensiveness of our work.
**Q3**: Can you provide more details on the theoretical basis of the proposed approach? How does the proposed method perform under different attack scenarios?
**A**: Thank you for your comments. We provided a comprehensive theoretical basis for our proposed methods, aiming to elucidate the underlying principles and mechanisms that support the efficacy of our approach. These theoretical explanations are elaborated **in the global response**. Furthermore, we supplemented them with a visualization, which can be found in Figure 1 within the **'rebuttal.pdf' document as part of the global response**.
Furthermore, our method can also be viewed as randomized smoothing: $T_{\theta_T}^{S}(x) = \int T_{\theta_T}(x + h_{\omega}(x, \epsilon))\, d\epsilon$, where $\epsilon \sim N(0, I)$. Compared to learning a single function $T_{\theta_T}(x)$, cloning the smoothed function $T_{\theta_T}^{S}(x)$ necessitates a larger number of queries for the attacker due to the requirement of estimating the integral. By necessitating a larger query budget, our approach makes it considerably harder for attackers to successfully clone the model.
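In practice the smoothed function can only be estimated by Monte Carlo averaging over noise draws, which is exactly why an attacker needs a larger query budget per input. A minimal NumPy sketch, where the `model` and `perturb` callables are hypothetical stand-ins for $T_{\theta_T}$ and $h_{\omega}$:

```python
import numpy as np

def smoothed_output(model, perturb, x, n_samples=64, rng=None):
    """Monte Carlo estimate of T^S(x) = E_eps[ T(x + h(x, eps)) ], eps ~ N(0, I).
    Illustrative sketch; `model` and `perturb` are assumed callables."""
    rng = np.random.default_rng(0) if rng is None else rng
    outs = []
    for _ in range(n_samples):
        eps = rng.standard_normal(x.shape)
        outs.append(model(x + perturb(x, eps)))
    # the attacker must average many queries per point to see the smoothed value
    return np.mean(outs, axis=0)
```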
**Q4**: Negative Social Impact
**A**: Sharing models and insights is a key driver of progress in the AI research field, and if the community becomes more protective of their models, it could slow down advancements.
Stricter model extraction defenses could create economic barriers for startups that rely on reverse engineering or analyzing existing models as part of their business strategy. | Summary: * __Problem Statement__: The paper proposes a defense against data-free model extraction attacks (where the attacker black-box queries a victim image classifier, s.t queries are then used to train a clone model)
* __Approach__: The proposed defense "DRO" returns class probabilities as a result of the defender *perturbing* the input image. The objective of the perturbation is to flip the top-1 predicted class of attacker's queries (characterized by lying close to decision boundaries), but while preserving utility (measured by L1 distance to unperturbed probabilities).
* __Experimental findings__: The approach is evaluated on 4x standard datasets, compared against 3x recent baseline approaches and in addition provides more studies (e.g., ablation). Results indicate that the proposed approach outperforms the baselines (in terms of both accuracy and utility).
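The perturb-then-check idea the review describes can be sketched as follows, assuming the defender enforces the $\ell_1$ utility budget $\tau$ at serving time; the fallback to the clean probabilities when the budget is exceeded is an illustrative choice, not necessarily the paper's mechanism:

```python
import numpy as np

def defended_probs(model, perturb, x, tau):
    """Serve perturbed-input probabilities only if they stay within an L1
    utility budget `tau` of the clean ones (illustrative sketch; the fallback
    behavior is an assumption)."""
    clean = model(x)
    defended = model(x + perturb(x))
    if np.abs(defended - clean).sum() <= tau:
        return defended
    return clean  # budget exceeded: fall back to the unperturbed output

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()
```

With a suitable perturbation, the served distribution can flip the top-1 class (misleading the clone's training signal) while the $\ell_1$ deviation stays within the budget.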
Strengths: 1. __Originality__: 4/5. The approach is original and well-motivated. Although some ideas have been explored before (i.e., perturbing outputs, changing gradient direction), the proposed approach explores a novel objective (learnt perturbation of inputs to introduce a targeted shift in prediction) and proposes a reasonable way to achieve it.
2. __Quality__: 4/5. The paper is high-quality. The insights motivating the approach are sound and the experimental section is exhaustive.
3. __Clarity__: 3/5. The approach is somewhat clear (certain technical choices/notation are unclear; more under questions).
4. __Significance__: 3/5. The approach is somewhat significant -- it clearly improves upon existing defenses, however against a very specific class of model extract attacks. Specifically, "data-free" attacks that have shown to be successful largely in extreme settings (such as with attacker querying the victim millions of times).
Weaknesses: 1. __DRO optimization setup - unclear__: I found the intuition for the defense objective (Fig 2, L148-207) clear. However, it is unclear how the objective formulation (Eq. 4-6) achieves this. Based on Fig. 2 and L183-190, it appears that the requirement is to perturb images s.t the label is flipped (i.e., maximize CE loss wrt parameters $\omega$), however the outer maximization in Eq. 6 seems to do the opposite? Furthermore, in Eq. 6, since the utility is upper-bounded $||\cdots||_1 \le \tau$, it appears that the perturbation generator $h_\omega$ can simply learn an identity function?
2. __Evaluation curves missing__: Although the paper evaluates the defense extensively, it appears that it is primarily for a specific defense effectiveness-utility operating point (a specific value of $\gamma$ and $Q$). This contrasts with prior works, which evaluate a curve of attacker's vs. defender's classification accuracies (e.g., Fig. 4 GRAD, Fig. 4 PredPoison). Without such a curve, it is somewhat difficult to determine the overall performance of the attack.
3. __Perturbation budget $\le$ 1.0__: The L_1 perturbation budget $\tau$ in the defense appears to be fixed to 1 (L222, L298), which seems unreasonably high. After all, this allows the defender to always flip the top-1 class label.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Hyperparameter choices
1. How were the defense hyperparameters determined? I'm concerned that if they are a result of hyperparam searches (e.g., over a sweep of attacks), it might not generalize to novel attacks.
2. Hyperparameter choices: Are the defense hyperparameters fixed for all choices of datasets, clone model architectures, and attacks?
2. DRO optimization objective vs. Adversarial training: Can the authors clarify on how the min-max optimization objective (Eq. 4) is different from adversarial training? After all, in both cases, the objective is to find parameters $\theta$ s.t predictions are robust to perturbed inputs.
3. (Suggestion) Sec 5.4 Adaptive attacker: Please elaborate on how the attacker "adapts". It is unclear what the attacker does from L341.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are adequately discussed in Sec. 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude for your insightful comments.
**Q1.1**: DRO optimization setup: Based on Fig. 2 and L183-190, it appears that the requirement is to perturb images s.t the label is flipped (i.e., maximize CE loss w.r.t. parameters $\omega$)?
**A1.1**: Eq 4-6 is to **maintain the model utility on benign inputs**. Given that perturbations applied to benign inputs can diminish model utility by altering probability outputs, the rationale behind this Distributionally Robust Optimization (DRO) formulation is to ensure the robust generalization of the model on perturbed test data. Consequently, the objective of this optimization is to minimize the loss function across the simulated test set from the training data.
**Q1.2**: DRO optimization setup: in Eq. 6, since the utility is upper-bounded, it appears that the perturbation generator can simply learn an identity function?
**A1.2**: Eq.6 is a constraint (not optimization objective) for the optimization problem in Eq. 4. Learning an identity function is only *a feasible solution of the optimization, but is not the optimal solution*. This is due to the interdependence between the parameters of the perturbation generator and parameters of the target model.
**Q2**: evaluate a curve of attacker's vs. defender's classification accuracies.
**A**: Thanks for your question, please refer to **Figure 2 in rebuttal.pdf in the global response** for the evaluation curves.
**Q3**: The L_1 perturbation budget appears to be fixed to 1, which seems unreasonably high.
**A**: Thank you for pointing this out.
**Setting a perturbation budget equal to 1 is intended to create stronger baseline defense methods for comparison**. A larger perturbation budget can **strengthen the defense of the compared baseline methods** by allowing more flexibility to adjust the prediction logits. To further illustrate the impact of varying perturbation budgets on defense methods, we have included additional results with smaller perturbation budgets of 0.2 and 0.5 in the following tables. The results show that the defense performance of the compared baseline methods weakens when the defender's perturbation budget is constrained: the defender can only perturb the output probabilities within a smaller budget, which poses a challenge to existing defense mechanisms.
*Perturbation Budget of 0.2*
|Attack| Defense|ResNet18-8X|MobileNetV2|DenseNet121|
|--------|----------|------------|-------------|-------------|
||Undefended|87.36%±0.78%|75.23%±1.53%|73.89%±1.29%|
|DFME|RandP|86.32%±1.05%|74.67%±2.02%|72.16%±2.05%|
||P-poison|83.17%±1.66%|73.32%±1.45%|71.68%±1.51%|
||GRAD|84.26%±1.72%|71.82%±1.67%|71.23%±1.68%|
||MeCo|**51.68%±1.96%**|**46.53%±2.09%**|**61.38%±2.41%**|
*Perturbation Budget of 0.5*
|Attack|Defense|ResNet18-8X|MobileNetV2|DenseNet121|
|--------|----------|------------|-------------|-------------|
||Undefended|87.36%±0.78%|75.23%±1.53%|73.89%±1.29%|
|DFME|RandP|85.53%±1.65%|74.62%±2.37%|71.56%±2.79%|
||P-poison|81.19%±1.86%|69.71%±1.56%|70.93%±1.58%|
||GRAD|82.89%±1.82%|68.30%±1.75%|71.32%±1.73%|
||MeCo|**51.68%±1.96%**|**46.53%±2.09%**|**61.38%±2.41%**|
**Q4**: How were the defense hyperparameters determined? Are the defense hyperparameters fixed for all choices of datasets, clone model architectures, and attacks?
**A**: Thank you for your questions. In the data-free model extraction setting, where the attack query data distribution is unknown, we use a surrogate dataset to represent the attack query data and then choose the hyperparameters that achieve the worst performance on the validation set of the target dataset. For example, we adopt the CIFAR100 dataset as a surrogate query dataset to query the target victim model trained on CIFAR10. Our next step involves defensive training using our proposed method on the CIFAR10 training dataset. To evaluate the effectiveness of our defense, we simulate the model stealing process by calculating the attacker's loss function. This is done by comparing the outputs of the cloned model and the target model on the CIFAR100 dataset.
We choose the hyperparameters that lead to the worst performance of the extracted model on the validation set of CIFAR10.
The selected hyperparameters are dataset-dependent, i.e., tuned for each target dataset. After tuning for a dataset, the hyperparameters remain fixed across clone model architectures and attacks.
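The selection loop described above can be sketched as follows (a hypothetical skeleton: the candidate names and the toy accuracy table are illustrative, not the paper's actual search space):

```python
def select_hyperparams(candidates, clone_val_accuracy):
    # for each candidate setting: defensively train the victim, simulate
    # extraction with the surrogate query set, and measure the extracted
    # clone's accuracy on the target dataset's validation split; keep the
    # setting under which the clone does worst (worst clone = best defense)
    return min(candidates, key=clone_val_accuracy)

# toy stand-in for the simulated extraction results
toy_results = {"gamma=0.1": 0.71, "gamma=0.3": 0.52, "gamma=0.5": 0.63}
best = select_hyperparams(toy_results, toy_results.get)
print(best)  # -> gamma=0.3
```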
**Q5**: DRO optimization vs. Adversarial training
**A**: Thanks for your question. We'd like to clarify the distinction between adversarial training and distributionally robust optimization (DRO).
Adversarial training focuses on optimizing the model's performance against worst-case perturbations for **each individual training data point**. The goal is to enhance the model's robustness to small adversarial perturbations. However, this approach will lead to a drop in performance on the test data, as the model becomes *too conservative*.
On the other hand, DRO aims to optimize the model's performance by considering the worst-case performance over **a set of possible data distributions**. Instead of focusing solely on individual data points, DRO accounts for *uncertainties in the data and generalizes well to different distributions*. As a result, DRO tends to improve performance on the test data, striking a balance between standard empirical optimization and adversarial training.
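The contrast can be made concrete with a toy 1-D squared loss (the perturbation budget and the shift family are illustrative assumptions): adversarial training worst-cases each point independently, whereas DRO worst-cases over coherent shifts of the whole distribution, so the adversarial objective is the more conservative one.

```python
def adv_train_obj(w, data, eps=0.5):
    # adversarial training: worst-case perturbation chosen per individual
    # point; for loss (w - x)^2 the worst perturbation pushes x away from w
    return sum((abs(w - x) + eps) ** 2 for x in data) / len(data)

def dro_obj(w, data, shifts=(-0.5, 0.0, 0.5)):
    # DRO sketch: worst case over a small family of shifted data
    # distributions, i.e. every point perturbed coherently, not individually
    return max(sum((w - (x + s)) ** 2 for x in data) / len(data) for s in shifts)

data = [-1.0, 0.0, 1.0]
print(adv_train_obj(0.0, data), dro_obj(0.0, data))
```

Because a coherent shift is only one of the perturbations available to the per-point adversary, the adversarial objective always upper-bounds this DRO objective here, matching the "too conservative" point above.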
**Q6**: Adaptive attacker: Please elaborate on how the attacker "adapts"
**A**: The adaptive strategy involves introducing data-dependent perturbations to the clone model's input to simulate the defense mechanism. This approach aims to bridge the gap between the defense used during testing and the training of the clone model by the attacker. By incorporating the defense mechanism into the clone model's input, the attacker seeks to better adapt to the defender's protection strategy, which allows the attacker to improve the effectiveness of their attacks. | Summary: In this paper, the authors propose a novel, principled defensive training framework that substantially improves memory and computation efficiency during deployment to defend against DFME attacks, together with a distributionally robust optimization method that randomly perturbs the inputs to defend against DFME effectively while maintaining model utility.
Strengths: 1. I agree that this work would be beneficial for protecting current large-scale pre-trained models used in the public APIs.
2. The experiments in this article are sufficient and the description is clear. The method proposed by the authors is novel to a certain extent.
3. The author's description of the existing research background is clear, and the analysis of the problems existing in the field is satisfactory.
Weaknesses: 1. In my opinion, the method proposed by the authors has defects in protecting the performance of the original model; that is, it does not fully take into account the impact on the performance of the target model, which can also be seen from Table 3. After all, the accuracy of these models is expensive to obtain in practical applications.
2. As stated by the author, the methodology of this article lacks a theoretical explanation. The intuitive explanation of the mechanism of action of the model is not deep enough.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This method does not adequately evaluate its negative effects on the target model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our appreciation for your valuable suggestions.
**Q1**: In my opinion, the method proposed by the author has defects in protecting the performance of the original model, that is, it does not fully take into account the impact on the performance of the target model, which can also be seen from Table 3. After all, the weak accuracy improvement of these models in practical applications is expensive.
**A**: We appreciate your comments and your acknowledgment of the challenges involved in defending against model extraction attacks. In practice, developing effective defenses necessitates striking a balance between benign accuracy and defense performance.
* In our research, we thoroughly evaluated various defense baselines, including GRAD, P-poison, and Random defense, etc. It is important to note that most compared baselines sacrifice benign accuracy to enhance defense performance, our proposed defense method strives to achieve a more optimal trade-off.
* As demonstrated in Table 3, our defense methods significantly reduce output probability perturbations when compared to other defense approaches. This improvement highlights the effectiveness of our method in mitigating the risk of model extraction attacks with only slightly compromising the model's performance on benign inputs. This is beneficial for benign users in practical applications, while the compared defense methods need to significantly modify the output probabilities.
* Regarding the slight reduction in benign accuracy, it is essential to emphasize that this decrease is minimal and does not significantly impact the overall model performance. In most cases, our proposed method only slightly reduces the benign accuracy, e.g., by 0.28% on MNIST, 0.74% on CIFAR10, and 1.35% on CIFAR100. Despite this marginal drop in accuracy, our method remains competitive with the baseline defense methods. Additionally, our method is significantly more efficient and effective in defending against model extraction attacks, as shown in the other tables.
* Overall, our work aims to strike a balance between robustness against model extraction attacks and maintaining competitive accuracy levels.
**Q2**: As stated by the author, the methodology of this article lacks a theoretical explanation. The intuitive explanation of the mechanism of action of the model is not deep enough.
**A**: Thank you for pointing out this. We provided a comprehensive theoretical foundation for our proposed methods, with the intention of illuminating the fundamental principles and mechanisms that underpin the effectiveness of our approach. These theoretical elucidations are further illustrated in the **global response**. Furthermore, we also included figure visualizations to complement our theoretical explanations, as depicted in **Figure 1 in the 'rebuttal.pdf' within the global response**.
Our method can also be analyzed from another perspective. The introduction of random input perturbation allows our method to act as a form of random smoothing, defined by the function: $T_{\theta_T}^{S}(x) = \int T_{\theta_T}(x + h_{\omega}(x, \epsilon)) d\epsilon$, where $\epsilon \sim N(0, I)$. This smoothing technique enhances our method's resilience against model extraction attacks. It is important to note that compared to learning a single function $T_{\theta_T}(x)$, the process of learning the smoothed function $T_{\theta_T}^{S}(x)$ necessitates a larger number of queries for the attacker due to the computational complexity of calculating the integral.
By necessitating a larger query budget, our approach makes it considerably harder and more resource-intensive for attackers to successfully clone the model. As a result, our defense method offers a robust and effective barrier against model replication.
---
Rebuttal Comment 1.1:
Title: Thanks for the responses
Comment: Thanks for the responses. I don't have any other questions anymore.
---
Reply to Comment 1.1.1:
Title: Thanks for your confirmation
Comment: Thank you for your acknowledgement. | Summary: This paper proposes a defense against data-free model extraction attacks (DFME). The basic idea is to add randomized data-dependent perturbations to the input query. It proposes a new training method that accounts for the perturbation generator to mitigate the risk of dropping the benign accuracy. Namely, it will train the model and the perturbation generator so that the model behaves properly on benign queries.
Strengths: It is interesting to add random perturbations to the test data for defending against the DFME attacks.
The paper is well organized, with all items discussed.
Weaknesses: At a very high level, I do not see how this works. The method adds perturbations to the test data, which will obviously decrease the benign accuracy, but I am not able to identify what makes the training method work. In DFME attacks, the queries are just normal queries, which can be similar to the ones used in training or testing. I am not sure how the method can work without distinguishing different types of queries.
One argument may be that the queries in the attack will differ from the ones used in training. Does that mean the defense assumes the attack queries are out of distribution? I think out-of-distribution attack queries will surely degrade the attack performance.
Is there a theoretical guarantee of the proposed method?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: How does the method tell the differences between normal and malicious queries?
Is there a theoretical guarantee of the proposed method?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper discussed the limitations and broader impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful feedback.
**Q 1.1**: Adding perturbation to test data decreases benign accuracy.
**A 1.1**: We explain why our method maintains benign accuracy by considering three cases: (We invite you to refer to **Figure 3 for visual depiction in the "rebuttal.pdf" within the global response**)
* In the first scenario, target model undergoes standard training on the target dataset. Although model utility remains intact, its vulnerability to model theft is heightened due to the lack of defense mechanisms.
* Second, random input perturbations are introduced to query data without our proposed defensive training. As a result, target victim model gains protection against theft attempts, but this comes at the cost of compromised model utility.
* To tackle these challenges and achieve the goals of preserving model utility and establishing robust defenses against model extraction attacks, we introduce our distributionally robust optimization approach (outlined in Equations 4-6 in the main text). This procedure aims to uphold model utility by optimizing worst-case generalization on perturbed test data, achieved by simulating testing data from the training data. This approach applies the common assumption that benign queries share the same distribution as training data, a prevalent assumption in existing model extraction defenses [5].
**Q 1.2**: DFME queries can be similar to the ones in training or testing. How does the method tell the differences between normal and malicious queries?
**A 1.2**:
* First, we would like to clarify that DFME attacks commonly operate under the assumption that the attacker lacks knowledge of the training data distribution employed by the target model. In other words, the attacker operates in a "data-free" manner, being able to employ **flexible data distributions that differ from the training data distribution** to interact with the target model [3,4].
Furthermore, in light of the insights and findings presented in PoW [1], the data utilized for attack queries is synthetic and out-of-distribution (OOD) since these data are generated synthetically using a data generator. Consequently, they possess a **different distribution** compared to the data encountered during training or testing.
* Additionally, we leverage the insight in [2] that attack queries tend to reside closer to the decision boundary, while normal queries are positioned further away. Consequently, the output probabilities associated with attack queries experience more significant perturbations upon the application of input perturbation. Conversely, owing to normal queries' distance from the decision boundary, their output probabilities undergo comparatively milder perturbations.
* We employ distributionally robust optimization, as Equations 4-6 in the main text, to ensure the preservation of model utility on the test data through training a data-dependent perturbation generator, denoted as $h_{\omega}(x, \epsilon)$. When $x$ corresponds to a benign query, one that aligns with the distribution of training data, the perturbation magnitude is inherently restrained due to the defensive training of the perturbation generator. This restraint arises from the training and simulation of these data points derived from training data, thereby guaranteeing model utility. In contrast, when $x$ pertains to an attack query, the perturbation generator generates entirely distinct random perturbations. This is because attack queries are nearer to decision boundary, substantiated by [2]. As a result, the prediction class probabilities are perturbed more significantly with input perturbation.
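The near-boundary sensitivity cited from [2] can be checked numerically with a toy 1-D classifier (the sigmoid "victim" and the Gaussian perturbation standing in for $h_{\omega}(x, \epsilon)$ are assumptions for illustration only):

```python
import math
import random

def victim_prob(x):
    # toy 1-D binary classifier; decision boundary at x = 0
    return 1.0 / (1.0 + math.exp(-4.0 * x))

def mean_output_shift(x, sigma=0.3, n=5000, seed=0):
    # average |change in predicted probability| under random input
    # perturbation, standing in for x -> x + h_omega(x, eps)
    rng = random.Random(seed)
    base = victim_prob(x)
    return sum(abs(victim_prob(x + rng.gauss(0.0, sigma)) - base)
               for _ in range(n)) / n

near = mean_output_shift(0.05)  # attack-like query, close to the boundary
far = mean_output_shift(2.0)    # benign-like query, far from the boundary
print(near, far)
```

Under these toy settings, the same input perturbation shifts the near-boundary query's output probability far more than the far-away query's, mirroring the behavior described above.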
**Q 1.3**: defense assumes attack query OOD?
**A 1.3**: Thanks for your questions.
As in earlier clarifications, DFME attack query data comprises synthetic out-of-distribution (OOD) data. This is consistently substantiated by a range of existing studies [1,2,3,4]. Due to the attacker's ability to leverage diverse and appropriate data distributions for querying the target model, the exact distribution of these query data remains unknown to defender. Our proposed method perturbs these data and effectively ensures the preservation of model utility and attaining an excellent defense performance. The principles are illustrated in **Q 1.1** and **A 1.1**.
Furthermore, our method **does not rely on the assumption that attack query is OOD** and can be viewed as random smoothing: $T_{\theta_T}^{S}(x) = \int T_{\theta_T}(x + h_{\omega}(x, \epsilon)) d\epsilon$, where $\epsilon \sim N(0, I)$. Compared to learning a single function $T_{\theta_T}(x)$, learning the smoothed function $T_{\theta_T}^{S}(x)$ necessitates a larger number of queries for attacker due to the requirement of calculating the integral. By necessitating a larger query budget, our approach makes it considerably harder for attackers to successfully clone the model.
**Q 2**: I think OOD attack queries will surely degrade the attack performance.
**A**: We would like to clarify that successful model extraction **does not necessarily require in-distribution query data**. As evidenced by existing DFME attacks [3, 4], they employ out-of-distribution (OOD) data for querying the target model. Despite the utilization of such OOD data, these attacks still closely replicated the functionality of target model without degrading attack performance.
**Q3**: theoretical guarantee
**A**: We invite you to refer to **global response** for theoretical analysis. We provide **Figure 1 in rebuttal.pdf in the global response** for visualization.
Reference:
[1] Increasing the Cost of Model Extraction with Calibrated Proof of Work. ICLR 2022
[2] Zero-Shot Knowledge Distillation from a Decision-Based Black-Box Model. ICML 2021
[3] Data-Free Model Extraction. CVPR 2021
[4] Towards Data-Free Model Stealing in a Hard Label Setting. CVPR 2022
[5] Protecting DNNs from Theft using an Ensemble of Diverse Models. ICLR 2021
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification and details. Please make sure to include the promised revisions in the paper.
---
Reply to Comment 1.1.1:
Title: Thanks for your response
Comment: Thank you for getting back to us. We will incorporate the revision as promised in the final version. | Rebuttal 1:
Rebuttal: # Global Response (Theoretical Analysis)
Given the random perturbation applied to the query input, the loss function for extracting the victim model becomes noisy. The attacker has to employ the following noisy loss function to extract the target model:
$\mathcal{L}_{C}^{\Delta}(\theta_C) := KL(T(x + h_{\omega}(x, \epsilon); \theta_T), C(x;\theta_C))$
where $T(x; \theta_T)$ is the target victim model with parameters $\theta_T$, $C(x;\theta_C)$ is the clone model with parameters $\theta_C$, and $h_{\omega}(x, \epsilon)$ is the data-dependent perturbation. $KL$ denotes the KL divergence.
The ground truth loss function without input perturbation (which remains inaccessible to the attackers due to their lack of knowledge regarding the amount of input perturbation) is illustrated below.
$\mathcal{L}_{C}(\theta_C) := KL(T(x; \theta_T), C(x;\theta_C))$
Ideally, the attacker should optimize the following loss:
$\mathcal{L}_{C}^{*} = \min_{\theta_C} \mathcal{L}_{C}(\theta_C)$
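A toy sketch of the gap between these two losses (the two-class logit models and the Gaussian input perturbation are illustrative assumptions, not the paper's setup): even when the clone matches the victim exactly, the attacker's observable loss stays positive, because the victim is only ever evaluated on perturbed inputs.

```python
import math
import random

def kl(p, q):
    # KL(p || q) for discrete distributions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def softmax2(z):
    # two-class softmax over logits (z, -z)
    e0, e1 = math.exp(z), math.exp(-z)
    return [e0 / (e0 + e1), e1 / (e0 + e1)]

def target(x):                 # toy victim T(x; theta_T)
    return softmax2(x)

def clone(x, theta):           # toy clone C(x; theta_C)
    return softmax2(theta * x)

def noisy_loss(x, theta, sigma=0.5, n=2000, seed=0):
    # attacker's loss: the victim is evaluated only on the perturbed input
    rng = random.Random(seed)
    return sum(kl(target(x + rng.gauss(0.0, sigma)), clone(x, theta))
               for _ in range(n)) / n

def true_loss(x, theta):
    # ground-truth loss, inaccessible to the attacker
    return kl(target(x), clone(x, theta))

print(true_loss(1.0, 1.0), noisy_loss(1.0, 1.0))
```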
Due to the noisy loss function, the attacker's loss gradient becomes noisy and biased, as shown in the following:
**Attacker Loss Gradient Modeling**:
$G_t(\theta_C) = \nabla_{\theta_C} \mathcal{L_C}^{\Delta}(\theta_C) = \nabla_{\theta_C} \mathcal{L}_{C}(\theta_C) + B_t(\theta_C) + N_t(\theta_C)$
where:
* $\nabla_{\theta_C} \mathcal{L}_{C}^{\Delta}(\theta_C)$ is the actual gradient adopted by the attacker;
* $\nabla_{\theta_C} \mathcal{L}_{C}(\theta_C)$ is the ground-truth loss gradient without input perturbation, which is inaccessible to the attacker;
* $B_t(\theta_C)$ is the gradient bias due to the noisy loss function introduced by the perturbation generator $h_{\omega}$;
* $N_t(\theta_C)$ is a random variable introduced by the randomness in data samples; for convenience of analysis, the expectation of the noise is assumed to be zero, i.e., $\mathbb{E}\, N_t(\theta_C) = 0$.
The attacker updates the clone model as $\theta_C^{t+1} = \theta_C^t - \alpha G_t$, where $\alpha$ is the learning rate.
**Assumption 1**: We assume the Polyak-Łojasiewicz (PL) condition on the attacker loss function $\mathcal{L}_{C}(\theta_C)$:
$\|\nabla_{\theta_C} \mathcal{L}_{C}(\theta_C)\|^2 \geq 2\omega \left(\mathcal{L}_{C}(\theta_C) - \mathcal{L}_{C}^{*}\right)$
**Assumption 2**: We assume the following smoothness condition for the loss function $\mathcal{L}_{C}(\theta_C)$, following [2]:
$\mathcal{L}_{C}(\theta_C^2) \leq \mathcal{L}_{C}(\theta_C^1) + \langle \nabla_{\theta_C} \mathcal{L}_{C}(\theta_C^1), \theta_C^2 - \theta_C^1 \rangle + \frac{A}{2} \|\theta_C^2 - \theta_C^1\|^2$
**Assumption 3**: Assume bounded gradient bias and randomness:
$\mathbb{E}\|N_t(\theta_C)\|^2 \leq D \|\nabla_{\theta_C}\mathcal{L}_{C}(\theta_C) + B_t(\theta_C)\|^2 + \rho^2$
$\|B_t(\theta_C)\|^2 \leq d\|\nabla_{\theta_C} \mathcal{L}_{C}(\theta_C)\|^2 + \tau^2$ ($0 \leq d < 1$),
where $\omega, A, D, d, \rho, \tau$ are constants.
**Theorem 1**. Assume the attacker loss function $\mathcal{L}_{C}(\theta_C)$ satisfies Assumptions 2 and 3, and that $\alpha \leq \frac{1}{(D+1)A}$. Then the attacker loss function satisfies the following inequality:
$\mathbb{E}[\mathcal{L_C}(\theta_C^{t+1})] \leq \mathcal{L_C}(\theta_C^{t}) + \frac{\alpha}{2}(d-1)|\nabla_{\theta_C} \mathcal{L}_{C}(\theta_C)|^2 + \frac{\alpha}{2}\tau^2 + \frac{\alpha^2 A \rho^2}{2}$
**Proof**: By Assumptions 2 and 3, setting $\theta_C^{1} = \theta_C^{t}$ and $\theta_C^{2} = \theta_C^{t+1}$ with $\theta_C^{t+1} = \theta_C^t - \alpha G_t$:
$\mathbb{E}[\mathcal{L}_{C}(\theta_C^{t+1})] \leq \mathcal{L}_{C}(\theta_C^{t}) - \alpha \langle \nabla_{\theta_C} \mathcal{L}_{C}(\theta_C), \mathbb{E}(G_t) \rangle + \frac{\alpha^2 A}{2} \left(\mathbb{E}\|G_t - \mathbb{E}G_t\|^2 + \|\mathbb{E}G_t\|^2\right)$
$= \mathcal{L}_{C}(\theta_C^{t}) - \alpha \langle \nabla_{\theta_C} \mathcal{L}_{C}(\theta_C), \mathbb{E}(G_t) \rangle + \frac{\alpha^2 A}{2} \left(\mathbb{E}\|N_t\|^2 + \|\nabla_{\theta_C} \mathcal{L}_{C}(\theta_C) + B_t\|^2\right)$
$\leq \mathcal{L}_{C}(\theta_C^{t}) - \alpha \langle \nabla_{\theta_C} \mathcal{L}_{C}(\theta_C), \mathbb{E}(G_t) \rangle + \frac{\alpha^2 A}{2} \left((D+1)\|\nabla_{\theta_C} \mathcal{L}_{C}(\theta_C) + B_t\|^2 + \rho^2\right)$
Applying $\alpha \leq \frac{1}{(D+1)A}$,
$\mathbb{E}[\mathcal{L}_{C}(\theta_C^{t+1})] \leq \mathcal{L}_{C}(\theta_C^{t}) + \frac{\alpha}{2} \left(-2 \langle \nabla_{\theta_C} \mathcal{L}_{C}(\theta_C), \mathbb{E}(G_t) \rangle + \|\nabla_{\theta_C}\mathcal{L}_{C}(\theta_C) + B_t\|^2 \right) + \frac{A\alpha^2\rho^2}{2}$
$= \mathcal{L}_{C}(\theta_C^{t}) + \frac{\alpha}{2} \left(-\|\nabla_{\theta_C} \mathcal{L}_{C}(\theta_C)\|^2 + \|B_t\|^2\right) + \frac{A\alpha^2\rho^2}{2}$
$\leq \mathcal{L}_{C}(\theta_C^{t}) + \frac{\alpha}{2}(d-1) \|\nabla_{\theta_C} \mathcal{L}_{C}(\theta_C)\|^2 + \frac{\alpha}{2}\tau^2 + \frac{A\alpha^2\rho^2}{2}$
**Theorem 2**: Under Assumptions 1-3, the convergence error of the attacker loss function can be estimated as follows:
$L_C^{T} \leq (1 - \alpha \omega(1-d))^T L_C^{0} + \frac{\tau^2 + A\alpha \rho^2}{2\omega(1-d)}$
**Proof**: We define $L_C^t = \mathcal{L}_{C}(\theta_C^t) - \mathcal{L}_{C}^{*}$. Applying Assumption 1 to Theorem 1 yields the following recursion:
$L_C^{t+1} \leq (1 - \alpha \omega(1-d)) L_C^{t} + \frac{\alpha}{2}\tau^2 + \frac{\alpha^2 A\rho^2}{2}$
We set $\kappa = \frac{\tau^2 + A\alpha \rho^2}{2\omega(1-d)}$, the fixed point of this recursion.
Unrolling the recursion then gives:
$L_C^{T} - \kappa \leq (1 - \alpha \omega(1-d))^T(L_C^{0} - \kappa)$
Therefore, $L_C^{T} \leq (1 - \alpha \omega(1-d))^T L_C^{0} + \frac{\tau^2 + A\alpha \rho^2}{2\omega(1-d)}$.
**Remark**: From the above analysis, it becomes evident that the accumulation of gradient estimation errors causes the final estimation error $L_C^{T} := \mathcal{L}_{C}(\theta_C^T) - \mathcal{L}_{C}^{*}$ to deviate from zero.
The first term in the above inequality, $(1 - \alpha \omega(1-d))^T L_C^{0}$, vanishes as $T \rightarrow \infty$.
The deviation arises from the residual term $\frac{\tau^2 + A\alpha \rho^2}{2\omega(1-d)}$, which grows with the gradient bias $\tau$ and the gradient randomness $\rho$. Consequently, the model extracted by the attacker has a larger optimization error.
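The qualitative claim in the remark can be checked on a toy quadratic objective (the objective, step size, and corruption levels are all illustrative assumptions): an uncorrupted gradient drives the loss to the optimum, while adding a bias term $B_t$ and noise $N_t$ leaves a persistent error floor.

```python
import random

def final_loss(bias, noise_std, steps=2000, alpha=0.05, seed=1):
    # gradient descent on L(w) = 0.5 * w^2 (gradient = w, optimum loss 0)
    # with the corrupted gradient G_t = grad L + B_t + N_t from the analysis
    rng = random.Random(seed)
    w = 5.0
    for _ in range(steps):
        g = w + bias + rng.gauss(0.0, noise_std)
        w -= alpha * g
    return 0.5 * w * w   # final loss after T steps (optimum is 0)

clean = final_loss(bias=0.0, noise_std=0.0)
corrupted = final_loss(bias=1.0, noise_std=1.0)
print(clean, corrupted)
```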
Pdf: /pdf/2746af47c3c1af82e532088807e890872bfdf8fa.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Learning Energy-based Model via Dual-MCMC Teaching | Accept (poster) | Summary: This paper proposes a novel method for training and inference of energy-based models (EBMs). The authors note that although the use of a generator as an initializer model may improve MCMC sampling, training an unbiased generator is an open problem. Thus they propose a joint learning framework in which the generator is learned by MLE via posterior sampling. While EBMs have a number of appealing properties, inference with them is difficult; therefore, learning an unbiased generator for faster MCMC sampling from the EBM distribution is a hot topic.
Strengths: The paper is clearly written and well presented. The potential impact of the proposed dual-MCMC teaching is considerable. The problem raised in the paper is timely and mostly uncovered by previous research. The paper provides a number of novel ideas which could encourage future research in this direction. The experiments are very clear and solid, including fair comparison with concurrent works. Remarkably, the numerical results for the proposed method are spectacular.
Weaknesses: 1. Section 2 is clearly written and self-contained, but it is large and has little connection to the methodology proposed in the paper. As the authors reformulate each component described in Section 2, it might be helpful to better explain the benefits of each choice for the proposed method.
2. The objective proposed for learning the EBM includes maximization of a Kullback-Leibler divergence term (equation 7), which is stated to act as a self-critic for the EBM. The grounding beneath this “self-adversarial learning” is not very convincing: the literature referenced by the authors lacks verification of, or formal reasoning behind, this objective. Compared to the objectives for the generator and the inference model, this one does not pose an upper bound for the MLE objective.
3. In Figure 2 in the main text you provide an energy profile after performing $30$ steps (as during training) of MCMC revision on $x$. FID improves from $5.94$ on $x_0$ to $5.15$ on $x_{30}$, yet the energy profile has not converged. It would be more convincing if you provided results for long-run MCMC revision (until the energy profile converges) on the learned EBM to verify it.
4. Please include the training details, including optimization hyperparameters, step sizes for Langevin dynamics, etc.
Typos:
1. In line 156 there is a typo in the marginal distribution of $x$, although inside the brackets you use proper notation.
2. In the line 214 “second” is used instead of “first”.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. It was noticed in [1] that for an EBM distribution with density $\pi(x)$ and a generator $p_{\theta} = p_{\theta}(x|z)p(z)$, sampling the latent variable from $p(z)\pi(g_{\theta}(z))$ and pushing it through $g_{\theta}$ to the ambient space is equivalent to sampling from $\pi(x)$. Why do you propose to perform EBM inference in the ambient space instead of the latent space, given that traversing a high-dimensional space with MCMC sampling is a hard problem? Please elaborate on why you do not take advantage of sampling from the EBM in the latent space of the generator model.
2. The paper largely adopts the method proposed in [2]; nevertheless, on the CIFAR-10 dataset the FID drops from $30.10$ for [2] to $9.26$ for the current paper. What is the major reason for such a dramatic improvement? In the Appendix you provide results for a "simple network" which seems to share its architecture with [2]. It would aid understanding if you provided a comparison of model complexity and/or training time and GPU consumption in addition to the image-quality comparison.
[1] Che, Tong, et al. "Your gan is secretly an energy-based model and you should use discriminator driven latent sampling." Advances in Neural Information Processing Systems 33 (2020): 12275-12287.
[2] Han, Tian, et al. "Divergence triangle for joint training of generator model, energy-based model, and inferential model." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations of the method are obscured. The authors only mention that “this work may share the limitation with other MCMC-based methods in terms of the computational cost” which is a bit vague.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your encouraging feedback and the correction of typos. We shall follow the suggestion and fix the typos in the revision. In the **General Response**, we provide clarifications for concerns regarding the motivation of our method, connection to Divergence Triangle, and computational cost (see also the attached **PDF**).
---
**Q1. Self-adversarial Interpretation of EBM Learning.**
We learn our EBM by minimizing $\min_\alpha KL(p_{\rm data}(x)\| \pi_\alpha(x)) - KL(T^x_{\alpha_t} p_{\theta_t}(x)\|\pi_\alpha(x))$ (Eqn. 8), which actually lower bounds the traditional MLE objective. Such a "$KL - KL$" objective has a self-adversarial interpretation: the energy value for observed examples is learned to decrease, whereas it increases for model samples. Therefore, the model learns to criticize its own samples at the current iteration. On the other hand, the model samples along the MCMC iteration process (Eqn. 3) are adjusted by lowering their energy, which can be viewed as fooling the EBM itself at the current iteration.
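The gradient of such a "$KL - KL$" objective reduces to the familiar contrastive form: lower the energy of observed data while raising it on (MCMC-revised) model samples. A minimal sketch with a hypothetical one-parameter quadratic energy — the toy energy, sample distributions, and all names here are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

# Toy one-parameter energy E_alpha(x) = alpha * x**2 (a stand-in for the EBM network).
def energy(alpha, x):
    return alpha * x ** 2

def ebm_grad(x_data, x_neg):
    """Gradient w.r.t. alpha of: mean E_alpha(data) - mean E_alpha(negatives).

    Descending this gradient lowers the energy of observed data relative to
    the model samples -- the "self-critic" behaviour described above.
    """
    return np.mean(x_data ** 2) - np.mean(x_neg ** 2)

rng = np.random.default_rng(0)
x_data = rng.normal(0.0, 1.0, size=1000)  # observed examples
x_neg = rng.normal(0.0, 2.0, size=1000)   # stand-in for MCMC-revised generator samples

g = ebm_grad(x_data, x_neg)
# The negatives here are too spread out, so g < 0: a gradient-descent step
# raises alpha, increasing the energy of the overly-dispersed model samples.
```

In a full implementation the negatives would come from the short-run Langevin revision of generator samples rather than a fixed Gaussian; the contrastive form of the update is the same.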
**Q2. Long-run Langevin Trajectory.**
We conduct the long-run experiment and show the result in the attached **PDF**, where we run MCMC revision on $x$ for 3000 steps (much longer than 30 training steps). We observe only minor changes during the process, which indicates that the generator model could successfully match the long-run MCMC samples from EBM and approach the target distribution $\pi_\alpha(x)$.
**Q3. Training Hyper-parameters.**
We shall include training details in the Appendix and open-source our implementation along with checkpoints.
**Q4. Traversing Latent Space.**
Thank you for providing such an insightful suggestion. As stated in [1], the data distribution $p_{\rm data}(x)$ and generator distribution $p_\theta(x)$ need to have the same support to avoid the mode-dropping phenomenon. In other words, in the case when the generator model fails to cover some of the modes in data distribution, the latent sampling from such an induced model will also miss these modes. Addressing such an issue would require an additional treatment (Corollary 1 in [1]). Our explicit EBM, on the other hand, allows a further MCMC revision, so even when the generator model only partially covers the data distribution, such MCMC revision would still effectively guide the samples toward the data manifold.
Comparison of sample quality and wall-clock training time (seconds per iteration).
| | Traversing on $\mathbf{z}$ | Traversing on $\mathbf{x}$ |
| :------------: | :------------------------: | :------------------------: |
| FID | 40.35 | 9.26 |
| Time (s) | 1.742 | 1.594 |
| Langevin Steps | 30 | 30 |
Empirically, we conducted an initial experiment where we adapted our method to traverse the induced latent space for EBM sampling, rather than performing MCMC directly in the data space. The results can be found in the table above, where we observe weaker performance from the adapted model. Additionally, such a sampling method increases the training time, as it requires back-propagating the gradient through the generation network. For comparison, we use the same network structure and other hyper-parameters (e.g., Langevin steps), and we shall explore its potential in greater depth in our future study. Thanks again for such an insightful comment.
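For intuition, the ambient-space MCMC revision discussed here is, in the short-run setting, an unadjusted Langevin step on $x$. A toy sketch with an isotropic Gaussian energy, showing how the revision pulls poorly-placed "generator" proposals toward the target mode — the energy, step size, and step count are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

mu = np.array([2.0, -1.0])  # mode of a toy target density pi(x) ∝ exp(-E(x))

def grad_energy(x):
    # Gradient of an isotropic Gaussian energy E(x) = 0.5 * ||x - mu||^2
    return x - mu

def langevin_revision(x0, n_steps=300, eps=0.1):
    """Unadjusted Langevin revision on x (step size and count are illustrative)."""
    x = x0.copy()
    for _ in range(n_steps):
        x = x - 0.5 * eps * grad_energy(x) + np.sqrt(eps) * rng.standard_normal(x.shape)
    return x

# "Generator" proposals that miss the target mode: start far from mu.
x0 = rng.standard_normal((500, 2)) + np.array([-3.0, 3.0])
x_revised = langevin_revision(x0)
# After revision, the sample cloud is centred near mu even though the
# initializer covered none of the target's probability mass.
```

This is the sense in which an explicit EBM can correct a generator that only partially covers the data distribution: the Langevin drift is available everywhere the energy gradient is defined, not only on the generator's support.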
**Q5. Discussion of Limitation.**
The details of our computational cost can be found in **General Response**. We shall add a thorough discussion of the limitation, including our computational cost and the point (provided by **Reviewer** **uj7J**, Q1), in the revision.
---
[1] Che, Tong, Ruixiang Zhang, Jascha Sohl-Dickstein, Hugo Larochelle, Liam Paull, Yuan Cao, and Yoshua Bengio. Your gan is secretly an energy-based model and you should use discriminator driven latent sampling. Advances in Neural Information Processing Systems, 33, 2020.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response and fruitful discussion, especially towards the relations of your method with the local-global MCMC approach. In general I enjoyed reading the paper and continue to support its acceptance.
---
Reply to Comment 1.1.1:
Comment: Thanks for your insightful comments; we are happy to hear that you appreciated our discussion of the local-global MCMC approach.
Please feel free to let us know if you have any questions; we are willing to provide further responses. | Summary: This paper investigates the problem of Maximum Likelihood (ML) estimation for Energy-Based Models (EBM). Building on previous research, the authors propose a novel approach that involves learning a surrogate generative model to initialize the costly Langevin steps. The authors chose a latent-based generative model thus requiring additional Markov Chain Monte Carlo (MCMC) sampling for estimation. This new MCMC procedure is amortized using a simpler third generative model, referred to as the inference model. This work provides a procedure to learn the three models jointly. Through extensive experimental evaluations, the authors demonstrate the effectiveness of their proposed framework. They showcase the benefits of incorporating the additional models and highlight the superiority of their combined algorithm over other existing methods. The results substantiate the value of joint learning, showcasing improved accuracy.
Strengths: - The experiments are well detailed.
- The experiments on Out-of-Distribution Detection (OOD) in App 1.2 make it possible to evaluate the EBM beyond generating samples.
Weaknesses: - The motivation to learn the generator with observed samples is unclear.
- The improved accuracy of this method is not balanced with the computational and memory costs generated.
- The quality of the density estimation task is barely considered.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - One of the main justifications of your learning procedure is to learn the generator using training data. Can you explain why not doing this (as in cooperative learning [1,2]) would introduce a bias? This claim was made multiple times in the main paper.
- As a justification of your revisited densities (or in L129), you say that Langevin steps could transform something unimodal into something multimodal, which is not the case (see for instance [3]), as shown in Fig 2 of Sec 5.2. Is there another justification for those Langevin steps?
- The ablation studies (Sec 5.2 and 5.3) highlight the usefulness of the inference model with surprisingly low computational overhead. What about the memory overhead ?
[1] Jianwen Xie, Yang Lu, Ruiqi Gao, Song-Chun Zhu, and Ying Nian Wu. Cooperative training of descriptor and generator networks. IEEE transactions on pattern analysis and machine intelligence, 42(1):27–45, 2018.
[2] Jianwen Xie, Zilong Zheng, and Ping Li. Learning energy-based model with variational auto-encoder as amortized sampler. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 10441–10451, 2021
[3] Véronique Gayrard, Anton Bovier, Markus Klein, Metastability in reversible diffusion processes II: precise asymptotics for small eigenvalues. J. Eur. Math. Soc. 7 (2005), no. 1, pp. 69–99
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your detailed reviews, and we appreciate the time you spent on reviewing our paper. In **General Response**, we describe the motivation behind our method and discuss the computational and memory costs (see also the attached **PDF**).
---
**Q1. Density Estimation.**
Thanks for the note on the density estimation task. Evaluating the density of learned EBM requires the computation of intractable normalization constant (or partition function), which can be computationally expensive and non-exact as observed in [1]. Therefore, we consider standard metrics, such as FID and IS, for the sample quality of our EBM. We also consider out-of-distribution detection for EBM evaluation (please see *Out-of-Distribution Detection*, Section 1.2, Table 3 in Appendix).
**Q2. Biased-learned Generator.**
Our generator model is learned to match both the EBM and the true data distribution. In the absence of observed training data, the generator can only learn from EBM samples, and any approximation error of the EBM may carry over to the generator learning. Consequently, this leads to a biased generator, which, in turn, affects EBM learning by providing sub-optimal initial points. This is particularly true in the early stage of learning. To demonstrate, we adapt our model and learn the generator without using true data. As shown in Figure 1 in the attached **PDF**, such a biased generator may generate noisy and less realistic samples, resulting in sub-optimal EBM training and, consequently, overall weaker generation performance (see, e.g., cooperative-style baselines in *Image Modelling*, Section 5.1, Table 1).
**Q3. Justification of Langevin Steps.**
The Langevin steps are intended to refine the direct samples obtained from the generator and inference model, guiding them towards the target distributions. For example, in L129 the starting (reference) distribution is the unimodal Gaussian $q_\phi(z|x)$; using K Langevin steps drives this distribution towards the exact generator posterior (i.e., $p_\theta(z|x)$, which is the target distribution of the latent MCMC and can be multi-modal). This is based on the monotonicity property of KL [2], i.e., $KL(T_{\theta_t}^{z}q_{\phi_t}(z|x)||p_{\theta_t}(z|x))\le KL(q_{\phi_t}(z|x)||p_{\theta_t}(z|x))$.
Figure 2 in Section 5.2 is shown to demonstrate the learned generator and inference model. If they are well-learned and successfully absorb the corresponding MCMC revisions, then they should generate ancestral samples that are on (or lie close to) the major modes of the target distributions (i.e., the stationary distributions of the MCMC transitions). For example, for the latent MCMC revision (Figure 2, right panel), the samples exhibit only mild pixel changes along the trajectory, indicating that the inference model $q_\phi(z|x)$ is well-learned and close to the stationary distribution, i.e., $KL(T_{\hat{\theta}}^{z}q_{\hat{\phi}}(z|x)||q_{\hat{\phi}}(z|x))\rightarrow 0$ ($\hat{\theta}$ and $\hat{\phi}$ denote the learned generator and inference models). This phenomenon (albeit within the realm of image space) is also demonstrated in Section 6.1, Figure 7.b in [3].
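The latent-space revision described here can be illustrated on a toy linear-Gaussian generator, where the posterior $p_\theta(z|x)$ is available in closed form so the Langevin chain can be checked against it. The model, step size, and step count below are all illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear-Gaussian generator: z ~ N(0, 1), x = w * z + N(0, sigma^2).
# For this model the posterior p(z | x) is Gaussian with known moments.
w, sigma = 2.0, 0.5
x_obs = 1.5

def grad_log_posterior(z):
    # d/dz [ log p(z) + log p(x_obs | z) ]
    return -z + w * (x_obs - w * z) / sigma ** 2

def posterior_langevin(z0, n_steps=500, eps=0.05):
    """Latent-space Langevin toward p(z | x); step size/count are illustrative."""
    z = z0.copy()
    for _ in range(n_steps):
        z = z + 0.5 * eps * grad_log_posterior(z) + np.sqrt(eps) * rng.standard_normal(z.shape)
    return z

# Initialize many parallel chains from a crude "inference model" guess N(0, 1).
z = posterior_langevin(rng.standard_normal(2000))

# Closed-form posterior moments of the toy model, for comparison.
precision = 1.0 + w ** 2 / sigma ** 2             # = 17
post_mean = (w * x_obs / sigma ** 2) / precision  # = 12/17
```

In the paper's setting the inference model plays the role of the initializer $z_0$, and a well-learned one should leave the chain nearly stationary from the start, mirroring the mild changes seen along the latent trajectory in Figure 2.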
We humbly request that you could reconsider the decision given our response. Please feel free to let us know if you have more questions about the paper. We will try our best to address your concerns. Thank you!
---
[1] Du, Yilun, and Igor Mordatch. Implicit generation and generalization in energy-based models. Advances in Neural Information Processing Systems, 32, 2019.
[2] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. John Wiley \& Sons, 1991.
[3] Jianwen Xie, Zilong Zheng, and Ping Li. Learning energy-based model with variational auto-encoder as amortized sampler. In Proceedings of the AAAI Conference on Artificial Intelligence, 2021
---
Rebuttal Comment 1.1:
Comment: I first want to thank the authors for their insightful rebuttal. Based on the rebuttal as well as the feedback provided by the other reviewers, I still have some questions regarding the paper.
**The suggestions seem ad-hoc to the reviewers**
I agree with reviewers uj7J and Rjbi that many contributions of this paper seem ad-hoc and lack motivation. I believe the main reason is that (as pointed out by reviewers uj7J and XcPg) the paper follows the probabilistic framework provided by the Divergence Triangle paper. I think that introducing these new ideas by highlighting the contrast with DT (as in the general answer, which perfectly highlights the importance of the revised densities) would make the paper much easier to read.
**The density estimation task**
I like that the authors provided long-run samples from the EBM, which suggest a good density estimate. However, I disagree that the unnormalized density is a limitation for density-related benchmarks. Not only is the unnormalized likelihood sufficient for many benchmarks, but looking at a 2D estimate on the checkerboard, the spiral distribution, or any multi-modal distribution is now standard in the EBM community.
**Biased generator**
In your answer, you say that "any approximation error of the EBM may carry over to the generator learning" however isn't the goal of this first MCMC chain to sample the EBM? In other words, it seems important that an approximation error on the EBM should propagate to the generator.
I am happy to see that the authors illustrated their point through Fig. 1 of the rebuttal PDF. However, I think that the low resolution of the image as well as the lack of objective metrics don't allow for an accurate comparison.
**Langevin sampling**
I think there is a misunderstanding of my review here. What I mean is that if the support of $q_{\phi}(z | x)$ is included in the support of one of the modes of $p_{\theta}(z | x)$, then a single Langevin chain initialized with this generator definitely cannot properly mix between the modes of $p_{\theta}(z | x)$, leading to biased learning. This is exactly what you illustrate in the second part of your answer and in Fig. 2 of the main paper.
This lack of proper mixing requires many parallel MCMC chains to cover all the modes of $p_{\theta}(z | x)$. This could be fixed by using mixing MCMC methods such as global-local MCMC algorithms [1,2] (which could leverage the tractable generator as a global proposal), as was recently done in [3] for direct MLE on EBMs.
**Conclusion**
Overall, I think the motivation of this paper is not very clear as the connection with DT should be highlighted. However, my review indeed seems severe as the authors clearly proved the superiority of their method in a fair and illustrated way. I will increase my score to 5 and I'm ready to increase it again depending on the answer given by the other reviewers or if some density experiments are appended. I think that the discussion on Langevin sampling is a bit out of the scope of this paper (especially if the latent space is not very multimodal which I would like to see).
[1] Gabrié, M., Rotskoff, G. M., & Vanden-Eijnden, E. (2022). Adaptive Monte Carlo augmented with normalizing flows. Proceedings of the National Academy of Sciences, 119(10), e2109420119. doi:10.1073/pnas.2109420119
[2] Samsonov, S., Lagutin, E., Gabrié, M., Durmus, A., Naumov, A., & Moulines, E. (2022). Local-Global MCMC kernels: the best of both worlds. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, & A. Oh (Eds.), Advances in Neural Information Processing Systems (Vol. 35, pp. 5178–5193).
[3] Grenioux, L., Moulines, É., & Gabrié, M. (2023). Balanced Training of Energy-Based Models with Adaptive Flow Sampling. arXiv preprint arXiv:2306.00684.
---
Reply to Comment 1.1.1:
Title: New Response for Questions
Comment: Thanks for your comments. Please see our answers to the new questions below.
---
**Q1. Clarification of Generator Learning**
Sorry for the confusion. For clarification, we begin with a simple analogy of a "teacher" (i.e., the EBM) and a "student" (i.e., the generator model), similar to Section 1.1 in [c]. In Cooperative Learning [c], the "teacher" directly learns from the "textbook" (i.e., true data), while the "student" can only learn from the "teacher's" advice.
In contrast, we learn both the generator and the EBM to match the true data distribution; intuitively, a generator model that also fits the observed data should provide a more informative starting point for EBM learning. In terms of the analogy, the "student" now has access to the "textbook" and thus can be *"self-taught"*, but still needs to take revisions from the "teacher." Our additional experiment in the attached **PDF** (Figure 1) demonstrates that our generator model produces better image syntheses than a generator learned without observed training data (FID score at 5K iterations, 39.25 vs. 58.74).
**Q2. Langevin Sampling and Density Estimation Task.**
[We have kindly asked the ACs to forward the **anonymous link** as required by the regulation of NeurIPS. ]
Thanks for the new references. We agree that MCMC methods, such as HMC or Langevin dynamics, can be challenging to use in practice as "convergent" samplers. This is a long-standing problem, as the movement/proposal in each step can be "local" (as indicated by [1,2]), which in turn results in long mixing times and ineffective traversal of modes, especially in the realm of high-dimensional multi-modal data. In this work, we mainly focus on the short-run, non-convergent, non-persistent Langevin sampler due to its simplicity, efficiency, and ability to facilitate fair model comparison.
Such short-run Langevin samplers can be helpful in practice for learning realistic samples (see, e.g., *Image Modelling*, Section 5.1) but can also lead to limited EBM density estimation (see the 3rd column of the *Kernel Density Estimation (KDE) experiment* in Figure 1 from the **anonymous link**). Such a phenomenon is shared by non-convergent samplers and is studied in [a,b] (see, e.g., Figure 6 in [a] and Section 4.1 in [b]). As shown in Tables 5 and 6 in *Ablation Studies*, increasing the number of Langevin transition steps during training renders a better sampler, resulting in improved generation/reconstruction quality and density estimation (see also the 2nd and 4th columns in Figure 1 from the **anonymous link**).
The suggested global-local MCMC algorithm is interesting and inspirational. Based on our understanding (apologies in advance if we miss the key points in those papers), our generator can be viewed as an unadjusted "global" proposal to facilitate the "local" Langevin transition (similarly, the inference model can be viewed as an unadjusted "global" proposal to facilitate the "local" latent posterior Langevin transition). However, unlike the NF-based proposal, which features a tractable log-likelihood, the generator proposal in our work has a lower-dimensional latent space and can be intractable, as the generator is not bijective like NF-based models. This can make direct use of the i-SIR framework [2] challenging (e.g., computation of importance weights), since computing the marginal generator distribution $p_\theta(x)$ is typically intractable and costly due to the integration over latent variables. The reference [3], along with the related mixing MCMC variants mentioned therein, certainly sheds light on and has the potential to further improve the existing Langevin samplers used in our framework. For example, the local Langevin samplers could be interleaved between global generator proposals to achieve better multi-modal traversal and better negative samples from the EBM. We shall leave a deeper exploration along this direction for our future study.
[Note: we also want to kindly point out that the reference [3], the v1 version, is uploaded to ArXiv **after** our NeurIPS submission].
Overall, thanks again for such helpful suggestions. We shall cite them in the revision and acknowledge them as effective and generic ways to improve standard MCMC samplers, especially in those complex, multi-modal scenarios.
---
[a] Nijkamp, Erik, Mitch Hill, Tian Han, Song-Chun Zhu, and Ying Nian Wu. "On the anatomy of mcmc-based maximum likelihood learning of energy-based models." In Proceedings of the AAAI Conference on Artificial Intelligence, 2020.
[b] Nijkamp, Erik, Mitch Hill, Song-Chun Zhu, and Ying Nian Wu. "Learning non-convergent non-persistent short-run MCMC toward energy-based model." Advances in Neural Information Processing Systems 32, 2019.
[c] Xie, Jianwen, Yang Lu, Ruiqi Gao, and Ying Nian Wu. "Cooperative learning of energy-based model and latent variable model via mcmc teaching." In Proceedings of the AAAI Conference on Artificial Intelligence, 2018. | Summary: This work presents a new approach for training energy-based models (EBMs). These are commonly trained by approximating the maximum-likelihood estimate of the model parameters. This is done by minimising a certain Kullback--Leibler divergence via stochastic-gradient descent. To estimate the necessarily gradient, one must approximate an expectation w.r.t. the law of the data under the energy-based model. This is typically done via an MCMC algorithm.
The present work attempts to speed up this procedure by training an additional surrogate model (the "generator" model) from which the MCMC chain can be initialised (so that fewer MCMC iterations are needed). This generator model and the EBM are trained as part of a larger scheme which additionally trains an inference model (i.e. the posterior distribution of the latent variables given the observed data).
-----------------------
EDIT (2023-09-04): I have now read all the reviews and rebuttals. All my questions have been addressed. I continue to be happy for this paper to be published.
Strengths: 1. The method appears to be sound and novel.
2. The paper is mostly well written.
Weaknesses: Main comments:
My expertise with training EBMs in this fashion is too limited to offer deeper insights. However:
1. Some of the objectives used for training the three models appear a little ad-hoc and their motivation/justification sometimes leans on convergence of the MCMC chains (which seems unlikely to occur in practice given the small number of MCMC iterations used). At the very least, some added justification/derivation of these objectives would help the reader gain more intuition for the method.
2. I'd ask the authors to confirm that the numerical comparisons are fair in the sense that all algorithms have roughly the same computational cost (see question below).
3. I think it would help to have some (brief/high-level) pseudo code in the main paper.
Minor comments/typos:
L114: grammar in "such Langevin process"
L180: are the KL divergences equivalent to the marginal version or is the /minimisation/ of these KL divergences equivalent to minimising the marginal divergences?
L152--160: I understand what the authors mean with the "transition-kernel" notation. But I think the notation/explanation in this paragraph needs to be improved. For example, in Line 156, the stated expression is referred to as the "marginal distribution on $x$" but it clearly represents a distribution on the joint space of $(x, z)$. I would state the formal definitions (i.e. those found in Lines 158 and 160) right from the start to enhance clarity. It might also be worth reminding the reader that $p_\theta(x)$ is the marginal of $p(z)p_\theta(x|z)$.
L256--257: the MCMC sampling -> MCMC sampling
- there are sometimes missing spaces between "Eq" and the equation number and between "Sec." and the section number.
- punctuation is needed even if a sentence ends in a (displayed) equation
- the font size in most of the figures is too small (especially in Figure 3) and axis labels are sometimes missing
- it may just be my lack of familiarity with the literature in this area but is it clear to the reader what "explaining away inference" means in this specific context? If not, maybe it'd be worth adding a brief explanation.
- table captions normally go /above/ tables (unless the NeurIPS style guide says otherwise)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In Table 1 (and also in the numerical results from Sections 5.2 and 5.3), is the computational cost of all algorithms comparable?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your supportive comments and all the corrections, such as typos, formats, and captions, and we shall correct them in our final version. Please also find a brief description and model comparison in the **General Response** and the attached **PDF**.
----
**Q1. Convergence of the MCMC Chain.**
The convergence analysis of MCMC chains is mainly used for asymptotic theoretical understanding and is meant to establish the connection with standard maximum likelihood estimation (MLE) and other relevant models. In practice, it can be costly and even infeasible to run a mixed, convergent MCMC sampler in each iteration. As such, a short-run sampling strategy (i.e., using only a small number of sampling steps) is commonly used and can provide meaningful learning signals, as observed in [1]. We add an experiment with long-run Langevin traversal (with $3,000$ steps, please see Figure 2 in the attached **PDF**), where only minor changes can be observed. This suggests that the generator, though trained with short-run MCMC revision, essentially learns to match the long-run (i.e., near-convergent) MCMC samples.
**Q2. Computation Cost of All Algorithms.**
We follow the standard evaluation protocol in generative modeling (e.g., [2, 3] and others) and report the best-performing results for the baseline models. MCMC-based methods typically involve an inner loop for sampling, which introduces computational overhead compared to variational-based or adversarial methods (see the comparison of computational cost in **General Response** and the attached **PDF**). However, we note that our proposed joint learning scheme allows complementary models (e.g., inference model) to act as informative starting points and jump-start MCMC procedures (e.g., MCMC sampling on generator posterior), making them more efficient and effective than other noise-initialized MCMC methods. For example, as demonstrated in *Analysis of Inference Model* (Section 5.3), the inference accuracy of our 10-step MCMC revision with inference model initializer can be even better than 30-step, noise-initialized MCMC sampling.
**Q3. Pseudo Code, KL Divergences Equivalence, Notation and Expression.**
Thanks for your valuable suggestions. The detailed derivation for the KL divergence equivalence can be found in *Methodology* (Section 2.2 in Appendix). We shall follow your advice to change the notations and expressions (e.g., marginal distribution in L156) in the revision, and we shall open-source our implementation.
**Q4. Explaining-away Inference.**
Sorry for the confusion. The "explaining-away inference" refers to the situation where the knowledge of one latent variable reduces the influence or importance of another latent variable in explaining observed data. We employ this term here to emphasize that during the Langevin process, latent factors compete with each other to explain the given training example. We shall make it more clear in the revision.
---
[1] Erik Nijkamp, Mitch Hill, Song-Chun Zhu, and Ying Nian Wu. Learning non-convergent non-persistent short-run mcmc toward energy-based model. Advances in Neural Information Processing Systems, 32, 2019.
[2] Zhisheng Xiao, Karsten Kreis, Jan Kautz, and Arash Vahdat. Vaebm: A symbiosis between variational autoencoders and energy-based models. In International Conference on Learning Representations, 2020.
[3] Gao, Ruiqi, Yang Song, Ben Poole, Ying Nian Wu, and Diederik P. Kingma. Learning Energy-Based Models by Diffusion Recovery Likelihood. In International Conference on Learning Representations, 2021.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: I want to thank the authors for their detailed response. I think I'd be happy to see this published.
That said, this is somewhat outside my field of expertise (as my low confidence score indicates) e.g., I was unaware of the divergence-triangle paper.
---
Reply to Comment 1.1.1:
Comment: Thanks for your comments and the time you spent reviewing our paper.
We are willing to provide further responses if you have more questions about our paper. | Summary: This work investigates learning EBMs for image data using auxiliary generator and inference models. The model generates samples by drawing a latent normal vector, passing it through the generator, and refining the generator sample using MCMC with the EBM. An inference network, which predicts latent vectors from images, is used to assist with training the EBM and generator. The key innovation is to update both generator samples using image space MCMC and inferred latent vectors for data using latent space MCMC. This allows the generator to be efficiently trained using both revised MCMC samples, as in Cooperative Learning, and revised latent codes for data, which is unique to this work. Training the generator to match both EBM and data samples is crucial for improving performance beyond the results achieved by related methods. Training algorithms for the EBM, generator, and inference network are presented using several KL terms between different joint distributions of images and latent codes. Experiments show strong EBM synthesis results for CIFAR-10, CelebA, CelebA HQ, and LSUN-Church-64.
Strengths: * The work identifies and addresses a key limitation of the Cooperative Learning framework, which is the fact that the generator is trained to reconstruct only refined EBM samples. This can limit the potential of the generator since EBM samples are less realistic than true data. While other works such as Divergence Triangle train the generator to reconstruct data images using inferred codes for observed data, these works lack an MCMC refinement stage which is crucial for improving sample quality. This work bridges the gap between Cooperative Learning style methods and Divergence Triangle style methods to incorporate both MCMC sampling and generator data reconstruction into a single learning framework.
* Experimental results show significant benefits from the proposed method compared to a variety of other EBM learning methods.
Weaknesses: * The loss functions appear to be significantly inspired by the Divergence Triangle, but the paper lacks a thorough comparison between the proposed method and Divergence Triangle. Such a discussion would help elucidate the differences between the proposed method and prior work. To my understanding, the primary differences are 1) replace all LHS of KL terms with distributions defined by short-run MCMC distributions with frozen model parameters, 2) add KL($\tilde{P} || P$) and KL($\tilde{Q} || Q$) terms to the generator and inference network updates respectively.
* The motivation of each loss term is not entirely clear, which makes the proposed loss gradients appear somewhat ad-hoc. An intuitive explanation of the role of each KL term would be very helpful for the reader. A clearer discussion of connections to the Divergence Triangle could also help with understanding the KL terms.
* The proposed method is somewhat complex, and the loss functions are defined sequentially given the current fixed model weights. Such an approach is common to Cooperative Learning and related methods, while amortized methods like Divergence Triangle have a loss function which can be written in closed form without reference to fixed model weights. While this is not a major issue in my view, the reliance on loss defined by a sequence of fixed model weights as opposed to a single loss function should be discussed as a limitation of the proposed method.
Minor Problems
* The related work [12] uses MCMC sampling initialized from the generator to update the EBM, similar to [36, 37], instead of direct generator samples as in [11, 9, 19].
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could the authors provide a detailed comparison between their proposed method and the Divergence Triangle?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: A broader impact section was not included. Computational limitations are very briefly discussed, but more thorough details about runtime and compute costs compared to related methods would strengthen the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful feedback. We appreciate the correction of the references and shall fix them in our final manuscript.
We provide the clarification in **General Response** for
- Connection with Divergence Triangle (**G.2**)
- Motivation of our method (**G.1**)
- Computational cost (**G.3**) (see also the attached **PDF**)
----
**Q1. Closed-form Objective.**
You are correct. Our MCMC-teaching-based framework indeed cannot be learned via a single closed-form objective: the MCMC revision must be conducted with fixed model weights. We shall include this point and a thorough discussion of this limitation in our revision.
---
Rebuttal Comment 1.1:
Title: Thanks for the responses
Comment: Thanks to the authors for their responses. I have read the responses to me and the other reviewers and find that they provide satisfactory answers. I will keep my score.
---
Reply to Comment 1.1.1:
Comment: We thank you for your valuable comments.
Please let us know if you have any further questions; we will be happy to provide further clarification.
Rebuttal: We thank all the reviewers for their valuable comments. The reviewers note that (1) our idea is *novel* (**Reviewer** **uj7J**, **RJbi**, **XcPg**), (2) our experiments are *clear* and *solid* (**Reviewer** **uj7J**, **iX9Y**, **XcPg**), (3) our paper is *well-written* (**Reviewer RJbi**, **XcPg**). We are also encouraged to hear that you found our paper addresses a *key limitation* and an *actual problem* for future research (**Reviewer uj7J,** **XcPg**).
We provide the general response for the common questions below and respond to each reviewer in their individual comments.
We refer to the attached **PDF** for additional ablation results.
----
### G.1 Motivation (Reviewer **uj7J, RJbi, iX9Y, XcPg**).
**Dual-MCMC Teaching.** This paper studies the learning of the energy-based model (EBM). An EBM is typically learned through MLE, which involves MCMC sampling and is known to be challenging in practice. To tackle this challenge, we introduce a probabilistic learning framework in which a generator and an inference model are jointly trained with the EBM.
Firstly, our joint learning scheme is designed to facilitate the intractable learning of the EBM and generator: the generator and inference models are learned to jump-start MCMC sampling of the EBM and the generator posterior distribution, respectively. Such a joint learning scheme is related to the **Divergence Triangle** work [1] (as noted by **Reviewer** **uj7J**, **XcPg**). However, DT relies on ancestral samples from both the generator and inference model, which may be sub-optimal, and its learning can be fundamentally different (please refer to **G.2** for a detailed breakdown and comparison). Secondly, the KL terms used in learning the generator and inference model are designed to integrate two kinds of MCMC-revised samples (one in the latent space and another in the image space) to steer the generator and inference learning (as highlighted by **Reviewer uj7J**). Consequently, this approach enhances EBM learning by providing improved initial samples for MCMC sampling. We shall add a detailed discussion of the motivation in our final version.
### G.2 Connection to Divergence Triangle (DT) [1] (Reviewer **uj7J, XcPg**).
We denote $Q$ for $Q_{\phi}(x, z)$ and other joint densities similarly, for notational simplicity. Recall that $Q=q_\phi(z|x)p_{\rm data}(x)$, $P=p_\theta(x|z)p(z)$, $\Pi=\pi_\alpha(x)q_\phi(z|x)$, and $\tilde{Q}=p_{\rm data}(x)T^z_{\theta_t} q_{\phi_t}(z|x)$, $\tilde{P}=T^x_{\alpha_t} p_{\theta_t}(x|z)p(z)$, where $T^z_{\theta_t}(\cdot)$ denotes the Markov transition kernel of finite-step Langevin dynamics that samples $z$ from $p_{\theta_t}(z|x)$, and $T^x_{\alpha_t}(\cdot)$ denotes the transition kernel that samples $x$ from $\pi_{\alpha_t}(x)$.
- For learning the EBM ($\alpha$), we use $\min_\alpha KL(\tilde{Q}\| \Pi) - KL(\tilde{P}\|\Pi)$ (Eqn. 7), while DT considers $\min_\alpha KL(Q\| \Pi) - KL(P\|\Pi)$. Specifically, we utilize the MCMC-revised samples (i.e., the joint densities $\tilde{Q}$ and $\tilde{P}$) for learning the EBM, while DT only considers ancestral samples (i.e., the joint densities $Q$ and $P$). Such MCMC-revised samples should render a more effectively learned EBM, as model samples revised by the EBM itself can be more accurate than samples drawn directly from the generator, i.e., $KL(T^x_{\alpha_t}p_{\theta_t}(x)\|\pi_{\alpha_t}(x))\le KL(p_{\theta_t}(x)\|\pi_{\alpha_t}(x))$.
- For learning the generator model ($\theta$), the KL terms in our work are $\min_\theta KL(\tilde{Q}\| P) + KL(\tilde{P}\|P)$ (Eqn. 10), and DT's are $\min_\theta KL(Q\|P) + KL(P\| \Pi)$. In comparison, on training observations, our generator is learned with revised latent samples ($KL(\tilde{Q}\| P)$ vs. $KL(Q\|P)$). Such samples can be more accurate with respect to the generator posterior, i.e., $KL(T^z_{\theta_t}q_{\phi_t}(z|x)\|p_{\theta_t}(z|x))\le KL(q_{\phi_t}(z|x)\|p_{\theta_t}(z|x))$. On generated model samples, our model, i.e., $KL(\tilde{P}\|P)$, learns to match the MCMC-revised samples from $\pi_\alpha(x)$, while DT intends to chase the major modes of $\pi_\alpha(x)$ through variational approximation (i.e., $KL(P\| \Pi)$).
- For learning the inference model ($\phi$), we utilize $\min_\phi KL(\tilde{Q}\| Q) + KL(\tilde{P}\|\Pi)$ (Eqn. 15), and DT uses $\min_\phi KL(Q\|P) + KL(P\|\Pi)$. The two learning schemes can be very different. On the one hand, on training observations, our model, i.e., $KL(\tilde{Q}\| Q)$, is trained by amortizing the latent-space MCMC, while DT follows variational inference as in VAEs (i.e., $KL(Q\|P)$). On the other hand, on generated model samples, our inference model is learned to match the revised generator samples from $\pi_\alpha(x)$ (i.e., $KL(\tilde{P}\|\Pi)$), while DT directly considers ancestral generator samples, i.e., $KL(P\|\Pi)$, which can be sub-optimal.
### G.3 Computational and Memory Cost (Reviewer **uj7J, RJbi, iX9Y, XcPg**).
Our learning algorithm belongs to the family of MCMC-based methods and can indeed incur computational overhead due to its iterative nature, compared to variational or adversarial methods. We provide further analysis by computing the wall-clock training time and parameter complexity for the related work Divergence Triangle [1] (variational and adversarial joint training without MCMC) and for our model (see Table 1 in the attached **PDF**): the proposed method requires more training time but also renders significantly better performance. In terms of memory cost, it is important to note that we did not observe further improvement by merely increasing parameter complexity (see *Parameter Efficiency* in Section 1.1 of the Appendix). This emphasizes the effectiveness of our learning algorithm.
---
[1] Tian Han, Erik Nijkamp, Xiaolin Fang, Mitch Hill, Song-Chun Zhu, and Ying Nian Wu. Divergence triangle for joint training of generator model, energy-based model, and inferential model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019
Pdf: /pdf/f259488fbed0d5dc6d3095e0fc9e4433950a5a1a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Language Model Tokenizers Introduce Unfairness Between Languages | Accept (poster) | Summary: The paper studies the discrepancy in tokenization length across different languages. It shows the unfairness of utilizing tokenization in different languages due to the cost, latency, and long-term language dependencies. The paper evaluates the unfairness across different tokenization strategies and model architectures. The paper's motivation is clear, and it provides an extensive analysis of multiple languages, showing the significance of the study from both research and practical perspectives.
Strengths: **Originality**
- The paper introduces an important issue on utilizing LLM in different languages and investigates the aspects of cost, latency, and long-dependency. The problem formulation is important for multilingual researchers and practitioners in deciding what tokenization strategies are cost-effective.
**Quality / Significance**
- The paper shows a comprehensive analysis of how the different tokenization on different languages was done in different tokenization strategies (byte-based, Unicode-based, and subword-based) on the wide range of pre-trained LLM (on encoder-only, encoder-decoder, and decoder-only) and provides the quantitative evidence on the processing time of each language compared to English.
**Clarity**
- The paper is well-written, and the examples (in Japanese and Shan) give a high-level idea of why the problem introduced in the paper is important.
Weaknesses: - Besides the aspects mentioned by the paper, it is unclear if the choice of the tokenization strategies would impact the performance of LLM in particular languages.
- The suggestion of "building a multilingually fair parallel corpus" is vague. It would be great if the authors could give a more straightforward and practical solution.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: **Questions**
- Does the tokenization lengths impact the performance on languages (e.g., Shan)?
**Typo**
Line 177: XML-R => XLM-R
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Besides the aspects mentioned by the paper, it is unclear if the choice of the tokenization strategies would impact the performance of LLM in particular languages.
Tokenization does also affect downstream performance. In the Related Works section, we mention the work by Zhang et al. (2022), which shows that a balanced tokenizer corpus results in better translation performance. The works of Hofmann et al. (2021, 2022) also show that larger tokens, as well as tokenization informed by morphological structure, result in better downstream performance. We will add them to our Related Works section and will further discuss the downstream implications of the tokenization disparities.
> The suggestion of "building a multilingually fair parallel corpus" is vague. It would be great if the authors could give a more straightforward and practical solution.
In order to build a multilingually fair tokenizer, one first needs a parallel corpus, so as to ensure that the content against which tokenization lengths are measured is indeed comparable. FLORES-200 is big enough for evaluating tokenizers, but is too small for training them. However, building a large parallel corpus for so many languages is extremely difficult. In our "On the development of a multilingually fair tokenizer" comment we offer a way to approximate such a multilingual parallel corpus via many bilingual parallel corpora.
> Does the tokenization lengths impact the performance on languages (e.g., Shan)?
Decoupling the effects of the tokenization and the amount of data for a specific language is not possible. However, recent work by Liu et al. (2023) showed that the performance of language models decreases as input sizes grow. Hence, it is very likely that the longer tokenizations for some languages (e.g. Shan) do hurt performance in those languages, all else being equal.
--
Liu, Nelson F., et al. "Lost in the middle: How language models use long contexts." arXiv preprint arXiv:2307.03172. (2023)
Zhang, Shiyue, et al. “How robust is neural machine translation to language imbalance in multilingual tokenizer training?” Biennial Conference of the Association for Machine Translation in the Americas (2022)
Hofmann, Valentin, et al. “Superbizarre Is Not Superb: Derivational Morphology Improves BERT’s Interpretation of Complex Words”. Annual Meeting of the Association for Computational Linguistics (2021)
Hofmann, Valentin, et al. “An Embarrassingly Simple Method to Mitigate Undesirable Properties of Pretrained Language Model Tokenizers”. Annual Meeting of the Association for Computational Linguistics (2022)
---
Rebuttal Comment 1.1:
Comment: Thank you for the comments. I will keep my score unchanged. | Summary: Tokenization is a crucial, yet underappreciated component of language models. This paper presents a welcome investigation into the effects of tokenization choices across languages. The paper introduces the notion of premium for language A relative to B which measures the ratio of the average number of tokens for translations of the same sentence in the two languages. The authors show that the premium relative to English can be over 10 for certain low resource languages.
The paper then highlights the negative effects of having a high premium, which include: higher cost, higher latency, and fitting less content in the fixed context window of an LM. Based on these findings, the authors make a case for developing multilingually fair tokenizers where the premiums are close to one across languages.
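The premium described above reduces to a ratio of token counts over a parallel corpus. A minimal sketch of that computation (the `tokenize` callable is a stand-in for any real tokenizer; the paper's exact counting protocol is assumed, not quoted):

```python
def tokenization_premium(tokenize, sentences_a, sentences_b):
    """Premium of language A relative to B: ratio of total (equivalently,
    average) token counts over parallel sentences. A value > 1 means
    language A needs more tokens than B to express the same content."""
    tokens_a = sum(len(tokenize(s)) for s in sentences_a)
    tokens_b = sum(len(tokenize(s)) for s in sentences_b)
    return tokens_a / tokens_b

# Toy illustration with a character-level "tokenizer" on a one-sentence corpus.
print(tokenization_premium(list, ["hello world"], ["hallo"]))  # → 2.2
```

With a real subword tokenizer, `tokenize` would be replaced by its encoding function and the sentence lists by the two languages' sides of a parallel corpus such as FLORES-200.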
Strengths: 1. Tokenization is, in my view, an underappreciated problem and this work provides an insightful analysis on the effects of the tokenizer selection.
2. The discrepancies presented in the paper across languages are surprisingly large. As such, this paper may stimulate new research in the field to address the discrepancies.
3. The paper is very well written and a pleasure to read.
Weaknesses: 1. The paper doesn’t discuss how to achieve the goal of training multilingually fair tokenizers, apart from providing an overview of previous approaches to developing multilingual tokenizers. An obvious solution is to reserve more tokens for the languages with high premiums. However, this would mean that the number of tokens for other languages would need to be reduced to keep the vocabulary size fixed. This in turn will likely hurt the performance on high-resource languages such as English and it may even end up hurting the performance for the rest of the languages if there is less cross-lingual transfer from high-resource languages (NB: The paper states that “multilingual models struggle to deliver on the promises of deep transfer learning” but I’m not aware of works suggesting that there’s zero transfer happening across languages.)
2. The paper lacks discussion on the potential negative effects of using a fair tokenizer (such as the one presented above) and discussion on the potential means of developing fair tokenizers. Yet, it argues strongly why the switch to fair tokenizers is necessary. This makes the paper seem a bit like a position paper (which doesn’t reduce the value of the paper in my view but makes me wonder a bit whether NeurIPS is an ideal venue for it).
3. An obvious explanation for the tokenization length differences is the widely varying amounts of available training datasets across languages. However, another possible explanation is inherent differences between languages in terms of characteristics such as morphological richness of a language. Discussion of the different explanations seems to be missing from the paper.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Could you elaborate on the following sentence “training a model from scratch with this fair tokenizer is necessary, despite potential suboptimal performance in individual languages due to vocabulary limitations”? What are the potential suboptimalities you’re referring to? Are you arguing for using a fair tokenizer regardless of any potential negative side effects?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >The paper doesn’t discuss how to achieve the goal of training multilingually fair tokenizers, apart from providing an overview of previous approaches to developing multilingual tokenizers. An obvious solution is to reserve more tokens for the languages with high premiums.
Please see our "On the development of a multilingually fair tokenizer" comment, where we discuss the development of a multilingually fair tokenizer. We also discuss why giving quotas to different languages will not work: languages sharing the same script also partially share tokens.
> This in turn will likely hurt the performance on high-resource languages such as English and it may even end up hurting the performance for the rest of the languages if there is less cross-lingual transfer from high-resource languages.
The additional cost for English would be much smaller than the total benefit for the rest of the languages. That is because increasing the vocabulary size has diminishing returns: the additional tokens correspond to increasingly rare (parts of) words. Therefore, removing rarely used English (sub)words and replacing them with frequently used (sub)words in other languages would likely be a net benefit overall. Furthermore, cross-lingual transfer can be bi-directional: there can be information present in the data of a less-resourced language that is not present in the high-resourced language (for example, cultural, legal or historical knowledge). Therefore, fairer tokenization can also improve cross-lingual transfer in the opposite direction.
> The paper lacks discussion on the potential negative effects of using a fair tokenizer (such as the one presented above) and discussion on the potential means of developing fair tokenizers. Yet, it argues strongly why the switch to fair tokenizers is necessary. This makes the paper seem a bit like a position paper (which doesn’t reduce the value of the paper in my view but makes me wonder a bit whether NeurIPS is an ideal venue for it).
We are not aware of other negative effects of using a fair tokenizer, apart from the above-mentioned issue of slightly increasing the tokenization lengths of the most tokenisation-efficient language. Moreover, in the fairness literature, reducing the preferential treatment of a dominant group to balance the treatment of all groups is typically considered a positive effect rather than a limitation.
We do not consider our work a position paper, especially because it is based on a comprehensive empirical evaluation. We do say that if one does not want to charge users of different languages more, reduce their effective context size and increase their processing time, then one would need to switch to a fairer tokenizer. However, this is backed by our results and our analysis of the underlying technical factors, rather than simply by our views or opinions, as is customary for position papers.
> An obvious explanation for the tokenization length differences is the widely varying amounts of available training datasets across languages. However, another possible explanation is inherent differences between languages in terms of characteristics such as morphological richness of a language. Discussion of the different explanations seems to be missing from the paper.
The widely varying amounts of available training datasets across languages are certainly the main culprit. However, as mentioned above, even if we use a perfectly balanced dataset, parity will not be achieved because some languages share tokens while others do not. Assessing morphological differences between languages is a challenging task as more morphological richness does not necessarily imply that more tokens would be necessary. As the information content of all languages is the same (Coupé et al., 2019), we argue that similar tokenization lengths should be achievable. We looked into the absolute number of tokens needed to encode the FLORES-200 dataset across several languages and tokenizers targeting them. The numbers cannot be directly compared, though, as the tokenizers might have different vocabulary sizes and might be trained differently (e.g. BPE vs SentencePiece). Still, the differences are much smaller than the fairness premiums we observe. That indicates that morphological differences cannot explain the drastic variations reported in the paper. Thank you for highlighting this question; we will incorporate it in the manuscript!
|Language|Tokenizer|Number of tokens for FLORES-200|
|----|----|----|
|Standard Arabic|ArabicBERT|52834|
|German|GottBERT|58508|
|English|RoBERTa|52567|
|French|CamemBERT |67031|
|Hindi|MuRIL|62712|
|Japanese|BERT Japanese|69209|
|Chinese (Simplified) | RoCBert |83317|
|Vietnamese|PhoBERT|69628|
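The counts in the table above can be reproduced with a small helper; a hedged sketch (the `encode` callable stands in for a concrete tokenizer's encoding function, e.g. a HuggingFace tokenizer's `encode`; the exact special-token handling used for these numbers is an assumption):

```python
def corpus_token_count(encode, sentences):
    """Total number of tokens needed to encode a corpus, where `encode`
    maps a string to a sequence of tokens (or token ids)."""
    return sum(len(encode(s)) for s in sentences)

# Stand-in: whitespace splitting instead of a trained subword tokenizer.
print(corpus_token_count(str.split, ["a parallel corpus", "of two sentences"]))  # → 6
```

Comparing such totals across tokenizers is only indicative, as noted above, since vocabulary sizes and training procedures differ.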
> Could you elaborate on the following sentence “training a model from scratch with this fair tokenizer is necessary, despite potential suboptimal performance in individual languages due to vocabulary limitations”? What are the potential suboptimalities you’re referring to?
We mean that one cannot get away from having to retrain the model using the fair tokenizer. In other words, it is not possible to make the tokenizer of a pre-trained model fair post factum. The suboptimal performance can stem from representing a language in the tokenizer's vocabulary but then training the model with little data from that language. In such a case the tokenizer would be fair, but the model would not perform well in the underrepresented language. However, this problem is resolvable by ensuring sufficient training data for all languages with respect to which the model should be fair. We agree that the sentence could be better worded and will edit it.
--
Christophe Coupé et al. “Different languages, similar encoding efficiency: Comparable information rates across the human communicative niche”. Science Advances (2019)
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses.
> even if we use a perfectly balanced dataset, parity will not be achieved because some languages share tokens while others do not. Assessing morphological differences between languages is a challenging task as more morphological richness does not necessarily imply that more tokens would be necessary. As the information content of all languages is the same (Coupé et al., 2019), we argue that similar tokenization lengths should be achievable.
I'm not convinced that token sharing is the main cause for not achieving parity even when using balanced dataset. Two other causes that could be even more significant and would be useful to discuss in the paper:
1. **Morphological differences:** To give an example, consider a tokenizer trained on a balanced set of English and German nouns and their plural forms. Let's fix the vocabulary size such that every English plural noun is split into at most 2 tokens. Now, we would very likely see a premium for German since a significant fraction of German nouns require umlaut when pluralized (e.g. `Stuhl` (chair) -> `Stühle` (chairs)) which makes token sharing across the singular and plural forms within the same language more difficult.
2. **Number of characters:** For instance, Korean encoded as Hangul Syllables has ~11k Unicode characters whereas the Latin (ASCII) has only ~100.
> increasing the vocabulary size has diminishing returns: the additional tokens correspond to increasingly rare (parts of) words. Therefore, removing rarely used English (sub)words and replacing them with frequently used (sub)words in other languages would likely be of a net befit overall.
I agree that this is likely the case due to the diminishing returns. The paper would be much stronger if it experimentally supported this point by actually training a fairer tokenizer (as also proposed by several other reviewers) and measuring the net benefit.
Having said that, even in its current form, I find this a strong paper which is likely to inspire several future works in the NLP community.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for your feedback and for your support of our work!
You are right, morphological differences do complicate the task significantly, with `Stuhl` being a great example.
As for Korean, none of the authors is a speaker of Korean, but to the best of our understanding, there are two ways to encode Korean in Unicode. One is Hangul Syllables which do indeed have reserved ~11k Unicode codepoints. However, as Korean syllables are compositional, one can also encode it via [Hangul Jamo](https://en.wikipedia.org/wiki/Hangul_Jamo_(Unicode_block)). There are 256 Jamo codepoints, but most of them seem to not be used in modern Korean. According to the article, only 67 Jamo are used in modern Korean, which means that 67 Unicode codepoints should be sufficient to represent most modern content. Choosing how to normalize the Unicode string (Syllables or Jamo) would hence have a great impact on the resulting tokenization, which further illustrates that one has to be very careful when approaching multilingual tokenization.
> I agree that this is likely the case due to the diminishing returns. The paper would be much stronger if it experimentally supported this point by actually training a fairer tokenizer (as also proposed by several other reviewers) and measuring the net benefit.
Perhaps the reviewer would be interested in seeing our response to Reviewer Qimt, where we show that with only one-third of the vocabulary of the ChatGPT/GPT-4 tokenizer, English sequences will become just 10% longer. | Summary: This paper investigates the disparities between languages caused by tokenization policies used in large language models (LLMs). The authors compare the numbers of tokens needed to represent the translations of the same sentence in different languages and observe that the number of tokens needed in one language can be an order of magnitude larger than that in the target language (mostly English). They argue that these disparities have significant real-world implications such as the increased cost and performance degradations in using LLMs.
Strengths: - I think investigating the disparities of the cost and performance in using LLMs between different languages is an important topic.
- The approach taken by the authors (i.e., the use of the FLORES-200 parallel corpus) seems sound.
- The observations made in their experiments are informative.
Weaknesses: - There is little technical novelty in the work.
- The authors point out the problems caused by the disparities but do not present a concrete solution. They argue that LLMs should be trained from scratch with a multilingually fair subword tokenizer but do not provide any experimental results towards that solution. I would be interested to see how much the performance and efficiency of an LLM in English needs to be sacrificed to achieve the parity.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Would it be difficult to conduct some (preliminary) experiments regarding the proposals discussed in Section 6?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The authors point out the problems caused by the disparities but do not present a concrete solution. They argue that LLMs should be trained from scratch with a multilingually fair subword tokenizer but do not provide any experimental results towards that solution.
Please refer to our "On the development of a multilingually fair tokenizer" comment discussing a strategy we are currently working on. We also explain why it is non-trivial to implement it in practice and why we believe it is a separate work in its own right.
> I would be interested to see how much the performance and efficiency of an LLM in English needs to be sacrificed to achieve the parity.
The additional cost for English would be much smaller than the total benefit for the rest of the languages. That is because increasing the vocabulary size has diminishing returns: the additional tokens correspond to increasingly rare (parts of) words. Therefore, by removing rarely used English (sub)words and replacing them with frequently used (sub)words in other languages, we would likely see an overall net benefit.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I understand that developing a multilingually fair tokenizer is non-trivial and can be regarded as a separate piece of work, but I still think that presenting preliminary experimental results is desirable. I am not fully convinced by the authors' argument that the additional cost for English would be much smaller than the total benefit for the rest of the languages. I think there is a non-negligible possibility that the additional cost for English (or major target languages) is so large that developing a multilingually fair tokenizer is not possible in practice.
---
Reply to Comment 1.1.1:
Comment: Regarding your concern about the effect on the dominant language when reducing the tokens for it, perhaps you will find [this plot](https://ibb.co/7ymNkP7) interesting. It shows how many tokens would be necessary for the tokenizer of ChatGPT/GPT-4 to encode all of the English corpus of FLORES-200 for different vocabulary sizes. The result is that with __only one-third__ of the vocabulary, English sequences will become __just 10% longer.__ A 10-fold reduction in the vocabulary would result in only 30% longer sequences. English would still be treated better than how the same tokenizer treats the _cheapest_ other language, Portuguese, which is 50% longer than English. Therefore, we believe that this is not a prohibitively large cost for English.
Whether such a trade-off is acceptable for a specific model or not, however, depends on the number of languages used and their similarity, and is a design choice that lies with the model developers. | Summary: The paper proposes the concept of tokenizer parity as a way to measure the fairness of tokenization across different languages in natural language processing. The authors argue that achieving tokenizer parity is necessary to improve the performance of multilingual models and address potential unfairness in the cost of accessing commercial language services, processing time and latency, and the amount of content that can be provided as context to the models. The paper suggests that training language models from scratch with a multilingually fair subword tokenizer is the only approach that can effectively address tokenization unfairness and achieve tokenizer parity. The authors provide several examples to demonstrate the effectiveness of the proposed method for measuring tokenizer parity and suggest that a variation of subword tokenization is necessary to achieve parity. The paper contributes to the field of natural language processing by proposing a new metric for measuring fairness in tokenization and providing guidance on how to achieve tokenizer parity in multilingual models.
Strengths: In terms of originality, the paper introduces the concept of "tokenizer parity" as a systematic way to assess the fairness of tokenization across different languages. This is a novel idea that has not been explored extensively in the literature.
In terms of quality, the paper provides a detailed analysis of the tokenization lengths of different languages using various tokenizers. The authors also propose a method to measure tokenizer parity and demonstrate its effectiveness using several examples.
In terms of clarity, the paper is well-written and easy to follow. The authors provide clear definitions of the key concepts and use examples to illustrate their points.
In terms of significance, the paper addresses an important issue in natural language processing, namely the fairness of tokenization across different languages. The concept of tokenizer parity has the potential to improve the performance of multilingual models and make natural language processing more equitable across different languages. Therefore, the paper could have a positive impact on the development of multilingual models.
Weaknesses: One major weakness is that the paper does not provide a clear roadmap for how the concept of tokenizer parity could be integrated into existing natural language processing pipelines. While the authors suggest that training language models with a multilingually fair subword tokenizer is the only approach that can effectively address tokenization unfairness, they do not provide guidance on how this could be achieved in practice. More detailed recommendations for how to integrate tokenizer parity into existing natural language processing pipelines would be helpful for researchers and practitioners in the field.
Another potential weakness is the lack of discussion on the impact of tokenizer disparity on downstream task accuracies. This aspect is particularly interesting and may attract more attention.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can you provide more detailed recommendations for how to integrate tokenizer parity into existing natural language processing pipelines?
2. What is the potential impact of tokenizer disparity on downstream task accuracies?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The lack of clear guidance on how to integrate tokenizer parity into existing natural language processing pipelines may limit the impact of the paper's findings. Additionally, the absence of discussion on the impact of tokenizer disparity on downstream task accuracies is a limitation that could be addressed in future research. Without this information, it may be difficult for researchers and practitioners to fully understand the potential benefits of using a multilingually fair subword tokenizer.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > One major weakness is that the paper does not provide a clear roadmap for how the concept of tokenizer parity could be integrated into existing natural language processing pipelines. While the authors suggest that training language models with a multilingually fair subword tokenizer is the only approach that can effectively address tokenization unfairness, they do not provide guidance on how this could be achieved in practice.
We wrote an extensive explanation about the challenges of developing a multilingually fair tokenizer in our "On the development of a multilingually fair tokenizer" comment. We hope that addresses your concerns.
> 1.Can you provide more detailed recommendations for how to integrate tokenizer parity into existing natural language processing pipelines?
In most language processing pipelines, tokenization happens before any other training or modelling efforts. Therefore, tokenization can be independently addressed without having to adjust other elements of the pipeline. Learning a tokenizer is typically posed as an optimization problem: finding the vocabulary that minimizes the number of tokens necessary to encode a given corpus. Obtaining a multilingually fair tokenizer can be thought of as the addition of a constraint that content in different languages should have approximately similar tokenized lengths. Please see our "On the development of a multilingually fair tokenizer" comment, where we outline one way towards implementing that.
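To make the parity constraint concrete, here is a minimal sketch (ours, not from the paper; the whitespace tokenizer and all names are purely illustrative) of how a language's tokenization premium relative to English could be measured over a parallel corpus:

```python
# Minimal sketch: measuring how far a tokenizer is from parity on a
# parallel corpus. `tokenize` stands in for any tokenizer's encode
# function; the toy whitespace tokenizer used below is illustrative only.

def premium(tokenize, corpus_lang, corpus_en):
    """Tokenized length of a language relative to English (1.0 = parity)."""
    n_lang = sum(len(tokenize(s)) for s in corpus_lang)
    n_en = sum(len(tokenize(s)) for s in corpus_en)
    return n_lang / n_en

# Tiny parallel "corpus" with a whitespace tokenizer:
en = ["the hotel is near the station"]
pt = ["o hotel fica bem perto da estacao de trem"]
print(premium(str.split, pt, en))  # → 1.5, i.e. a 50% premium over English
```

A value above 1.0 means the language pays a premium; the parity constraint in the optimization view above would bound this ratio close to 1.0 for every target language.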
> 2.What is the potential impact of tokenizer disparity on downstream task accuracies?
We intentionally focus on tokenizers to show that disparities between languages exist even before the model is trained, and even without considering the differences in the model performance. Still, tokenization does also affect downstream performance. In the Related Works section, we cite the work by Zhang et al. (2022), which shows that a balanced tokenizer corpus results in better translation performance. The works of Hofmann et al. (2021, 2022) also show that larger tokens, as well as tokenization informed by morphological structures, result in better downstream performance. We will add them to our Related Works section and will discuss further the downstream implications of the tokenization disparities.
> Additionally, the absence of discussion on the impact of tokenizer disparity on downstream task accuracies is a limitation that could be addressed in future research. Without this information, it may be difficult for researchers and practitioners to fully understand the potential benefits of using a multilingually fair subword tokenizer.
We respectfully disagree: even if there weren’t downstream improvements, the fundamental benefit of using multilingually fair tokenizers is ensuring that users of different languages pay the same for the same service, can process similar amounts of content, and can enjoy similar generation speeds. Nevertheless, as mentioned above, we will add a discussion highlighting the prior works showing that better tokenization does improve downstream task performance.
--
Zhang, Shiyue, et al. “How robust is neural machine translation to language imbalance in multilingual tokenizer training?” Biennial Conference of the Association for Machine Translation in the Americas (2022)
Hofmann, Valentin, et al. “Superbizarre Is Not Superb: Derivational Morphology Improves BERT’s Interpretation of Complex Words”. Annual Meeting of the Association for Computational Linguistics (2021)
Hofmann, Valentin, et al. “An Embarrassingly Simple Method to Mitigate Undesirable Properties of Pretrained Language Model Tokenizers”. Annual Meeting of the Association for Computational Linguistics (2022) | Rebuttal 1:
Rebuttal: # On the development of a multilingually fair tokenizer
Reviewers Fdv1, EmCj and Qimt mentioned as a shortcoming of our work that we did not develop a new multilingually fair tokenizer. We would like to highlight several reasons why we found this to be more challenging than it may seem. These are all problems that we are actively working on, and we hope to soon have a follow-up work addressing them.
**How to account for token sharing across languages:**
As discussed in the paper, the byte-level, character-level and word-level tokenizers cannot achieve tokenization parity and subword tokenization is needed. However, simply training a subword tokenizer on a balanced dataset is also not sufficient as languages can share tokens. For example, “hotel” is written the same way in English, Spanish, Italian, Portuguese, Dutch, Danish, Hungarian, Polish and more. Hence, languages from more numerous language families will also witness shorter tokenization lengths while more isolated languages, e.g. Korean, would see larger language premiums. (The spelling of “hotel” in Korean is “호텔” and no other language has the same spelling as no other language uses the Korean script).
Instead, we suggest a two-stage process: first, training individual monolingual tokenizers for all target languages, and then merging them while maintaining parity. The merging can be done by starting with the 256 tokens corresponding to each value a byte can take and then repeatedly adding the most frequently used token for the language with the highest premium. This approach can account for the shared tokens across languages. For example, if at some stage Polish has the highest premium, adding the token for “hotel” will simultaneously reduce the premiums for all the other languages using the same spelling.
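The merging stage described above can be sketched in a few lines. This is an illustrative toy (the data structures, the premium update, and all names are our stand-ins, not the procedure the authors actually use): start from the 256 byte tokens, then repeatedly grant a token to the language with the highest current premium, skipping tokens another language has already contributed (e.g. "hotel").

```python
# Toy sketch of greedy premium-driven merging of monolingual vocabularies.

def merge_tokenizers(token_freqs, premiums, vocab_budget):
    """token_freqs: {lang: [(token, freq), ...]} sorted by descending freq.
    premiums: {lang: float} current tokenization premium per language."""
    vocab = {bytes([b]) for b in range(256)}  # byte-fallback tokens
    cursor = {lang: 0 for lang in token_freqs}
    while len(vocab) < vocab_budget:
        lang = max(premiums, key=premiums.get)  # currently worst-off language
        if premiums[lang] == float("-inf"):
            break  # every language's token list is exhausted
        freqs = token_freqs[lang]
        # skip tokens already added on behalf of another language
        while cursor[lang] < len(freqs) and freqs[cursor[lang]][0] in vocab:
            cursor[lang] += 1
        if cursor[lang] == len(freqs):
            premiums[lang] = float("-inf")
            continue
        token, freq = freqs[cursor[lang]]
        vocab.add(token)
        # toy update; in practice the premium would be re-measured on a corpus
        premiums[lang] -= freq
    return vocab

# A shared token ("hotel") is added once and benefits both languages:
vocab = merge_tokenizers(
    {"en": [("hotel", 10), ("the", 9)], "pl": [("hotel", 8), ("nie", 7)]},
    {"en": 1.0, "pl": 2.0},
    vocab_budget=259,
)
```

In this toy run, Polish (the highest-premium language) contributes "hotel" first, so English's cursor skips it and contributes "the" instead, exactly the sharing effect described above.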
However, training 200 individual tokenizers is computationally expensive. We looked into leveraging already trained tokenizers but they are often incomparable. For example, it is not trivial to combine BPE-based tokens and SentencePiece tokens. One reason is that SentencePiece pre-processes the input by splitting it at spaces and has tokens which cannot be at the beginning of a word, something incompatible with BPE. Therefore, the only methodologically clean way we see to proceed is via training individual tokenizers from scratch using the same tokenization procedure.
**The lack of large multilingual parallel corpora and how to get around it:**
A separate challenge is the need for a large parallel corpus. FLORES-200 is rather small, just 2000 sentences. It is good enough for the evaluation of parity but not sufficient for building a fair tokenizer. That is because there might be characters and words not present in FLORES-200 which should nevertheless be present in the vocabulary. Furthermore, to do proper evaluation, a split into training and test datasets is necessary. This would reduce the training data even further. Unfortunately, we are not aware of a larger parallel corpus spanning so many languages. Constructing one is extremely difficult and expensive as well.
That is why we are looking into approximating a multilingual parallel corpus by many bilingual parallel corpora. In such a case, the parity between, for example, Greek and Shan would not be evaluated only using Greek—Shan translations (which may not be available) but also Greek—English and English—Shan translations, Greek—German and German—Shan translations, etc. However, this requires further analysis into the conditions under which leveraging bilingual data in such a way would constitute a valid approximation.
Because of these reasons, we believe that properly building a multilingually fair tokenizer is a substantial piece of work which requires its own special treatment. We are currently working towards it but cannot possibly do it justice in the limited space of the present paper. Nevertheless, we will extend the discussion in Section 6 to better highlight these challenges.
Based on these results, the paper convincingly argues that we need multilingually fair tokenizers for future LLMs, so that some languages are not at a disadvantage in terms of the cost to access LLMs, the latency of the service, and the amount of data that can be processed.
Strengths: The paper is well written and the first one (to my knowledge) to study tokenization disparities in LLMs at this scale.
The presented results will be very interesting to LLM researchers and practitioners in understanding the disparities in tokenization across different languages in different LLMs. This has major consequences for some languages, putting them at a disadvantage in terms of the cost to access LLMs, the latency of the service, and the amount of data that can be processed.
Weaknesses: Section 6 argues that future tokenizers should be based on subword tokenization and support all Unicode codepoints. However, one of the main weaknesses of the paper is that it does not develop such a multilingually fairer tokenizer and compare it with existing tokenizers in practice. Attempting to do so would have highlighted the challenges in developing such tokenizers and how one can overcome them.
I found the argument around disparities in long-distance modeling a bit thin. The second paragraph of Section 5.3 discusses this briefly, but additional discussion or experiments are needed to strengthen the argument. For example, it might be helpful to include analysis with multilingual document summarization (e.g., XLSum).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I think the paper does a very good job of explaining why there are language disparities among different tokenizers. However, I feel these reasons are already known; what are the challenges in overcoming them?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper presents a large-scale study of disparities in tokenization across different languages. It discusses how LLMs put some languages at a disadvantage because of this. The paper itself does not propose a new tokenization scheme that addresses these disparities.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > I found the argument around disparities in long-distance modeling a bit thin. The second paragraph of Section 5.3 discusses this briefly, but additional discussion or experiments are needed to strengthen the argument. For example, it might be helpful to include analysis with multilingual document summarization (e.g., XLSum).
The critical issue is that, assuming a fixed context size, the more tokens required for the same content, the smaller the maximum size of the content that can be processed. For example, if the context of GPT-4 is just enough to fit a blog post in English, it can only fit 1/15 of the same blog post when translated into Shan. Experiments comparing the ratio of content that can be represented would not bring additional value, as they would be precisely the reciprocal of the premium values. Furthermore, Liu et al. (2023) show that even if the content can fit in the context size, the performance of language models decreases as the input sizes grow. We will extend Section 5.3 in terms of clarity of the writing and depth of argumentation to explain this better.
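The reciprocal relationship mentioned above can be made explicit with a one-line sketch (the 15x figure for Shan is the one quoted above; the function name is ours):

```python
# With a fixed context window, the fraction of a document that fits is
# the reciprocal of that language's tokenization premium.
def fraction_that_fits(premium):
    return 1.0 / premium

print(fraction_that_fits(15))  # Shan at a ~15x premium: only 1/15 of the post fits
```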
> I think the paper does a very good job of explaining why there are language disparities among different tokenizers. However, I feel these reasons are known already, what are the challenges in overcoming them?
Please check our "On the development of a multilingually fair tokenizer" comment, where we explain the challenges in overcoming these language disparities and the approach we are working on towards addressing them.
--
Liu, Nelson F., et al. "Lost in the middle: How language models use long contexts." arXiv preprint arXiv:2307.03172. (2023) | null | null | null | null | null | null |
IDEA: An Invariant Perspective for Efficient Domain Adaptive Image Retrieval | Accept (poster) | Summary: This paper proposes to use invariant learning based on causal inference for domain adaptive retrieval. In the proposed IDEA model, a feature disentanglement module is deployed to obtain causal and non-causal features. A generative model is designed with non-causal feature intervention for reconstructing images. Experiments show its effectiveness.
Strengths: +Deploying a causality perspective in domain adaptive hashing retrieval is a new contribution.
+The proposed model is simple in methodology with encoder and decoder under intervention.
+Experiments and comparisons to related methods show the superiority of the proposed model.
Weaknesses: -What is the experimental setting? Is it the same as in related models such as PWCF [22]? The performance is much better than previous ones, so this should be discussed.
-The causality perspective is common in domain generalization and out-of-distribution research, although it is under-studied in domain adaptive hashing retrieval.
-Please discuss the following work, because it also considers removing non-causal features; the idea is somewhat similar. Additionally, the authors claim that the non-causal features are domain information, but this may not be correct, because a non-causal feature may still carry domain-invariant information. Causal features should be contained within the domain-invariant features.
"Multi-view Adversarial Discriminator: Mine the Non-causal Factors for Object Detection in Unseen Domains, CVPR 2023."
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I would lean to accept this paper, after addressing the above point in weakness, because I think this paper has some novelty.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I have not found obvious issue.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are truly grateful for the time you have taken to review our paper, your insightful comments and support. Your positive feedback is incredibly encouraging for us! In the following response, we would like to address your major concern and provide additional clarification.
>Q1. How about the experimental setting? Is the setting the same as some related models, such as PWCF [22]? because the performance is much better than previous ones. It can be discussed.
A1. Thanks for your question. Yes, we follow the same setting as PWCF and PEACE. We can observe that PEACE can indeed perform much better than PWCF, and ours can perform even better than PEACE by a large margin.
>Q2. Causality perspective is a common sense in domain generalization and out-of-distribution, although it is under-studied in domain adaptive hashing retrieval.
A2. Thanks for your comment. Although causality has been explored in domain generalization and out-of-distribution settings, it is less explored in domain adaptation, which is a quite different setting within transfer learning. Moreover, it has also not been explored in image retrieval. Lastly, our causality perspective provides three views quite different from existing works:
- **Problem understanding**. It can provide the explanation for the problem of unsupervised domain adaptation image retrieval.
- **Methodology formulation**. Based on the proposed SCM, we propose the condition that our mapping should fulfill, which guides the formulation of the proposed strategy.
- **Implementation: disentangle and intervene**. To resolve the domain shift, our method disentangles each image into causal features and non-causal features and then adds intervention with reconstruction, which is based on image generation and decomposition from the SCM.
>Q3. Please discuss the following work, because this work also considers to remove non-causal features. The idea is a little similar. Additionally, the authors claim that the non-causal features are domain information, but it may not be correct, because non-casual feature may still be domain invariant information. Causal feature should be contained in domain invariant features. "Multi-view Adversarial Discriminator: Mine the Non-causal Factors for Object Detection in Unseen Domains, CVPR 2023."
A3. Thanks for your comment. Although our work also considers to remove non-causal features, there are three key distinctions that set our work apart:
- **Different motivations**. We propose a generation-based structural causal model to analyze the relationship in domain adaptation retrieval, while [1] learns from frequency spectrum to generate causal features.
- **Different scenarios**. Our model focuses on image retrieval under domain adaptation settings while [1] focuses on domain generalization for image segmentation. Their scenarios have a huge difference.
- **Different methodology**. Our method disentangles each image based on mutual information and then adds intervention with reconstruction, which is based on image generation and decomposition from the SCM. In contrast, [1] focuses on Spurious Correlations Generator from frequency domain with domain adversarial learning.
We will definitely add this discussion in our revised version.
**Reference**
[1] Multi-view Adversarial Discriminator: Mine the Non-causal Factors for Object Detection in Unseen Domains, CVPR 2023
Thanks again for appreciating our work and for your constructive suggestions. Please let us know if you have further questions.
---
Rebuttal Comment 1.1:
Title: Agree with the authors' response
Comment: After reading the authors' rebuttal, I agree with their comments on the differences from [1] in the problem, scenario and methodology aspects. I would like to explain why I suggested [1]: this work also considers learning causality-invariant features by filtering out non-causal factors, and therefore could be included in the discussion.
I am satisfied with the authors' rebuttal, and this work has novelty. Other reviewers also evaluated this paper positively. I therefore keep my rating and suggest accepting this paper.
---
Reply to Comment 1.1.1:
Title: Thanks again for your feedback!
Comment: Thanks again for your feedback! We are pleased to know that our responses have addressed your concerns. We really appreciate your efforts on reviewing our paper, your insightful comments and support. We will definitely include the discussion with [1] in our revised version. | Summary: This paper focuses on an unsupervised domain adaptation method for deep hashing. This paper proposes to disentangle each image into causal and non-causal features, where causal features represent label information and non-causal features represent domain information. The causal features are used to compute hash codes, while both causal and non-causal features are used to reconstruct the image. The disentanglement loss minimizes the mutual information between the non-causal features and the label while maximizing the mutual information between the non-causal features and the hidden features. As a result, the proposed method performs better than prior methods on multiple benchmarks.
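The shape of that disentanglement objective can be illustrated with a toy sketch. This is ours, not the paper's: the paper's mutual-information estimators are neural and operate on continuous features, whereas this uses a plug-in estimate on discrete toy variables, purely to show the "minimize MI(non-causal, label), maximize MI(non-causal, hidden)" structure; all names are illustrative.

```python
from collections import Counter
from math import log

def mutual_information(xs, ys):
    """Plug-in MI estimate (in nats) for two paired discrete sequences."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def disentanglement_loss(non_causal, label, hidden, beta=1.0):
    # low MI with the label, high MI with the hidden representation
    return (mutual_information(non_causal, label)
            - beta * mutual_information(non_causal, hidden))
```

With non-causal codes independent of the label but informative about the hidden features, the loss is negative (good); if they leak label information, the first term pushes it back up.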
Strengths: - the proposed method outperforms state-of-the-art methods on cross-domain retrieval
- the structural causal model on learning hash codes and the disentanglement loss on learning hash codes are interesting. This way is intuitively able to disentangle causal and non-causal features from an image. I believe this can be one of the regularization term for future retrieval/hashing methods.
Weaknesses: - while using non-causal features from other samples for the reconstruction of hash codes seems intuitive, no experiments or analysis on the proposed intervention scheme
- it is unknown whether the extracted features for hash codes are indeed the causal part.
- no interpretability analysis. While the author claims that their method can generate high-quality interpretable hash codes, the only evaluation is on performance analysis, with no other quantitative and qualitative analysis to claim that the hash codes are interpretable.
- no visualization of the causal and non-causal features, not interpretable on how those features correlate to which part of the image
- although testing with VGG features for unsupervised hashing is a standard since this method involves disentangling features, the author should test with more different kinds of architectures such as ResNet or Vision Transformer. It is because VGG features are globally pooled features, thus already not directly interpretable.
- no synthetic samples have been shown. As the author uses an intervention scheme on generating images and then learns to extract hash codes that have low variance, what will the samples look like?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - how effective is the proposed method on vision transformer-based models where the attention mechanism is involved? Will causal features only attend to certain parts? Will they overlap with non-causal features?
- can synthetic images show that different non-causal features can generate different images from different domains?
- why do you need a warm-up for learning Eq. 15? What causes the instability at the beginning of the training?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - the authors mention that their method cannot tackle open-set scenarios; the proposed solution is for the seen-classes scenario only
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are truly grateful for the time you have taken to review our paper, your insightful comments and support. Your positive feedback is incredibly encouraging for us! In the following response, we would like to address your major concern and provide additional clarification.
> Q1. While using non-causal features from other samples for the reconstruction of hash codes seems intuitive, no experiments or analysis on the proposed intervention scheme.
A1. Thanks for your comment. We have added a model variant, IDEA w/o I, by removing the proposed intervention. The compared performance is shown below. From the results, we can find that the proposed intervention scheme has a crucial effect on the performance.
| Dataset | Office-Home | | | | Office31 | | | |
|---------|:------------:|:------------:|--------------|----------|:-----------:|:-----------:|-------------|---------------|
| Task | Product2Real | Real2Product | Clipart2Real | Real2Art | Webcam2Dslr | Dslr2Amazon | Amazon2Dslr | Amazon2Webcam |
| IDEA w/o I | 58.24 | 60.36 | 45.10 | 49.09 | 83.49 | 52.19 | 48.05 | 53.79 |
| IDEA | 59.18 | 61.84 | 45.71 | 49.64 | 84.97 | 53.53 | 48.70 | 54.43 |
> Q2. It is unknown whether the extracted features for hash codes are indeed the causal part.
A2. Thanks for your comment. We have added several visualizations to show the gradient of causal features. The results are shown in Figure A. From the results, we can observe that the extracted features indeed correspond to the causal part of the image.
> Q3. No interpretability analysis. While the author claims that their method can generate high-quality interpretable hash codes, the only evaluation is on performance analysis, with no other quantitative and qualitative analysis to claim that the hash codes are interpretable.
A3. Thanks for your comment. We have added several visualizations to show the gradient of causal features. The results are shown in Figure A. From the results, we can observe that the extracted features indeed correspond to the causal part of the image. Moreover, since the hash codes come directly from the causal features, they have similar gradients, which provides the interpretability of the hash codes.
> Q4. no visualization of the causal and non-causal features, not interpretable on how those features correlate to which part of the image
A4. Thanks for your comment. Actually, since the causal and non-causal features are high-dimensional, it is hard to get information from visualizing them directly. Instead, we have added several visualizations to show the gradient of causal features. The results are shown in Figure A. From the results, we can observe that the extracted features indeed correspond to the causal part of the image.
> Q5: Although testing with VGG features for unsupervised hashing is a standard since this method involves disentangling features, the author should test with more different kinds of architectures such as ResNet or Vision Transformer. It is because VGG features are globally pooled features, thus already not directly interpretable.
A5: Thanks for your comment. We have implemented both PEACE and IDEA with a ViT backbone. The compared results are shown below. From the results, we observe that 1) our IDEA still achieves better performance under different architectures, and 2) with stronger networks, the performance of our IDEA improves substantially.
| Dataset | Office-Home | | Office31 | | Digits | |
|------------|:------------:|:--------:|:-----------:|:-------------:|:----------:|:----------:|
| Tasks | Clipart2Real | Real2Art | Amazon2Dslr | Amazon2Webcam | USPS2MNIST | MNIST2USPS |
| PEACE (VGG) | 38.72| 42.68 | 46.69 | 48.89 | 60.91 | 62.84 |
| Ours (VGG) | 45.71 | 49.64 | 48.70 | 54.43 | 67.97 | 67.48 |
| PEACE (ViT) | 53.83| 56.11 | 56.05 | 60.25 | 79.28 | 78.37 |
| Ours (ViT) | 58.56 | 64.39 | 63.27 | 66.30 | 83.39 | 82.96 |
> Q6. No synthetic samples have been shown. As the author uses an intervention scheme on generating images and then learns to extract hash codes that have low variance, what will the samples look like?
A6. Thanks for your comment. We have added visualizations of the generated images in Figure B. From the results, we observe that our method has the potential to generate samples with different backgrounds, which helps to learn domain-invariant hash codes.
Thanks again for appreciating our work and for your constructive suggestions. Please let us know if you have further questions.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
My concern with this paper is the interpretability analysis. The author should also visualize where the non-causal features focus and visualize/study the synthetic images to understand/interpret what the model has learned for causal & non-causal features in future work/revisions.
Nonetheless, the idea of this paper is quite interesting and I agree with the author's rebuttal. Thus, I recommend a weak accept for this paper.
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback and raising the score!
Comment: Thanks again for your feedback and for increasing the rating! We are pleased to know that you agree with our response. We will definitely add more visualizations and case studies for interpretability analysis in the final version. We really appreciate your efforts in reviewing our paper, your insightful comments, and your support. | Summary: This paper proposes a novel method called Invariance-acquired Domain Adaptive Hashing (IDEA) for generating high-quality and interpretable hash codes. The approach incorporates the concepts of causal and non-causal features, leveraging the information bottleneck principle and consistency learning to optimize the hash network. The paper presents the training algorithm and experimental settings for IDEA and provides qualitative and quantitative analysis of the experimental results. Overall, this paper proposes a novel approach to address certain challenges in hash code generation and achieves promising performance on multiple benchmark datasets.
Strengths: 1. A structural causal model is adopted to explain the problem of unsupervised domain adaptation image retrieval.
2. This paper employs mutual information as a metric to achieve feature disentanglement, which provides a solid theoretical foundation for the paper.
3. The experimental results demonstrate that the proposed method outperforms the compared methods with a large margin.
Weaknesses: 1. The contribution of the structural causal model is not clear. In fact, there is no usage of causal inference for image retrieval.
2. The paper does not provide the convergence analysis of mutual information. Mutual information is sensitive to noise and outliers in the data. Thus, it is necessary to verify the convergence of the proposed method.
3. In the sensitivity analysis, the paper does not provide the recommended range of \beta and \tau.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. More clear contribution.
2. Convergence analysis.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are truly grateful for the time you have taken to review our paper, your insightful comments and support. Your positive feedback is incredibly encouraging for us! In the following response, we would like to address your major concern and provide additional clarification.
>Q1. The contribution of the structure causal model is not clear. In fact, there is no usage of causal inference for image retrieval.
A1. Thanks for your comment. The contribution of the SCM is three-fold:
- **Problem understanding**. It can provide the explanation for the problem of unsupervised domain adaptation image retrieval.
- **Methodology formulation**. Based on the proposed SCM, we propose the condition that our mapping should fulfill, which guides the formulation of the proposed strategy.
- **Implementation: disentangle and intervene**. To resolve the domain shift, our method disentangles each image into causal features and non-causal features and then adds interventions with reconstruction, which is based on image generation and decomposition from the SCM.
>Q2: The paper does not provide the convergence analysis of mutual information. Mutual information is sensitive to noise and outliers in the data. Thus, it is necessary to verify the convergence of the proposed method.
A2: Thanks for your comment. We have provided the loss with respect to different epochs for the proposed method. In particular, the MI loss is [0.0012, 0.0007, -0.0025, -0.0197, -0.0387, -0.0718, -0.0917, -0.1543, -0.2545, -0.3567, -0.4821, -0.6251, -0.8765, -1.2345, -1.5678, -1.9012, -2.3045, -2.5542, -2.6787, -2.8011, -2.9078, -2.9987, -3.0521, -3.1211, -3.1236, -3.1236, -3.1236, -3.1237, -3.1237, -3.1237] for the total of 30 epochs. From these results, we conclude that the mutual information converges empirically.
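As an illustrative sketch (not part of the rebuttal), one simple way to check such empirical convergence is to test whether successive changes in the reported loss fall below a tolerance over a trailing window:

```python
# Illustrative sketch: declare a loss sequence empirically converged when the
# last `window` successive changes are all below a tolerance `tol`.
def has_converged(losses, tol=1e-3, window=3):
    """Return True if the last `window` successive changes are all below `tol`."""
    if len(losses) < window + 1:
        return False
    diffs = [abs(losses[i + 1] - losses[i])
             for i in range(len(losses) - window - 1, len(losses) - 1)]
    return all(d < tol for d in diffs)

# The MI-loss values quoted in the rebuttal above (30 epochs).
mi_loss = [0.0012, 0.0007, -0.0025, -0.0197, -0.0387, -0.0718, -0.0917,
           -0.1543, -0.2545, -0.3567, -0.4821, -0.6251, -0.8765, -1.2345,
           -1.5678, -1.9012, -2.3045, -2.5542, -2.6787, -2.8011, -2.9078,
           -2.9987, -3.0521, -3.1211, -3.1236, -3.1236, -3.1236, -3.1237,
           -3.1237, -3.1237]
print(has_converged(mi_loss))  # → True: the final epochs change by at most 1e-4
```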
>Q3: In the sensitivity analysis, the paper does not provide the recommended range of \beta and \tau.
A3: Thanks for the comment. The recommended values for \beta and \tau are 0.2 and 0.1, respectively. We will add this to the revised version.
Thanks again for appreciating our work and for your constructive suggestions. Please let us know if you have further questions.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your reply. I really appreciate your responses to Q2 and Q3, and I will keep my rating and suggest accepting this paper.
---
Reply to Comment 1.1.1:
Title: Thanks again for your feedback!
Comment: Thanks again for your feedback! We are pleased to know that our responses have addressed your concerns. We really appreciate your efforts in reviewing our paper, your insightful comments, and your support. | Summary: - In this research, the authors investigate the problem of unsupervised domain adaptation for hashing, which aims to expedite learning on a target domain with limited label information by leveraging knowledge from a source domain with abundant labels. The IDEA model begins by decomposing each image into a causal feature that captures label information and a non-causal feature that represents domain information. The authors then utilize consistency learning on both source and target domains to generate discriminative hash codes based on the causal features. Additionally, the authors employ a generative model for synthetic samples to simulate various non-causal effects, ultimately minimizing their impact on the domain-invariant hash codes.
- The authors conduct extensive experiments on benchmark datasets to evaluate the performance of their IDEA model against a variety of competitive baselines. The results demonstrate that the IDEA approach outperforms the others, showcasing its superiority in handling unsupervised domain adaptation for hashing.
Strengths: - Clarity: The authors have clearly and concisely explained their methods, results, and conclusions in a way that is easy for other researchers to understand and replicate.
- Quality of analysis: The authors have conducted extensive experiments and analyses, and the experimental data demonstrate the effectiveness and persuasiveness of the IDEA method.
- Theoretical analysis: The authors have provided a thorough theoretical analysis of the research problem, exploring new insights and frameworks that contribute to the advancement of knowledge in the field.
Weaknesses: - From the ablation study (line 310) on six benchmark cross-domain retrieval tasks, it can be observed that the effectiveness of the four loss functions is not well demonstrated. It is suggested to consider incorporating more permutations and combinations in order to verify the validity of each component.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The IDEA pipeline has numerous modules and loss functions, which may make it challenging to transfer and fine-tune for other tasks or datasets. Could the authors provide some explanations and clarifications on this matter?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: - One potential drawback of this approach is that it may not be effective in open-set scenarios, where target samples could potentially come from unseen classes.
- To address this limitation, more advanced techniques and models (such as those in AIGC and multimodal large-scale models) can be incorporated into the research work to improve the generalizability of the method in a wider range of scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are truly grateful for the time you have taken to review our paper, your insightful comments and support. Your positive feedback is incredibly encouraging for us! In the following response, we would like to address your major concern and provide additional clarification.
> Q1: It is suggested to consider incorporating more permutations and combinations in order to verify the validity of each component.
A1: Thanks for your comment. We have introduced three different model variants, as shown below. The results validate the contribution of each component.
| Dataset | Office-Home | | Office31 | |
|-----------------------|:------------:|:--------:|:-----------:|:-------------:|
| Tasks | Clipart2Real | Real2Art | Amazon2Dslr | Amazon2Webcam |
| w/o L_MI + L_v | 43.32 | 46.60 | 45.24 | 52.21 |
| w/o L_CL + L_v | 40.09 | 42.78 | 41.77 | 50.14 |
| w/o L_MI + L_v + L_RE | 40.62 | 43.17 | 42.58 | 50.35 |
| Ours | 45.71 | 49.64 | 48.70 | 54.43 |
> Q2. The IDEA pipeline has numerous modules and loss functions, which may make it challenging to transfer and fine-tune for other tasks or datasets. Could the authors provide some explanations and clarifications on this matter?
A2. Thanks for your comment. Although IDEA has numerous modules and loss functions, the parameter analysis shows the consistency of model parameters, which makes the model easy to transfer to other datasets. As for different tasks, we plan to extend the strengths of the proposed IDEA to scenarios such as image classification and Re-ID in the future.
We will also add your suggestion about future works into our revised version. Thanks again for appreciating our work and for your constructive suggestions. Please let us know if you have further questions.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply; it solved my problem. I hope more advanced techniques and models (such as those in AIGC and multimodal large-scale models) can be incorporated into the research work to improve the generalizability of the method in a wider range of scenarios.
---
Reply to Comment 1.1.1:
Title: Thanks again for your feedback!
Comment: Thanks again for your feedback! We are pleased to know that our responses have addressed your concerns. We really appreciate your efforts in reviewing our paper, your insightful comments, and your support. We will definitely extend our work with more advanced techniques to improve the generalizability of the method in the future. | Rebuttal 1:
Rebuttal: Dear Reviewers,
We thank you for your careful reviews and constructive suggestions. We acknowledge the positive comments such as "The problem is interesting" (Reviewer BQ1H), “The techniques seem correct” (Reviewer BQ1H), "Clarity" (Reviewer MFjc), "Quality of analysis” (Reviewer MFjc), “Theoretical analysis” (Reviewer MFjc), “solid theoretical foundation” (Reviewer z9GY), “outperforms the compared methods with a large margin.” (Reviewer z9GY and 8Msq), “interesting” (Reviewer 8Msq), “a new point” (Reviewer fabx). We have also responded to your questions point by point.
The figure results are attached in the PDF. Please let us know if you have any follow-up questions. We will be happy to answer them.
Best regards,
the Authors
Pdf: /pdf/f986fc4540c1fd4e9b782d4032914848a9e07852.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This work studies applying hashing for domain adaptive image retrieval. Specifically, the authors propose Invariance-acquired Domain Adaptive Hashing (IDEA) to consider alignment invariance and causal effects. IDEA decomposes each image into causal and non-causal features, and introduces invariant learning to minimize the variance under different interventions. The experiments validate the effectiveness of the proposed method.
Strengths: - The problem is interesting.
- The techniques seem correct.
Weaknesses: - The motivation of this work does not seem to be stated clearly. The authors claim invariance is needed; however, the paper lacks an explanation of whether and why it is important in hashing. The same applies to the use of causal features.
- This work is a simple combination of several techniques, e.g., contrastive learning, causal learning, invariance learning. It does not provide new insights to me.
- I have some concerns about the empirical results. In the ablation study, the most important modules claimed in this work, i.e., L_V and L_MI, bring little improvement in performance, and thus the importance of the two modules is questionable.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please see weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The work discusses the limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are truly grateful for the time you have taken to review our paper and your insightful review. Here we address your comments in the following.
> Q1. The motivation of this work does not seem to be stated clearly. The authors claim invariance is needed; however, the paper lacks an explanation of whether and why it is important in hashing. The same applies to the use of causal features.
A1. Thanks for your comment. The reason for using invariance for hashing is to learn domain-invariant hash codes, which is quite important for effective cross-domain retrieval. In particular, in cross-domain retrieval, given a query in one domain, we aim to retrieve similar samples from the other domain. Clearly, we need to enforce that samples with the same semantics have a small distance, which requires domain invariance. The reason for using causal features is that causal features are related to semantics instead of backgrounds, which can generate hash codes that are discriminative across semantics and invariant to backgrounds.
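As an illustrative aside (with hypothetical hash codes, not from the paper), the cross-domain retrieval setting described in the answer above amounts to ranking gallery codes by Hamming distance, which is why codes for same-semantics samples must be domain-invariant:

```python
# Illustrative sketch with made-up binary codes: cross-domain retrieval ranks
# gallery samples (other domain) by Hamming distance to the query's hash code.
def hamming(a, b):
    """Number of differing bits between two binary codes."""
    return sum(x != y for x, y in zip(a, b))

def retrieve(query_code, gallery):
    """gallery: {name: code}; return names sorted by increasing Hamming distance."""
    return sorted(gallery, key=lambda name: hamming(query_code, gallery[name]))

query = [1, 0, 1, 1, 0, 0, 1, 0]                      # e.g. a photo in the source domain
gallery = {"dog_sketch": [1, 0, 1, 0, 0, 0, 1, 0],    # same semantics, other domain
           "cat_sketch": [0, 1, 0, 1, 1, 0, 0, 1]}
print(retrieve(query, gallery))  # → ['dog_sketch', 'cat_sketch']
```

Only when semantically matching samples land at small Hamming distance across domains does the ranking stay correct, which is the invariance the rebuttal argues for.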
> Q2. This work is a simple combination of several techniques, e.g., contrastive learning, causal learning, invariance learning. It does not provide new insights to me.
A2. Thanks for your comment. Actually, our method does not utilize classic contrastive learning. The novelty of the proposed method is three-fold:
- **Generation-based causal model**. We propose a generation-based structural causal model to analyze the relationship under domain shift, which is the first in domain adaptation and image retrieval to our best knowledge.
- **Implementation: disentangle and intervene**. To resolve the domain shift, our method disentangles each image into causal features and non-causal features and then adds interventions with reconstruction, which is quite different from existing causal learning and invariance learning methods.
- **A unified framework**. We incorporate our components in a unified framework for domain adaptive hashing, which can achieve superior performance in cross-domain retrieval.
> Q3. I have some concerns on the empirical results. In ablation study, the most important modules claimed in this work, i.e., L_V, L_MI have little improvement on the performance, and thus the importance of the two modules is questionable.
A3. Thanks for your comment. We have added more ablation studies on more datasets, and the results are shown below. From the results in Table 3 and below, we can observe that the performance gain of the full model is between [0.7%, 18.5%] and over 1% in most cases, which validates the effectiveness of the proposed components.
| Dataset | Office-Home | | Office31 | |
|---------|:------------:|:------------:|:-----------:|:-----------:|
| Tasks | Product2Real | Real2Product | Webcam2Dslr | Dslr2Webcam |
| w/o L_v | 58.23 | 60.92 | 83.88 | 87.22 |
| w/o L_MI | 57.87 | 60.46 | 83.71 | 87.32 |
| Ours | 59.18 | 61.84 | 84.97 | 88.69 |
In light of these responses, we hope we have addressed your concerns, and hope you will consider raising your score. If there are any additional notable points of concern that we have not yet addressed, please do not hesitate to share them, and we will promptly attend to those points.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Some of my concerns have been addressed. I will increase my score a bit.
---
Reply to Comment 1.1.1:
Title: Thanks again for your feedback and increasing the rating!
Comment: Thanks again for your feedback and for increasing the rating! We are pleased to know that we have resolved some of your concerns, and we will definitely add these results in the final version according to your suggestion.
We sincerely thank you for your dedication and effort in evaluating our submission. Please do not hesitate to let us know if you need any clarification or have additional suggestions. | null | null | null | null | null | null |
Fragment-based Pretraining and Finetuning on Molecular Graphs | Accept (poster) | Summary: In this paper, the authors proposed contrastively and predictively strategies for pretraining GNNs based on graph fragmentation. Using principle subgraph extraction, the authors pretrain two separate encoders for molecular and fragment graphs, capturing structural information at different resolutions.
Strengths: 1. The paper is well-organized and easy to follow.
2. The authors conduct comprehensive comparisons with baselines to show their advantages.
3. The fragment-level contrastive pretraining framework is novel, which captures both granular patterns and higher-order connectivity.
4. The t-SNE visualization in Figure 3 clearly shows the effectiveness of the authors' design.
Weaknesses: 1. The technical contribution is limited. For example, the principle subgraph extraction module is borrowed from [19].
2. The authors do not clearly state how to choose hyperparameter alpha.
3. In Table 3, the effects of the vocabulary size are only explored on three datasets.
4. The pretraining time is not reported in the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weakness.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitation is not clearly discussed. Please provide the discussion in the rebuttal period.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and insightful review. We are happy to answer your concerns.
(1) **The technical contribution is limited. For example, the principle subgraph extraction module is borrowed from [19].**
The focus of the paper is on developing pretraining and finetuning strategies based on fragment graphs. Our strategies are elegant and efficient, achieving improvements over recent strong baselines. The fragmentation component is not fixed and can be replaced with other similar frequency-based fragmentation methods. Please refer to point **(1)** from the main rebuttal for a detailed discussion regarding our contribution.
(2) **The authors do not clearly state how to choose hyperparameter alpha.**
We choose alpha from [0.1,0.5] with step size 0.1. The detailed performance on each value of alpha is reported in Table 6 from the Appendix.
(3) **In Table 3, the effects of the vocabulary size are only explored on three datasets.**
In **Table C** of the main rebuttal, we show the effect of varying vocabulary sizes on all 8 chemical benchmarks. We also added smaller vocabulary sizes (200, 400) as suggested by reviewer **RaR1**. We found interesting trends among datasets. Thank you for your suggestion.
(4) **The pretraining time is not reported in the paper.**
In general, it takes 4-5 hours to do one round of pretraining (100 epochs) for any of our models on one V100 GPU. We did not find a significant difference in training time between the pretraining variations in our work. We will report the training time in the revision.
(5) **The limitation is not clearly discussed.**
This is a valid concern. In general, we do not observe any potential negative societal impact from our work. We do recognize a few limitations:
- Our paper is empirical. The theoretical work for our paper is limited. We concluded the effectiveness of our designs based on downstream performances, ablation of training components, and visualization of the learned embeddings.
- Due to resource constraints, we did not fully optimize the hyperparameters during pretraining and finetuning. We generally went with hyperparameters that seem to work well, referring to those used in previous work. Still, our models compare competitively with results from other works.
- We pretrain on a subset of ChEMBL containing 456k molecules. Given that public databases containing millions of molecules are available, the size of our pretraining data could be a bottleneck for performance. However, this pretraining size ($10^5$) is reasonable for comparison with other baselines. GraphMVP used 50k molecules with 3D information for pretraining, while others use pretraining sets ranging from $10^5$ to $10^6$ in size.
- We mainly evaluated on 8 popular binary prediction datasets. Molecular property prediction is a considerably rich field with a variety of datasets suitable for benchmarking. Even so, the benchmarks we used are the most popular among existing works.
- We did not experiment with a variety of fragmentation methods. A recent paper [1], as pointed out by Reviewer **RaR1**, is another frequency-based fragmentation strategy comparable to the one used in our paper. It would be interesting to see how our methods perform with a variety of fragmentation methods, however, we'd like to leave this for future work.
[1] Zijie Geng, Shufang Xie, Yingce Xia, et al. De Novo Molecular Generation via Connection-aware Motif Mining. ICLR 2023.
---
Rebuttal Comment 1.1:
Title: Reply to the authors
Comment: I have read and appreciate the authors' reply. My concerns are mostly resolved. Thanks!
---
Reply to Comment 1.1.1:
Title: Thank you. Any further questions?
Comment: Thank you for acknowledging our reply. We are glad that your concerns are mostly resolved. We are happy to discuss any further clarifications to improve your evaluation and enhance the acceptance chance of the paper. | Summary: This paper presents a novel approach to pretrain Graph Neural Networks (GNNs) at the fragment level for property prediction on molecular graphs. By utilizing a compact vocabulary of prevalent fragments and introducing fragment-based contrastive and predictive pretraining tasks, the authors overcome the limitations of node-level and graph-level pretraining. Two different GNNs are pretrained to capture structural information at multiple resolutions, and the fragment information is utilized during finetuning. The proposed models show improved performances on both common molecular and long-range biological benchmarks.
Strengths: - The paper is easy to follow.
- The idea that using motif for pretraining is novel and reasonable.
Weaknesses: - Empirical performance is not strong enough. Especially in Table 2, it's hard to distinguish the absolute gain over the baselines. The authors are encouraged to report the average score over all tasks in molecular property prediction.
- The authors are also encouraged to compare with stronger baselines. For example, the authors can also compare JOAO V2 in addition to JOAO.
- Missing ablations: the authors add many components and loss functions to the system. It would be interesting to know how each contributes to the performance.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time reviewing our paper and your thoughtful insights! We are happy to address your concerns.
(1) **Empirical performance is not strong enough. The authors are encouraged to report the average score over all tasks in molecular property prediction.**
We understand that the average score is often reported in the literature to give an evaluation of the overall performance across benchmarks. However, we argue that this metric can be problematic, since the values and variances of the scores can differ substantially among benchmarks, and a few benchmarks can skew the average value. Instead, we opt for per-benchmark rankings and report the average ranking to evaluate overall performance.
We do agree with the reviewer that absolute values give a better impression of the performance. As a result, we will add an extra column reporting this metric. Our 3 models, GIN_C, GIN_CP, and GIN_CPF, are the top 3 regarding both AUC and Ranking.
| Baselines | AttrMasking | ContextPred | G-Motif | G-Contextual | GPT-GNN | GraphLoG | GraphCL | JOAO | GraphMVP | MGSSL | GIN_C | GIN_CP | GIN_CPF |
|:---------:|:-----------:|:-----------:|:-------:|:------------:|:-------:|:--------:|:-------:|:-----:|:--------:|:-----:|:---------:|:---------:|:---------:|
| Avg. Rank | 5.81 | 5.44 | 10.06 | 8.69 | 10.38 | 8.94 | 8.69 | 7.31 | 6.00 | 8.13 | **4.31** | **3.69** | **3.56** |
| Avg. AUC | 71.15 | 70.89 | 70.16 | 69.34 | 67.16 | 69.70 | 70.78 | 71.89 | 71.70 | 70.41 | **72.81** | **72.59** | **74.01** |
In terms of the strength of the empirical evaluation, we argue that our models achieved substantial improvement over existing methods. Please see our response to the next concern.
(2) **The authors are also encouraged to compare with stronger baselines. For example, the authors can also compare JOAO V2 in addition to JOAO.**
The following table compares our models with JOAO and JOAOv2:
| | BBBP | Tox21 | ToxCast | SIDER | ClinTox | MUV | HIV | BACE | Ave AUC |
|---------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|---------|
| JOAO | 70.2 ± 1.0 | 75.0 ± 0.3 | 62.9 ± 0.5 | 60.0 ± 0.8 | 81.3 ± 2.5 | 71.7 ± 1.4 | 76.7 ± 1.2 | 77.3 ± 0.5 | 71.89 |
| JOAOv2 | 71.4 ± 0.9 | 74.3 ± 0.6 | 63.2 ± 0.5 | 60.5 ± 0.7 | 81.0 ± 1.6 | 73.7 ± 1.0 | 77.5 ± 1.2 | 75.5 ± 1.3 | 72.12 |
| GIN_C | 71.5 ± 1.6 | **75.5 ± 0.4** | 63.8 ± 0.6 | 61.4 ± 0.9 | 78.6 ± 2.7 | **77.2 ± 1.5** | 76.3 ± 1.0 | 78.2 ± 3.4 | 72.81 |
| GIN_CPF | **72.0 ± 1.7** | 74.0 ± 0.7 | **63.9 ± 0.9** | **63.6 ± 1.2** | **84.7 ± 5.8** | 75.4 ± 1.9 | **78.0 ± 1.5** | **80.5 ± 1.8** | **74.01** |
In addition, Table B from the main rebuttal shows the average AUC across 8 chemical benchmarks of our models against strong baselines. The combination of our proposed pretraining strategies obtained 2.62%, 1.72%, and 1.29% relative improvement over JOAOv2, GraphMVP-G, and GraphMVP-C, respectively. When considering only the contrastive component, our model GIN_C (72.81) performs better than other contrastive models, including GraphCL (70.78), JOAOv2 (72.12), and GraphMVP (71.69).
(3) **Missing ablations: the authors add many components and loss functions to the system. It would be interesting to know how each contributes to the performance.**
To clarify, we have conducted ablations on the components proposed in the paper. These components are:
- C: contrastive pretraining.
- P: predictive pretraining.
- F: including fragment GNN in downstream prediction.
Because F requires C, all possible combinations are {C, P, CP, CF, CPF}. In the paper, we included 3 out of 5 combinations (C, CP, CPF) and showed in Table 1 of the paper that more components successively improve the performance. We added results with all 5 combinations in Table A of the main rebuttal.
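The valid ablation variants under the constraint stated above (F requires C) can be enumerated with a short sketch; the names C, P, F follow the rebuttal's notation:

```python
# Illustrative sketch: enumerate ablation variants over components
# C (contrastive), P (predictive), F (fragment GNN in downstream prediction),
# keeping only combinations where F appears together with C.
from itertools import combinations

components = ["C", "P", "F"]
valid = []
for r in range(1, len(components) + 1):
    for combo in combinations(components, r):
        if "F" in combo and "C" not in combo:
            continue  # F depends on the contrastively pretrained fragment encoder
        valid.append("".join(combo))
print(valid)  # → ['C', 'P', 'CP', 'CF', 'CPF']
```

This reproduces the five combinations {C, P, CP, CF, CPF} discussed in the rebuttal.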
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
Thank you for the detailed explanation and most of my concerns are addressed. I am happy to change my score. Also, please add these new numbers to your revision.
Thanks,
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for your consideration. We appreciate your time reviewing and your help making the paper stronger. We will include the new numbers in the final revision. | Summary: Based on the belief that learning with fragments can help capture structural information at multiple resolutions, this paper proposes a fragment-based strategy for pretraining and fine-tuning.
First, the paper applies an existing heuristic, the Principle Subgraph Mining algorithm, to extract fragments from a large molecular dataset (i.e., the ChEMBL database). Then, the paper uses two separate GNNs (one for molecules and one for fragments) to learn the node embeddings. In particular, the node embeddings obtained by the molecule-based GNN are pooled into fragment embeddings.
The model is trained with respect to three tasks: a contrastive task between fragment embeddings obtained by molecular-based GNN and fragment-based GNN, a fragment existence prediction task, and a fragment graph structure prediction task.
The experiments are done on 8 binary graph classification tasks from MoleculeNet and 2 graph prediction tasks on large peptide molecules from the Long-range Graph Benchmark.
Strengths: + The proposed method is easy to follow and conduct.
+ The results on Long-range Graph Benchmark are particularly good.
+ Figure 1 clearly shows the method.
+ Codes are provided. Appendix further provides more details.
Weaknesses: The idea of using molecular fragments can be interesting. The proposed method shows some effectiveness, although how this effectiveness is obtained can be less illuminating.
Many design choices need to be further described. Please reply to my questions below.
In addition, some writing issues exist. Sentences cannot start with "[reference]".
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. The fragment extraction strategy can be described and compared in more detail. The authors choose the Principal Subgraph Mining algorithm to extract fragments. Did you try others, like BRICS or RECAP? Does any empirical evidence support this choice?
On line 53, the authors comment on MGSSL that "their multi-step fragmentation may overly decomposes molecules, losing the ability to represent high-order structural patterns". Can you expand on this? Why?
On line 122, the authors wrote that "however, existing methods use suboptimal fragmentation or fragment embeddings". Likewise, can you provide more discussion about it? Why their methods obtain suboptimal fragments while yours can? Any theoretical proof?
2. The method assigns different fragment embeddings to two identical fragments appearing in the same molecule. Have you tried forcing them to be the same? I get that the local neighborhoods of these two identical fragments can be different. But would it also help capture generic information if each fragment were uniquely represented?
3. On line 159, the authors wrote that "An edge exists between two fragment nodes of GF if there exists at least a bond interconnecting atoms from the fragments." Will this cause too many edges between fragments? I suspect that most fragments are connected in this case. Therefore, the topology of the fragment graph is lost.
4. How to obtain ground-truth structural backbones?
5. More experimental details of fragment extraction algorithm are needed. Any hyperparameters? On line 308, the authors wrote that "we prepare two additional vocabularies with size 1600 and size 3200". How do you prepare that?
6. Why are the results on long-range tasks improved so much over existing works? Why do different tasks use different baselines?
7. Figure 4(d) is hard to follow. How do you assign fragment IDs? Why do you need the fragment ID as the x-axis?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: No potential negative societal impact of their work as far as I know.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the thoughtful questions!
(1) **The fragment extraction strategy can be described and compared with more details. ..., the authors comment MGSSL as "their multi-step fragmentation may overly decomposes molecules, losing the ability to represent high-order structural patterns"?**
Rule-based methods (BRICS, RECAP) produce:
- Very large vocabularies.
- Fragments with mostly very low occurrence (a Zipf-like distribution).
- High-occurrence fragments that are mostly small.
A very large vocabulary with mostly rare and unique fragments is a major challenge for learning embeddings. One solution is further fragmenting the large and rare fragments to form a more concise vocabulary with mostly small fragments (MGSSL). However, if a fragment graph is constructed with small fragments, then this fragment graph is not much sparser than the original molecular graph. This means that the connectivity of the fragment graph is not much more global or higher-level than that of the molecular graph, which is the reason for our comment on MGSSL.
We set the requirements for a fragment vocabulary to have a precise size and to contain larger fragments with good occurrence. Principal Subgraph Mining provides these qualities, so it is suitable for our study.
(2) **On line 122, the authors wrote that "however, existing methods use suboptimal fragmentation or fragment embeddings". Likewise, can you provide more discussion about it?**
We discuss these points on lines 48-54. We'd like to further clarify them here:
- The fragments in GROVER are k-hop subgraphs surrounding atom nodes, which limits the kind of patterns these fragments can represent because chemical patterns come in various sizes and shapes.
- In MICRO-Graph, the fragment embeddings are contrasted with the graph embeddings. This encourages smoothing among the embeddings of fragments from the same graph since they all form positive pairs with the graph.
- MGSSL, as stated above, overly decomposes molecules into small fragments; hence, its fragment graph cannot effectively represent high-order connectivity.
We will add these explanations after line 122 in the revision.
(3) **The method assigns different fragment embeddings for two identical fragments appearing in the same molecule. Have you tried forcing them to be the same?**
Capturing structural information at multiple resolutions and enforcing the consistency between node embeddings and fragment embeddings are our main objectives. The fragment embeddings distill 2 pieces of information onto the node embeddings: the local neighborhood, which comes from the fragment type, and the positional information with respect to the global arrangement, which comes from embedding higher-order connectivity via applying a GNN on the fragment graph. Both are arguably important for downstream prediction. Forcing the embeddings of 2 fragments of the same type to be similar would strengthen the first piece of information and weaken the second. Instead, we leave the decision of weighting these 2 pieces of information to the contrastive learning and to the needs of the downstream task. Generic information regarding each fragment is always captured because we keep an optimizable dictionary of fragment embeddings, which is tuned via learning.
(4) **... I suspect that most fragments are connected in this case. Therefore, the topology of fragment graph is lost.**
To clarify, there will be only 1 edge between a pair of connected fragments in the fragment graph no matter how many bonds there are connecting the fragments. To answer the reviewer's question: no, there will not be too many edges between fragments. Generally, fragment graphs are sparse and tree-like.
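This deduplication can be illustrated with a small hypothetical sketch (not the authors' code): collapsing atom-level bonds through an atom-to-fragment mapping and storing edges in a set yields exactly one fragment-graph edge per connected pair, regardless of how many bonds cross between them.

```python
def build_fragment_graph(bonds, atom2frag):
    """Collapse atom-level bonds into fragment-level edges.

    bonds: iterable of (atom_i, atom_j) pairs.
    atom2frag: dict mapping each atom index to its fragment index.
    Multiple bonds between the same pair of fragments collapse
    into a single, non-featurized edge.
    """
    edges = set()
    for i, j in bonds:
        fi, fj = atom2frag[i], atom2frag[j]
        if fi != fj:  # ignore bonds internal to a fragment
            edges.add((min(fi, fj), max(fi, fj)))
    return edges

# Toy molecule: atoms 0-3 in fragment 0, atoms 4-5 in fragment 1,
# with two bonds crossing between the fragments.
atom2frag = {0: 0, 1: 0, 2: 0, 3: 0, 4: 1, 5: 1}
bonds = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 5), (4, 5)]
print(build_fragment_graph(bonds, atom2frag))  # {(0, 1)} — one edge despite two cross bonds
```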
(5) **How to obtain ground-truth structural backbones?**
We record a dictionary of all possible structural backbones from the pretraining dataset with NetworkX's graph hashing algorithm. To obtain ground-truth structural backbone on an input graph, we remove all node features, hash the empty fragment graph, and look up the hash value in the dictionary.
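The rebuttal only names "NetworkX's graph hashing algorithm"; assuming this refers to something like NetworkX's Weisfeiler-Lehman graph hash, the dictionary-and-lookup procedure could be sketched as follows (toy backbones, illustrative only):

```python
import networkx as nx

# Build a "backbone dictionary" from a set of featureless fragment graphs.
backbones = [nx.path_graph(4), nx.star_graph(3), nx.cycle_graph(5)]
backbone_dict = {nx.weisfeiler_lehman_graph_hash(g): idx
                 for idx, g in enumerate(backbones)}

# Look up the backbone of a new fragment graph: an isomorphic copy of the
# 4-node path (with relabeled nodes) hashes to the same value.
query = nx.relabel_nodes(nx.path_graph(4), {0: "a", 1: "b", 2: "c", 3: "d"})
label = backbone_dict[nx.weisfeiler_lehman_graph_hash(query)]
print(label)  # 0 — matched the path backbone
```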
(6) **...the authors wrote that "we prepare two additional vocabularies with size 1600 and size 3200". How do you prepare that?**
After each iteration of the fragmentation algorithm, a new larger fragment is added to the vocabulary by merging smaller ones, prioritizing frequency. The process repeats until the size of the vocabulary reaches a predefined value. To prepare larger vocabularies, we run the algorithm for more iterations.
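As a rough analogy only (on token sequences rather than graphs, and not the actual Principal Subgraph Mining implementation), this iterative frequency-based merging resembles byte-pair encoding: each iteration adds the most frequent adjacent pair as a new vocabulary entry until the target size is reached.

```python
from collections import Counter

def grow_vocab(sequences, target_size, base_vocab):
    """BPE-style analogy: repeatedly merge the most frequent adjacent
    pair into a new vocabulary entry until the target size is reached."""
    vocab = list(base_vocab)
    seqs = [list(s) for s in sequences]
    while len(vocab) < target_size:
        pairs = Counter()
        for s in seqs:
            pairs.update(zip(s, s[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merged = a + b
        vocab.append(merged)
        # Replace left-to-right occurrences of the pair in every sequence.
        for s in seqs:
            i = 0
            while i < len(s) - 1:
                if s[i] == a and s[i + 1] == b:
                    s[i:i + 2] = [merged]
                i += 1
    return vocab

vocab = grow_vocab(["CCO", "CCN", "CCC"], target_size=5, base_vocab=["C", "O", "N"])
print(vocab)  # first merge adds 'CC', then one more merged fragment
```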
(7) **Why results on long-range tasks can be much improved than existing works? Why different tasks use different baselines?**
Our method can effectively embed higher-order structural information. GNNs are inefficient in capturing long-range connectivity beyond a node's local neighborhood. In Table 2, GatedGCN with RWSE performs better than other GNNs because graph topological information (RWSE) is explicitly added to the node embeddings. Our pretraining effectively enforces this information into both node embeddings and fragment embeddings. To illustrate this point, Figure 3 shows that the fragment embeddings alone can well distinguish different structural backbones and node embeddings agree with the spatial arrangement within a molecule.
For the long-range tasks, we compared with GNNs to show that our pretrained GNN overcomes the short-range problem of GNNs.
(8) **Figure 4(d): How do you assign fragment ID? Why you need fragment ID as x axis?**
The fragments are arranged by decreasing size. The ID on the x-axis simply indexes the unique fragments in this order. The purple line shows the sizes of the fragments on the x-axis. Each horizontal plateau corresponds to a collection of fragments of the same size. The blue plot is a histogram showing the frequency of each fragment, with the spikes marking the most frequent fragments. These spikes are distributed across the x-axis, showing that the most frequent fragments come in all sizes, not just small ones.
---
Rebuttal Comment 1.1:
Title: Thanks for the reply. One question left.
Comment: Thanks for the reply.
Most of my concerns are cleared. But I DO have one question left.
In my question (4), I know that there will be only 1 edge between a pair of connected fragments. Given that your fragments are not very small, there can be many bonds interconnecting atoms from the fragments. My point is, will this lead most of the fragments to be connected? Can you provide more detailed statistics, such as the average number of connected fragments?
---
Reply to Comment 1.1.1:
Title: Fragment graph statistics
Comment: We report several statistics regarding the fragment graphs.
Number of fragment graphs by size:
| Size | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | ... | 30 |
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| Number of Graphs | 167 | 8433 | 76024 | 144940 | 112426 | 52607 | 20999 | 7742 | 3334 | 1570 | 788 | 496 | ... | 1 |
The smallest fragment graphs are of size 1 (standalone fragment) while the largest fragment graphs are of size 30. The majority of fragment graphs are of smaller sizes.
To illustrate the connectivity within fragment graphs, for each fragment graph size, we report the average number of edges. To save space, we only consider fragment graphs with maximum size 10:
| Size | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| Average Number of Edges | 0.00 | 1.00 | 2.02 | 3.07 | 4.17 | 5.33 | 6.51 | 7.74 | 8.88 | 10.11 |
Since our molecular graphs are connected, our fragment graphs are also connected (i.e., no disconnected islands). From the above table, we can see that, given a fragment graph of size $n$, the average number of edges connecting the fragments varies from around $n-1$ to $n$. When the fragment graphs are small, the average number of edges is closer to $n-1$, indicating that the fragment graphs are mostly tree-like. As the fragment graph size increases, more loops appear and thus the average number of edges deviates further from $n-1$.
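The observation that the average edge count stays near $n-1$ reflects the fact that a tree on $n$ nodes has exactly $n-1$ edges. A toy computation (with hypothetical stand-in graphs, not the paper's data) of the same grouped statistic:

```python
from collections import defaultdict

# Toy fragment graphs as (num_nodes, edge_list); trees have exactly n-1 edges.
fragment_graphs = [
    (4, [(0, 1), (1, 2), (2, 3)]),            # path: tree, 3 edges
    (4, [(0, 1), (0, 2), (0, 3)]),            # star: tree, 3 edges
    (4, [(0, 1), (1, 2), (2, 3), (3, 0)]),    # 4-cycle: one loop, 4 edges
    (5, [(0, 1), (1, 2), (2, 3), (3, 4)]),    # path: tree, 4 edges
]

by_size = defaultdict(list)
for n, edges in fragment_graphs:
    by_size[n].append(len(edges))

avg_edges = {n: sum(es) / len(es) for n, es in sorted(by_size.items())}
print(avg_edges)  # {4: 3.33..., 5: 4.0} — close to n-1 when graphs are tree-like
```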
To further illustrate the connectivity and sparsity within fragment graphs, in the following table, with varying vocabulary sizes, we show the average fragment graph size and the average number of edges:
| Vocabulary Size | 200 | 400 | 800 | 1600 | 3200 |
|-|:-:|:-:|:-:|:-:|:-:|
| Average Graph Size | 5.92 | 5.17 | 4.61 | 4.15 | 3.73 |
| Average Number of Edges | 5.23 | 4.40 | 3.78 | 3.28 | 2.84 |
We hope that we could answer your question and improve your evaluation of the paper.
We are happy to follow up if you have any other concerns. | Summary: The authors propose a novel method for generating representations for molecule graphs where two GNNs are contrastively learned. Using these new representations, the authors achieve good results compared to a variety of baseline methods.
Strengths: The paper and method are presented clearly.
The results are strong and the method is interesting + well explained
Weaknesses: I found the presentation of Figure 3 a bit confusing
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I realise that many molecular benchmarks are based on similar organic compounds, but I am curious how the method would behave over a wider range of molecules in material science.
Would it be possible to consider fragments as even larger common structures? Would this aid in longer-range predictions?
> We reduce the learning rate by a factor of 0.1 every 5 epochs without improvement.
I'm curious if a different learning rate schedule would give better results. How is no improvement defined?
Long-range structural information may involve 3D information, where things that appear far apart on the graph may not be. I'm curious if this could be measured in some way? Could this method be applied to domains where 3D information is more directly used?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time spent reviewing our paper! We appreciate the thoughtful comments and are happy to address your questions regarding the wider applicability of the model.
(1) **I realise that many molecular benchmarks are based on similar organic compounds, but I am curious how the method would behave over a wider-range of molecules in material science.**
This is an interesting question! Indeed learning and pretraining on material science compounds is an underexplored area compared to that of organic molecules. There are some differences between the two domains.
There are a wide variety of materials and it's hard to discuss all of them given the limitations of this forum. In general, we can say that while organic compounds are small, stand-alone structures, materials have constituting units. Their properties are defined by both the arrangement of these units and the units themselves (for example: polymers with monomers, lattices with unit cells, and metal-organic frameworks). For applications of GNNs in this domain, the graph representation needs to reflect these special characteristics. For example, [1] constructs graphs for crystal lattices based on the connectivity within a unit cell. There have been investigations regarding the performance of GNNs on various types of material property prediction [2]. Most recently, several works have explored pretraining on materials, such as metal-organic frameworks [3] [4].
Our method can be applied at different scales, to capture structural information either within the constituting units or of the unit arrangement. However, more research is needed to find an effective way of combining and utilizing these multiscale arrangements. It would be interesting to see the performance of the method when applied to domains in which patterns differ from those in organic compounds, but surely some domain-specific adjustments are needed. We are confident that it would be easier to adapt to materials in which organic patterns are present, such as polymers or metal-organic frameworks, but more investigation will need to be done for other types of compounds. In our work, we pretrained on 2 levels of topological resolution, but it is quite straightforward to extend to more levels. We can have a separate loss to handle each pair of layers $n$ and $n+1$. However, as pretraining information can be distilled beyond 2 levels, we need a mechanism to control and tune this flow.
(2) **Would it be possible to consider fragments as even larger common structures? Would this aid in longer-range predictions?**
It would be quite interesting to do so. However, a major problem is that as the size of substructures grows, they become rarer and more unique, because in general smaller substructures are more likely to repeat (think singular atoms, bonds, or simple rings). Even when we could extract high-frequency large fragments, this is not always useful, as we discussed in Section 4.4 and showed in Table 3. Larger fragments may overly "blur" the higher-order connectivity, leading to worsened performance.
There are cases when we indeed want much larger fragments, such as when working with macromolecules. In those cases, the substructure rarity problem still exists. One possible idea is to look for common substructures within the fragment graphs and graphs with even higher order of connectivity and enforce this information up and down the hierarchy. This expansion may lead to better performance on long-range prediction.
(3) **I'm curious if a different learning rate schedule would give better results. How is no improvement defined?**
No improvement means no new minimum of the loss is found after the given number of epochs (5 in this case).
We believe the performance would likely be better with learning rate tuning. However, due to limited resources, we generally went with hyperparameters that seemed to work well. The hyperparameters we used are similar and comparable to those of the baselines. The factor value of 0.1 is typical in practice, and the patience value of 5 epochs came from our observation of a few initial training cycles.
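The stated rule (multiply the learning rate by 0.1 after 5 epochs without a new loss minimum) can be sketched in plain Python; this is an illustrative approximation of a reduce-on-plateau schedule, not the authors' training code:

```python
def reduce_on_plateau(losses, lr=1e-3, factor=0.1, patience=5):
    """If no new loss minimum is seen for `patience` consecutive epochs,
    multiply the learning rate by `factor` and reset the counter."""
    best, wait = float("inf"), 0
    history = []
    for loss in losses:
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                lr *= factor
                wait = 0
        history.append(lr)
    return history

# 3 improving epochs, then 7 flat epochs: the LR drops once,
# at the 5th epoch without improvement.
hist = reduce_on_plateau([1.0, 0.8, 0.6] + [0.6] * 7, lr=1e-3)
print(hist[-1])  # ~1e-4 after the single reduction
```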
(4) **Long range structural information may involve 3D information, where. things that appear far apart on the graph may not be. I'm curious if this could be measured in some way? Could this method be applied to domains where 3D information is more directly used?**
3D coordinates are valuable information that would likely help property prediction. However, such information is expensive to obtain on a large scale. When it is available, pretraining can help distill such knowledge into the molecule's embeddings, like in GraphMVP [5] or 3D-InfoMax [6]. In that case, our learned embeddings, which encode long-range information, likely adapt well to incorporating 3D information.
[1] Xie, T., & Grossman, J. C. (2018). Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties. Physical review letters, 120(14), 145301.
[2] Fung, V., Zhang, J., Juarez, E., & Sumpter, B. G. (2021). Benchmarking graph neural networks for materials chemistry. npj Computational Materials, 7(1), 84.
[3] Cao, Z., Magar, R., Wang, Y., & Barati Farimani, A. (2023). Moformer: self-supervised transformer model for metal–organic framework property prediction. Journal of the American Chemical Society, 145(5), 2958-2967.
[4] Kang, Y., Park, H., Smit, B., & Kim, J. (2023). A multi-modal pre-training transformer for universal transfer learning in metal–organic frameworks. Nature Machine Intelligence, 5(3), 309-318.
[5] Liu, S., Wang, H., Liu, W., Lasenby, J., Guo, H., & Tang, J. (2021). Pre-training molecular graph representation with 3d geometry. arXiv preprint arXiv:2110.07728.
[6] Stärk, H., Beaini, D., Corso, G., Tossou, P., Dallago, C., Günnemann, S., & Liò, P. (2022, June). 3d infomax improves gnns for molecular property prediction. In International Conference on Machine Learning (pp. 20479-20502). PMLR. | Rebuttal 1:
Rebuttal: We want to thank the reviewers for their time and insightful comments!
We would like to use this space to summarize and address some common concerns. Citations follow those in the paper.
**(1)** Reviewer **RaR1** and Reviewer **wcQm** raise concerns about the level of contribution of our work. Reviewer **wcQm** reasoned that the lack of novelty comes from the borrowed fragmentation technique from another work [19]. We used [19] because this algorithm is suitable for our requirements. The fragmentation component is not fixed and can be replaced with other frequency-based fragmentation methods. Our contribution is in the pretraining and finetuning strategies with fragment graphs. For that reason, we respectfully disagree with reviewer **RaR1**'s statement that the paper is a combination of existing methods. We studied fragment-level pretraining, an interesting direction where few prior works exist [26], [46], [47]. Ours is the only beside [46] that does fragment-level contrastive pretraining. Compared to previous work, we proposed to:
- Conduct cross-level contrastive learning between node embeddings and fragment embeddings.
- Learn separate sets of embeddings for nodes and fragments, and train separate models for molecular graphs and fragment graphs.
- Contrast fragment embeddings with aggregated embeddings of the nodes corresponding to the fragments. This design allows much more flexibility in learning multi-resolution topology compared to previous works.
- Utilize the fragment graphs in both predictive pretraining and downstream prediction.
Our methods are not only elegant but also efficient. Compared to [47], which took 20 hours to train, each of our models took less than 5 hours to train in total (using similar hardware) while obtaining superior performance.
**(2)** Reviewer **RaR1** and Reviewer **Br6k** indicated a lack of ablation study regarding the various components we proposed. In particular, we have 3 unique components:
- C: contrastive pretraining using separate molecule GNN and fragment GNN.
- P: predictive pretraining utilizing information from fragment graphs.
- F: including fragment GNN in downstream prediction.
To clarify, we have conducted considerable ablations on these components. Because F requires C, all possible combinations of these components are {C, P, CP, CF, CPF}. In the paper, we included 3/5 combinations (Table 1: C, CP, CPF) and showed that more components successively improve the downstream performance, confirming the positive contribution of each component. We report results on all combinations in Table A. We only include the average AUC and ranking (among the combinations) here and will include the full details in the final revision.
**Table A**
|Model|GIN_C|GIN_P|GIN_CP|GIN_CF|GIN_CPF|
|:-:|:-:|:-:|:-:|:-:|:-:|
|Avg AUC|72.81|69.05|72.59|73.73|74.01|
|Avg Rank|2.88|3.69|2.75|3.19|2.50|
C and F are stronger contributors than P; however, when combined with the others, P has a positive impact on the overall performance. The disparity between AUC and Rank arises because strong performance on a few benchmarks can skew the average AUC. Between GIN_CP and GIN_CF, GIN_CF has a better AUC because it did particularly well on ClinTox and ToxCast, but GIN_CP has a better overall ranking because it has more pretraining than GIN_CF.
**(3)** Reviewer **RaR1** and Reviewer **Br6k** raised questions regarding the strength of our results and improvements. For comparison, the following Table B shows the average AUC across 8 common chemical benchmarks. GraphMVP-G and GraphMVP-C are recent strong baselines, and JOAO-v2 is a strong contrastive baseline suggested by Reviewer **Br6k**.
**Table B**
|Baselines|GraphCL|MGSSL|JOAOv2|GraphMVP|GraphMVP-G|GraphMVP-C|GIN_C|GIN_CPF|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Avg AUC|70.78|70.41|72.12|71.69|72.76|73.07|72.81|74.01|
The combination of our proposed pretraining strategies in GIN_CPF obtained 2.62%, 1.72%, and 1.29% relative improvement over JOAOv2, GraphMVP-G, and GraphMVP-C, respectively. Notice that GraphMVP relies on 3D information, which is expensive to obtain and difficult to scale. Compared to generative methods (MGSSL and GraphMVP-G), our strategies are much faster to train. MGSSL takes 20 hours while GIN-CPF takes less than 5 hours to train. Considering only the contrastive component, our model GIN-C (72.81) performs better than others, including GraphCL (70.78), JOAOv2 (72.12), and GraphMVP (71.69).
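As a quick arithmetic check, the quoted relative improvements follow directly from the Table B averages:

```python
# Relative improvement of GIN_CPF's average AUC over each baseline,
# using the numbers from Table B.
ours = 74.01
baselines = {"JOAOv2": 72.12, "GraphMVP-G": 72.76, "GraphMVP-C": 73.07}

rel = {name: round(100 * (ours - auc) / auc, 2) for name, auc in baselines.items()}
print(rel)  # {'JOAOv2': 2.62, 'GraphMVP-G': 1.72, 'GraphMVP-C': 1.29}
```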
**(4)** Reviewer **wcQm** and Reviewer **RaR1** recommend expanding the analysis of the effects of the vocabulary size. This is a valuable suggestion and we are happy to address it. In particular, we added 2 smaller vocabularies with sizes 200 and 400 to the analysis and report the downstream performances of GIN_C on all 8 datasets in Table C.
**Table C**
| Vocab Size |BBBP|Tox21|ToxCast|SIDER|ClinTox|MUV|HIV|BACE|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|200|69.8±1.5|75.6±0.8|63.5±0.8|61.3±0.7|74.8±4.3|74.9±1.8|76.9±1.0|79.4±1.6|
|400|71.6±1.4|75.8±0.5|63.8±0.4|61.3±0.7|75.2±6.6|75.7±2.5|75.9±1.0|77.5±2.5|
|800|71.5±1.6|75.5±0.4|63.8±0.6|61.4±0.9|78.6±2.7|77.2±1.5|76.3±1.0|78.2±3.4|
|1600|71.1±1.6|75.4±0.8|63.9±0.9|60.4±0.5|76.4±4.6|76.1±2.1|75.3±1.6|79.0±4.3|
|3200|71.3±0.8|75.4±0.4|63.7±0.6|60.6±0.5|75.5±5.9|77.1±1.9|75.1±1.3|79.4±4.5|
The size of the vocabulary directly influences the size of fragments. Small fragments may result in fragment graphs that are too granular to capture high-level connectivity effectively. On the other hand, fragments that are too large may lead to an overly loose view of the graph and a loss of structural information. The results suggest that, in general, each task has an optimal sweet spot for the vocabulary size, to which it should be tuned. For example, the optimal vocabulary size for Tox21 is 400 while the optimal size for BBBP falls between 400 and 800. SIDER favors smaller vocabulary sizes while MUV favors larger vocabulary sizes. | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper proposes a contrastive and predictive strategy for pretraining GNNs based on graph fragmentation. Specifically, it leverages a frequency-based method for extracting molecule fragments, and performs fragment-based contrastive and predictive tasks to jointly pretrain two GNNs on a molecule graph and a fragment graph. It also enforces the consistency between fragment embeddings and atom embeddings for multi-resolution structural information.
Strengths: 1. The paper is easy to follow.
2. The paper investigates an interesting fragmentation strategy for pretraining tasks.
3. The proposed method enforces the consistency between fragment embeddings and atom embeddings for multi-resolution structural information, which is a promising trick. Experiments demonstrate the effectiveness of this method.
Weaknesses: 1. The technical novelty is limited because it is a combination of existing methods, while the performance improvement is not very remarkable.
2. The authors may want to conduct ablation studies on the effect of molecule fragmentation strategy and the pretraining strategy, respectively.
3. Table 3 shows that the performances worsen on some downstream benchmarks as the vocabulary size grows larger. The authors may want to investigate a smaller vocabulary size. When it reaches 1, the method is the same as without fragments.
4. A related work [1]---which leverages a similar frequency-based motif extractor and uses contrastive learning for generative training---is missing.
[1] Zijie Geng, Shufang Xie, Yingce Xia, et al. De Novo Molecular Generation via Connection-aware Motif Mining. ICLR 2023.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The results of MGSSL [1] are different from what they report in the original paper. Why?
2. Do you kekulize the molecules? Or how do you deal with the possible breakage of aromatic systems?
3. Why do you use a subset of ChEMBL for pretraining? How is the dataset processed?
[1] Zaixi Zhang, Qi Liu, Hao Wang, Chengqiang Lu, and Chee-Kong Lee. Motif-based graph self-supervised learning for molecular property prediction. NeurIPS 2021.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful suggestions! We are happy to address your concerns.
(1) **The technical novelty is limited, because it is a combination of existing methods**
We would like to argue that our work is not simply a combination of existing methods. Besides Micro-Graph [1], ours is the only work that does fragment-level contrastive pretraining. Our proposed fragment-based pretraining and finetuning strategies are novel and efficient. We used an existing fragmentation procedure; however, the fragmentation component is not fixed and can be replaced with other frequency-based fragmentation methods. Please refer to point **(1)** in the main rebuttal for a detailed discussion regarding our contributions.
(2) **The performance improvement is not very remarkable.**
For comparison, **Table B** from the main rebuttal shows the average AUC across 8 common chemical benchmarks of our models against strong baselines. The combination of our proposed pretraining strategies obtained 2.62%, 1.72%, and 1.29% relative improvement over JOAOv2, GraphMVP-G, and GraphMVP-C, respectively. When considering only the contrastive component, our model GIN-C (72.81) performs better than other contrastive models, including GraphCL (70.78), JOAOv2 (72.12), and GraphMVP (71.69).
(3) **The authors may want to conduct ablation studies on the effect of molecule fragmentation strategy and the pretraining strategy.**
To clarify, we have conducted certain ablations on the components proposed in the paper, which are:
- C: contrastive pretraining.
- P: predictive pretraining.
- F: including fragment GNN in downstream prediction.
Because F requires C, all possible combinations are {C, P, CP, CF, CPF}. In the paper, we included 3/5 combinations (C, CP, CPF) and showed that more components successively improve the performance. We provide the full results with all 5 combinations (**Table A** in the main rebuttal). Given the time limit of the rebuttal period and the focus of our paper on the pretraining and finetuning instead of the fragmentation, we leave the ablation on the fragmentation strategies for future work.
(4) **The authors may want to investigate a smaller vocabulary size. When it reaches 1, the method is the same as without fragments.**
To clarify, the vocabulary size cannot go down to 1 since it should at least contain the unique singular-atom fragments. We added 2 additional experiments with smaller vocabulary sizes (200 and 400) and show the full results on 8 benchmarks (**Table C** in the main rebuttal). Thank you for your suggestion.
(5) **A related work [1] ... is missing.**
Thank you for the pointer. We will include this citation in the revision.
(6) **The results of MGSSL [1] are different from ... the original paper. Why?**
We reran MGSSL and GraphLoG using the code provided by the authors to ensure the results are comparable to other baselines (10 runs). The results in the MGSSL paper used 3 runs, and the results for GraphLoG are from the last epoch, not the best validation epoch.
(7) **Do you kekulize the molecules? Or how do your deal with the possible breakage of aromatic systems? Why do you use a subset of ChEMBL for pretraining? How is the dataset processed?**
The benchmark datasets and pretraining dataset are curated data from existing works. These datasets have been cleaned in a standard manner, including kekulization. The size of our pretraining dataset (456k) is comparable to those used to pretrain the baseline methods and is optimal for our limited computing resources. As far as we know, the motif mining algorithm does not produce fragments with incomplete aromatic systems. Even so, our pipeline is robust to such cases because of how the fragment graphs are constructed. We have a single edge connecting two neighboring fragments. This edge is non-featurized, and no matter the type of connection between two fragments (single bond, multiple bonds, overlapping atoms, etc.), there is only one edge connecting them. Essentially, the fragment graphs only contain the high-level arrangement within molecules and not granular bonding information. As a result, the pipeline is robust to cases in which fragments may contain incomplete aromatic systems, such as fragmentation based on functional groups.
[1] Zhang, S., Hu, Z., Subramonian, A., & Sun, Y. (2020). Motif-driven contrastive learning of graph representations. arXiv preprint arXiv:2012.12533.
---
Rebuttal Comment 1.1:
Title: Thanks for the response. Increasing my score from 4 to 6.
Comment: I appreciate the authors' efforts in responding to my concerns. I have raised my score from 4 to 6.
I understand the technical novelty of the paper now, and I admit that it has some interesting insights, especially the cross-level contrastive learning between node embeddings and fragment embeddings, although I still think these techniques are mainly implementation details and somewhat incremental.
I understand that a frequency-based motif extractor is suitable for the model, but a comparison with commonly seen motif extractors such as that in JT-VAE will make the paper stronger. Considering the time limitation during rebuttal, I expect the authors to include the comparison in a future revision.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for your consideration. We are happy that we have addressed your concerns. Your suggestion is valuable to us and we will conduct further experiments with existing fragmentations, such as the one used in JT-VAE, in the final revision. | null | null | null | null | null | null |
Hierarchical clustering with dot products recovers hidden tree structure | Accept (spotlight) | Summary: For agglomerative clustering, this paper proposes a method in which the affinity of clusters at each level of the hierarchy is naturally visualized as their height. Section 2 describes its statistical model (a nonparametric model based on first-order moments) and presents the clustering algorithm derived from it (Algorithm 1). Algorithm 1 is extremely simple (needless to say, this simplicity is a virtue of the proposed method) and is only a slight modification of many standard agglomerative clustering methods. Despite its simplicity, however, the deep insights behind the algorithm are explained in Section 3. Remarkably, roughly speaking, the affinity of each hierarchical cluster is expressed as its height in the dendrogram, which, under certain assumptions, asymptotically approaches its true value. Finally, Section 4 describes the effectiveness of the proposed algorithm on real data, as well as its limitations, and provides code for follow-up studies.
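To make the merging rule above concrete, here is my own minimal Python reconstruction of how I understand Algorithm 1 (a naive O(n^3)-style sketch with hypothetical names, not the authors' code):

```python
import numpy as np

def dot_product_hac(Y):
    """Greedy agglomerative clustering with the scaled dot-product affinity
    <x, y> / p, merging the pair of clusters with the highest average
    pairwise affinity; the affinity at each merge doubles as its height."""
    n, p = Y.shape
    clusters = {i: [i] for i in range(n)}
    merges = []
    while len(clusters) > 1:
        keys = list(clusters)
        best, best_aff = None, -np.inf
        # Greedily pick the pair with the highest average affinity.
        for i, a in enumerate(keys):
            for b in keys[i + 1:]:
                aff = np.mean([Y[s] @ Y[t] / p
                               for s in clusters[a] for t in clusters[b]])
                if aff > best_aff:
                    best, best_aff = (a, b), aff
        a, b = best
        merges.append((a, b, best_aff))  # affinity serves as dendrogram height
        clusters[a] = clusters[a] + clusters.pop(b)
    return merges
```

On n points this produces n-1 merges, whose recorded affinities are the heights at which the dendrogram can be drawn.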
Note: I was not able to understand the details (techniques in the supplementary material) of some of the theoretical analysis in this paper within the initial review period. For Theorem 1, I was able to follow the outline of its key mathematical induction. For Theorem 2, I have not been able to grasp the details. However, the statements in the text are very reliable, and the authors have made an effort to add intuitive explanations to these theoretical results.
Strengths: - This paper is an excellent combination of a simple methodology and deep insights behind it. Needless to say, methodological simplicity is a virtue of the proposed method.
- The phenomenon of affinities between clusters at each level of the hierarchy appearing visualized as heights on the dendrogram will be an attractive result for a wide range of readers, both theoretically and in terms of application.
- It is excellent that the theoretical results (Theorem 1 and Theorem 2) are carefully interpreted in Section 3.3 as detailed intuitions.
- The authors also share the code (Jupyter Notebook) for re-experimentation with the community, which is a great contribution for subsequent researchers.
- In Section 5, the paper explicitly mentions the slightly stronger assumption of the proposed model and clarifies for what cases (what data) it has a negative impact. This is an essential finding in the development of science and technology.
Weaknesses: I have not been able to find any convincing weaknesses in this paper in the initial peer review. However, I have some concern about whether the statistical model assumed by the proposed method causes model misspecification for data with so-called multiple clusters. I have raised this concern with the authors in the questions below. If the concern is resolved, I see no major weakness in this paper.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: I am very grateful to the authors for sharing very insightful and interesting ideas. I have some questions, because this paper contains a simple methodology but very surprising (and perhaps intuitively non-trivial) results. These questions are the reason I am not giving this paper a higher overall rating at this time.
These may simply be my lack of insight, but perhaps a brief mention in the text or in the supplementary material would make the advantages of the proposed method clearer to some readers.
I would be very grateful if a response from the authors could clear up some of my questions.
(1) Another aspect of the model assumptions (Section 2.1).
It is very interesting that the only model assumption mentioned in this paper is the chain rule in Equation 3, and everything else can be expressed nonparametrically. However, does Equation 3 imply that "all data are i.i.d. samples from a single cluster (a single nonparametric distribution that assumes only first-order moments)"? In other words, does model misspecification occur for data sets with multiple clusters (e.g., a 2-cluster Gaussian mixture model)? If we use the chain rule in Equation 3 to marginalize the random variables at each node in turn, from the terminal nodes to the root node of the tree structure, this may be a model in which all data are sampled i.i.d. from some nonparametric distribution with identical mean values (only the first-order moments are specified). Is this intuition wrong? If this intuition is correct, then surely "PCA does not do anything wrong" in the second half of Theorem 2 (Equation 9) makes a lot of sense. Conversely, if the proposed statistical model is such that it has multiple clusters (like a Gaussian mixture model, where each cluster tends to gather data at its center), then it is highly counterintuitive that "still PCA does not do anything bad". In summary, my question comes down to whether the proposed model assumes a one-cluster i.i.d. model.
(2) The validity of greedily searching for the closest affinity pair in line 3 of Algorithm 1.
From the viewpoint of an agglomerative algorithm, it seems quite reasonable to greedily look for the closest-affinity pair in line 3 of the algorithm. On the other hand, from the perspective of a statistical model, a probabilistic selection (i.e., Markov chain selection), such as drawing a single pair from a categorical distribution weighted by affinity proportions, also seems natural. Does the fact that line 3 performs a greedy selection play an important role in the theoretical analysis in Section 3? The actual intent of this question is closely related to question (1), as follows.
For example, if the true data distribution is like a Gaussian mixture model with multiple clusters, is it possible to recover the true data distribution (mixture model structure) by, for example, truncating the tree at some hierarchy (resolution) of the tree structure obtained with the proposed model? In other words, would there not be a misspecification problem as a statistical model? My rough intuition tells me that it is not easy to recover the structure of a statistical model in the case of a greedy merge.
I also know that such a topic is a bit off the author's actual intent for this paper. However, the question arises because I am very curious as to why the statistical model behind the proposed method leads to very good properties (Theorem 1 and Theorem 2) in Algorithm 1.
(3) (Very minor.) Reason why DAG is first introduced.
Is there a reason why, when the tree model is introduced in lines 68-69, it is broadly defined over directed acyclic graphs (DAGs) rather than restricted to binary trees?
DAGs are often used as the structure of Bayesian networks. Are there any indications that the theoretical results of this study for binary trees may be extended to DAGs or Bayesian networks?
Finally, once again, I appreciate your sharing of some very interesting ideas.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper carefully examines the weaknesses of the proposed method in Section 5. It explains well for what kinds of data, and in what cases, the proposed method is unlikely to show its true value.
On another point, I am slightly concerned whether the statistical model in the proposed method implicitly assumes i.i.d. data with a single cluster, as I have asked the authors in my question. However, this concern may be addressed by a response from the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Weaknesses**
>> I have not been able to find any convincing weaknesses for this paper in the initial peer review. However, I have some concern about whether the statistical model assumed by the proposed method is causing model specification for data with so-called multiple clusters. I have inquired about this concern with the author in the following question. If the concern is resolved, I have not found any major weakness in this paper.
Please see below.
> **Questions:**
>> - Another aspect of the model assumptions (Section 2.1). ... It is very interesting to note that the only model mentioned in this paper is the chain rule by Equation 3, and everything else can be expressed nonparametrically. However, does Equation 3 imply that "all data are i.i.d. samples from a single cluster (a single nonparametric distribution that assumes only first order moments)"? In other words, is there model misspecification occurring for data sets with multiple clusters (e.g., a 2-cluster Gaussian mixture model)?
Thanks for this thought-provoking question.
We can cast a 2-cluster Gaussian mixture model as a special case of our model, but subject to the Gaussian cluster centres being random and themselves having the same mean. To set this up we need a tree with three vertices $\mathcal{V}=\\{0,1,2\\}$ with $0$ being the root, edges $(0,1)$ and $(0,2)$, and $\mathcal{Z}=\\{1,2\\}$. For simplicity, let $\mathbf{X}(0)$ be a constant (that is, not random), and let $\mathbf{X}(1)$ and $\mathbf{X}(2)$ be iid random vectors, each with mean $\mathbf{X}(0)$. If the vectors $\mathbf{E}_i$, for $i=1,\ldots,n$, are Gaussian, then overall $Y_1,\ldots,Y_n$ are draws from a 2-component Gaussian mixture with random centres, as claimed.
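To make this construction concrete, it can be simulated in a few lines (an illustrative sketch with arbitrary variance scales, not code from our experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 50, 200

# Tree with root 0 and leaves Z = {1, 2}: X(0) is a constant root centre,
# and the leaf centres X(1), X(2) are iid with mean X(0).
X0 = np.zeros(p)
X1 = X0 + rng.normal(0.0, 1.0, size=p)
X2 = X0 + rng.normal(0.0, 1.0, size=p)

# Each observation picks a leaf and adds Gaussian noise E_i, so overall
# Y_1, ..., Y_n are draws from a 2-component mixture with random centres.
Z = rng.integers(1, 3, size=n)
centres = np.where((Z == 1)[:, None], X1, X2)
Y = centres + rng.normal(0.0, 0.5, size=(n, p))
```

Conditionally on X(1) and X(2) this is an ordinary 2-cluster Gaussian mixture; marginally the centres are random with common mean X(0), as required by eq (2).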
We need $\mathbf{X}(1)$ and $\mathbf{X}(2)$ to have mean $\mathbf{X}(0)$ in order for eq (2) to hold, which is in turn needed to establish the relationship between affinity $\alpha$ and merge height $m$ in Lemma 1. This is really fundamental to our approach.
So, mathematically speaking, if data were generated from a model with deterministic cluster centers, then technically our model of the latent cluster centers would be misspecified. However, if we are only given data $Y_1,\ldots,Y_n$ we believe it is not possible to identify whether cluster centers are random or not, i.e. this technical model misspecification for latent cluster centers does not imply misspecification of the induced model of observed data. Hence, overall we think there is no problem here.
>> - (2) The validity of greedily searching for the closest affinity pair in row 3 of Algorithm 1. From the viewpoint of an agglomerative algorithm, it seems quite reasonable to greedily look for the closest affinity pair in line 3 of the algorithm. On the other hand, from the perspective of a statistical model, a probabilistic selection (i.e., Markov chain selection), such as finding a single pair from a categorical distribution weighted by affinity proportions, also seems natural. ... The actual intent of this question can be attributed to something that is also closely related to question (1), as follows. For example, if the true data distribution is like a Gaussian mixture model with multiple clusters, is it possible to recover the true data distribution (mixture model structure) by, for example, truncating the tree at some hierarchy (resolution) of the tree structure obtained with the proposed model? In other words, would there not be a misspecification problem as a statistical model?
The topic of recovering the data distribution is a fascinating one, but not one we have considered before. We note in passing that if one knows all the pairwise merge heights of a dendrogram (in our setup) then one can reconstruct the entire tree - this is discussed in appendix 3.C. What you suggest as an alternative to the greedy merge sounds somewhat like the integration performed in the EM algorithm for mixture models, or Bayesian approaches to hierarchical clustering, where the associations between data points and clusters are integrated out. We have not attempted to analyse such approaches in our framework, but this could be an interesting and new avenue of future research -- perhaps some kind of consistency result for likelihood-based or Bayesian model fitting -- but this is well beyond the scope of the present paper.
>> - (3) (Very minor.) Reason why DAG is first introduced. Is there a reason why when the tree model is introduced in lines 68-69, it is broadly defined as directed acyclic graphs (DAGs) rather than restricted to binary trees? DAGs are often used as the structure of Bayesian networks. Are there any indications that the theoretical results of this study for binary trees may be extended to DAGs or Bayesian networks?
The point of not restricting to binary trees is just to emphasise that our theory tells us the algorithm can produce useful output without assuming such binary structure, even though the algorithm itself involves binary merges. We haven't yet considered more general Bayesian networks, but again this could be an interesting and new avenue for future research.
Thanks very much for these stimulating questions!
---
Rebuttal Comment 1.1:
Title: Thank you for your detailed and insightful answers.
Comment: Thank you for your detailed and insightful answers. The author has given satisfactory answers to my three questions. I now have a better understanding of the current value of this study and new perspectives on future research. I would like to maintain my score as per my initial positive impression. | Summary: This paper presents new analysis and perspectives on one of the most widely used clustering algorithms, hierarchical agglomerative clustering (HAC). In particular, the authors consider the relationship between a particular generative process of data and a dot-product based linkage of HAC. Empirical and theoretical results are presented.
Strengths: This paper presents an interesting perspective on a widely studied clustering algorithm.
I don't think the intention of the paper is simply to show that, surprisingly, for certain kinds of data one linkage function (dot product) works better than others. Rather, I think the intention is to draw connections between clustering models and HAC.
In particular, I see the strengths as:
* Connections between the model for latent tree structures in Eq. 1 and Theorem 2 and dot-product based average linkage
* Connections between models for trees with heights and HAC.
* Empirical analysis of some of the theoretical scaling properties: Figure 2.
Weaknesses: I think that the paper could be improved in the following ways:
* W1. I think that more explicit treatment of Eq. (2) and (3) would improve the presentation; e.g., showing where/why these hold under the model in Eq. (1).
* W2. My understanding is that Lemma 4 & Prop 1 look quite similar to standard HAC proofs about reducibility? I think it would greatly improve the treatment of the result to explain the differences / distinctions. Apologies if I have missed something.
* W3. While the evaluation metric seems quite reasonable, might also be interesting to show results across other metrics such as dendrogram purity or similar metrics. I see this as a minor point though.
* W4. With the advent of distributed (typically relatively low-dimensional) representations from deep neural encoder models, I wonder if the authors could provide perspective on "theorem 2 says that affinity estimation error vanishes if the dimension p grows fast enough relative to n". E.g., should we think of this method as appropriate in circumstances where p is likely in the range of 128-1024 or so and n might be in the millions or billions?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * Q1. How could the presentation be modified to address W1?
* Q2. (W2) How does reducibility related to the proofs of Lemma 4 / Prop 1?
* Q3. (W3) did you consider using any other metrics for evaluation? If you were to add another metric to provide a new slice of information, which metrics/measurements would you want to add?
* Q4. Please see W4
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Weaknesses**
>> - W1. I think that more explicit treatment of Eq. (2) and (3) would improve the presentation; e.g., showing where/why these hold under the model in Eq. (1).
(2) and (3) are not a consequence of (1), but rather distributional assumptions we make about the ingredients of (1); we would happily clarify this in the manuscript. We could make (2) and (3) numbered assumptions, of the form (Ax), if that would be clearer.
>> - W2. My understanding is that Lemma 4 & Prop 1 looks quite similar to standard HAC proofs about reducibility? I think it would greatly improvement the treatment of the result to explain the differences / distinctions. Apologies if I have missed something.
The arguments involved in the proofs of Lemma 4 & Prop 1 are indeed similar in flavour to those concerning reducible linkage functions in e.g.:
Sumengen, Baris, et al. "Scaling hierarchical agglomerative clustering to billion-sized datasets." arXiv preprint arXiv:2105.11653 (2021).
and historical references therein. However we are not aware of any results in the literature which give us precisely what we need for Lemma 4 and Prop 1, and indeed we derived them from scratch. We could of course clarify this in the manuscript with appropriate references.
>> - W3. While the evaluation metric seems quite reasonable, might also be interesting to show results across other metrics such as dendrogram purity or similar metrics. I see this as a minor point though.
Thanks for this suggestion. Looking up dendrogram purity in:
Heller, Katherine A., and Zoubin Ghahramani. "Bayesian hierarchical clustering." Proceedings of the 22nd international conference on Machine learning. 2005.
it appears to be applicable in cases where the ground-truth labels specify a partition of the data points, i.e. a "flat" clustering, but are not hierarchical in nature. By contrast, the tau-B correlation measure in our manuscript is constructed to compare the estimated with the ground-truth hierarchical labelling, and indeed our theoretical results concern the quality of the estimated hierarchy. We agree it could be interesting to consider other metrics, but the connection of such metrics to our theory might be quite unclear if they don't directly quantify hierarchy recovery. This could be an avenue for future investigation.
>> - W4. In the advent of distributed (typically relatively low) dimensional representations from deep neural encoder models. I wonder if the authors could provide perspective on "theorem 2 says that affinity estimation error vanishes if the dimension p grows faster enough relative to n". E.g., should we think of this method as appropriate in such circumstances where p is likely in range of 128-1024 or so and n might be in the millions or billions?
The polynomial moment parameter $q$, appearing in the convergence rate $n^{2/q} / p^{1/2}$ in theorem 2, is key here. Under assumption A2, $q$ quantifies how light/heavy the tails of the distributions of the data are -- higher $q$ corresponding to lighter tails. In the numerator of $n^{2/q} / p^{1/2}$ we see that increasing $q$ acts to ameliorate the effect of increasing $n$. Furthermore, as mentioned on line 272 in section 4.2, in line with our empirical findings (fig. 2) we conjecture that when A2 is strengthened from polynomial to exponential-of-quadratic (i.e. sub-Gaussian) moments, the convergence rate improves from $n^{2/q} / p^{1/2}$ to $(\log_e n)^{1/2} / p^{1/2}$. This means that, theoretically, convergence would occur as long as $p$ grows faster than $\log_e n$, in which case $p$ in the range $128-1024$ you mention could be enough to counteract $n$ in the range $10^6 - 10^9$.
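As a rough numerical illustration of the conjectured sub-Gaussian rate $(\log_e n)^{1/2} / p^{1/2}$ (back-of-the-envelope arithmetic only, ignoring constants):

```python
import math

def rate(n, p):
    # Conjectured sub-Gaussian convergence rate (log n)^(1/2) / p^(1/2),
    # up to unspecified constants.
    return math.sqrt(math.log(n)) / math.sqrt(p)

for n in (10**6, 10**9):
    for p in (128, 1024):
        print(f"n={n:.0e}, p={p}: rate ~ {rate(n, p):.3f}")
```

Even at n = 10^9, the numerator sqrt(log n) is only about 4.6, so p in the hundreds already drives the rate well below 1; moving from n = 10^6 to n = 10^9 barely changes it.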
>**Questions:**
>> - Q1. How could the presentation be modified to address W1?
See above.
>> - Q2. (W2) How does reducibility related to the proofs of Lemma 4 / Prop 1?
See above.
>> - Q3. (W3) did you consider using any other metrics for evaluation? If you were to add another metric to provide a new slice of information, which metrics/measurements would you want to add?
Further to our response on this topic above, to illustrate the applicability of our theory in greater detail, it would be desirable to have ground-truth merge heights available, so that estimated versus ground-truth merge heights could be used as a metric. Such ground-truth merge heights might potentially be available in some phylogenetic application domains, and in future we hope to engage with experts in such domains to investigate this further. If in future we could extend our theoretical results from merge-height estimation to quantifying the quality of "flat" clusterings derived from those merge heights, then the dendrogram purity measure you suggest could be an interesting addition.
>> - Q4. Please see W4
See above.
---
Rebuttal Comment 1.1:
Comment: Thanks very much for your response. I appreciate the additional clarifications.
* I indeed think that writing (2) and (3) as numbered assumptions would be better. Along with examples where assumptions hold and do not hold.
* I think that mentioning reducibility in the discussion of the proofs would be an improvement to the paper.
* I agree with your assessment of dendrogram purity.
* Your comment on W4 is quite helpful, thank you. I think it would improve the paper to add this, and perhaps more (e.g., more emphasis on this point in Figure 2), as a remark (at least in the supplement).
The proposed method is shown to recover the underlying tree structure with guaranteed accuracy under reasonable theoretical assumptions. Empirical results indicate good performance in a number of cases where the ground-truth hierarchy is known.
Strengths: - the theoretical guarantees appear very strong
Weaknesses: - presentation should be improved so that the problem under study is clearer
- it seems that methodologically, there is no novelty beyond proposing to use the dot product as a similarity measure in the UPGMA method
- limited set of benchmark methods in the empirical part
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I am familiar with phylogenetics and probabilistic graphical models, but despite reasonable efforts, I was unable to comprehend the modeling setup. The authors should clearly explain what the model in Eq. (1) means. In particular, the meaning of the latent variables $Z_i$, and of the two mappings, $X(Z_i)$ and $S(Z_i)$, that generate the observed data from the latent variables and the random noise $E_i$, should be explained.
Another general thing I was left wondering about is the logic behind the dot product as an affinity measure. It is common to use the cosine similarity $\cos \theta = \frac{\langle x, y\rangle}{||x|| ||y||}$, where $\theta$ is the angle between the vectors $x$ and $y$, and $\langle x, y\rangle$ is the dot product between them. In other words, the cosine similarity is the dot product scaled by the product of the Euclidean norms of the two vectors. Such scaling can be motivated by noting that without normalization, the dot product has the peculiar property that, for instance, $\langle Cx, x\rangle = C \langle x, x\rangle$, i.e., for $C>1$, the vectors $Cx$ and $x$ are more similar (in terms of the dot product) to each other than the vectors $x$ and $x$ (that is, the vector $x$ compared with itself). For normalized vectors, the dot product and the cosine similarity are identical. (Further, for normalized vectors, the Euclidean distance equals $\sqrt{2 - 2{\langle x, y\rangle}}$, so it too is a monotonic function of the dot product.) I would appreciate some comments on the motivation for using the dot product as opposed to commonly used metrics such as the Euclidean distance or the cosine similarity, and on when it is likely to be appropriate (and when not).
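For concreteness, the two elementary facts above can be checked numerically (a throwaway snippet of my own, unrelated to the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.normal(size=5), rng.normal(size=5)

# Scaling inflates the dot product: <Cx, x> = C <x, x>.
C = 3.0
assert np.isclose(np.dot(C * x, x), C * np.dot(x, x))

# For unit vectors, cosine similarity equals the dot product, and the
# Euclidean distance is a monotone function of it: ||u - v|| = sqrt(2 - 2<u, v>).
u, v = x / np.linalg.norm(x), y / np.linalg.norm(y)
cos = np.dot(u, v)
assert np.isclose(cos, np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
assert np.isclose(np.linalg.norm(u - v), np.sqrt(2 - 2 * cos))
```

So the distinction between the dot product and cosine similarity only matters when the vector norms are non-uniform, which is exactly the regime my question is about.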
detailed comments:
- I suppose some would prefer to use the $x \cdot y$ notation for dot product since $\langle x,y \rangle$ often denotes the (more general) inner product
- p. 2: "distributional properties $\mathbf{X}$": should this be "... of $\mathbf{X}$"?
- p. 7: When you say you compare against the UPGMA method, you should say what distance metric you use in UPGMA. In fact, it would be interesting to see the results of UPGMA (and other methods) based on various distance metrics (dot product, cosine, Euclidean, ...). And indeed, why not include more phylogenetic methods such as Neighbor-Joining (NJ), etc?
- appendix C, proof of Lemma 2: the second line of the displayed equation should have a factor 2 in front of the dot product term (as in $||x+y||^2 = ||x||^2 + 2 \langle x,y \rangle + ||y||^2$)
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Weaknesses**
> - presentation should be improved so that the problem under study is clearer
See response to questions below.
> - it seems that methodologically, there is no novelty beyond proposing to use the dot product as a similarity measure in the UPGMA method
Indeed, beyond the point of using the dot-product affinity measure, we are not claiming methodological novelty. Rather, our contribution is to put forward an entirely new perspective on a very well-known and very widely used type of algorithm.
> - limited set of benchmark methods in the empirical part
Please see below for clarifications - especially regarding UPGMA and the additional numerical results in appendix D.
>**Questions**
> - I am familiar with phylogenetics and probabilistic graphical models, but despite reasonable efforts, I fail to be able to comprehend the modeling setup. The authors should clearly explain what the model in Eq. (1) means.
We would happily add some commentary. For example, for each $v\in\mathcal{Z}$ one can think of the vector $\mathbf{X}(v)\in\mathbb{R}^p$ as the random centre of a "cluster", with the correlation structure of the cluster determined by the matrix $\mathbf{S}(v)$. The latent variable $Z_i$ indicates which cluster the $i$th data vector $\mathbf{Y}_i$ is associated with. The vectors $\mathbf{X}(v)$, $v\in\mathcal{V}\setminus\mathcal{Z}$, correspond to unobserved vertices in the underlying tree.
> -Another general thing I was left wondering is the logic behind the dot product as an affinity measure. It is common to use the cosine similarity ... I would appreciate some comments on the motivation of using the dot product as opposed to commonly used metrics such as the Euclidean distance or the cosine similarity, and on when it is likely to be appropriate (and when not).
We wonder if the reviewer may have missed appendix E in the supplementary material, "Understanding agglomerative clustering with Euclidean or cosine distances in our framework". We refer to appendix E on lines 46, 139, 232, 256 and 299 of the main part of the manuscript.
Section E.1 discusses the use of Euclidean distance as a similarity measure, and section E.2 makes the connection between our algorithm and using cosine distance. Theorem 3 in section E.3 shows that using cosine distance as a similarity measure can work well under an alternative model in which errors are multiplicative, rather than additive as in eq. (1) in the main part of the manuscript, and the "error-free" data vectors $\mathbf{X}(Z_i)$ all have the same expected square magnitude. Contrasting this with theorem 2 highlights that the dot-product affinity is a good choice to help remove additive noise, and can cope with data where expected square magnitudes vary across data points - see also figure 1 of the main manuscript for interpretation of dendrogram height.
In appendix E.4 we discuss the limitations of our modelling assumptions and failings of the dot-product affinities.
In connection with what the reviewer writes about $\langle Cx,x\rangle,$ we had thoughts along the same lines: we chose the word dot-product "affinity" rather than "similarity" purposefully; indeed it is possible that $\langle Cx,x\rangle >\langle x,x \rangle$, whereas it would be confusing to suggest that $Cx$ and $x$ are more "similar" to each other than $x$ is to itself. We would happily add a note to explain this in the manuscript.
> - I suppose some would prefer to use the $x \cdot y$ notation for dot product since $\langle x,y \rangle$ often denotes the (more general) inner product.
We see your point here and have experimented with other notation, but found the "$\cdot$" notation to be less visually clear in many equations.
> - p. 2: "distributional properties $\mathbf{X}$": should this be "... of $\mathbf{X}$"?
Thanks for catching this.
> - p. 7: When you say you compare against the UPGMA method, you should say what distance metric you use in UPGMA. In fact, it would be interesting to see the results of UPGMA (and other methods) based on various distance metrics (dot product, cosine, Euclidean, ...). And indeed, why not include more phylogenetic methods such as Neighbor-Joining (NJ), etc?
We had used "UPGMA" to refer specifically to the combination of average linkage and Euclidean distance; in retrospect we realise this is confusing. In fact, results for average linkage with cosine distance (i.e. UPGMA with cosine distance) are already given in table 1 in the main manuscript (there labelled "Cosine distance"). We could clarify this.
Following the reviewer's suggestion, in the .pdf attached to this response we give numerical results for UPGMA with Manhattan distance, and for UPGMA with the dot product as a dissimilarity measure (hence the "opposite" of our Algorithm 1). The performance of these algorithms is worse than that of Algorithm 1.
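To make these baselines concrete, here is a minimal sketch (not the paper's Algorithm 1) of running average linkage ("UPGMA") under different dissimilarities with SciPy, including a negated dot product as the "opposite" baseline; the toy data, sizes, and the non-negativity shift are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(0)
n, p = 20, 50
X = rng.normal(size=(n, p))  # toy data: 20 points in 50 dimensions

# Average linkage ("UPGMA") under two standard distance metrics.
Z_euclid = linkage(X, method="average", metric="euclidean")
Z_cosine = linkage(X, method="average", metric="cosine")

# A dot-product-based *dissimilarity* (negated affinity), i.e. the
# "opposite" baseline; shifted so the condensed distances are non-negative.
d = -(X @ X.T)
cond = d[np.triu_indices(n, k=1)]  # SciPy's condensed upper-triangle order
cond = cond - cond.min()
Z_dot = linkage(cond, method="average")

print(Z_euclid.shape, Z_cosine.shape, Z_dot.shape)  # each (19, 4)
```

Each linkage matrix has `n - 1` merge rows, so all three dendrograms can be compared with the same downstream tooling.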
We wonder if the reviewer may not have seen the additional numerical results in table 2 in appendix D of the supplementary material, where we compare our algorithm against other combinations of cosine and Euclidean distances, with complete and single linkage functions. In total across the main manuscript and appendix D there are comparisons of our method against 8 others:
main manuscript, table 1:
- "Cosine distance" (i.e. UPGMA with cosine distance)
- HDBSCAN
- "UPGMA" (i.e. UPGMA with Euclidean distance)
- Ward's method
appendix D, table 2:
- Complete linkage with Euclidean distance
- Complete linkage with Cosine distance
- Single linkage with Euclidean distance
- Single linkage with Cosine distance
so including the new UPGMA results, the total number of algorithms we compare against is 10.
Thanks for the suggestion about neighbour-joining. Our theory and performance measure concern hierarchy recovery, and in the time available we haven't found an implementation of neighbour-joining which allows hierarchy (rather than a "flat" clustering) to be extracted. With more time we could investigate this further.
> - appendix C, proof of Lemma 2: the second line...
Thanks for catching this.
---
Rebuttal Comment 1.1:
Title: Still not following the intuition about dot product affinity -- but overall, changing my rating to accept
Comment: Thanks for the response.
The clarification about the underlying model is helpful. However, I'm still equally puzzled by the intuition behind usingt the dot product (rather than its normalized version, the cosine similarity) as an affinity measure. I had indeed not read Appendix E since I expect that any content that is essential for understanding the main ideas presented in the paper are included in the main paper (otherwise one can asek what is the point of the page limit). In Appendix E, the authors provide some remarks on the relationship between dot product and cosine similarity -- concluding, e.g., that under norm constraints, cosine similarity and dot product affinity approximate each other (Thn. 3), which is (on an informal level) obvious as pointed out in my review above. The authors interpret this as showing that the cosine similarity can work well under such constraints. Still, it doesn't help in building intuition about why dot product would be a sensible choice in cases where the norms are not nearly uniform.
In any case, having also read the other reviews and the rebuttals related to them, I can only conclude that the theoretical results (mainly Thm. 1) seem to support the authors' claims about the good performance, and since this is (at least to my intuition) somewhat unexpected, I believe the paper is worth publishing and exposing to the wider community's evaluation.
Considering all of the above, I am changing my rating to accept. | Summary: The paper studies hierarchical agglomerative clustering under a specified generative process for the underlying data vectors. The main focus of the paper is similarity-based clustering that deals with inner products between vectors, rather than the pairwise distances that have been studied before.
The goal of the paper is to provide recovery guarantees for the underlying tree structure. The paper provides such guarantees under their specified generative model and they bound the maximum merge distortion by the affinity estimation error (see Theorem 1). Moreover, they provide tradeoffs for the estimation error based on the dimension of the data vectors and the sample size. Finally, the authors test their method in terms of runtime and quality against several other methods from the literature.
Strengths: +hierarchical tree recovery is not a well-understood problem so the overall research direction is interesting
+dot products are used a lot in real-world applications so having analyses based on them is important
+the flavor of the guarantees seems to be in the right direction for what we would mean by "hierarchical tree recovery"
+the technical aspects of the proofs
Weaknesses: -The generative model should be better explained and compared to other previously studied models for hierarchical clustering. In particular, it was not clear to me how to think of your model given that Hierarchical Stochastic Block Models and well-clusterable graphs are defined and seem more natural to me, see e.g. papers below:
Cohen-Addad, Kanade, Mallmann-Trenn, Mathieu: "Hierarchical clustering: Objective functions and algorithms"
Chatziafratis, Mahdian, Ahmadian: "Maximizing Agreements for Ranking, Clustering and Hierarchical Clustering via MAX-CUT"
Manghiuc, Sun: "Hierarchical Clustering: O(1)-Approximation for Well-Clustered Graphs"
So overall, having a better exposition and examples for why this particular model you proposed is a natural one, would help the readers a lot.
-The difference from other papers that focus on distance-based hierarchical clustering is important for the mathematics, but conceptually the algorithm is very related. So here the innovation in terms of conceptual contribution is slightly weakened. However, from technical viewpoint there are interesting ideas the paper introduces.
-In the statement of Theorems 1 and 2, I missed some interpretation of the results and especially some comparison to related work on distance-based methods. Do we learn something interesting from this dot-product analysis? How are the dimension/sample/estimation tradeoffs different from those in other papers?
-The numerical experiments: I am not sure they convey the benefits of your algorithm in terms of estimation error and recovery. In Table 1, I would clarify what are the most significant advantages of your algorithm as currently it is hard to grasp.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: From above:
Q1: Do we learn something interesting from this dot-product analysis, and how are the dimension/sample/estimation tradeoffs different from those in other papers that did distance-based analyses?
Q2: Perhaps easy, but giving more evidence to motivate your particular hierarchical tree model would be appreciated.
Q3: Technical question: Perhaps to help illustrate some of your techniques/ideas, it would be helpful to instantiate your model to a small-depth hierarchy. Say there are only 2 or 3 levels like you had in some of your experimental datasets (S&P500, 20Newsgroups etc). Can your theorems be more interpretable in this scenario?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I like the paper and I find that the authors addressing the concerns mentioned above will significantly improve their paper.
I don't find any major limitations, apart from better motivating their model, comparing it with the existing ones, and better illustrating their Theorems and what we learn from them (in comparison to distance-based methods).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The generative model should be better explained and compared to other previously studied models for hierarchical clustering. ... see e.g. papers below ... having a better exposition and examples for why this particular model you proposed is a natural one, would help the readers a lot.
Thanks for these references. In fact we already cite Cohen-Addad et al. "Hierarchical clustering: Objective functions and algorithms" in the last paragraph of section 1; it is item 14 in our bibliography. The paper of Manghiuc et al. falls within the cost-function-based framework of Dasgupta (bibliography item 15) cited and discussed in section 1. There we note that this existing modelling approach involves assuming an underlying ultrametric space whose geometry specifies the unknown tree. The paper of Chatziafratis et al. you point to is closely related to the paper of Charikar and Chatziafratis "Approximate Hierarchical Clustering via Sparsest Cut and Spreading Metrics" (bibliography item 11), also cited in section 1. There we highlight that high-dimensionality $p\to\infty$ is a key consideration in our analysis, whereas in existing works dimension is either fixed, or not considered at all.
Regarding the matter of how to think of our model, our entire setup is really quite different and new compared to existing works, so direct comparisons may not be very meaningful. However, one key feature is that the "ground truth" tree structure underlying our model is the conditional independence graph (see eq. (2) in the manuscript), as opposed to e.g. the geometry of an ultrametric as mentioned above. This is really a new perspective, and we think it is important because of the fundamental role that conditional independence plays in hierarchical statistical modelling.
Regarding exposition, we intended fig. 1 to help the reader understand how this tree structure relates to data, via the specific notion of dendrogram which we introduce in the paper. Regarding examples, one way we could add some content here is to expand upon the simulation example described in appendix D, which we use in our numerical results in section 4.
We would like to clarify a somewhat subtle point about our objectives: it is _not_ our main objective to specify a model which we necessarily believe is more natural than those in other works. Rather, our intention is to uncover the fact that a fairly standard algorithm, albeit using a dot-product affinity, allows the conditional independence tree underlying data to be recovered, under assumptions which we believe are quite general.
> - The difference from other papers that focus on distance-based hierarchical clustering is important for the mathematics, but conceptually the algorithm is very related. So here the innovation in terms of conceptual contribution is slightly weakened.
Indeed we are not claiming significant methodological innovation, but rather a new perspective on a well-known type of algorithm.
> - In the statement of Theorems 1 and 2, I missed some interpretation ... especially some comparison to related work on distance-based methods. Do we learn something interesting from this dot-product analysis? How are the dimension/sample/estimation tradeoffs different from those in other papers?
As noted above and in section 1 of the manuscript, in existing works, dimension $p$ is either fixed or does not appear at all, and various forms of convergence are driven by increasing sample size $n\to\infty$, whereas in our setup $p\to\infty$ drives convergence - see eq. (8) and (9). To our knowledge, this is the first time it has been established that high-dimensionality might have a beneficial effect on performance. The convergence rate $n^{2/q}/\sqrt{p}$ in theorem 2 is unusual. At first glance it may appear strange that sample size $n$ appears in the numerator, but this is a reflection of the very conservative $\max$-form of error we are considering on the r.h.s. of (8). Moreover, we see from $n^{2/q}$ that the polynomial moment parameter $q$ from assumption A2 ameliorates the effect of $n$ being large. These are all new insights.
Regarding comparison to distance-based methods, we note that supplementary material appendix E "Understanding agglomerative clustering with Euclidean or cosine distances in our framework" is dedicated to helping make such connections. In E.3 we establish a convergence result (theorem 3) similar to theorem 2 but concerning cosine distance, under a modified model in which noise is multiplicative rather than additive as in eq. (1).
> - The numerical experiments: I am not sure they convey the benefits of your algorithm... . In Table 1, I would clarify what are the most significant advantages of your algorithm as currently it is hard to grasp.
Thanks for this suggestion. The basic message here is that our algorithm does a better job of recovering the ground-truth hierarchy, except for the S&P500 data; shortcomings of our modelling approach which may explain this are discussed in appendix E.4.
>**Questions**
> - Q1: Do we learn something interesting from this dot-product analysis, and how are the dimension/sample/estimation tradeoffs different from those in other papers that did distance-based analyses?
See above.
> - Q2: Perhaps easy, but giving more evidence to motivate your particular hierarchical tree model would be appreciated.
Also see above.
> - Q3: Technical question: Perhaps to help illustrate some of your techniques/ideas, it would be helpful to instantiate your model to a small-depth hierarchy. ... Can your theorems be more interpretable in this scenario?
Thanks for this thought-provoking question. Having considered this, we think that the number of levels, as you suggest, does not enter very directly into our theoretical framework, and so we can't see an obvious simplification here. This may seem surprising, but we believe it is related to the fact that we quantify performance with merge heights, pair-wise between data points.
---
Rebuttal Comment 1.1:
Title: Author Response
Comment: I would like to thank the authors for their detailed response. My initial positive score remains the same. | Rebuttal 1:
Rebuttal: The authors are very grateful for the effort which the reviewers have put in to reading our manuscript and providing feedback.
We are pleased to see that, on the whole, the reviewers have recognised and engaged with the new perspective on hierarchical clustering and tree recovery which we report in this paper. Since this perspective is novel in various ways (e.g., in terms of our probabilistic model-based formulation involving conditional independence, emphasis on dot-product affinities, and theoretical investigation of high-dimensional behaviour) it necessitates the use of some concepts and notation which are non-standard in the literature on hierarchical clustering. The authors are very grateful for the various suggestions made by the reviewers which will help us improve the presentation in this regard.
We would like to highlight appendices D and E in the supplementary material of the original submission, in case they were missed. Appendix D.6 reports additional comparative numerical results. Appendix E, entitled "Understanding agglomerative clustering with Euclidean or cosine distances in our framework", provides methodological and theoretical perspectives to help make connections between our framework and agglomerative clustering using more standard distance measures.
Attached to this rebuttal is a .pdf containing additional numerical results in response to the comments from reviewers dt3f and FShC.
Pdf: /pdf/1f6b27077f16470299cf90da0badcef108e8709c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper discusses a new perspective on hierarchical clustering using the dot product as similarity measure instead of some distance. Under mild conditions on the probabilistic graphical model that generates the data, the proposed algorithm is shown to faithfully recover the underlying tree geometry. Surprisingly, the theoretical results show that the performance of the approach does not suffer, but instead benefits from high-dimensional data.
Strengths: The paper is well written and all theoretical results and definitions are accompanied by intuitive explanations (e.g., lines 84-90, lines 159-166, Fig. 1, or Sec. 3.3). This makes the paper accessible to an audience not versed in graphical models, for example. The results seem quite promising; it is noteworthy that the authors also included "negative" results for the S&P500 dataset, and that they made an effort in investigating the limitations of the proposed method.
Weaknesses: I see very few weaknesses in the paper (see below for a list of typos). There are only a few unclear points (see Questions below). The only concern I have is that the results claim that the performance of the method improves if the dimensionality of the data increases. This is probably caused by the mixing condition, which ensures that more dimensions deliver more relevant information. This assumption is quite questionable in my opinion, as usually more dimensions do not reveal substantially more information about the underlying distribution. As an example, consider natural images: going from low resolutions to higher resolutions may at first improve clustering performance, but at some point increasing the resolution will bring only diminishing returns. The reason is that, as the resolution increases, the correlation between dimensions also increases, which is in contrast to A1. I would appreciate seeing results based on, e.g., lower numbers of TF-IDF features to understand how this affects clustering performance.
Another concern is that Fig. 3a and 3c seem to be inconsistent. In 3a, at the position of comp.windows.x, there are three dark blue squares, one light blue square, and (maybe?) a hidden pink x. In 3c, the top five topics contain two dark blue and two light blue squares. Please check if there was a mistake in generating the figure.
## Minor:
- line 56: "interpretation to in all..."
- line 78: "distributional properties OF $\mathbf{X}$"
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - In line 102, it is not clear why $\mathbb{E}[< Y_i, Y_j>|Z_1,\dots,Z_n]$ is conditioned on all of $Z_1,\dots,Z_n$
- What is the meaning of the operator $O_{\mathbb{P}}$?
- In Table 1, the results for simulated data do not depend on whether PCA was applied or not. Is this caused by the fact that the dimensions are independent?
- Since the PCA version of the algorithm operates on a fixed dimension $r$, it is not clear to me why in (9) an increase of $p$ should bring any benefit. Could you expand on that?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors clearly outline the limitations of their study, which is highly appreciated. A possible further limitation of the method is that, apparently, good performance guarantees can only be given for high-dimensional data, which is slightly counterintuitive. Societal implications are not to be expected.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >The only concern I have is that the results claim that the performance of the method improves if the dimensionality of the data increases. This is probably caused by the mixing condition ... I would appreciate seeing results based on, e.g., lower numbers of TF-IDF features to understand how this affects clustering performance.
Thanks for this insightful comment. As requested, in the .pdf attached to this response, table 1 and figure 1, we give additional numerical results showing the effect of the number of TF-IDF features, i.e. varying $p$. These results were obtained by randomly choosing features to exclude, to give the desired $p$. We find that as $p$ grows, performance improves, without evidence of "diminishing returns". We note a slight increase in the standard error with $p$, but the standard errors are roughly 100 times smaller than the values of the performance measure (note the factor of $10^3$), so we think this is not a serious concern.
The mixing condition is indeed a key assumption. However, it is not the only reason that performance improves with dimension. To see why, note that from eq. (1),
$$
p^{-1}\langle\mathbf{Y}_i,\mathbf{Y}_j\rangle = p^{-1}\langle\mathbf{X}(Z_i),\mathbf{X}(Z_j)\rangle + p^{-1}\langle\mathbf{X}(Z_i),\mathbf{S}(Z_j)\mathbf{E}_j\rangle + p^{-1}\langle\mathbf{X}(Z_j),\mathbf{S}(Z_i)\mathbf{E}_i\rangle + p^{-1} \langle \mathbf{S}(Z_i)\mathbf{E}_i,\mathbf{S}(Z_j)\mathbf{E}_j\rangle.
$$
The mixing assumption allows us to prove $p^{-1}\langle\mathbf{X}(Z_i),\mathbf{X}(Z_j)\rangle-\alpha(Z_i,Z_j)\to 0$ (convergence in probability) as $p\to\infty$, whilst the properties that the "noise" vectors $\mathbf{E}_i$ are zero-mean, have independent entries, are independent of each other, $\mathbf{X}$ and $Z_1,\ldots,Z_n$, allow us to deduce that the second to fourth terms on the r.h.s. of the displayed equation above converge to zero as $p\to\infty$. In this latter sense, high-dimensionality can help remove noise, irrespective of whether the mixing condition A1 holds.
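The vanishing of the noise terms can be seen in a quick simulation; the following is a sketch under assumed toy distributions, with $\mathbf{S}(Z_i)$ taken as the identity, so the signal vectors and noise model here are illustrative rather than the paper's exact setup.

```python
import numpy as np

# Toy check of the decomposition above, with S(Z_i) = identity:
# for additive, zero-mean, independent noise, the three noise terms in
# p^{-1}<Y_i, Y_j> vanish as the dimension p grows.
rng = np.random.default_rng(1)

def affinity_error(p):
    x_i = rng.choice([-1.0, 1.0], size=p)  # stand-ins for X(Z_i), X(Z_j)
    x_j = rng.choice([-1.0, 1.0], size=p)
    y_i = x_i + rng.normal(size=p)         # Y_i = X(Z_i) + E_i, cf. eq. (1)
    y_j = x_j + rng.normal(size=p)
    return abs(y_i @ y_j / p - x_i @ x_j / p)

mean_err = {p: np.mean([affinity_error(p) for _ in range(50)])
            for p in (100, 10_000)}
print(mean_err)  # the error shrinks roughly like 1/sqrt(p)
```

The total variance of the three noise terms is of order $3/p$, so the average error at $p = 10{,}000$ comes out roughly ten times smaller than at $p = 100$, irrespective of any mixing in the signal.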
We agree that for image data as you describe, where $p\to\infty$ corresponds to increasingly high resolution, the mixing assumption is not realistic. Future work would be needed to investigate alternative assumptions in complete detail, but for example, in such situations it could possibly be realistic that $p^{-1}\langle\mathbf{X}(Z_i),\mathbf{X}(Z_j)\rangle-\alpha(Z_i,Z_j)$ is "close" to zero with some high probability for all $p$ large enough, without actually converging to zero. Another interesting possibility is that if one does _not_ assume the mixing condition A1, then one might consider $p^{-1}\langle\mathbf{X}(Z_i),\mathbf{X}(Z_j)\rangle$ as an affinity measure rather than $\alpha(Z_i,Z_j)$. Of course these are just hypotheses, but we believe the work in the manuscript could be a first step towards understanding such scenarios.
For some types of data, such as time series on increasingly long time scales, or geospatial data collected over increasingly large areas, $p\to\infty$ does _not_ correspond to higher resolution, and the mixing assumption is arguably realistic. For other data types, such as the document and biological data sets in our paper, it is perhaps less easy to firmly rule the mixing assumption in or out, but we believe our numerical results are encouraging.
>Another concern is that Fig. 3a and 3c seem to be inconsistent. ... Please check if there was a mistake in generating the figure.
The explanation here is that in 3c we do not plot the average dot product affinity between comp.windows.x and itself. However, this is plotted in 3a; the highest dark blue square in the comp.windows.x column corresponds to dot-product with itself. We could update this, or add a comment to clarify.
>**Minor:**...
Thanks for catching these typos.
>**Questions**
> - In line 102, it is not clear why $\mathbb{E}[\langle Y_i,Y_j\rangle|Z_1,\ldots,Z_n]$ is conditioned on all $Z_1,\ldots,Z_n$
Under our model indeed $\mathbb{E}[\langle Y_i,Y_j\rangle|Z_1,\ldots,Z_n]=\mathbb{E}[\langle Y_i,Y_j\rangle|Z_i,Z_j]$, we would happily amend line 102 to the latter.
> - What is the meaning of the operator $O_\mathbb{P}$?
Intuitively, the argument of $O_{\mathbb{P}}(\cdot)$ is the convergence rate. We would happily add a precise definition of this "big Oh in probability" notation to the manuscript: if $X_{p,n}$ is a random variable indexed by $p$ and $n$, then e.g. $X_{p,n}\in O_{\mathbb{P}}\left(n^{2/q} / \sqrt{p}\right)$ means that for any $\epsilon>0$ there exists a finite $\delta$ such that, for all $p$ and $n$:
$$
\mathbb{P}\left(|X_{p,n}|>\delta\, n^{2/q}/\sqrt{p}\right)<\epsilon.
$$
> - In Table 1, the results for simulated data do not depend on whether PCA was applied or not. Is this caused by the fact that the dimensions are independent?
From a rigorous mathematical point of view, we cannot answer this question definitively: it is possible that a conclusion similar to that in Theorem 2 for PCA holds under the mixing assumption A1 rather than the independence assumption, but we haven't proved it. In numerical experiments, we have found that some dependence does not adversely affect PCA, but a detailed examination of this matter would necessitate a longer paper.
> - Since the PCA version of the algorithm operates on fixed dimension, it is not clear to me why in (9) an increase of $p$ should bring any benefit. ...
The proof of theorem 2 and eq. (9) involves analysing the top $r$ eigenvalues and eigenvectors of the $n\times n$ matrix whose $(i,j)$-th entry is $p^{-1}\langle\mathbf{Y}_i,\mathbf{Y}_j\rangle$. As discussed above, $p^{-1}\langle\mathbf{Y}_i,\mathbf{Y}_j\rangle - \alpha(Z_i,Z_j) \to 0$ when $p\to\infty$. So $p\to\infty$ helps us prove the above mentioned eigenvalues/vectors are close to those of the matrix with elements $\alpha(Z_i,Z_j)$. For this we need a stronger form of convergence than element-wise matrix convergence; this is where the additional assumption of independence is used.
---
Rebuttal Comment 1.1:
Title: Thanks!
Comment: Thank you very much for your detailed answer, especially to my main concern. I agree with your observations that large $p$ helps to decrease noise (even in the absence of mixing), and that mixing is a realistic assumption in many practically relevant cases. I will, for now, keep my score, but may revise it based on the discussion with other reviewers. | null | null | null | null | null | null |
No-Regret Learning in Dynamic Competition with Reference Effects Under Logit Demand | Accept (poster) | Summary: The paper studies gradient descent dynamics in duopoly competitions with reference effects and logit demand.
Convergence results are proven to show that Online Projected Gradient Ascent (OPGA) with decreasing step size converges to a stationary Nash equilibrium. This is a novel result requiring new analysis because the considered game is neither convex nor strongly monotone nor variationally stable. The setup and results extend those of (Golrezaei et al., 2020), which however considered linear demands and a uniform reference price.
Strengths: - The paper is well written and the results are sound and original
- The authors did a good job of connecting the considered games with the existing literature and explaining why existing results do not apply
- The proven theorems are novel and the analysis seems somewhat original.
Weaknesses: - The analysis is very specific to the considered duopoly games. Although they are widely studied in the Marketing and Management literature, they may not be of high interest to the machine learning community. That said, online learning and convergence to NE is a relevant topic within NeurIPS.
- The authors repeatedly refer to no-regret (even in the title). However, the notion of regret is never defined and -- moreover -- bounds on the regret are not provided. I agree that last-iterate convergence to NE is a stronger guarantee, but it is not clear to me that this implies sublinear regret.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How realistic is it for the competing firms to compute exact gradients? Which set of information is required to do so at each round?
Also, as pointed out by the authors, one-point gradient estimates could be obtained. Does the current analysis allow one to draw conclusions on the resulting convergence results?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations are discussed in combination with future directions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback. Please find our responses below. We hope they address your concerns and provide further clarity.
> W1: The analysis is ... relevant topic within NeurIPS.
A: We understand your concern and agree that deriving convergence results for general online games is a meaningful research direction. Meanwhile, we believe that analyzing specific instances with practical importance is also a great contribution to the community. In our competitive framework, we consider a popular demand model, namely the MNL, which is a famous choice model from [R1] and has been empirically validated for its good representation of consumer purchasing behavior. Additionally, the general convergence results for online games are often built on certain assumptions, such as concavity [R2], strong monotonicity [R3], or variational stability [R4]. Hence, the general results usually fail or become weaker when applied to problems without corresponding properties, such as our problem. Yet, by leveraging the distinctive properties of the MNL model and developing a new line of analysis, we demonstrate that the global convergence and a $\Theta(1/t)$ rate still hold for our problem under minimal assumptions (the market feedback mechanism). For the above reasons, we believe our work aligns well with the scope of NeurIPS in terms of both practical significance and theoretical analysis.
> W2: The authors repeatedly refer to no-regret ... sublinear regret.
A: We appreciate the reviewer for pointing out the source of confusion. We will include the following definition of regret in our revised paper.
In a competitive framework, the ***regret*** of player $i$ is a quantity used to measure the performance of an online learning algorithm, which is the difference between the total reward of the best fixed action in hindsight and the realized total reward of player $i$ over $T$ periods. Formally, the regret of player $i$ at period $T$ is defined as $$R_i^T:= \max_{p_i \in \mathcal{P}} \bigg\\{\sum_{t = 1}^T \Pi_i\big((p_i,p_{-i}^t), \mathbf{r}^t\big)\bigg\\} -\sum_{t=1}^T \Pi_i\big((p_i^t,p_{-i}^t), \mathbf{r}^t\big).$$
Then, an algorithm is said to be ***no-regret*** if $R_i^T$ grows sub-linearly with respect to $T$ for every player $i$. The established results in our paper imply that OPGA is no-regret, with reasoning as follows:
* The convergence of the OPGA algorithm to the SNE (see Theorem 5.1 of the paper) guarantees that $\lim_{t \rightarrow \infty} (\mathbf{p}^t, \mathbf{r}^t) = (\mathbf{p}^{\star\star}, \mathbf{r}^{\star\star})$. Hence, when $T$ is sufficiently large, by the definition of SNE, the best fixed price in hindsight for firm $i$ is the SNE price $p_i^{\star\star}$ (note that when $p_{-i}^{t}\rightarrow p_{-i}^{\star\star}$ and $\mathbf{r}^t\rightarrow \mathbf{r}^{\star\star}$, then $p_i^{\star\star}$ is the unique optimal price for firm $i$).
* In the meanwhile, since the revenue functions are Lipschitz continuous in prices and reference prices, we have that $\lim_{t\rightarrow\infty}\big\\{\Pi_i(\mathbf{p}^{\star\star},\mathbf{r}^{\star\star})-\Pi_i(\mathbf{p}^t,\mathbf{r}^t)\big\\} = 0$. Therefore, it follows that $\lim_{T\rightarrow \infty} {R_i^T}/{T} = 0$ for every $i\in \\{H, L\\}$, which indicates that $R_i^T = o(T)$, i.e., the regret is sublinear.
Finally, we remark that our result is stronger than merely being no-regret, as the no-regret alone does not guarantee convergence at all, let alone the convergence to SNE. In fact, the players may exhibit entirely unpredictable and chaotic behaviors under a no-regret policy [R5], with the only exception being the finite game, where players only compete for finitely many rounds (note that our problem does NOT fall under this category).
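As a purely numerical illustration of the regret definition above (using a hypothetical concave per-period payoff in place of the paper's revenue function $\Pi_i$, with competitor and reference prices absorbed into the payoff), one can check that a constant suboptimal price incurs regret growing linearly in $T$:

```python
import numpy as np

# Hypothetical per-period payoff, NOT the paper's MNL revenue model:
# a concave function of the player's own price plus a period-dependent
# term common to all price choices (so it cancels in the regret).
def payoff(p, t):
    return -(p - 0.7) ** 2 + 0.01 * np.sin(t)

T = 100
price_grid = np.linspace(0.0, 1.0, 101)  # candidate fixed prices
played = np.full(T, 0.5)                 # the player always posts 0.5

realized = sum(payoff(played[t], t) for t in range(T))
best_fixed = max(sum(payoff(p, t) for t in range(T)) for p in price_grid)
regret = best_fixed - realized
print(regret)  # T * (0.7 - 0.5)^2 = 4.0: linear in T for a fixed bad price
```

Here the best fixed price in hindsight is 0.7, and the period-dependent term contributes identically to both sums, so the regret is exactly $T\,(0.7-0.5)^2$.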
> Q1: How realistic ... compute exact gradients? Which set of info ... round?
A: Thank you for your comments. Computing the gradient $D_i^t$ in Eq. (9) of the paper is feasible and realistic in practice. For firm $i$, the required information includes its previously posted price $p_i^t$, its own sensitivities to price and reference price $(b_i, c_i)$ (which can be estimated from historical data), and its demand from the last period. We note that it is reasonable to assume the firm has access to this information, as it pertains exclusively to its internal data, not that of its competitor. This also aligns with the opaque market setup in our paper: neither computing the gradient $D_i^t$ nor implementing the OPGA requires firms to have any knowledge of their competitors.
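To illustrate how local this computation is, here is a minimal Python sketch (the function name and numeric values are ours for illustration; the exact form of Eq. (9) is in the paper). It evaluates the MNL revenue derivative $d_i - p_i(b_i+c_i)d_i(1-d_i)$ using only firm $i$'s own price, its own sensitivities, and its last-period market share:

```python
# Sketch: firm i's revenue derivative under the MNL model, computed from
# firm-local data only (own price, own sensitivities b_i and c_i, and the
# market share d_i observed in the last period). The formula matches
# d(Pi_i)/dp_i = d_i - p_i*(b_i + c_i)*d_i*(1 - d_i).
def revenue_gradient(p_i: float, b_i: float, c_i: float, d_i: float) -> float:
    return d_i - p_i * (b_i + c_i) * d_i * (1.0 - d_i)

# Hypothetical numbers: price 5.0, sensitivities b_i = 0.5, c_i = 0.3,
# observed share 0.3 -> here the gradient is negative, so firm i lowers its price.
g = revenue_gradient(5.0, 0.5, 0.3, 0.3)
```

No competitor price, sensitivity, or demand appears anywhere in the computation, which is the point of the opaque-market setup.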
> Q2: Also, as pointed out ... convergence results?
A: Our current analysis is based on the exact gradient, and we recognize that it may be more practical to use noisy first-order feedback or even zeroth-order feedback. In terms of difficulties, we believe that the generalization to the noisy first-order oracle is relatively straightforward: the main difference would be an extra error term in the analysis, causing the price path to converge to a neighborhood of the SNE whose size is determined by the magnitude of the noise. The extension to the zeroth-order oracle could be meaningful future work. In addition, we would like to highlight that the core of this paper is to show that firms can achieve a stable equilibrium while safeguarding their private data. Our convergence results, even with an exact first-order oracle, are highly non-trivial, and the techniques developed for the convergence analysis are original.
### Reference
[R1] Conditional logit analysis of qualitative choice behavior.
[R2] Bandit learning in concave N-person games.
[R3] Optimal no-regret learning in strongly monotone games with bandit feedback.
[R4] Learning in games with continuous action sets and unknown payoff functions.
[R5] Multiplicative weights update with constant step-size in congestion games: Convergence, limit cycles and chaos.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal and explanations.
About the regret, I was hoping that a sublinear *bound* could also be derived as a function of $T$ (and not only asymptotically), e.g., $R(T) \leq C\sqrt{T}$ for some appropriate problem-dependent constant $C$. This is typical in the existing online learning literature, where last-iterate convergence is instead harder to guarantee.
I will keep my score.
---
Reply to Comment 1.1.1:
Title: Replying to Further Comment by Reviewer aAW7
Comment: Thank you very much for your reply and your extremely helpful comment! Upon checking the proof, we realize that our sublinear regret bound can also be derived as a function of $T$ (and not only asymptotically) as you indicated. Indeed, our Theorem 5.2 (last-iterate convergence rate in terms of price and reference price) already guarantees the $O(\sqrt{T})$ regret bound. We attach the proof below and will add this as a corollary to the revised paper.
For ease of notation, we denote $f^t(\cdot)=\Pi_i((\cdot, p_{-i}^t), \mathbf{r}^t)$. Then, we have that $R_i^T=\sum_{t=1}^T\left[f^t(p_i^{\star\star})-f^t(p_i^t)\right]$. By Theorem 5.2 in our paper, we have $|p_i^{\star\star}-p_i^t|=O(1/\sqrt{t})$. Hence, if $f^t(\cdot)$ is $\ell$-Lipschitz continuous for all $t\geq 0$ and some $\ell>0$, we can derive that
$$R_i^T\leq\sum_{t=1}^T\ell |p_i^{\star\star}-p_i^t|=O\left(\sum_{t=1}^T\ell/\sqrt{t} \right)=O(\sqrt{T}),$$which is exactly the regret bound suggested by the reviewer.
The only point remaining is justifying the Lipschitz continuity of $f^t(\cdot)$ for all $t$. This can be directly done by computing the derivative of the revenue function: $$\dfrac{\partial\Pi_i(\mathbf{p},\mathbf{r})}{\partial p_i}=\dfrac{\partial (p_i\cdot d_i(\mathbf{p},\mathbf{r}))}{\partial p_i} = d_i(\mathbf{p},\mathbf{r})-p_i\cdot (b_i+c_i)\cdot d_i(\mathbf{p},\mathbf{r})\cdot (1-d_i(\mathbf{p},\mathbf{r})),$$
where the computation in the last step above is similar to our proof of Lemma E.5. Hence, provided that $p_i\in \mathcal{P}=[\underline{p},\overline{p}]$, we have that $$\left|\dfrac{\partial\Pi_i(\mathbf{p},\mathbf{r})}{\partial p_i}\right|\leq 1+ \overline{p}\cdot(b_i+c_i)/4,$$ where we use the fact that $x\cdot(1-x)\leq 1/4$ for any $x\in [0,1]$. Hence, it suffices to choose $\ell = 1+ \overline{p}\cdot(b_i+c_i)/4$.
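As a quick numerical sanity check of the partial-sum step $\sum_{t=1}^T \ell/\sqrt{t} = O(\sqrt{T})$ (a sketch; the value of $\ell$ below is an arbitrary example, not derived from any particular model parameters):

```python
import math

# Check that the partial sums of ell/sqrt(t) stay below 2*ell*sqrt(T) for
# every horizon T, which is the inequality behind R_i^T <= C*sqrt(T):
# sum_{t=1}^T 1/sqrt(t) <= 1 + integral_1^T t^{-1/2} dt = 2*sqrt(T) - 1.
ell = 1.5  # example Lipschitz constant, e.g. of the form 1 + p_bar*(b_i + c_i)/4
for T in (10, 100, 10_000):
    regret_bound = sum(ell / math.sqrt(t) for t in range(1, T + 1))
    assert regret_bound <= 2 * ell * math.sqrt(T)
```

The integral comparison shows the constant $C = 2\ell$ suffices, matching the $O(\sqrt{T})$ rate in the derivation above.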
Combining the above, we have shown that $R_i^T\leq C\sqrt{T}$ for some constant $C$. We thank the reviewer again for the insightful comment, which we believe will further enrich the content of our paper. We hope this response provides further clarification : ) | Summary: The authors consider a multi-period pricing competition problem between two firms, dubbed H and L. Each firm sells one product. At each time step, the demand for each product is governed by an MNL model, which is parameterized not only by the firms' offered prices in the current time period, but also by the current reference prices of the firms. These reference prices depend on the past prices. In the game-theoretic setting, each firm must set prices under uncertainty about the other firm's strategy. The authors' main result is that, when each firm applies the standard online gradient descent algorithm, both converge to a stationary Nash equilibrium, and the rate of convergence is also provided.
Strengths: The pricing problem combines the game-theoretic element of a duopoly pricing setting with the reference price setting, both of which are well known in the literature. The provided algorithm is sensible and reasonable, and the results are supported with numerical experiments. The analysis of the gradient descent algorithm seems interesting, since it deviates from traditional Online Gradient Descent works that require convexity.
Weaknesses: - The model assumption on the demand model seems quite strong, since it requires the random noise to follow a particular probability distribution (Gumbel) in order for the result to hold. While there is another linearity assumption on the utility, I find it a milder assumption than the Gumbel assumption. While I understand that the Gumbel distribution, which gives rise to the MNL model, is a common assumption in assortment optimization, I find that it could be a limiting assumption in the pricing setting. Crucially, if the noise distribution is changed, it is not quite clear whether the OPGA still converges to a stationary Nash equilibrium.
- While Theorem 5.2 is a more refined form than Theorem 5.1 in that the former provides a finite-time performance guarantee, I am not clear on what the hidden parameter in the $\Theta(\cdot)$ notation for the learning rate $\eta_t$ is. While the OPGA can be implemented with any sequence of learning rates, it is desirable to have a set of default learning rates that come with an explicit performance guarantee.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (1) Can the authors provide a discussion on how easy or how difficult it is to generalize from the Gumbel distribution noise to a more general class of noise (for example, 1-sub-Gaussian)?
(2) As shown in Figure 1a, b, the choice of learning rate is rather important. Can the authors demonstrate a plot with the setting of learning rate $\eta^t = \Theta(1/t)$ in Theorem 5.2? I think it will be more informative than setting $\eta^t = 1/\sqrt{t}$ in Figure 1a. In addition, can the authors provide explicit expressions for the coefficients $d_r, d_p$ in Theorem 5.2, on how they depend on the MNL assumption and the model parameters?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This is a theoretical work; there is minimal negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments. We hope the responses below address your concerns. We highly appreciate a re-evaluation of our work and a kind reconsideration of the review score.
> W1: The model assumption ... equilibrium.
A: We agree with you that the Gumbel noise is somewhat restrictive. Yet the MNL is by far the most popular choice model, due to its elegant closed-form expression for market share. Beyond the assortment problem, the MNL choice model and its variants (e.g. nested logit) have also been widely studied in the pricing literature. We kindly refer the reviewer to [R1-- R9] for notable works on the pure pricing problem under the MNL and its variants.
> W2: While Theorem 5.2 ... performance guarantee.
A: We appreciate the reviewer's advice about having a default learning rate that guarantees the $\Theta(1/t)$ convergence rate. The hidden parameter in the choice of $\eta^t=\Theta(1 / t)$, denoted by $d_\eta$, hinges on Eq. (72): $$(\gamma d_\eta-1)d_p\geq C_3.\qquad(\*)$$
Recall that $\gamma$ is defined in Lemma E.3 and $C_3>0$ is defined in Eq. (68). Hence, if $\gamma d_\eta-1>0$, there exists a suitable $d_p$ that makes Eq. $(*)$ hold, which subsequently guarantees the $\Theta(1/t)$ convergence rate. Therefore, analytically, firms can adopt any step-size $\eta^t=d_\eta/(t+1)$ with $d_\eta>1/\gamma$. The only subtlety lies in $\gamma$, whose exact value depends on both firms' parameters and the SNE market share (Eq. (89) and the derivation afterward). Below, we describe a mechanism for firms to obtain a suitable step-size without disclosing private information.
It suffices to find a lower bound $\underline\gamma\in(0,\gamma)$ and let the firms agree on the choice $d_\eta=1/\underline\gamma>1/\gamma$. By Eq. (89), we have$$\gamma\geq\min_i\{b_i+c_i\}\cdot\min_i\{d_i^{\star\star}\}\cdot(1-d_H^{\star \star}-d_L^{\star \star}).\qquad(\**)$$
In Prop 3.1, we establish lower and upper bounds for $p_i^{\star\star}$. For brevity, denote Eq.(6) as $m_i<p_i^{\star\star}<M_i$, and the utility at SNE as $u_i=a_i-b_ip_i^{\star\star}$. Then, each firm $i$ can bound $u_i$ by $u_i^m:=a_i-b_iM_i<u_i<a_i-b_im_i=:u_i^M$. Instead of sharing the true values, each firm $i$ can disclose an arbitrary lower bound $m_i^{bc}>0$ for $b_i+c_i$, and two arbitrary numbers $\underline u_i^m$ and $\overline u_i^M$ with $\underline u_i^m\leq u_i^m$ and $\overline u_i^M\geq u_i^M$. Using Eq $(**)$, the two firms can compute that $$\gamma\geq\min_i\\{m_i^{bc}\\}\cdot\min_i\left\\{\frac{\exp(\underline u_i^m)}{1+\exp(\underline u_i^m)+\exp(\overline u_{-i}^M)}\right\\}\cdot\frac{1}{1+\exp(\overline u_{H}^M)+\exp(\overline u_{L}^M)}=:\underline\gamma.$$
Thus, firms can obtain the constant $d_\eta = 1/\underline{\gamma}$ without disclosing their information.
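The mechanism above can be turned into a short computation (a sketch; all numeric inputs below are made up, and `step_size_constant` is our illustrative name, not the paper's notation):

```python
import math

# Sketch of the information-sharing mechanism: each firm i discloses a lower
# bound m_bc[i] on b_i + c_i and crude utility bounds u_lo[i] <= u_i^m and
# u_hi[i] >= u_i^M; the firms then compute gamma_lb <= gamma and agree on the
# step size eta^t = d_eta / (t + 1) with d_eta = 1/gamma_lb > 1/gamma.
def step_size_constant(m_bc, u_lo, u_hi):
    # lower bound on each firm's SNE market share
    share_lb = min(
        math.exp(u_lo[i]) / (1 + math.exp(u_lo[i]) + math.exp(u_hi[1 - i]))
        for i in (0, 1)  # index 0 = firm H, index 1 = firm L
    )
    # lower bound on the outside-option share 1 - d_H - d_L
    outside_lb = 1 / (1 + math.exp(u_hi[0]) + math.exp(u_hi[1]))
    gamma_lb = min(m_bc) * share_lb * outside_lb
    return 1 / gamma_lb

# Made-up disclosed bounds for the two firms
d_eta = step_size_constant(m_bc=(0.4, 0.6), u_lo=(-1.0, -0.5), u_hi=(1.0, 1.5))
```

Since the disclosed quantities are only crude one-sided bounds, neither firm learns the other's true parameters, at the cost of a conservative (larger) $d_\eta$.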
> Q1: Can the authors ... (for example, 1-subGassuian)?
A: We kindly refer the reviewer to **General Response Q3** for background on choice models and the popularity of Gumbel noise. Shifting away from Gumbel noise introduces challenges. For example, multivariate Gaussian noise results in the probit model, while arbitrary noise distributions lead to the mixed logit model. Both lack closed-form expressions for the market share [R10], making the properties of their revenue functions difficult to characterize. Further, changes in the noise distribution can significantly affect equilibrium behavior in price competition [R5]. Specifically, while MNL models guarantee the existence and uniqueness of a Nash equilibrium (NE) [R11], such an NE may not even exist in models with general noise distributions [R5]. This absence creates significant hurdles, as the existence of an NE is fundamental for achieving a stable equilibrium.
Extending the noise distribution beyond Gumbel is an intriguing research direction, where the key first step may be to show that an NE exists under the distributions of interest.
> Q2: As shown in Figure 1a, b, ... parameters?
A: We performed additional experiments with $\eta^t=\Theta(1/t)$ (see Figure 2 in the rebuttal PDF), and we will add this figure to the revised paper.
Below, we elaborate on the expressions of $d_p$ and $d_r$. According to Eq.$(\*)$, $d_p$ depends on the choice of $d_\eta$. Without loss of generality, let $d_\eta=2/\gamma$ (it is easy to observe that $d_\eta = O(1/\gamma)$ yields the best $d_p$). By Eq.$(*)$, it suffices to take $d_p = C_3$ (see definition in Eq.(68)), which further implies that $d_p\leq C_1(d_\eta)^2+d_\eta kd_{rp}$ (the definitions of $C_1,k,d_{rp}$ can be found in Eqs.(54), (59), and (65)). After unfolding the definitions and using simple algebra, we observe that $$C_1=O(\sum_{i} (b_i+c_i)^2/\underline{p}^2)$$$$k= O((c_H+c_L)^2\sum_{i} (b_i+c_i)^2/\gamma)$$
$$d_{rp}=O\left(\frac{1+\alpha^2C_1}{(1-\alpha^2)^2\gamma^2} + \frac{3+\alpha^2}{1-\alpha^2}(\bar{p}-\underline{p})^2\right).$$
Combining the above pieces, we obtain that $$d_p = O\left(\frac{B^2}{\gamma^2(1-\alpha^2)}\left[\frac{B}{\gamma^2\underline{p}^2(1-\alpha^2)}+(\bar{p}-\underline{p})^2 \right]\right),$$
where $B=\sum_i(b_i^2+c_i^2)$. Finally, as $d_r=2d_p+2d_{rp}$ (see Eq.(76)), we conclude that $d_r$ has the same order as $d_p$.
### Reference
[R1] Optimal bundle pricing.
[R2] Product line selection and pricing with modularity in design.
[R3] Pricing multiple products with the multinomial logit and nested logit models: Concavity and implications.
[R4] Optimal pricing for a multinomial logit choice model with network effects.
[R5] Price competition under mixed multinomial logit demand functions.
[R6] Multiproduct price optimization and competition under the nested logit model with product-differentiated price sensitivities.
[R7] Dynamic pricing of perishable assets under competition.
[R8] Optimal pricing of correlated product options under the paired combinatorial logit model.
[R9] Optimizing Risk-Balancing Return Under Discrete Choice Models.
[R10] Discrete choice methods with simulation.
[R11] Discrete choice theory of product differentiation.
---
Rebuttal 2:
Comment: Thank you again for your insightful questions. We hope our further clarifications have addressed your concerns, and we'll include them in the revised paper. We are more than happy to discuss any additional concerns you may have. Looking forward to your reply : ) | Summary: The authors' objective is to develop an algorithm to aid the firms in converging to a stationary Nash equilibrium (SNE). In pursuit of this, the authors have:
* Proposed an online projected gradient ascent (OPGA) algorithm.
* Proven the global convergence of the OPGA to SNE within the given problem setting.
* Established the convergence rate of the proposed algorithm to SNE.
Strengths: I commend the authors for their clear and engaging writing. The background, related literature, concepts, algorithms, and more are all explained lucidly. The proposed algorithm, and the results derived from it, are noteworthy contributions to the field. The technical contribution of extending the algorithm's convergence beyond linear demand and convex loss functions is particularly valuable.
Weaknesses: I noticed certain assumptions and explanations that might require further substantiation. Please refer to the "questions section" for a detailed account. I believe addressing these points could add considerable depth and credibility to the paper's findings and arguments.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Despite the significant merits of the work, I do have a few queries. If the authors could address these, I would be more than pleased to highly rate the paper:
* My first question might be rather straightforward. Could you please elaborate on why online mirror descent, proposed to solve a similar problem, does not apply in your problem setting? If online mirror descent does apply, what advantages does your OPGA algorithm offer?
* As for the assumptions listed above, I'm somewhat puzzled:
- The last assumption posits that each firm can access its own demand from the last period. However, I would argue that this "demand" differs from the one defined by
$$d_i(\bm{p}^t, \bm{r}^t) = \frac{\exp(u_i(p_i^t, r_i^t))}{1+\exp(u_i(p_i^t, r_i^t)) + \exp(u_{-i}(p_{-i}^t, r_{-i}^t))}.$$
which, to the best of my understanding, represents the market share of firm $i$ at period $t$. In contrast, the "demand" that firm $i$ can access using the proposed method is the quantity of product sold by firm $i$, such as 1000 electronic devices. Aren't these two different concepts? I may be overlooking something, but it would be helpful if you could clarify the potential connection. It would also be instructive if you could provide an example explaining the use of $d_i(\bm{p}^t, \bm{r}^t)$ in the numerical example.
- It is assumed that $b_i$ and $c_i$ are known to firm $i$. However, I believe these parameters can only be learned from data. Therefore, what is the role of the learning process in the convergence to SNE? Do we handle them as a two-stage problem, or can we combine the two learning processes?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful questions. Please find our responses below. We highly appreciate your re-evaluation of our work and a kind reconsideration of the review score.
> Q1: Discussion about online mirror descent.
A: For consistency, we assume a maximization form and use the term Online Mirror Ascent (OMA).
* Firstly, we clarify that we did not rule out applying OMA to our problem. We are aware that OMA is a widely used online learning method, and OPGA is indeed a special case of OMA obtained by using the Euclidean distance as the Bregman divergence. We've conducted numerical tests on other OMA variants (see Figure 1 in the rebuttal PDF), such as the multiplicative weight update (MWU). The results suggest that OMA could very likely converge to the SNE for our problem.
* We choose OPGA due to its simplicity, efficiency, and the geometric structure of our problem. The general OMA is known for its capability to exploit specific geometric structures of the problem (e.g., when the feasible region is a probability simplex). However, the feasible price region in our problem is just a rectangle, making the simple gradient update a natural and intuitive choice. Indeed, our numerical results also demonstrate that OPGA and MWU show comparable performances, with OPGA exhibiting less fluctuation in its price updates at the early stages. Besides, with OPGA, we do not suffer from the complexity of evaluating complicated mirror maps.
* Lastly, we discuss a potential way to generalize our proof to OMA. Let $\Phi(x)$ be a $\kappa$-strongly convex function and define the associated Bregman divergence as $D_\Phi(x,y)=\Phi(x)-\Phi(y)-\langle\nabla\Phi(y),x-y\rangle$. The OMA update can be written as$$p_i^{t+1}=\text{Proj}_{\mathcal{P}}[(\nabla\Phi)^{-1}(\nabla\Phi(p_i^t)+\eta^t D_i^t)].$$
Recall that Eq. (29) and Eq. (52) are two key inequalities we use for the convergence of OPGA. Now, we derive a corresponding inequality for OMA and use the Bregman divergence to measure the convergence. By the generalized Pythagorean theorem, we have$$D_\Phi(p_i^{\star\star},p_i^{t+1})\leq D_\Phi(p_i^{\star\star},p_i^t)-D_\Phi(p_i^t,p_i^{t+1})+\eta^tD_i^t(p_i^{\star\star}-p_i^t+p_i^t-p_i^{t+1}).$$
Applying the strong convexity to $D_\Phi(p_i^t,p_i^{t+1})$ and maximizing the right-hand side (RHS) w.r.t. $(p_i^t-p_i^{t+1})$, we derive$$D_\Phi(p_i^{\star\star},p_i^{t+1})-D_\Phi(p_i^{\star\star},p_i^t)\leq \eta^tD_i^t(p_i^{\star\star}-p_i^t)+\frac{(\eta^tD_i^t)^2}{2\kappa}. (\*)$$
Similar to the proof in our paper, the RHS of $(\*)$ is controlled by the first term when $\eta^t$ is small. In addition, when $\mathbf{r}^t$ and $\mathbf{p}^t$ are close, we have $D_i^t\approx(b_i+c_i)\cdot G_i(\mathbf{p}^t,\mathbf{p}^t)$. Hence, by Lemma E.2, we can show from $(\*)$ that the price of at least one product $i$ will strictly approach the SNE during the update unless it is already close to $p_i^{\star\star}$. After $\mathbf{p}^t$ enters a neighborhood of SNE with a small enough step-size, we consider the sum of Eq. $(\*)$ for two products. By Lemma E.3, we can show that the RHS of the summation is non-positive (compare to Eq. (52)-(54)), which implies that the price will stay in the neighborhood.
While there may be subtleties to work out, we believe this proof sketch is a promising starting point for proving the convergence of OMA. A comprehensive study of this topic will be part of our future work.
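For concreteness, with the Euclidean mirror map $\Phi(x)=x^2/2$, $\nabla\Phi$ is the identity and the OMA update above reduces to the OPGA step. A toy sketch (the quadratic payoff and all numbers are ours for illustration, not the paper's revenue model):

```python
# With Phi(x) = x^2/2, (grad Phi)^{-1} is the identity and the OMA update
# becomes a projected gradient ascent step onto the price interval [p_lo, p_hi].
def opga_step(p: float, grad: float, eta: float, p_lo: float, p_hi: float) -> float:
    return min(max(p + eta * grad, p_lo), p_hi)

# Toy run: maximize the concave payoff -(p - 3)^2 over [0, 10] with a
# diminishing step size eta^t = Theta(1/t); the iterate approaches p* = 3.
p = 8.0
for t in range(1, 2001):
    p = opga_step(p, grad=-2.0 * (p - 3.0), eta=0.5 / (t + 1), p_lo=0.0, p_hi=10.0)
```

Because the feasible region is an interval (a rectangle in the two-firm case), the projection is a simple clamp, which is the simplicity argument made above for preferring OPGA over general mirror maps.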
> Q2-1: Connection between demand (quantity sold) and market share.
A: Due to the length limit in the response for each reviewer, we kindly refer you to our **General Response Q1 and Q2**, where we clarify the assumption and the connection between demand and market share.
> Q2-2: Estimation of $b_i$ and $c_i$.
A: Thank you for your insightful feedback. We believe both approaches you mentioned are feasible. Below, we elaborate on the estimation process.
* When treated as a two-stage problem, the firms can estimate $b_i,c_i$ from historical data. This falls in the domain of MNL parameter estimation, a well-explored topic in economics. The most common method is maximizing the log-likelihood, i.e., the product of market shares across all observations. Notably, [R1] shows that the log-likelihood function for the MNL is globally concave in the parameters $(\mathbf{a}, \mathbf{b}, \mathbf{c})$. This concavity property facilitates computation. Specifically, several prominent algorithms ((1) Newton-Raphson, (2) BHHH-2 [R3], and (3) steepest ascent) studied in Chapter 8 of [R2] are all guaranteed to improve the estimate at each iteration. We refer the reviewer to [R2] for a more detailed analysis of the strengths and drawbacks of each algorithm.
* Second, the learning process for $b_i, c_i$ can also be integrated with the OPGA algorithm. Indeed, existing papers like [R4, R5] have done so in the monopoly setting. The benefit of this method is that little historical data is required, and the firms can refine their estimates throughout the repeated competition. Meanwhile, the firms may incur additional regret during the exploration stage. However, as long as firms can estimate $b_i, c_i$ accurately, our existing analysis ensures that they can reach the SNE. Hence, the overall regrets for both firms are still sublinear.
Finally, we remark that, when $b_i$ and $c_i$ must be estimated from data, it is more practical to assume that firms can obtain a noisy estimate of the derivatives. Though our current analysis is based on the exact gradient, we believe the generalization to noisy first-order feedback is a promising direction for future research.
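To make the two-stage route concrete, here is a small sketch (our own toy setup; the parameter values and the linear utility form $u_i = a_i - b_i p_i + c_i(r_i - p_i)$, chosen to be consistent with the sensitivities $(b_i, c_i)$, are assumptions for illustration). With shares generated by an MNL model, the log-likelihood is maximized at the generating parameters, which is what the global concavity result in [R1] lets ascent methods find:

```python
import math

# MNL shares d_i = exp(u_i) / (1 + exp(u_H) + exp(u_L)) for two firms plus an
# outside option; theta packs (a, b, c) for firm H then firm L.
def shares(theta, p, r):
    u = [theta[3*i] - theta[3*i+1]*p[i] + theta[3*i+2]*(r[i] - p[i]) for i in (0, 1)]
    z = 1.0 + math.exp(u[0]) + math.exp(u[1])
    return [math.exp(u[0]) / z, math.exp(u[1]) / z]

# Log-likelihood of observed shares s = (s_H, s_L) under parameters theta;
# the outside option receives the remaining mass 1 - s_H - s_L.
def log_lik(theta, data):
    ll = 0.0
    for p, r, s in data:
        d = shares(theta, p, r)
        ll += (s[0] * math.log(d[0]) + s[1] * math.log(d[1])
               + (1 - s[0] - s[1]) * math.log(1 - d[0] - d[1]))
    return ll

true_theta = [1.0, 0.5, 0.3, 0.8, 0.4, 0.2]   # made-up (a, b, c) for firms H, L
scenarios = [((4.0, 3.0), (4.5, 3.5)), ((5.0, 4.0), (4.0, 3.0)), ((3.5, 4.5), (3.0, 4.0))]
data = [(p, r, shares(true_theta, p, r)) for p, r in scenarios]

# By Gibbs' inequality, the likelihood peaks at the generating parameters.
perturbed = [x + 0.2 for x in true_theta]
assert log_lik(true_theta, data) >= log_lik(perturbed, data)
```

Any of the ascent methods cited above ((1)-(3) in [R2]) would climb this concave surface back to the generating parameters.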
### Reference
[R1] Conditional logit analysis of qualitative choice behavior.
[R2] Discrete choice methods with simulation.
[R3] Estimation and inference in nonlinear structural models.
[R4] Multi-product dynamic pricing in high-dimensions with heterogeneous price sensitivity.
[R5] Demand learning and pricing for varying assortments.
---
Rebuttal Comment 1.1:
Title: Additional Response to Reviewer M19q
Comment: We thank the reviewer again for the comments. We have continued to think about the second question you raised, and we would like to share a new thought with you. We believe this will enrich the content of our paper and provide further justification for our assumption. We highly appreciate your time in reading our response and your kind reconsideration of the review score.
As we explained in our previous response, there are multiple ways to obtain the market share and estimate the gradient. Yet, a practical issue to consider is the approximation accuracy: the estimation is likely to introduce noise into the gradient, while our current analysis assumes the exact gradient. Now, we have rigorously proved that: **if the noise is bounded by some constant $\delta>0$, the price and reference price converge to a neighborhood of the SNE with radius $\mathcal{O}(\delta)$**. This is a typical type of conclusion in optimization with noisy first-order oracles. The proof is built on our current analysis, and we detail the roadmap below.
Recall that the basis of our proof of Theorem 5.1 is Eq. (29), which shows that the price path steadily approaches the SNE if it stays in the same quadrant defined in Eq. (28). Subsequently, through a contradiction-based argument, we demonstrate that even when the price path does not stay in the same quadrant, it will still converge toward the SNE along the boundary regions. In both parts, our proof relies on the properties of $G_i(\mathbf{p}^t,\mathbf{p}^t)$ (e.g., Lemma E.2), which serves as a good approximation of the scaled derivative $G_i(\mathbf{p}^t,\mathbf{r}^t)$ provided that $\mathbf{p}^t$ and $\mathbf{r}^t$ are close. The only difference under the noisy first-order oracle is that, besides the difference term $||\mathbf{p}^t - \mathbf{r}^t||_2$, we will have an additional error term proportional to $\delta$ in Eq. (29) and subsequent steps. Therefore, if $\delta$ has the same order as $||\mathbf{p}^t - \mathbf{r}^t||_2$, our current analysis is directly applicable and the convergence follows. More practically, when $\delta$ is a fixed threshold, some steps in our analysis can fail if $\delta$ dominates the quantity $G_i(\mathbf{p}^t,\mathbf{p}^t)$. However, if this happens, we can show that the current price is already quite close to the SNE.
More precisely, what we need to prove is that $\mathcal{G}(\mathbf{p})=\mathcal{O}(\delta)$ also implies $||\mathbf{p} - \mathbf{p}^{\star\star}||_2=\mathcal{O}(\delta)$ (recall that $\mathcal{G}(\mathbf{p})$ is a sum of $G_i(\mathbf{p}, \mathbf{p})$ defined in Eq. (80)).
This can be viewed as a refinement of current Lemma E.2: instead of showing that $\mathcal{G}(\mathbf{p}) \geq M_{\epsilon}$ if $\varepsilon(\mathbf{p}) \geq \epsilon$, we can show that
$$\mathcal{G}(\mathbf{p})\geq C\cdot||\mathbf{p}-\mathbf{p}^{\star\star}||_2.$$
Hence, this ensures that $||\mathbf{p}-\mathbf{p}^{\star\star}||_2 = \mathcal{O}(\delta)$ as long as $\mathcal{G}(\mathbf{p}) = \mathcal{O}(\delta)$. The above inequality can be derived from the current proof of Lemma E.2. Suppose $p_H>p_H^{\star \star}$ and $p_L \geq p_L^{\star \star}$. Similarly to Eq. (81), we have that
$$\mathcal{G}(\mathbf{p})=\mathcal{G}(\mathbf{p})-\mathcal{G}(\mathbf{p}^{\star\star})=\sum_{i}\dfrac{1}{b_i+c_i}\left(\dfrac{1}{p_i^{\star\star}}-\dfrac{1}{p_i}\right) + d_0(\mathbf{p},\mathbf{p})-d_0(\mathbf{p}^{\star\star},\mathbf{p}^{\star\star}).$$
By definition, it is easy to observe that $d_0(\mathbf{p},\mathbf{p})-d_0(\mathbf{p}^{\star\star},\mathbf{p}^{\star\star})>0$. Hence, we have that $$\mathcal{G}(\mathbf{p})>\sum_{i}\dfrac{1}{b_i+c_i}\cdot\dfrac{p_i-p_i^{\star\star}}{p_i^{\star\star}p_i}\geq\sum_{i}\dfrac{1}{b_i+c_i}\cdot\dfrac{p_i-p_i^{\star\star}}{p_i^{\star\star}\overline{p}}=\sum_i \dfrac{|p_i-p_i^{\star\star}|}{(b_i+c_i)p_i^{\star\star}\overline{p}}.$$
The right-hand side of the above inequality is a weighted $\ell_1$ distance between $\mathbf{p}$ and $\mathbf{p}^{\star\star}$. By the equivalence of norms in Euclidean space, we know there exists some constant $C$ such that $\mathcal{G}(\mathbf{p})\geq C\cdot ||\mathbf{p}-\mathbf{p}^{\star\star}||_2$. When $\mathbf{p}$ belongs to other quadrants, we can derive the same lower bound for $\mathcal{G}(\mathbf{p})$ as above. For example, if $p_H\leq p_H^{\star \star}$ and $p_L \geq p_L^{\star \star}$, the derivation is similar to Eq. (82). This concludes the proof.
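The norm-equivalence step admits a one-line numerical check (a sketch; the weights below are made up): in two dimensions, $w_H|x_H| + w_L|x_L| \geq \min(w_H, w_L)\,\|x\|_2$, since $|x|+|y| \geq \sqrt{x^2+y^2}$.

```python
import math

# Check of the norm-equivalence step: a weighted l1 distance dominates a
# constant times the l2 distance, with constant C = min of the weights.
w = (0.7, 1.3)  # made-up weights, standing in for 1/((b_i+c_i)*p_i**)*p_bar
for x in ((1.0, 0.0), (0.3, -0.4), (-2.0, 5.0)):
    weighted_l1 = w[0] * abs(x[0]) + w[1] * abs(x[1])
    assert weighted_l1 >= min(w) * math.hypot(x[0], x[1])
```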
In summary, when the noise level is small compared to the gradient, our current analysis is still applicable, showing that the price and reference price must converge toward the SNE. If the noise becomes comparable to the gradient, the above argument demonstrates that the price path is already within an $\mathcal{O}(\delta)$ neighborhood of the SNE, where $\delta$ characterizes the noise level.
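A toy numerical illustration of this conclusion (a sketch on a strongly concave one-dimensional payoff, not the paper's duopoly model): projected gradient ascent with gradient noise bounded by $\delta$ settles into a neighborhood of the optimum whose radius scales with $\delta$.

```python
# Toy check: gradient ascent on -(p - p*)^2 with bounded deterministic "noise"
# of magnitude delta added to the exact gradient; with a constant step size the
# iterate ends up within an O(delta) neighborhood of p* = 3.
def noisy_opga(delta: float, eta: float = 0.25, steps: int = 200) -> float:
    p, p_star = 8.0, 3.0
    for t in range(steps):
        noise = delta * (-1.0) ** t           # bounded by delta in magnitude
        grad = -2.0 * (p - p_star) + noise    # exact gradient plus noise
        p = min(max(p + eta * grad, 0.0), 10.0)
    return abs(p - p_star)

assert noisy_opga(0.1) <= 0.1 and noisy_opga(0.01) <= 0.01
```

Shrinking the noise bound shrinks the final error proportionally, mirroring the $\mathcal{O}(\delta)$-neighborhood claim.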
---
Rebuttal 2:
Comment: Thank you again for your valuable comments. We hope our response addressed your questions, and we'll incorporate them into the revised paper. If you have additional concerns, we are very delighted to discuss them further. Looking forward to hearing from you : ) | Summary: This paper investigates the problem of dynamic pricing in a competitive environment, where two revenue-maximizing competing firms sell substitutable products to customers and each has no access to the information of its competitor. In addition, a customer's utility is a linear function of the current prices and reference prices, while the demand function is captured by an MNL model. The authors propose a simple online projected gradient ascent algorithm to update the price based on historical prices and observed demands. The main contribution of this paper is that the authors provide a convergence guarantee for this algorithm to a unique stable Nash equilibrium and show the convergence rate when the step size is chosen optimally.
Strengths: 1. This paper analyzes a very interesting problem --- dynamic pricing in the presence of competition across firms and reference prices of customers --- and provides a very succinct formulation to abstract this problem.
2. The paper is well-written and I really enjoyed reading it even though I don't have much background on the MNL model.
3. The results are promising: the authors show that a simple online projected gradient ascent algorithm works well in practice and can converge to the unique stable NE.
4. The paper is technically solid and the proof is nontrivial to me; however, I lack the expertise to judge the novelty of the proof compared with the related work [21].
Weaknesses: In general, I find the paper interesting and technically solid, and it shows a very promising result. Though, I still have some questions that need to be clarified.
1. I find the uniqueness of the stable NE in this paper very important and crucial to the convergence results. Can the authors discuss whether this property can be generalized? In addition, with this property, can we replace the online projected gradient ascent algorithm with other online optimization algorithms to achieve a similar convergence rate?
2. If I understand correctly, the authors need $d_i(p^t, r^t)$ to be directly observable, right? If so, please clarify it more explicitly in the algorithm.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for appreciating our work. We hope our responses below provide further clarity.
> S4: Comparison with the related work [21].
A: We thank the reviewer for the feedback. We'd like to clarify the distinctions between our work and [21] from two perspectives.
**Model Formulation:** Our paper employs the MNL demand, which is distinct from the linear demand used in [21]. The linear demand in [21] results in a quadratic revenue function, which possesses favorable properties such as supermodularity and concavity. These properties facilitate the convergence analysis of online games (see e.g., [R1]). On the other hand, the revenue function under the MNL model lacks these properties, and it does not satisfy other useful characteristics such as cocoercivity or variational stability (see the discussion in Section 2). Moreover, in Section 4 and Appendix A, we demonstrate that standard techniques from multi-agent games and dynamical systems are also not applicable to our problem, making it necessary to develop a novel analysis for the convergence of the OPGA.
Besides the demand, another difference lies in the reference price formulation. In particular, the duopoly competition in [21] assumes that the two firms share a common reference price, whereas we allow firms to have different reference prices. Our formulation offers greater flexibility in modeling. We note that our analysis also applies to the scenario where two firms share a common reference price.
**Proof Technique:** While both our work and [21] utilize a two-part proof for showing the asymptotic convergence, the essence of our proofs is significantly different from that of [21]. A key result in [21] is their Lemma 9.1, which ensures the following inequality holds globally (see Eq. (19) and Eq. (20) in [21] and note that their problem adopts the minimization formulation): $\sum_{i} (g_i^{\star\star}-g_i^t)(p_i^{\star\star}-p_i^t) > 0$, where $g_i$ is used to denote the derivative of their revenue function. This property is known as variational stability (see e.g., [R2]), under which the convergence results have been established for various algorithms. Our work, however, doesn't benefit from such properties. To address this, we introduce two distinct metrics respectively for the two parts of our convergence analysis. In Part 1, we divide the feasible price range into four quadrants with the SNE $\mathbf{p}^{\star\star}$ being the origin. Our proof is by contradiction: supposing that the price vector does not converge to the SNE, we can prove the following in sequence: (1) the price path cannot always stay in the same quadrant; (2) when $t$ is large, the price path can only oscillate between adjacent quadrants; (3) the price path would converge to the boundaries between some adjacent quadrants; (4) afterward, the price path would stay close to the boundaries and converge to the SNE, resulting in a contradiction. In Part 2, we exploit a local property of our model (see Lemma E.3) by showing that the auxiliary function $\mathcal{H}(\mathbf{p})$ is lower-bounded by some quadratic function around the SNE. This enables us to establish our final conclusion.
In summary, we believe that our paper is technically novel and significantly different from [21].
> W1: I find the uniqueness of the stable NE ... by other online optimization algorithm to achieve similar convergence rate?
A: Thank you for your insightful comments. The uniqueness of SNE plays an important role in the convergence analysis of this paper. This property naturally results from the MNL demand model used in this paper. Yet, we believe similar convergence results can also be derived for certain models where the SNE is not unique.
In fact, we are currently considering an extension to the asymmetric reference effects, which exhibits the non-uniqueness property as the reviewer mentioned. Under asymmetric reference effects, the utility for product $i$ changes to
$$u_i(p_i^t, r_i^t)=a_i-b_i \cdot p_i^t+c_i^{+} \cdot(r_i^t-p_i^t)_{+}+c_i^{-} \cdot(r_i^t-p_i^t)\_-,$$
where the notations $(\cdot)\_{+}:=\max \\{\cdot, 0\\}$ and $(\cdot)\_{-}:=\min \\{\cdot, 0\\}$ are adopted to account for consumers' potentially asymmetric reactions to discounts and surcharges, respectively. When $c_i^- <c_i^+, \forall i\in \\{H,L\\}$ (referred to as the loss-averse scenario), it can be shown that there exists a set of SNEs. With some slight modifications of the OPGA, we have established the convergence result. The difference is that, instead of converging to a unique SNE, the algorithm converges to the set of SNEs, and the limit may depend on the initialization. Due to the length restriction on the rebuttal response, we regret that we couldn't elaborate more on this extension. If the reviewers are interested, we are delighted to share more details about this modified algorithm and analysis during the discussion period.
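For concreteness, the asymmetric utility above transcribes directly into code (a minimal sketch; the function name and any parameter values are illustrative, not from the paper):

```python
def utility(a, b, c_plus, c_minus, p, r):
    """Asymmetric reference-effect utility
    u_i = a_i - b_i * p_i + c_i^+ * (r_i - p_i)_+ + c_i^- * (r_i - p_i)_-,
    where (x)_+ = max(x, 0) and (x)_- = min(x, 0). The paper refers to
    c_i^- < c_i^+ as the loss-averse scenario."""
    gap = r - p
    return a - b * p + c_plus * max(gap, 0.0) + c_minus * min(gap, 0.0)
```

The two branches are mutually exclusive: a discount ($r > p$) activates only the $c^+$ term, while a surcharge ($p > r$) activates only the $c^-$ term.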
Finally, we believe it is possible to generalize to other online learning methods, such as online mirror ascent (OMA). In Figure 1 of the rebuttal PDF, we verify that another variant of OMA, the Multiplicative Weight Update (MWU) method, also converges to the SNE. Thus, we expect the general OMA can also converge for our problem. We kindly refer the reviewer to our response to reviewer **M19q (Q1)** for a more detailed discussion on OMA and a potential method to generalize our proof.
> W2: The authors need $d_i(p^t, r^t)$ to be directly observable.
A: Yes, we have discussed this assumption in Section 3.2. In the revised paper, we'll declare this requirement more clearly in Section 4 when presenting the OPGA. Due to the length limit, we kindly refer the reviewer to **General Response Q1 and Q2** for a detailed discussion about the assumption needed for observing $d_i(\mathbf{p}^t,\mathbf{r}^t)$.
### Reference
[R1] Bandit learning in concave N-person games.
[R2] Learning in games with continuous action sets and unknown payoff functions.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their detailed response. I am happy to see this response appear in the next version of this paper. My score remains the same.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply! We will incorporate these new discussions into the revised paper. | Rebuttal 1:
Rebuttal: # General Response
We would like to express our sincere gratitude to the reviewers for reading our paper and providing valuable feedback. Below, we answer two common questions and provide a background for the discrete choice model. Please find our responses to other questions in the personalized rebuttals.
> Q1: Clarification on observing market share $d_i$, and the connection between demand (quantity sold) and market share (CZUh W2, M19q Q2-1).
A: Sorry for the confusion. To observe $d_i$, each firm needs to know its own last-period demand (quantity sold), and both firms should agree on an estimated market mass. While the first point is rather intuitive, we'll elaborate on the market mass.
The realized demand and market share $d_i$ are connected via the market mass (the largest potential demand, including no-purchase quantity). For instance, if firm H sells 1000 units and firm L sells 1500, with a market mass of 5000, then their market shares are $d_H=0.2$ and $d_L=0.3$, respectively, and the no purchase market share is $d_0=1-d_H -d_L=0.5$. Hence, given the market mass, a firm can easily convert the sold quantity into its market share. We'll now justify the assumption of knowing the market mass from two points.
* First, adding market mass information is feasible and practical. Indeed, it is sufficient for the two firms to agree on an estimated market mass. Under this agreement, the market will still reach an SNE under the OPGA. We only need to account for the error in market mass estimation. This is equivalent to using the noisy first-order oracle, which is a direct extension of our current analysis. Additionally, we note that in the fields of economics and operations research, the assumption of knowing the market mass is prevalent and standard under the MNL model (see, e.g., [R6--R13]), where this stream of papers typically uses the market share as the demand function while doing the pricing and assortment optimization on the MNL model.
* Second, having the information on market mass will NOT disclose a firm's market share or demand to its competitor, thereby safeguarding the firm's privacy. Note that even with an agreement on the market mass, firms remain unaware of their rivals' market share, as the market mass encompasses the quantity of no purchases. Thus, having this piece of information will NOT compromise the objective of this paper, i.e., achieving the SNE while preserving firms' privacy.
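The quantity-to-share conversion in the numeric example above can be sketched in a few lines (a hypothetical helper, not code from the paper):

```python
def market_shares(quantities, market_mass):
    """Convert each firm's sold quantity into its market share, given an
    agreed estimate of the market mass (the largest potential demand,
    including the no-purchase quantity)."""
    shares = {firm: q / market_mass for firm, q in quantities.items()}
    shares["no_purchase"] = 1.0 - sum(shares.values())
    return shares

# The example above: firm H sells 1000 units and firm L sells 1500,
# with an agreed market mass of 5000.
shares = market_shares({"H": 1000, "L": 1500}, market_mass=5000)
```

This is the market-level view of the example; in the opaque market, each firm would apply only its own entry of this conversion, so the rival's share is never revealed.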
> Q2: Accessing the gradient $D_i^t$ under the opaque market (CZUh W2, M19q Q2-1, aAW7).
A: Obtaining $D_i^t$ in Eq.(9) is identical to accessing the first-order oracle, which is a common assumption in the optimization and gradient-based online learning literature [R1,R2,R3,R4,R5]. In this paper, rather than abstractly assuming direct access to a first-order oracle, we'd like to explain it in a more concrete and interpretable way, i.e., what elements do we need to get this oracle? Indeed, to compute $D_i^t$, a firm requires only (i) its own sensitivity parameters $(b_i, c_i)$ and (ii) its last-period demand. This aligns with the opaque market setup, i.e., there's no dependency on a rival's information.
A more practical setting may be accessing a noisy first-order oracle or even zeroth-order feedback. Yet, the main message we want to convey in this paper is that under the MNL, firms can reach an SNE while protecting their privacy, and our conclusions in the context of an exact first-order oracle are already highly non-trivial. We believe further relaxing the exact first-order oracles would be very interesting future work (in fact, the generalization to a noisy first-order oracle is likely to be straightforward, with zeroth-order feedback being the exciting one).
> Q3: Background on discrete choice models and prevalence of Gumbel noise.
A: The discrete choice models, built on random utility theory [R14], help predict consumer choices among multiple options. The utility $U_i$ for product $i$ comprises a deterministic part $u_i$, and a random part $\epsilon_i$. Consumers are assumed to pick the product with the highest utility (with no-purchase utility being 0). The choice probability for product $i$ thus becomes $$P(U_i > U_j, \forall j \neq i)=P(\epsilon_j - \epsilon_i < u_i - u_j, \forall j \neq i). (\*)$$
In fact, only when $\epsilon_i$ follows the Gumbel distribution does the market share in Eq. $(*)$ have a closed-form expression [R15]. The MNL model, while simple, offers accurate predictions and speedy parameter estimation through techniques like MLE. In contrast, models using other noise distributions lack closed-form expressions, requiring less efficient numerical simulation for parameter estimation. A more detailed discussion of choice models can be found in the book [R15].
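The Gumbel-to-softmax correspondence can be checked numerically (a sanity-check sketch with illustrative utility values, not an experiment from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.array([1.0, 0.5, 0.0])  # illustrative utilities; last entry = no-purchase
n = 200_000

# U_i = u_i + eps_i with eps_i i.i.d. Gumbel; each consumer picks the argmax.
U = u + rng.gumbel(size=(n, len(u)))
empirical = np.bincount(U.argmax(axis=1), minlength=len(u)) / n

# MNL closed form: choice probabilities are the softmax of the utilities.
mnl = np.exp(u) / np.exp(u).sum()
```

With 200,000 draws the empirical argmax frequencies match the softmax probabilities to within Monte Carlo error.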
### Reference
[R1] Introductory lectures on convex optimization: A basic course.
[R2] Online first-order framework for robust convex optimization.
[R3] Learning in games with continuous action sets and unknown payoff functions.
[R4] Faster first-order methods for stochastic non-convex optimization on Riemannian manifolds.
[R5] Adaptive learning in continuous games: Optimal regret bounds and convergence to Nash equilibrium.
[R6] Discrete choice analysis: theory and application to travel demand.
[R7] Discrete choice theory of product differentiation.
[R8] The theory and practice of revenue management.
[R9] When prospect theory meets consumer choice models: Assortment and pricing management with reference prices.
[R10] Multiproduct price optimization and competition under the nested logit model with product-differentiated price sensitivities.
[R11] Pricing multiple products with the multinomial logit and nested logit models: Concavity and implications.
[R12] Product line selection and pricing with modularity in design.
[R13] Dynamic pricing and inventory control of substitute products.
[R14] Conditional logit analysis of qualitative choice behavior.
[R15] Discrete choice methods with simulation.
Pdf: /pdf/cd10a95a12bd177ad73ea99bdb06aff86e546cd2.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Smoothing the Landscape Boosts the Signal for SGD: Optimal Sample Complexity for Learning Single Index Models | Accept (oral) | Summary: The paper closes the gap between the previously established lower bound on learning a single index model with Gaussian inputs, giving a modified learning algorithm that matches the best possible sample complexity.
Strengths: The alteration to SGD is simple but clever, and the given arguments are quite clear. I particularly appreciate the proof sketch showing how the choice of smoothing impacts the drift term. Tightening the gap left by [1] is a considerable contribution.
[1] Arous, Gerard Ben, Reza Gheissari, and Aukosh Jagannath. "Online stochastic gradient descent on non-convex losses from high-dimensional inference." The Journal of Machine Learning Research 22.1 (2021): 4788-4838.
Weaknesses: I may be misunderstanding, but it appears the experiments are using an analytic formulation of the smoothed loss, based on the hidden knowledge that the link functions (in the experiments) are Hermite polynomials. If I understand correctly, this corresponds to an “infinite synthetic sample” setting where the expectation in the smoothed loss is calculated exactly. This wouldn’t impact the sample complexity in terms of querying the true single index model, but it raises a computational question. Is there any reason to suspect that the analysis would change if, say, at every timestep the smoothed loss was approximated empirically with fresh samples drawn from the appropriate subset of the sphere?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: N/A
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the detailed and thoughtful review. Please let us know if you have any further questions or if anything is still unclear.
> experiments are using an analytic formulation of the smoothed loss, based on the hidden knowledge that the link functions (in the experiments) are Hermite polynomials. If I understand correctly, this corresponds to an “infinite synthetic sample” setting where the expectation in the smoothed loss is calculated exactly. This wouldn’t impact the sample complexity in terms of querying the true single index model, but it raises a computational question. Is there any reason to suspect that the analysis would change if, say, at every timestep the smoothed loss was approximated empirically with fresh samples drawn from the appropriate subset of the sphere?
There are two ways to compute the smoothed gradients. The most naive way is to approximate the expectation in $L_\lambda$ using Monte Carlo. This requires roughly $d^{k^\star/2}$ draws of z at every step; however, it does not affect the sample complexity.
The more efficient method is the one that we used for the experiments in section 6. It uses the observation from Lemma 7 that the smoothed gradient is a function of only two parameters: $w \cdot x$ and $\|x\|$. Explicitly, there exists a $g$ such that $\nabla L_t = g(w \cdot x, \|x\|)$ where the closed form for $g$ follows from Lemma 7. This $g$ can be efficiently numerically computed for any activation function $\sigma$ in $O(1)$ time. Furthermore, this computation only needs to be done once after which this $g$ can be reused. When $\sigma$ is a polynomial, this $g$ has a closed form (see Appendix B.3) which we used for the experiments in section 6. We’ve extended Appendix E to include a discussion of the various methods for computing the smoothed gradients. | Summary: This paper considers the problem of learning single index models in high dimensions, i.e., functions of a high-dimensional parameter that only depend on a one-dimensional projection of this parameter. This paper is interested in the case where the link function (the function of the one-dimensional projection) is known to the statistician, but the direction of the projection is unknown.
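The naive Monte Carlo approach can be sketched as follows (an illustrative implementation only; the exact perturbation distribution and projection used in the paper may differ):

```python
import numpy as np

def smoothed_grad_mc(grad, w, lam, n_draws, rng):
    """Monte Carlo estimate of a smoothed gradient: average the gradient of
    the loss at perturbed, re-normalized points (w + lam*z) / ||w + lam*z||,
    with z drawn uniformly from the unit sphere here for illustration.
    An accurate estimate needs many draws per step, but no fresh labels."""
    d = w.shape[0]
    total = np.zeros(d)
    for _ in range(n_draws):
        z = rng.standard_normal(d)
        z /= np.linalg.norm(z)          # uniform direction on the sphere
        v = w + lam * z
        total += grad(v / np.linalg.norm(v))
    return total / n_draws
```

With `lam = 0` this reduces to the unsmoothed gradient at `w`, which makes the role of `lam` as a smoothing radius explicit.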
This paper studies the sample complexity of stochastic gradient descent (SGD). Its contribution is to show that smoothing the gradients allows to take larger stepsizes, and thus improves the sample complexity. The papers draws the connection with similar results for tensor decomposition. Finally, as previous results suggests that SGD has the effect of locally averaging the gradients, the authors present their work as a first step towards improving the sample complexity bounds for SGD.
Strengths: This paper is well-written; in particular, the proof sketch does a good job at explaining the technical contributions of the paper. This paper proves rigorously the remarkable phenomenon that smoothing improves the sample complexity. This is a significant steps towards understanding the benefits of stochasticity in SGD.
Weaknesses: There are a few spots where I would like some clarifications, see questions below.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: *Remarks*
(1) Running Algorithm 1 requires being able to compute smoothed gradients. Is it easy to compute these? Are there closed-form formulas, or do we need to use sampling over $z$? It would be nice to make this explicit, as it would help to understand the significance of the results: is it a suggestion for better practical performance when learning single-index models, or is it an intermediate result towards proving improved convergence bounds for SGD?
(2) Could you detail how stochastic gradient descent is a CSQ algorithm? We can show that if we use $n$ samples, then $|\hat{q} - E_{x,y}[q(x,y)]| \leq \tau / n^{1/2}$ w.h.p., but not almost surely, right? Relatedly, you write that $\tau \approx n^{-1/2}$ is a "heuristic" (l.163), thus how rigorous is the CSQ lower bound? How rigorous are the claims of "optimality"?
(3) From the proof sketch, I did not understand what is special about $\lambda = d^{1/4}$. My intuition is that smoothing with a diverging $\lambda$ should be very close to smoothing with $\lambda = \infty$. How would an algorithm with $\lambda = \infty$ (in the first steps) perform?
*Minor remarks.*
- When referring to [1], it is written many times that $n \geq d^{k^*-1}$ samples are necessary (up to log factors); however, my understanding is that this assumes $k^* \geq 2$.
- l.86: what is $r$?
- Why do you consider the correlation loss rather than the square loss? I think the two would be equivalent; however it is quite unusual to optimize the correlation loss.
- Thm 1: maybe it would be good to remind that $w_0 \cdot w^* \geq d^{-1/2}$ can be guaranteed with probability 1/2.
- The statement of Thm 1 is a bit confusing. Certainly it is false that all stepsizes satisfying the O(.) bounds do not satisfy the statement (I can take the stepsizes to be 0), but maybe what you mean is that there exists a choice of stepsizes with these bounds that satisfy the statement?
- I did not understand how you count the sample complexity in the parallel with the CSQ lower bound. My understanding is that to get one query with precision $n^{-1/2}$, you need $n$ samples. So the total sample complexity should depend on the number $q$ of queries? In light of this, I did not understand l.163-164.
- Eq. below l.259: what does this tensor notation mean?
*Typo.*
- "nad" -> "and"
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the detailed and thoughtful review. We’ve corrected the typos you identified and we’ll try to clarify the higher level questions below. Please let us know if you have any further questions or if anything is still unclear.
> Running Algorithm 1 requires to be able to compute smoothed gradients. Is it easy to compute these?
There are two ways to compute the smoothed gradients. The most naive way is to approximate the expectation in $L_\lambda$ using Monte Carlo. This requires roughly $d^{k^\star/2}$ draws of z at every step; however, it does not affect the sample complexity.
The more efficient method is the one that we used for the experiments in section 6. It uses the observation from Lemma 7 that the smoothed gradient is a function of only two parameters: $w \cdot x$ and $\|x\|$. Explicitly, there exists a $g$ such that $\nabla L_t = g(w \cdot x, \|x\|)$ where the closed form for $g$ follows from Lemma 7. This $g$ can be efficiently numerically computed for any activation function $\sigma$ in $O(1)$ time. Furthermore, this computation only needs to be done once after which this $g$ can be reused. When $\sigma$ is a polynomial, this $g$ has a closed form (see Appendix B.3) which we used for the experiments in section 6. We’ve extended Appendix E to include a discussion of the various methods for computing the smoothed gradients.
> Could you detail how stochastic gradient descent is a CSQ algorithm? How rigorous are the claims of "optimality"?
You are correct and the CSQ lower bound does not constitute a rigorous lower bound for gradient descent. We’ve included additional discussion in the CSQ section of our rebuttal.
> From the proof sketch, I did not understand what is special about $\lambda = d^{1/4}$.
This threshold shows up in the computation for the noise term in the SNR after smoothing. We’ve added an additional section in the proof sketch that computes this variance and derives the $\lambda = d^{1/4}$ threshold. For now, we’ve added this sketch to the “Computing the Variance” section of the general rebuttal. However, we will include it in the proof sketch section of the next revision of our paper.
> I did not understand how you count the sample complexity in the parallel with the CSQ lower bound.
The key is that each query can be answered with the same $n$ samples. In particular, any SQ algorithm can be implemented with probability $1-\delta$ with $n \ge \tau^{-2} \log(\text{queries}/\delta)$ samples: for every query $q(x,y)$ return $\frac{1}{n} \sum_{i=1}^n q(x_i,y_i)$. For fixed $q$ this will be of order $\sqrt{\frac{\log(1/\delta)}{n}}$ so the result follows from a union bound. As the query complexity only appears through a log and is generally assumed to be polynomially large in $d$, it is often omitted. See the CSQ section in our general rebuttal for additional discussion.
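A toy illustration of this point (synthetic data, not from the paper): all queries are answered from one fixed sample of size $n$, so the sample cost is shared across queries.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.standard_normal(n)
y = x + 0.1 * rng.standard_normal(n)   # toy model y = x + noise

def sq_oracle(q):
    # Answer E_{x,y}[q(x,y)] with the empirical mean over the SAME n
    # samples; each answer is accurate to ~ n^{-1/2} w.h.p., and a union
    # bound over all queries only costs a log factor in n.
    return float(np.mean(q(x, y)))

ans1 = sq_oracle(lambda x, y: x * y)   # estimates E[xy] = 1
ans2 = sq_oracle(lambda x, y: y ** 2)  # estimates E[y^2] = 1.01
```

Both answers reuse the same 10,000 samples, which is why the query count enters the sample complexity only logarithmically.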
> Eq. below l.259: what does this tensor notation mean?
$M_n$ is defined by taking the $k$ tensor $T$ and iteratively contracting indices until you are left with a vector or a matrix. This is sometimes written using $\mathrm{Tr}$ notation, for example if $T$ is a 6-tensor then
$$
M_n = \mathrm{Tr}_{(3,4),(5,6)}(T)
$$
meaning that you contract the third/fourth indices and the fifth/sixth indices of $T$ to get a matrix. For our tensor notation, we use $T[A]$ to denote the contraction of $T$ with $A$ along the last $dim(A)$ dimensions of $T$ (Definition 5). In this case $A = I_d^{\otimes 2}$ is the tensor product of the identity matrix with itself and $M_n = T[A]$. This is the higher dimensional analogue of $M[I] = \langle M, I \rangle = \mathrm{tr}(M)$ when $M$ is a matrix.
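In numpy terms (an illustrative check, not from the paper), the contraction $T[I_d^{\otimes 2}]$ of a 6-tensor agrees with tracing out the last two index pairs:

```python
import numpy as np

d = 4
T = np.arange(d ** 6, dtype=float).reshape((d,) * 6)  # a generic 6-tensor
I2 = np.einsum('ij,kl->ijkl', np.eye(d), np.eye(d))   # A = I_d ⊗ I_d

# T[A]: contract T with A along the last dim(A) = 4 axes of T.
M = np.tensordot(T, I2, axes=4)

# Equivalent partial trace Tr_{(3,4),(5,6)}(T): pair up and sum over the
# third/fourth and fifth/sixth indices.
M_tr = np.einsum('abccdd->ab', T)
```

The two computations produce the same $d \times d$ matrix, mirroring the identity $M[I] = \mathrm{tr}(M)$ in the matrix case.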
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal. Two points:
**About the CSQ lower bound**. I now understand better the comparison with the CSQ model; in particular, it is a heuristic comparison. In light of this, many phrasings of the paper seem to be overclaiming. For instance, the claim of "optimality" in the title corresponds to a CSQ lower bound, but it is only heuristic. Most of the introduction also hides this important subtlety. Do I understand this correctly?
It might be possible that this is a widespread confusion in the community. However, I would recommend to be much more explicit about this in the paper, especially for non-expert readers.
**Tensor notation**. I'm not sure I understand. What does it mean to "contract" indices? Is this properly defined somewhere in the paper?
Also, some of my remarks above were left unanswered and could be worth a clarification.
---
Reply to Comment 1.1.1:
Comment: We apologize for leaving some remarks unanswered. Please let us know if we missed any additional questions you had or if anything is not clear.
### About the CSQ Lower Bound
We attempted to make explicit throughout the paper that the lower bound only applies to CSQ algorithms (e.g. lines 6-7 in the abstract and lines 31-32, 52-53 in the introduction). Regarding the claims of optimality, learning single index models falls under the class of problems in which there is a conjectured statistical-computational gap. In particular, the information theoretic lower bound for learning a single index model is $n \gtrsim d$, independent of $k^\star$. However, it is widely believed that no computationally efficient algorithm can solve the problem given only $O(d)$ samples for complicated link functions. Similar problems that exhibit a computational-statistical gap are tensor PCA, community detection, sparse PCA, and planted clique [1,2,3,4].
None of these problems have rigorous lower bounds of the form: no polynomial time algorithm can solve this problem with fewer than $X$ samples. Such rigorous lower bounds are out of reach for current techniques. The strongest lower bounds for such problems generally use either the statistical query, low degree polynomial, or sum-of-squares framework to limit the class of learning algorithms so that a lower bound can be proven. However, it is widely believed that these statistical-computational gaps extend to all computationally efficient algorithms in the presence of mild label noise.
We will update the introduction to add a discussion of the conjectured computational-statistical gap and to emphasize that $d^{k^\star/2}$ is the conjectured statistical threshold for this problem among computationally efficient algorithms, although we only prove the lower bound for the class of CSQ algorithms.
[1] Feldman et al., 2012, Statistical Algorithms and a Lower Bound for Detecting Planted Cliques
[2] Diakonikolas & Kane, 2017, Statistical query lower bounds for robust estimation of high-dimensional gaussians and gaussian mixtures
[3] Goel et al., 2020, Statistical-Query Lower Bounds via Functional Gradients
[4] Dudeja & Hsu, 2020, Statistical Query Lower Bounds for Tensor PCA
### Tensor Notation
Our tensor notation is defined in Appendix A – explicitly, if $T$ is a $k$ tensor and $A$ is a $j$ tensor in $d$ dimensions with $j \le k$ then
$$
T[A] = \sum\_{i\_{k-j+1},\ldots,i\_k} T\_{i_1,\ldots,i\_k} A\_{i\_{k-j+1},\ldots,i\_k}
$$
This can also be interpreted as “flattening” $T$ into a $d^{k-j} \times d^j$ dimensional matrix, flattening $A$ into a $d^j$ dimensional vector, computing the matrix vector product, then “unflattening” the resulting $d^{k-j}$ dimensional vector into a $k-j$ tensor.
Due to space constraints we cannot add Appendix A to the main paper but we will add a sentence in Section 7 that provides a reference to Appendix A for our tensor notation.
### Additional Remarks
Due to the character limit our response overflowed into the next comment. | Summary: This paper studies the sample complexity of learning a single-index function \sigma(w*^Tx) via SGD on a smoothed correlation loss. The authors show that when k* is the first non-zero Hermite coefficient of \sigma, with optimally tuned smoothing, defined as averaging the loss over a sphere of radius \lambda centered at the iterate, the SGD will converge to error epsilon in d^{k*/2} + d/epsilon iterations. This improves over the iteration complexity d^{k* - 1}. This is a tight analysis, since it meets the CSQ lower bound.
The analysis is based on analyzing a certain signal-to-noise ratio which arises from comparing the alignment of the gradient with the ground truth direction, and the norm of the gradient. The authors show that when the smoothing increases, both of these terms decrease, but the norm decreases more.
Strengths: - The paper achieves a significant result which enhances our understanding of SGD and achieves a known lower-bound.
- The paper is written very clearly in a way that highlights the main analysis techniques in the main body. It is useful that the authors first explain the vanilla SGD and then move on to the smoothed case.
- The paper explains the connection to related work and particularly Tensor PCA very well.
Weaknesses: - Discussion of the analysis of E[|v|^2] which seems a very key part of the proof is lacking in the main body. If the authors do not have space for many more details, could they include at least some intuition or a simple example (perhaps in low dimension or for a simple sigma?) for why E[|v|^2] has the stated dependence on lambda? And then if would be helpful if there were some pointers to the appendix for where the real proofs of the main steps can be found.
- The paper only studies the correlation loss which is not frequently used in practice. Could the authors at least state why they use this, and if they believe their techniques would extend to the MSE loss?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - The authors say in line 81 that the class of CSQ algorithms contains gradient descent. I am not certain this is true; could the authors include a citation? Or perhaps it requires some qualifications?
- In line 86, I do not see a definition of r and p.
- In line 89 could the author explain (or at least define) what they mean by "ERM" or point to section 7.2
- Section 7.2 is somewhat confusing, because it skips from discussing general objectives to Tensor PCA. The paragraph starting on line 287 is hard to understand (why will GD converge to the expectation over $z$ of the gradient?).
- Line 212 there is a v that should be a z?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors should discuss the limitations of using the correlation loss for learning single index functions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the detailed and thoughtful review. We’ve corrected the typos you identified and we’ll try to clarify the higher level questions below. Please let us know if you have any further questions or if anything is still unclear.
> Discussion of the analysis of E[|v|^2] which seems a very key part of the proof is lacking in the main body.
We originally omitted the variance calculation due to space constraints but as this is a crucial part of the proof sketch, we’ve added a brief sketch for the $\lambda$ dependence. See the “Computing the Variance” section of our rebuttal.
> The paper only studies the correlation loss which is not frequently used in practice. Could the authors at least state why they use this, and if they believe their techniques would extend to the MSE loss?
Because our parameters are constrained to the sphere, there is no difference between the correlation and MSE losses in population. In particular,
$$
\mathbb{E}\_{x,y}\left[\frac{1}{2}(y - f_\theta(x))^2\right] = \mathbb{E}\_{x,y}\left[1 - yf_\theta(x)\right].
$$
The difference is that the noise structure is changed due to the additional $f_\theta(x)^2$ term. We believe that if you directly smooth the model, rather than smoothing the loss, our analysis should still go through. However, the correlation loss significantly simplifies the computations.
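As a quick numerical illustration of this point (our own sketch, with the normalized Hermite polynomial $\mathrm{He}_2/\sqrt{2}$ standing in for $\sigma$, so that $\mathbb{E}[\sigma(g)^2]=1$), the two population losses agree on the sphere up to Monte Carlo error:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 200_000

# Normalized second Hermite polynomial He_2(t)/sqrt(2!), so E[sigma(g)^2] = 1 for g ~ N(0,1).
def sigma(t):
    return (t**2 - 1) / np.sqrt(2)

# Teacher w_star and student w are arbitrary unit vectors (parameters on the sphere).
w_star = rng.standard_normal(d); w_star /= np.linalg.norm(w_star)
w = rng.standard_normal(d); w /= np.linalg.norm(w)

x = rng.standard_normal((n, d))
y = sigma(x @ w_star)   # noiseless single-index labels
f = sigma(x @ w)        # model predictions

mse = np.mean(0.5 * (y - f) ** 2)
corr = np.mean(1.0 - y * f)
print(mse, corr)  # the two population losses agree up to Monte Carlo error
```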
> The authors say in line 81 that the class of CSQ algorithms contains gradient descent. I am not certain this is true? Could the authors include a citation. Or perhaps it requires some qualifications?
The key connection is that GD with square loss only interacts with the labels $y$ through correlational queries. For example, for GD with model $f_\theta$ and with square loss you have $$\nabla L(w;x,y) = f_\theta(x)\nabla f_\theta(x) - \underbrace{y\nabla f_\theta(x)}_{\text{query}}.$$
The other term in the gradient only depends on the distribution of $x \sim N(0,I_d)$ which is known so it does not enter the sample complexity. See the CSQ section of the general rebuttal for additional discussion.
> In line 89 could the author explain (or at least define) what they mean by "ERM" or point to section 7.2
We used ERM to refer to empirical risk minimization where the goal is to directly minimize the empirical loss $L_n := \frac{1}{n} \sum_i L(w;x_i,y_i)$ using an algorithm like gradient descent or minibatch SGD. In this setting, each sample is seen multiple times, in contrast to the online setting studied in the rest of the paper.
---
Rebuttal Comment 1.1:
Title: Read Author Rebuttal
Comment: Thank you for your reply. I appreciate the discussion of the variance term, and this would be great to include if there is space. If there is no space, perhaps the authors could include this sketch in the appendix.
Yes, I see that GD with square loss is included in CSQ. Perhaps it was not clear in the paper that this was referring to square loss; could the authors specify that?
---
Reply to Comment 1.1.1:
Comment: Yes – we will update the exposition in Section 4 (Main Results) and we will add a reference to a new section in the appendix where we clarify the connection between CSQ and GD with square/correlation loss. | Summary: This paper aims to close the gap between the sample complexity of online SGD and the CSQ lower bound for learning a single-index model. Inspired by the implicit regularization of minibatch SGD, the authors show that online SGD on a smoothed correlation loss only needs sample size $n=\Omega(d^{k^*/2})$ to efficiently learn the feature direction. The smoothed loss helps to avoid the poor local minima of the unsmoothed loss, which reduces the number of samples needed to learn a single-index model and matches the CSQ lower bound. The authors also present a connection with a partial trace algorithm for tensor PCA.
Strengths: Overall, I found the paper well-written and easy to follow. The proof sketch in Section 5 provides a clear relation among SNR of the feature alignment, the smoothed loss, and sample complexity. The paper presents many interesting insights into bridging between smoothed loss landscape and sample complexity. I believe this paper will further help us to understand the training dynamics of minibatch SGD for learning a single-index model in future work.
Weaknesses: The main concern is the CSQ lower bound. The gradient-based algorithm for correlation loss is a correlation statistical query learner. But can gradient descent of square loss be fully described by correlation statistical query? There should be some additional term in gradient descent of square loss which is not a correlation query. It would be better to have a clarification on this problem.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. [Tan and Vershynin, 2019] also studied phase retrieval via online SGD and got a sharp bound similar to [1].
2. How about the misspecification case when the link target function is different from the activation function? Most of the analysis in Section 5 still works well in this case.
3. Footnote 1 on page 3: you should mention $T_w$ is the tangent space at $w$.
4. Line 125: typo $He_0(x)=1$.
5. Line 178: should it be $v_t\perp w_t$? In the analysis below Line 178, how do we ensure that the gradient norm $||v_t||=O(1)$?
6. Line 212, should it be $z\perp w$?
7. Equations below Lines 212 and 214, index $k$ should be changed into $k^*$.
8. In Section 6, for the experiments, how do you choose the batch size and learning rate? And in Figure 2, if we use the square loss for minibatch SGD training, do we need a larger sample size? It would be better to compare these two cases in the simulation to visualize the difference.
9. Line 433, typo
10. Maybe you should provide references for Lemmas 4 and 5 in Section A.2. Similar to the equation below Line 272, you should distinguish Hermite polynomial and Hermite tensor in the multivariate case.
11. In Lemma 7, you should explain the notion $z\sim\mathbb{S}^{d-2}$ and its relationship with $z_1$. Or use $\text{Unif}(\mathbb{S}^{d-2})$. Is $z_1$ the first entry of vector $z$? Similar issue for Lemma 25.
12. Equation below Line 537: should be $\nabla_w L_\lambda (w)$
13. Lemma 14, what is $\mathbb{E}_{\mathcal{B}}$?
14. Below Line 692, what is $m$? It should be $q$ in Theorem 2.
15. Equation below Line 726, $z$ in the integral should be $z_1$.
---
- Tan, Y.S. and Vershynin, R., 2019. Online stochastic gradient descent with arbitrary initialization solves non-smooth, non-convex phase retrieval. arXiv preprint arXiv:1910.12837.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations of the work are well addressed by the authors. I do not believe this work has any particular negative social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the detailed and thoughtful review. We’ve corrected the typos you identified and we’ll try to clarify the higher level questions below. Please let us know if you have any further questions or if anything is still unclear.
> The main concern is the CSQ lower bound. The gradient-based algorithm for correlation loss is a correlation statistical query learner. But can gradient descent of square loss be fully described by correlation statistical query? There should be some additional term in gradient descent of square loss which is not a correlation query. It would be better to have a clarification on this problem.
You are correct that there is an extra term, but this extra term does not interact with the labels $y$. As we are in the setting where the distribution over $x \sim N(0,I_d)$ is known, this extra term can be directly estimated by a CSQ learner. We’ve included additional details in the CSQ section of the rebuttal.
> [Tan and Vershynin, 2019] also studied phase retrieval via online SGD and got a sharp bound similar to [1].
Thank you, we will add this reference to the revision.
> How about the misspecification case when the link target function is different from the activation function? Most of the analysis in Section 5 still works well in this case.
Our result naturally extends to the misspecified setting but in a somewhat nontrivial way. Let the target link function be $\phi$ with information exponent $k^\star$ and the learner activation be $\sigma$ with information exponent $s^\star$. In the SNR calculation, the signal remains unchanged but the noise is equal to $d \lambda^{-2 s^\star}$. Going through the rest of the proof gives that the final sample complexity is $d^{k^\star - \max(1,s^\star/2)}$. Therefore when the information exponents of $\sigma$ and $\phi$ are equal, the analysis and final result are unchanged. However in the case when $\sigma$ has a lower information exponent, the sample complexity is strictly worse. We will include this discussion in the next revision of our paper.
> In the analysis below Line 178, how do we ensure that the gradient norm $\|v_t\| = O(1)$?
The derivation in this part of the proof sketch is heuristic so we do not directly track all of the error terms. In fact, $\|v_t\| = O(d)$ so this should be $\eta^3 d$. We believe explicitly tracking these error terms does not add to the proof sketch but to avoid being misleading we’ve replaced the $O(\eta_t^3)$ with $\ldots$ to represent the higher order terms which need to be carefully bounded.
> In Section 6, for the experiments, how do you choose the batch size and learning rate?
We give explicit formulas that we used to pick $\eta$ and the batch size (line 231-232) for all of the experiments.
> And in Figure 2, if we use the square loss for minibatch SGD training, do we need a larger sample size? It would be better to compare these two cases in the simulation to visualize the difference.
We believe that the sample complexity would remain unchanged so long as the smoothing is applied to the function $f$ rather than to the loss.
> Lemma 14, what is $\mathbb{E}\_\mathcal{B}$
Our apologies, $\mathcal{B} = (x,y)$ denotes the current sample; however, you are correct that this was never defined. We’ve replaced this notation throughout the paper with $\mathbb{E}\_{x,y}$.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thank you very much to the authors for their detailed response. Regarding the clarification in the general rebuttal, I have confidence in the overall correctness of their proof and would like to increase my score to recommend an accept. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their detailed and thoughtful reviews. We’ve addressed the most common questions in this rebuttal section.
## On the CSQ lower bound
The connection between the CSQ framework and gradient descent with square loss is that GD only interacts with the labels $y$ through correlational queries. For example, if the model is $f_\theta$, the gradient is equal to $$\nabla L(w;x,y) = f_\theta(x)\nabla f_\theta(x) - \underbrace{y\nabla f_\theta(x)}_{\text{query}}.$$
The other term in the gradient only depends on the distribution of $x \sim N(0,I_d)$, which is known, so it does not enter the sample complexity. We emphasize, however, that this connection is only heuristic, as the errors in GD are random while the errors in the SQ/CSQ framework are adversarial. Nevertheless, such SQ/CSQ bounds have been commonly used to argue the existence of statistical-computational gaps in various learning problems [1,2,3,4].
[1] Feldman et al., 2012, Statistical Algorithms and a Lower Bound for Detecting Planted Cliques
[2] Diakonikolas & Kane, 2017, Statistical query lower bounds for robust estimation of high-dimensional gaussians and gaussian mixtures
[3] Goel et al., 2020, Statistical-Query Lower Bounds via Functional Gradients
[4] Dudeja & Hsu, 2020, Statistical Query Lower Bounds for Tensor PCA
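As a concrete, purely illustrative numpy sketch of the decomposition above (with a hypothetical model $f_\theta(x) = \tanh(\theta \cdot x)$, which is our own stand-in and not the paper's architecture): the square-loss gradient splits exactly into a label-free term, which depends only on the known distribution of $x$, and a correlational query of the form $\mathbb{E}[y\,q(x)]$:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 10, 5000
theta = rng.standard_normal(d); theta /= np.linalg.norm(theta)

x = rng.standard_normal((n, d))
y = np.tanh(x @ rng.standard_normal(d))   # labels from some unknown target

# Model f_theta(x) = tanh(theta . x); its gradient w.r.t. theta is (1 - f^2) x.
f = np.tanh(x @ theta)
grad_f = (1 - f**2)[:, None] * x          # per-sample nabla_theta f

full_grad = np.mean((f - y)[:, None] * grad_f, axis=0)   # square-loss gradient
label_free = np.mean(f[:, None] * grad_f, axis=0)        # depends only on x ~ known dist
corr_query = np.mean(y[:, None] * grad_f, axis=0)        # correlational query E[y q(x)]

# The gradient is exactly (label-free term) - (correlational query).
assert np.allclose(full_grad, label_free - corr_query)
```

Only `corr_query` touches the labels, which is what places GD with square loss inside the CSQ framework.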
## Computing the Variance and why we need $\lambda \le d^{1/4}$.
We originally omitted the variance calculation due to space constraints but as this is a crucial part of the proof sketch, we’ve added a brief sketch for the $\lambda$ dependence. This is also the part of the proof that introduces the fundamental constraint $\lambda \le d^{1/4}$ which is crucial for the final sample complexity. Here is the sketch:
Recall that $L_\lambda(w;x,y) = 1-y\mathcal{L}\_\lambda(\sigma(w \cdot x))$. Differentiating through the smoothing operator gives:
$$
\nabla_w L_\lambda(w;x,y)
= -y \nabla_w~ \mathcal{L}\_\lambda(\sigma(w \cdot x)) \approx \lambda^{-1} x \mathcal{L}\_\lambda(\sigma'(w \cdot x)).
$$
We have that $y = O(1)$ and $\|x\| = O(\sqrt{d})$ so it suffices to bound $\mathcal{L}\_\lambda(\sigma'(w \cdot x))$. The variance of this term is equal to:
$$
\mathbb{E}\_{x}[\mathcal{L}\_\lambda(\sigma'(w \cdot x))^2] = \mathbb{E}\_x \left[ \mathbb{E}\_{z \sim \mu\_w} \left[\sigma'\left(\frac{w + \lambda z}{\sqrt{1 + \lambda^2}} \cdot x\right)\right]^2\right].
$$
To compute this expectation, we will create an i.i.d. copy $z'$ of $z$ and rewrite this expectation as:
$$
\mathbb{E}\_{x}[\mathcal{L}\_\lambda(\sigma'(w \cdot x))^2] = \mathbb{E}\_x\left[\mathbb{E}\_{z,z' \sim \mu\_w}\left[\sigma'\left(\frac{w + \lambda z}{\sqrt{1 + \lambda^2}} \cdot x\right)\sigma'\left(\frac{w + \lambda z'}{\sqrt{1 + \lambda^2}} \cdot x\right)\right]\right].
$$
Now we can swap the expectations and compute the expectation with respect to $x$ first using the Hermite expansion of $\sigma$. As the first nonzero Hermite coefficient of $\sigma'$ is the $(k^\star - 1)$st, this variance is approximately equal to the correlation between $\frac{w + \lambda z}{\sqrt{1 + \lambda^2}}$ and $\frac{w + \lambda z'}{\sqrt{1 + \lambda^2}}$ raised to the $k^\star-1$ power:
$$
\mathbb{E}\_{x}[\mathcal{L}\_\lambda(\sigma'(w \cdot x))^2] \approx
\mathbb{E}\_{z,z' \sim \mu_w}\left[\left(
\frac{w + \lambda z}{\sqrt{1 + \lambda^2}} \cdot \frac{w + \lambda z'}{\sqrt{1 + \lambda^2}}
\right)^{k^\star-1}\right]
= \mathbb{E}\_{z,z' \sim \mu\_w}\left[\left(\frac{1 + \lambda^2 z \cdot z'}{1 + \lambda^2}\right)^{k^\star-1}\right]
$$
As $z,z'$ are random unit vectors, their inner product is of order $d^{-1/2}$. Therefore when $\lambda \le d^{1/4}$, the first term in the numerator is dominant, while when $\lambda \ge d^{1/4}$, the second term is dominant. Combining these regimes gives that the variance is of order $\min(\lambda,d^{1/4})^{-2(k^\star-1)}$, which motivates our optimal choice of $\lambda = d^{1/4}$. Combining this with the fact that $y = O(1)$ and $\|x\| = O(\sqrt{d})$ gives that the gradient of the smoothed loss has variance $d\lambda^{-2k^\star}$ for $\lambda \le d^{1/4}$. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
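The scaling in the variance sketch above is easy to check numerically. The sketch below is our own illustration (it samples $z,z'$ uniformly on $S^{d-1}$ rather than $S^{d-2}$, which only shifts $d$ by one and leaves the scaling unchanged, and it fixes $k^\star = 4$): it verifies that $z \cdot z'$ has typical size $d^{-1/2}$ and that the bracketed expectation transitions between the two regimes around $\lambda = d^{1/4}$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, k_star = 256, 50_000, 4      # d**0.25 = 4 separates the two regimes

# z, z' ~ Unif(S^{d-1}); the rebuttal's z, z' live on S^{d-2} (orthogonal to w).
z = rng.standard_normal((n, d))
z /= np.linalg.norm(z, axis=1, keepdims=True)
zp = rng.standard_normal((n, d))
zp /= np.linalg.norm(zp, axis=1, keepdims=True)
t = np.sum(z * zp, axis=1)         # inner products of random unit vectors

print(d * np.mean(t**2))           # ~ 1, i.e. z . z' has typical size d^{-1/2}

def bracket(lam):
    # Monte Carlo estimate of E[((1 + lam^2 z.z') / (1 + lam^2))^(k*-1)]
    return np.mean(((1 + lam**2 * t) / (1 + lam**2)) ** (k_star - 1))

small = bracket(1.0)               # lambda <= d^{1/4}: ~ (1 + lambda^2)^{-(k*-1)}
large = bracket(16.0)              # lambda >= d^{1/4}: dominated by the z . z' term
print(small, large)                # the second regime is much smaller
```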
Understanding, Predicting and Better Resolving Q-Value Divergence in Offline-RL | Accept (poster) | Summary: This paper studies the divergence phenomenon in Q-value iteration methods (e.g. Q-learning), especially focusing on the offline RL scenario. They introduce a theoretical framework for studying this issue, predicting divergence and even the training step at which it is likely to happen. Such analysis, which is based on the NTK (Neural Tangent Kernel), fully contains the linear case study (i.e. deadly triad analysis in linear settings) and extends it to non-linear value approximation. The insights they draw allow them to show that reducing a new metric, SEEM which measures self-excitation using NTK, reduces divergence by regularising the network. Regularising the generalisation of Q-networks as opposed to the dominant view of constraining the policy results in a new viewpoint in handling divergence in Q estimation and could yield many improvements in this domain in the future. They experiment with various regularisation methods from the deep learning toolbox, including dropout, BatchNorm, and LayerNorm, showing that LayerNorm is best able to stabilise training (i.e. avoid blowup in Q estimates) and achieve more homogenous and lower SEEM. They show that BC (behaviour cloning) also achieves a low SEEM but at the cost of bias due to policy constraints, which does not allow it to find the best solution or perform well in more challenging domains.
Strengths: - The regularisation viewpoint on this phenomenon is a new one (as far as I know) yet a natural one.
- The regularisation viewpoint frees us from introducing policy constraints (which introduce bias) and allows us to simply solve the problem using off-the-shelf methods from the deep-learning toolbox.
- The theorems (if proofs are correct, I couldn't verify them personally with certainty) bring highly beneficial theoretical insights to the problem.
- The experiments are remarkable at showing that: (1) SEEM is a powerful metric in signifying divergence, (2) LayerNorm reduces SEEM and more homogeneously does so during training than other regularisation methods, resulting in stable Q estimation, (3) LayerNorm combined with SOTA offline RL methods results in significant improvements, especially under scarce access to offline data.
Weaknesses: - I believe a short description of offline RL methods used could enhance the exposition of the ideas: policy constraint is not formally introduced because the methods incorporating it are not discussed in any depth.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Given that this paper does not consider EMA or frozen targets (as in DQN), I am wondering how much using such gradual target updates contributes to alleviating divergence in Q estimation. In relation to the above question, I'm also uncertain how much using double Q-learning alleviates such issues in the offline RL setting. Wouldn't it be useful to have an ablation study to see, e.g., how much adding each of these in combination with each other helps performance and how much each of these helps in isolation? Say, use **DDPG w/o EMA** vs. **DDPG w EMA** vs. **DDPG w EMA and Double Q-learning** vs. **DDPG w EMA and Double Q-learning and LayerNorm**, showing that each additional advancement improves performance. Or compare in isolation, e.g.: **DDPG w/o EMA w LayerNorm** vs. **DDPG w EMA w/o LayerNorm**.
2. Are the results in Fig. 9 in combination with EMA? If not, what if you use EMA to show how the insights from the scenario w/o EMA carry over to the with-EMA case?
3. Does using LayerNorm as in your solution have the potential of allowing online DQN-style methods to step away from EMA/frozen target approaches altogether towards using the same online and target networks?
4. Does your solution allow DQN to solve Baird’s Counterexample (which was first used to show the potential divergence of Q-learning with function approximation)?
5. Does the analyses apply also to discrete-action pure critic settings such as DQN?
**Minor:**
- Can you elaborate on the way BC introduces explicit bias?
- I assume from the size of the $d_0$ layer that the architecture of the Q-function is action-in, correct?
Line 82: don’t -> do not
Line 119: To summary -> In summary
Line 123: explains -> explain
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: 1. Analysis does not directly apply to the standard practice of using Exponential Moving Average (EMA; as in DDPG) or target-network freezing (as in DQN).
2. It is not clear how much of the contributions would carry over to the online learning setting and what would be the implications of using LayerNorm in online RL.
3. The analysis seems to apply to MLP architectures. It is not directly discussed how much of the contributions would apply to commonly used architectures such as ConvNets, ResNets, etc.
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Flag For Ethics Review: ['No ethics review needed.'] | Rebuttal 1:
Rebuttal: Thank you for your time and constructive feedback. Please see the response below.
# Q1: Clarification on EMA, Double-Q, and LayerNorm's Contributions
Your suggestion to illustrate the interplay of EMA, Double-Q, and our LayerNorm in mitigating Q-value divergence is particularly insightful. To this end, we conducted an ablation study in the following sequence: "DDPG without EMA", "DDPG + EMA", "DDPG + EMA + DoubleQ", and "DDPG + EMA + DoubleQ + LayerNorm". Each successive step built upon the last. We carried out these experiments in the 10% and 50% MuJoCo settings, which are notably susceptible to divergence. We report the total scores across nine tasks; please refer to Figure 1(a) in the attached PDF in the global response for specific results.
It clearly reveals that EMA and Double-Q, which are used as common practice in popular offline RL algorithms such as CQL and TD3+BC, contribute to preventing divergence and thus boosting performance. Nevertheless, **built upon EMA and Double-Q, our LayerNorm can further significantly boost performance by suppressing divergence.**
# Q2: Whether results in Fig. 9 use EMA?
You're correct. Fig.9, as well as all experiments in Section 5 (Performance Evaluation), **incorporate EMA**. This is in line with previous practical algorithms such as TD3, CQL, and IQL. We conducted a theoretical analysis without EMA to reduce the complexity of the derivation. Nevertheless, the significant outcomes presented in Section 5 imply that insights derived from scenarios without EMA still apply effectively when EMA is included.
# Q3: LayerNorm Allows Online Methods without EMA
Upon your suggestion, we tested SAC without EMA in two environments in **online settings**: Hopper and Walker. Please refer to Figure 2 in the attached PDF for the return curves. Surprisingly, we discovered that **the LayerNorm solution allows SAC without EMA to perform as well as SAC with EMA**. In contrast, SAC without EMA and LayerNorm behaved aimlessly, maintaining near-zero scores. This reveals that LayerNorm (or further regularizations later discovered via SEEM) has the potential to allow online DQN-style methods to step away from EMA/frozen-target approaches altogether, towards using the same online and target networks.
Your advice led us to these unexpected yet intriguing findings. These results partially reveal potential implications of SEEM in an online learning setting, as written in the limitations section of your review. We leave SEEM in the online setting as future work.
# Q4: Baird’s Counterexample
We would like to clarify that Baird's Counterexample specifically targets **linear** approximation. In our experiment, we found **neural networks** exhibited stable convergence within the Baird counterexample scenario. Furthermore, we investigated cases of divergence within linear situations and discovered that **incorporating LayerNorm in front of linear approximation could effectively lead to convergence** (see Fig. 1(b) in the attached pdf). This finding is not entirely surprising. Since a linear regression model can be considered a special case of neural networks, our analysis can be directly applied to linear settings.
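For completeness, the classic linear divergence on Baird's counterexample (the premise of the question) can be reproduced in a few lines. The sketch below is our own illustration of the expected semi-gradient TD(0) update with the standard star-MDP features; the positive eigenvalue of the update matrix is exactly the kind of self-excitation that the linear experiment in Fig. 1(b) addresses with LayerNorm:

```python
import numpy as np

# Baird's 7-state star MDP: all rewards are zero, the target policy always
# jumps to state 7, and the linear features are over-parametrized.
gamma, alpha = 0.99, 0.01
Phi = np.zeros((7, 8))
for s in range(6):
    Phi[s, s], Phi[s, 7] = 2.0, 1.0   # v(s) = 2 w_s + w_8 for states 1..6
Phi[6, 6], Phi[6, 7] = 1.0, 2.0       # v(7) = w_7 + 2 w_8

# Expected semi-gradient TD(0) update under a uniform state distribution:
#   w <- w + alpha * Phi^T D (gamma * P Phi - Phi) w
D = np.eye(7) / 7.0
PPhi = np.tile(Phi[6], (7, 1))        # every next state is state 7
A = Phi.T @ D @ (gamma * PPhi - Phi)

print(np.linalg.eigvals(A).real.max())  # positive => expected updates diverge

w = np.array([1., 1., 1., 1., 1., 1., 10., 1.])  # textbook initialization
w0 = np.linalg.norm(w)
for _ in range(2000):
    w = w + alpha * (A @ w)
print(np.linalg.norm(w) / w0)           # weight norm blows up
```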
# Q5: Discrete-action pure Critic Settings
Indeed, our analyses are not limited to continuous scenarios; they are equally applicable to discrete settings. The theoretical insights can be seamlessly transferred to a discrete setting, requiring only negligible alterations. **We have verified our analysis using a simple discrete-action gridworld with DQN**. Notably, in the gridworld scenario, all conclusions mentioned in the paper are observed, including the Q-value divergence in sync with the rise of SEEM, the linear decay of the Q-value's inverse with the SGD optimizer, etc. We've also assessed LayerNorm's effectiveness in curbing divergence in the discrete gridworld. Please do not hesitate to inquire further on this topic.
# Response to Minors
[The way BC introduces explicit bias] **Simply speaking, BC restricts any learned policy to be near the behavior policy. This bias is harmful when the behavior policy is sub-optimal**. An illustration is the example in Fig. 7 in our paper. Specifically, the offline dataset of Antmaze-large-play features behaviors of various qualities. When employing a strong BC (with a large BC coefficient of 3 or 10) to circumvent OOD actions and subsequent divergence, the learned policy is forced to mimic some sub-optimal actions, resulting in poor performance.
[Architecture of the Q-function] You are correct. The Q-function is action-in.
# Conclusion
In conclusion, we appreciate the opportunity to address these vital concerns. We hope that our responses sufficiently clarify your question. We would be grateful for your reconsideration of the **confidence**, in light of the explanations provided. If there are any additional questions about our research, please do not hesitate to reach out.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, thoroughly addressing my questions/concerns. I also read through the discussion with reviewer T4o6 and believe that the reviewer's concerns are being addressed. I have now raised my confidence score and would be glad to see the paper accepted.
---
Reply to Comment 1.1.1:
Comment: Thanks for taking the time to review our paper and for your prompt response. We also appreciate your constructive suggestions, which help us more clearly identify LayerNorm's effectiveness. We are honored to have received your support for its acceptance. | Summary: The paper theoretically investigates the problem of value function overestimation in offline RL through the lens of neural tangent kernel (NTK). Additionally, the paper presents empirical findings that validate the effectiveness of incorporating LayerNorm before each activation function in mitigating value network divergence. The paper conducts extensive experiments on D4RL AntMaze and D4RL MuJoCo Gym while varying the dataset size and demonstrates that LayerNorm ensures stable value convergence and leads to state-of-the-art performance even with significantly small datasets.
Strengths: - The paper is well written and organized. The paper provides empirical evidence to support its theoretical claims (Figures 3, 4).
- The proposed method can be easily integrated with existing offline RL algorithms (e.g., CQL, IQL, TD3+BC) and consistently improves performance.
Weaknesses: - It is worth noting that DR3 has already investigated the dynamics of Q-learning in offline RL using NTK [1]. It is crucial for the author to properly reference this prior work and to establish a clear connection between the two studies. The author should explicitly highlight the similarities and differences between the proposed analysis and the findings presented in DR3.
- The idea of applying LayerNorm (or GroupNorm) has already been proposed in Scaled QL [2].
- There should be a proper reference to support the claim that the neural network becomes a linear function when the input’s norm is too large (line 234).
- It is unclear why the linearity of the neural network leads to an improperly large kernel value (line 234).
- The experimental setup in Section 5.2 is very similar to DR3 [1].
[1] Aviral Kumar et al., DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization, ICLR 2022. \
[2] Aviral Kumar et al., Offline Q-Learning on Diverse Multi-Task Data Both Scales And Generalizes, ICLR 2023.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please see the weaknesses above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and thoughtful review. Please see the response below.
# Linear NTK Value
Please refer to the global response for a detailed explanation.
# About Missing References
Thank you very much for pointing out these two papers that we missed. Since we conducted our literature review along the line of work on Q-value divergence and the deadly triad in offline RL, we were unaware of these two papers when writing the draft. We apologize for this unintended oversight.
After carefully re-examining these two papers, we agree that they are very relevant to our work; hence we will add them to our related work section with discussions in all future versions.
**However, despite similar observations, we would like to highlight some important differences between our work and these two references.**
# Comparison with DR3
Indeed, our work shares similar observations of the connection between Q-value divergence and feature rank collapse (called feature co-adaptation in DR3). However, our work is different in the following aspects.
## 1. More General Setting & Different Perspectives
While DR3 attributes feature rank collapse to implicit regularization where $L(\theta)=0$, we propose a different perspective.
To start with, we first find that
* (i) Normalization-free networks' value predictions of the dataset sample and the extreme point (i.e. the action $A_{ex}$ with maximum value entries) exhibit a strange but strong correlation (a large NTK value)
* (ii) Such networks also tend to output large values for these $A_{ex}$ (see global response), which makes these extreme points easily become policy actions. In Fig. 3 in global response pdf, you can see policy actions tend to move toward extreme points when Q-value divergence occurs in all D4RL tasks.
It is then shown that (i) and (ii) are the root cause for many consequences, including feature co-adaptation and Q-value divergence.
1. The Q-value estimation network overestimates some out-of-sample (typically extreme point) action $A_{ex}$'s Q-value $\hat{Q}(s', A_{ex})$.
2. This inflated value becomes bootstrapped target, passing its influence to state $s$ by TD update, where the offline dataset contains transition tuple $(s,a,s')$. The Q-network is updated to boost $\hat{Q}(s,a)$.
3. Due to the large NTK value between $(s', A_{ex})$ and $(s,a)$ (see Fig. 5), TD updates the network's parameter to increase $\hat{Q}(s', A_{ex})$ even more than $\hat{Q}(s,a)$.
4. A loop is formed by 2 and 3. $\hat{Q}(s,a)$ keeps chasing $\hat{Q}(s', A_{ex})$'s value, leading the model's parameters to diverge along a certain direction to infinity, which causes every input's feature to become parallel.
This framework is **not restricted to the over-parametrized regime which requires $L(\theta)=0$, nor does it require the label noise $\varepsilon$ from SGD**. In fact, such a near-zero critic loss assumption mostly does not hold in the real practice of RL. Also, our analysis does not rely on any n-dimensional basis or sub-space assumptions.
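This self-excitation loop can be caricatured with a two-point linear toy model (a hypothetical sketch of ours, not a construction from the paper): `phi1` is the feature of an in-dataset pair $(s,a)$ and `phi2` that of the bootstrapped extreme point $(s', A_{ex})$, with their inner product playing the role of the NTK value between the two points. TD self-excites precisely when $\gamma$ times this coupling exceeds the self-term:

```python
import numpy as np

gamma, eta, steps = 0.99, 0.1, 200

def bootstrapped_value(coupling):
    # Linear critic q(phi) = theta . phi trained on one transition (s, a) -> s'
    # with reward 0; the target bootstraps from the extreme action A_ex.
    phi1 = np.array([1.0, 0.0])        # feature of the dataset pair (s, a)
    phi2 = np.array([coupling, 1.0])   # feature of (s', A_ex); <phi1, phi2> = coupling
    theta = np.array([1.0, 1.0])
    for _ in range(steps):
        td_error = gamma * theta @ phi2 - theta @ phi1
        theta = theta + eta * td_error * phi1   # TD only updates through phi1
    return theta @ phi2                         # estimated Q(s', A_ex)

print(bootstrapped_value(2.0))   # gamma * 2.0 > 1: the Q-value self-excites and diverges
print(bootstrapped_value(0.5))   # gamma * 0.5 < 1: stable, bounded value
```

In this caricature, shrinking the effective coupling flips the sign of the update matrix's nonzero eigenvalue and restores stability, which mirrors the SEEM eigenvalue condition.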
**Therefore, our analysis provides an alternative explanation for feature rank collapse and Q-value divergence from the perspective of norm-free network's pathological extrapolation. It applies to more general settings and provides new information.** These novel insights and implications are not covered in previous work, because it requires several non-trivial observations together with a rigorous theory to encapsulate them. As reviewer 4Xya said, our work "results in a new viewpoint in handling divergence in Q estimation and could yield many improvements in this domain in the future."
## 2. Divergence Metric & Precise Dynamics
We established a measure (SEEM) that can synchronously signify divergence. **SEEM is not just an index to indicate divergence, but also a new handle to solve the problem**. Moreover, DR3 does not characterize the model's dynamics as accurately as our work does. We formally prove in Theorems 1, 2, and 3 that the model's parameter $\theta$ **evolves linearly for the Adam optimizer** (see Figs. 4 and 14), and **collapses at a certain iteration for SGD**, by solving an ODE (see Fig. 3). This accurate prediction demonstrates our precise understanding of neural network dynamics and is absent in previous work like DR3 and Scaled-QL.
## 3. Simpler and Cheaper
DR3 requires hyperparameter tuning for $c_0$, which involves extra effort in searching for proper values across different environments. Moreover, it introduces extra computation when computing the gradient of $\phi(s',a')$ to get $\overline{\mathcal{R}}_{exp}(\theta)$. Our method avoids all these shortcomings with a simple LayerNorm.
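As a concrete illustration of why a plain LayerNorm can cap this kind of extrapolation (a minimal pure-Python sketch omitting LayerNorm's learnable affine parameters; this is not the paper's implementation): LayerNorm is invariant to positive rescaling of its input, so the features of an extreme input $\lambda x$ are mapped to essentially the same normalized vector as those of $x$, cutting off the unbounded feature growth.

```python
def layer_norm(v, eps=1e-5):
    # normalize a feature vector to zero mean and (near-)unit variance
    n = len(v)
    mu = sum(v) / n
    var = sum((x - mu) ** 2 for x in v) / n
    return [(x - mu) / (var + eps) ** 0.5 for x in v]

def relu(v):
    return [max(x, 0.0) for x in v]

x = [1.0, -2.0, 3.0, 0.5]
lam = 1000.0  # an "extreme" rescaling of the same input direction

plain = relu([lam * xi for xi in x])
normed = layer_norm(plain)

# without LayerNorm the feature norm explodes with λ ...
assert sum(v * v for v in plain) ** 0.5 > 1000.0
# ... with LayerNorm it stays bounded and (up to eps) matches the λ=1 features
base = layer_norm(relu(x))
assert all(abs(a - b) < 1e-3 for a, b in zip(normed, base))
```

The scale invariance shown in the last assertion is a generic property of mean-variance normalization; the paper's contribution is the theory of why bounding this extrapolation suppresses SEEM and Q-value divergence.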
# Comparison with Scaled-QL
Thank you for mentioning this work. Indeed, as we acknowledged in Lines 242-247, some previous works like LB-SAC and RLPD have empirically utilized LayerNorm. We will add Scaled-QL to the references. Nevertheless, compared to Scaled-QL, our contribution is three-fold:
## 1. Theoretical Justification
We provide rigorous theoretical justification for why normalization helps where previous works do not. **The fundamental mechanism of Q-value divergence and its relation to the extrapolation of Q-networks are studied.**
## 2. Comprehensive Empirical Evaluation
Scaled-QL applies LayerNorm only to CQL, which does not demonstrate LayerNorm's effectiveness for other popular offline RL algorithms. In contrast, our research confirms the efficacy of regularization methods that diminish SEEM and improve performance **across a range of offline RL algorithms, including CQL, IQL, TD3+BC, and Diffusion-QL, covering the primary classes of offline RL.**
## 3. Beyond Normalization
As the other reviewers acknowledged, the SEEM metric "will lead others to further engineering that will yield even better-performing adjustments than the simple addition of the LayerNorm". With the SEEM metrics, **we can examine regularization methods from the DL toolbox and even create new methods.** This could potentially inspire further advancements in the field.
---
Rebuttal Comment 1.1:
Comment: Thank you for your explanation on the differences between the proposed analysis and that of DR3. While Theorem 3 in your paper and Theorem 3.1 in DR3 arrive at similar conclusions, DR3 adopts more assumptions. I find this quite intriguing. Have you verified the validity of the assumptions and lemmas you made? Specifically, regarding Lemma 1, it is not common that an MLP with ReLU activation is a homogeneous function in the parameter space.
---
Reply to Comment 1.1.1:
Comment: Thank you for your time and reply. Here is a further explanation; we hope it helps resolve your confusion.
# About Theorem 3 and Theorem 3.1 in DR3
Yes, these two results do seem similar, but they are actually derived from quite different foundations. Our results convey novel implications by capturing a more realistic mechanism, which is why fewer assumptions are needed to reach a similar conclusion.
Specifically, DR3 Theorem 3.1 analyzes the condition for $\theta^*$ to be the stable fixed point of TD-update, namely $Q_{\theta^*}\left(s_i, a_i\right)=r_i+\gamma Q_{\theta^*}\left(s_i', a_i' \right)$ for every $(s_i,a_i,s_i')$. This means DR3's analyzing framework **focuses on the setting where the model perfectly fits the offline training data**, and the SGD noise plays a role as implicit regularization. This noise continues to increase the Q-value for the unseen data point $Q_{\theta^*}(s_i',a_i')$ in the null space of $G$, while preserving its fixed-point property in training data at the same time. As a consequence, all the dataset points' value inflates to infinity.
Our key difference and advancement is that we found such divergence does not happen in the parameter subspace where the Q-network perfectly fits the training data's Bellman equation. Actually, the model is "chasing its shadow" rather than staying at the stable fixed point of the Bellman update. Once the pathological extrapolation of the MLP starts the so-called self-excitation procedure, **the training loss never becomes near-zero, but keeps exploding**. The underlying mechanism is explained in steps 1 to 4 of our initial rebuttal response. Each TD update step on $\theta$ tries to bring $Q_{\theta}(s_i, a_i)$ closer to $r_i+\gamma Q_{\theta}\left(s_i', a_i'\right)$, but increases the latter bootstrapped target even more due to improper generalization.
In conclusion, our theory for explaining the divergence mechanism differs from the very beginning. It captures a mechanism that had not been fully characterized before. Therefore, being more fundamental, our analysis reaches its conclusion without assumptions such as zero training loss or assumptions on the SGD noise's covariance.
# Homogeneity of ReLU-Activated MLP
Indeed, a ReLU-activated MLP **with bias** is not rigorously homogeneous in the entire parameter space. But a bias-free ReLU-activated MLP, namely the function $f(x)=W_L \sigma(W_{L-1} \sigma (\cdots \sigma(W_1 x)))$, is homogeneous with respect to $\theta=(W_1, W_2, \cdots, W_L)$. You can verify this from the simple facts that $W_i x$ and the ReLU activation $\sigma(\cdot)$ are homogeneous functions ($\max(kx,0)=k\max(x,0)$ for $k>0$), and that the composition of homogeneous functions is homogeneous.
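This $L$-degree homogeneity can be checked numerically in a few lines of pure Python (an illustrative sketch on a random toy network, not the paper's D4RL experiment): scaling every weight matrix of a 3-layer bias-free ReLU MLP by $k$ scales the output by exactly $k^3$.

```python
import random

def relu(v):
    return [max(x, 0.0) for x in v]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def mlp(weights, x):
    # bias-free ReLU MLP: f(x) = W_L σ(W_{L-1} σ(... σ(W_1 x)))
    h = x
    for W in weights[:-1]:
        h = relu(matvec(W, h))
    return matvec(weights[-1], h)[0]

random.seed(0)
dims = [4, 8, 8, 1]  # L = 3 weight layers
weights = [[[random.gauss(0.0, 1.0) for _ in range(dims[i])]
            for _ in range(dims[i + 1])] for i in range(3)]
x = [random.gauss(0.0, 1.0) for _ in range(dims[0])]

k = 5.0
f = mlp(weights, x)
f_scaled = mlp([[[k * w for w in row] for row in W] for W in weights], x)
# homogeneity of degree L: f_{kθ}(x) = k^L f_θ(x)
assert abs(f_scaled - k ** 3 * f) < 1e-6 * max(1.0, abs(f_scaled))
```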
However, **it is important to point out that our analysis also applies to ReLU-activated MLPs with a bias in each layer.** The reason is a little subtle. We empirically found that such homogeneity still holds with high precision for MLPs with bias, at least well enough to signify value divergence and give the correct asymptotic order. We hypothesize that when value divergence is about to happen, the product $W_i z_i$ is relatively large compared to the bias $b_i$, so the effect of the bias on the output is negligible.
We also ran experiments to validate the homogeneity. We define a family of 3-layer ReLU-activated MLPs (with bias terms) with different scalings $\lambda$ of the same network parameter $\theta$ (from D4RL training checkpoints where value divergence happens), and feed these networks the same input. One example is shown in the table below, which confirms that the $L$-degree homogeneity of the output, gradient, and NTK holds at high precision, and that the NTK feature vectors are almost parallel. The larger the scaling factor, the more accurately the output follows the $k^L$ scaling pattern ($k^{L-1}$ for the gradient and $k^{2(L-1)}$ for the NTK), as described in Lemma 1. Furthermore, we empirically validated that the $L$-degree homogeneity of the NTK holds for all checkpoints in the D4RL experiments where divergence happens.
| $\lambda$ | 5 | 10 |
| ---- | ---- | ---- |
| $f_{\lambda \theta}(x)/f_{\theta}(x)$ | 124.99 ($5^3$) | 999.78 ($10^3$) |
| grad scale | 25.001 ($5^2$) | 100.001 ($10^2$) |
| NTK scale | 624.99 ($5^4$) | 9999.84 ($10^4$) |
| NTK cos | $>1-10^{-6}$ | $>1-10^{-6}$ | | Summary: This paper analyzes Q-value divergence in Offline-RL by considering a
neural tangent kernel for the value function. They show that
consideration of this kernel is predictive of Q-value divergence.
This analysis further leads to the observation that using a LayerNorm
yields a kernel that behaves more like one would hope -- with nearer
values being more impacted than farther ones. Their empirical tests
on D4RL benchmarks similarly show the benefits of LayerNorm.
Strengths: I like the insights in this paper and suspect that its publication
will lead others to further engineering that will yield even better
performing adjustments than the simple addition of the LayerNorm.
Figure 5 suggests there is more to be done. Why does the figure on
the left indicate almost the opposite of what one would hope for. At
a minimum, should the value right at x_0 be red?
Weaknesses: The paper needs a number of editing improvements. First, it needs an
English grammar checker. The errors mostly don't lead to difficulties
in understanding, but, the paper should not be published with the
current level of English quality.
In section 5.1, the BC term should be explained even though the
reference is given for it.
The reference to a score of 81.4 in that section does not appear to match
the numbers in Table 1.
The legends in the figures need to use a larger font size.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: None
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and thoughtful review. On your questions, please see the response below.
# Explanation for Figure 5
Please note that the color in Figure 5 represents the normalized NTK. Indeed, in the left figure, the absolute NTK surrounding $x_0$ is positive. However, values farther from $x_0$ are significantly larger. As a result, the NTK around $x_0$ is minimal and normalized to a value close to zero, as indicated by the blue color.
# Addressing Weaknesses
We appreciate your suggestions on grammar, font size, and other issues and will correct them in our formal version. We will conduct a thorough grammar check to correct grammar and spelling mistakes. Additionally, we intend to provide a formal introduction to BC and adjust the font size of the legends in subsequent versions of the paper.
The value 81.4 in Line 289 is a previous result obtained using fewer seeds, whereas 80.8 in the table represents the accurate result computed with 10 seeds. We will rectify this discrepancy. | null | null | Rebuttal 1:
Rebuttal: **Note**: We appreciate the constructive feedback from all reviewers. We have prepared a PDF document for reviewers, containing figures from extensive experiments and results that further illustrate and support our rebuttal responses.
# Why linear function and large NTK value?
Some reviewers raised questions about why a ReLU-activated MLP without normalization layers becomes a linear function when the input's norm is too large, and why this linearity leads to improperly large kernel values. Here is a more detailed explanation with intuition.
- **Linearity Prediction Outside Dataset Range**
As pointed out in [1] (Fig. 1), a ReLU-activated MLP without any normalization layer becomes a linear function for points far outside the dataset range. This fact can be understood as follows: consider an input $\lambda x$; as $\lambda \to \infty$, the activation state of every neuron in the network becomes deterministic. The ReLU activation layer then reduces to multiplication by a constant 0-1 mask, and the whole network degenerates to a linear function.
- **Large NTK Value between extreme point and dataset sample**
If the network becomes a linear function for inputs $\lambda x_0, \lambda\geq C$, this causes an arbitrarily large NTK value between the extreme points $\lambda x_0$ and the dataset point $x_0$. Since $f_{\theta}(\lambda x_0)$ can now be written in the equivalent form $W^T (\lambda x_0)$, the gradient feature $\phi(\lambda x_0)=\nabla_{W}W^T (\lambda x_0)= \lambda x_0$ is linearly proportional to $\lambda$. This in turn makes the NTK value between $x_0$ and $\lambda x_0$ grow linearly in $\lambda$, which is unbounded.
This phenomenon can be intuitively interpreted as follows: since the value prediction of $f_{\theta}$ along the ray $\lambda x_0$ is almost a linear function $W^T x$ for large $\lambda$, any subtle change of the effective parameter $W$ induces a large change at $\lambda x_0$, proportional to the input norm. This demonstrates a strong correlation between the value predictions at the point $x_0$ and the far-away extreme point $\lambda x_0$. A rigorous proof can be found in Proposition 1.
Note that such strong correlation is not restricted to strictly parallel inputs $x_0$ and $\lambda x_0$; it also holds between many small-norm dataset inputs $x_0$ and large-norm extreme inputs $x_{ex}$, once they share similarly activated neurons and effective linear parameters. The "extreme" points' norm also does not need to be several orders of magnitude larger than the dataset point $x_0$'s; it only needs to be larger by a constant factor to start the self-excitation procedure, and thereby the divergence loop.
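The linearity claim above can also be checked numerically (an illustrative sketch on a random toy network, not taken from the paper): once $\lambda$ is large enough that the 0-1 activation mask along the ray $\lambda x$ is frozen, the output becomes affine in $\lambda$, so the directional slope $f(\lambda x)/\lambda$ converges to the effective linear coefficient.

```python
import random

def relu(v):
    return [max(x, 0.0) for x in v]

def affine(W, b, v):
    return [sum(w * x for w, x in zip(row, v)) + bi
            for row, bi in zip(W, b)]

def mlp(params, x):
    # ReLU MLP *with* biases, so it is not exactly homogeneous in the input
    h = x
    for W, b in params[:-1]:
        h = relu(affine(W, b, h))
    W, b = params[-1]
    return affine(W, b, h)[0]

random.seed(0)
dims = [4, 8, 8, 1]
params = [([[random.gauss(0.0, 1.0) for _ in range(dims[i])]
            for _ in range(dims[i + 1])],
           [random.gauss(0.0, 1.0) for _ in range(dims[i + 1])])
          for i in range(3)]
x = [random.gauss(0.0, 1.0) for _ in range(dims[0])]

def slope(lam):
    return mlp(params, [lam * xi for xi in x]) / lam

# for large λ the activation mask is fixed, so f(λx) = a·λ + c along the ray
# and the slope f(λx)/λ converges to the effective linear coefficient a
assert abs(slope(1e6) - slope(1e8)) < 1e-3 * (1.0 + abs(slope(1e8)))
```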
[1] How Neural Networks Extrapolate: From Feedforward To Graph Neural Networks. Xu et al.
Pdf: /pdf/5a05b5ce4c8675eb79d9b87d5745dad6c6ffc31c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
VPGTrans: Transfer Visual Prompt Generator across LLMs | Accept (poster) | Summary: This paper mainly focuses on the transferability of the visual prompt generator (VPG) in VL-LLMs. The authors conduct extensive experiments across different LLM types and sizes. Combining the experimental results, they propose a new VPG training pipeline comprising a projector warm-up stage and a vanilla fine-tuning stage.
Strengths: 1. This paper is well written.
2. The experiments are comprehensive.
Weaknesses: 1. I wonder if different structures for the projector, e.g. more layers, would lead to large performance gap.
2. It is kind of weird to directly compare the training time between training the whole model and using the proposed transfer method, since you first have to obtain another VL-LLM. In this way, the time used for training the original smaller VL-LLM should also be considered. It is inappropriate to claim the method is more efficient simply because other off-the-shelf models exist.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Please refer to the above weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: No obvious negative societal impact observed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and constructive reviews, which will definitely help consolidate our paper. We are also grateful that you acknowledge the strengths of our work. Following, we present the response to address your concerns.
---
**Q1: I wonder if different structures for the projector, e.g. more layers, would lead to large performance gap.**
**A:** Thanks for your suggestion.
First of all, we want to clarify that existing VL-LLMs mainly adopt a linear projector and thus we just follow them in this paper.
Of course, it is quite interesting to explore different projector structures.
We compare 3 types of projectors (linear, 3-layer MLP, 1-layer transformer) under the OPT$_{\text{125M} \rightarrow \text{1.3B}}$ scenario.
To evaluate the acceleration speed, we compare the performance of COCO caption (CIDEr) at different epochs as follows:
|Projector Type| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|-|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
|Linear| 131.5 | 133.1 | 134.1 | 135.1 | 135.7 | 135.9 | 136.1 | 136.2 | 137.0 | 136.8 |
|3 Layers MLP| 96.8 | 120.9 | 125.5 | 126.7 | 128.3 | 129.4 | 130.4 | 129.7 | 131.5 | 131.9 |
|Transformer| 132.3 | 134.7 | 134.7 | 134.8 | 135.6 | 135.4 | 135.8 | 135.0 | 135.9 | 135.7 |
To compare the visual perception ability, we show their best VQA performances:
|Projector Type| VQAv2 (acc.) |
|-|--------------|
|Linear| 48.9 |
|3 Layers MLP| 49.6 |
|Transformer| 49.2 |
We have the following observations:
- The linear projector achieves fast convergence, but has the lowest VQAv2 performance.
- The 3-layer MLP converges much more slowly and yields weaker COCO caption performance, but achieves the best VQA results after convergence.
- The transformer achieves both good convergence speed and good performance on COCO caption and VQAv2, which may serve as a promising direction for projector structure design.
Overall, thanks for indicating this point. We will further explore it in the revision.
---
**Q2: It is kind of weird to directly compare the training time between training the whole model and using the proposed transfer method, since you have to first get another VL-LLM. In this way the time used for training the original smaller VL-LLM should also be considered. It is inappropriate to claim the method is more efficient just because of other off-the-shelf models.**
**A:** Thanks for your careful reading and for mentioning this aspect. Here we report the source models' GPU hours as follows:
| Model | Src Model Cost (hours) | Transfer Cost (hours) |
|-|-|-|
| BLIP-2 OPT 6.7B | 631.5 | 0 |
| VPGTrans OPT 6.7B (ours) | 459.0 | 59.0 |
| BLIP-2 FlanT5 XXL | 684.0 | 0 |
| VPGTrans FlanT5 XXL (ours) | 435.0 | 32.4 |
Compared with training a large model from scratch, our VPGTrans enables training both the smaller and the larger models with even less time.
We will add the table in the revision.
Moreover, we would like to re-emphasize that the existence of off-the-shelf models is not a demanding assumption.
When building a new VL-LLM, it is common to validate on a smaller model first, which can serve as a transfer source.
There are also existing open-sourced VL-LLMs like BLIP-2, which can serve as the base for cross LLM types transfer.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the detailed response. I have no further questions. | Summary: The paper discusses the transfer of a visual prompt generator (VPG) across different vision-language language models (VL-LLMs) to reduce computational costs. VL-LLMs include a VPG module that bridges the gap between vision and language, encoding visual inputs into fixed-length soft prompts. To reduce the cost of building new VL-LLMs, the authors propose a two-stage transfer learning framework, VPGTrans, for VPG transfer across different LLM sizes and types. VPGTrans includes a projector warm-up (stage-1) and vanilla fine-tuning (stage-2). They conduct exploratory analyses to determine key factors for transfer efficiency and show that VPG transfer across frozen LLMs is feasible and can lead to substantially lower computational costs. The authors demonstrate the practical value of VPGTrans by customizing two novel VL-LLMs with recently released LLMs. The paper highlights the need for further research into VPG transfer across LLMs to further reduce the cost of building new VL-LLMs. Overall, the paper's main contributions include identifying the feasibility and effectiveness of VPG transfer across LLMs, smaller LLM sizes leading to more speed-up and better performance during transfer across LLM sizes, and VPG trained on smaller LLMs outperforming larger ones in most conditions, and then propose a transfer learning framework with empirical evidence supporting its efficiency.
Strengths: 1. The multimodal large language model (MM-LLM) is considered the most promising path towards achieving Artificial General Intelligence (AGI). However, a major challenge is the significant computational cost associated with training an MM-LLM. This obstacle hinders the widespread research and exploration of this topic. The presented paper offers an excellent pilot study that demonstrates the feasibility of VPG transfer. By showcasing the viability of VPG transfer, the paper provides valuable insights into mitigating the computational cost of training MM-LLMs. This finding has significant implications for future research in this field. The pilot study serves as a promising starting point, paving the way for further investigations and advancements in the development of MM-LLMs with higher efficiency.
2. I quite enjoy the didactic/exploratory nature of this paper. I like that the paper reads like an investigation, and begins with a preliminary exp, as well as diagnosing reasons for the case, before then presenting the VPGTrans method. And then they delve into the transfer of VPGs under two settings. This, from the shallower to the deeper, feels more organic and the lessons learned along the way are insightful. Also the introduction with clear bullet point allows easy reading.
3. The proposed method, VPGTrans, can be fairly straightforward but effective, which is a merit.
4. The experimental work in this study is extensive and solid. It includes validation across a wide range of LLM sizes and types, providing thorough coverage.
5. This work yields numerous interesting and meaningful findings, which will serve as important empirical evidence for future explorations in LLM/MM-LLM efficiency. The conclusions drawn from this research have the potential to guide and shape subsequent investigations in this field.
6. Additionally, this paper makes good contributions by introducing two new state-of-the-art MM-LLMs in the community: VL-LLaMA and VL-Vicuna. Overall, I like this paper, and I believe this work will show important impact to the community.
Weaknesses: 1. The major possible limitation of this method is its reliance on the existence of a well-functioned VPG model, and without this assumption, this work is of less meaningfulness. How does the quality of the existing VPG influence the transfer efficacy and efficiency?
2. This work evaluates on merely the captioning and VQA tasks. It would be interesting to see the performance on other different VL tasks and more benchmark datasets, to see the broad-coverage trends.
3. On the other hand, the proposed VPGTrans, though straightforwardly simple, relies too much on the empirical tuning, e.g., when to inherit the raw VPG and when to tuning or fixing some modules. Are there any intuitions and theoretical supports to build up the method?
4. When transferring the VPG, what has been changed of the visual soft prompts? Are the visual prompts of the raw VPG changed, to adapt to the new LLM backbone?
5. The claim of ‘our VPGTrans helps achieve a BLIP-2 ViT-G OPT2.7B→6.7B transfer with less than 10% of the GPU hours and 10.7% training data required for the original model training’ can be a little misleading, as the comparison between VPGTrans and the training of the raw VL-LLM is not strict fair and counterpart; should compare the VPGTrans with the hard VPG transfer method and then draw the conclusion.
6. Lack of exploration of potential limitations or drawbacks of the proposed method, which may limit its generalizability or practical usefulness.
7. How did you choose the specific VL-LLMs for your experiments?
8. In Section 4.2, you mention that the VPG trained on smaller LLMs may outperform larger ones in most conditions. Can you provide more explanation and insights for this observation of why it brought better task performance?
9. Can you explain in more detail the warm-up training of the linear projector and how it can prevent performance drops?
10. In Section 6, you mention that you customized two novel VL-LLMs with your VPGTransfer framework. How generalizable do you think this framework is to other novel VL-LLMs, and what steps would need to be taken to adapt it to different models?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: See Weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: I do not foresee any potential for negative societal impact from this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful that you acknowledge the strengths of our work so much. Your support definitely encourages us to improve the work further and push forward. Following we present the response to address your concerns.
---
**Q1: Reliance on the existence of a well-functioned VPG model.**
**A:** Thanks for pointing this out.
We want to clarify that the existence of VPG model is not a demanding requirement:
- When building new MM-LLMs, it is common to first validate on small models and then scale up, where the small VPG models can be used for transfer.
- There are already released VPG models like BLIP-2, which can be used.
---
**Q2: How does the quality of the existing VPG influence the transfer efficacy and efficiency?**
**A:** We discuss the question under TaS and TaT scenarios:
- TaS: A better VPG can typically result in higher efficacy and efficiency under TaS.
As shown in Figure 7 of the main paper, the VPG trained on OPT 125M is the best performed one,
which also achieves the highest acceleration rate in Table 3 and the best performances given the same target LLM.
- TaT: When considering TaT, there will be a trade-off between the VPG ability and model gap.
For example, although the VPG trained on smaller OPT shows better results, it is also less transferable to FlanT5 models (cf Appendix D.1).
---
**Q3: More tasks and datasets.**
**A:** Thanks for the suggestion!
Please refer to the Reviewer#7KJM Q2 and Q3 for results on more datasets (medical, robustness VQA).
---
**Q4: Intuition of VPGTrans.**
**A:** We first dissect the success of VPGTrans into 3 parts and explain the intuitions one by one:
- 1. Projector warm-up can accelerate normal pre-training (stage-2) and prevent performance drop.
Intuitions: (1) the projector warm-up works as a good weight initialization for stage-2 and thus accelerates it. (2) The gradient passed through a randomly initialized projector would distort the VPG's ability (performance drop), which the warm-up avoids.
- 2. Word converter initialization can accelerate projector warm-up. Intuition: a good initialization can lead to faster convergence.
- 3. Projector enables fast convergence with extremely large learning rate. Intuition: projector has very limited parameters, and thus easier to train.
---
**Q5: Theoretical supports of VPGTrans.**
**A:** In this paper, we mainly focus on the empirical study. We try to discuss some potential theoretical analysis directions that may help:
- Warm-up Aspect: [1] illustrates that training the classifier first and then fine-tuning can help the OOD performance.
The paper shows that gradient passed through a randomly initialized classifier will distort pretrained features.
However, [1]'s theoretical analysis is limited to linear feature extractor assumption.
- Visual Prompt Aspect: there are limited theoretical works on the mechanism of visual soft prompt.
For empirical study, [2] and our work try to interpret visual prompts with word embeddings (cf Appendix Fig. 11).
---
**Q6: When transferring the VPG, what has been changed of the visual soft prompts?**
**A:** Good question! We visualize the visual prompts (cf Appendix Fig. 11), where the last token is always nearest to EOS token of OPT,
and other tokens represent visual contents like flower.
After transferring to larger OPT (e.g. OPT 2.7B), the last token is also nearest to EOS token.
However, the nearest neighbour of each content token for a given image will be changed to other content (e.g. flower->table).
**Q7: Comparison with hard VPG transfer.**
**A:** We show the result of hard VPG transfer with similar amounts of training data:
| Model (OPT 6.7B) | VQAv2 | GQA | OKVQA | Data |
|-------------------|-------|------|-------|-------|
| Hard VPG Transfer | 50.9 | 33.9 | 32.7 | 13.8M |
| VPGTrans (ours) | 57.2 | 36.2 | 39.8 | 13.8M |
We see that VPGTrans can achieve better performance with the same amount of training data.
We will also add it in the revision.
---
**Q8: Limitations and Drawbacks**
**A:** We discuss the social impact and limitations in the lines 584-594 of Appendix.
The limitation mainly includes the hallucination problem of constructed VL-LLMs.
---
**Q9: How did you choose the specific VL-LLMs for your experiments?**
**A:** For the VL-LLM architecture, we choose BLIP-2, as it is the most powerful open-sourced MM-LLM when we initialize this project.
For the LLM choices, we try to cover both encoder-decoder (FlanT5) and decoder-only (OPT) models across different sizes.
---
**Q10: VPG trained on smaller LLMs may outperform larger ones. Potential explanations.**
**A:** First of all, training the VPG on an LLM changes the VPG's weights, which affects the pretrained VPG's ability. This phenomenon can be considered a kind of VL alignment tax.
We then empirically find that larger OPT models usually incur a higher tax.
The conclusion can be validated by the confusion matrix (cf Figure 7), where the VPG is fixed to see which VPG is better.
We also check the absolute update of transfer sources (OPT 125M, 350M, 1.3B) and find that the VPG of OPT-1.3B has the largest absolute update in the 1st epoch.
Finally, it is still unclear why the larger LLM will have more effect on VPG.
We hypothesize it might be caused by the more complicated mechanism, which may require more effort for VPG to align.
We will further explore this problem in the future.
---
**Q11: Why warm-up can prevent performance drop.**
**A:** Please refer to Q4-1-(2).
---
**Q12: Adaptions to different models.**
**A:** The adaptation will just be similar to the construction of VL-LLaMA and VL-Vicuna.
- Find an existing VPG (e.g. ImageBind-LLM's VPG), and an LLM (e.g. LLaMA-2).
- Then connecting them with VPGTrans' two-stage training.
---
**Q13: Potential Social Impacts.**
**A:** Please refer to Q8.
---
[1] Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution.
[2] CLIPCap: CLIP Prefix for Image Captioning.
---
Rebuttal Comment 1.1:
Title: Response to author rebuttal
Comment: Thank you for addressing my inquiries. The majority of my concerns have been satisfactorily resolved, and I generally concur with the authors' assertions.
Furthermore, regarding one of the points posed by Reviewer #GwBd, I would like to inquire whether the overall training pipeline might become intricate and challenging to implement in practical scenarios. I've observed that the proposed framework involves a substantial number of pipelines, have these been automated when using it? Do I need to care about the intermediate operations?
In regard to one of my questions (Weakness#5), specifically the comparison between VPGTrans and hard VPG transfer, I would greatly appreciate a comprehensive analysis encompassing factors about operational efficiency, for instance, execution time and costs. Currently, you have presented the end task performance instead of the efficiency comparisons.
---
Reply to Comment 1.1.1:
Title: Re-rebuttal to Reviewer #mi8b
Comment: Thanks for your reply!
It is encouraged to see that you are satisfied with our rebuttal.
In the following, we try to address your current concerns point to point.
---
**Q1: Whether the overall training pipeline might become intricate and challenging to implement in practical scenarios.
I've observed that the proposed framework involves a substantial number of pipelines, have these been automated when using it?
Do I need to care about the intermediate operations?**
**A:** Thanks for the question.
Actually, our VPGTrans is quite simple to implement.
- At the code level, the main workload lies in implementing the word converter trainer,
which can be done by adding a new file without modifying the original VL-LLM code.
Thus, practitioners do not need to dive into the implementation details of the original VL-LLM to modify it.
- At the experiment level, the framework can be automated without considering the intermediate results.
Some researchers may be concerned about the hyperparameters of the intermediate operations.
We want to clarify that the main hyperparameter, the first stage's learning rate, is very robust
during training (cf. lines 524-528, Appendix) and does not require careful adjustment.
---
**Q2: I would greatly appreciate a comprehensive analysis encompassing factors about operational efficiency, for instance, execution time and costs. Currently, you have presented the end task performance instead of the efficiency comparisons.**
**A:** Sorry for missing your point.
We have a comparison between VPGTrans and VPG transfer (denoted as VPG inherit in our paper) in **C.3 of our Appendix**,
where the speed-up rate is reported in Table 5 and the convergence curve comparison is shown in Figure 13.
A main conclusion is that our VPGTrans achieves better performance on over 74% of Transfer-Task variants,
and among these, **our VPGTrans also achieves a better acceleration rate under over 69% of conditions**.
More analysis will be added in the revision. | Summary: * This paper presents an interesting study on how to effectively transfer a small VL model to a large VL model, which is very practical given that LLMs are very expensive to finetune.
* The empirical study shows the effectiveness of the methods.
Strengths: * The paper proposes an effective word converter to better speed up the model training on a large model
* The stage-wise training gives good results for the final performance.
Weaknesses: * The paper's writing is very complex to understand.
* The technical contribution is very limited. The core is the word converter, which better aligns the knowledge between large LLMs and small LLMs.
* The performance improvement is very obvious, because the whole model training involves the small LLM, which contains more model parameters.
* The whole training pipeline is very complex. There are multiple stages of training.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * It is better to include the GPU hours and training data of a small model for a fair comparison
* Is it possible to inject the Alignment loss during the tuning projector stage?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: see above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for the valuable suggestions. In the following, we present a point-by-point response to address your concerns. If you feel our responses are effective, please kindly raise your evaluation.
---
**Q1: The paper writing is very complex to understand.**
**A:** Thanks for your comment. We'd like to make a clarification on our paper organization.
Different from the traditional _introduction-method-experiment_ structure, we adopt an **exploratory structure**.
The exploratory structure aims to derive the method from a series of exploratory experiments over existing materials, which should be a merit for the topic of our focus.
As noted by Reviewer #mi8b (Strengths 2), _'the paper reads like an investigation, from the shallower to the deeper, feels more organic and the lessons learned along the way are insightful'_.
To make the paper easier to follow, we list its outline:
- Sec. 1: Motivations and paper summary;
- Sec. 2: Descriptions/preliminaries of VL-LLMs;
- Sec. 3: Exploratory analysis $\to$ Our proposed method;
- Sec. 4: Experiments for cross LLM sizes transfer;
- Sec. 5: Experiments for cross LLM types transfer;
- Sec. 6: Building new VL-LLMs with our method.
We will further revise the section titles to make each part clearer.
---
**Q2: This technical contribution is very limited. The core is word converter, which better aligns the knowledge between large LLMs and small LLMs.**
**A:** We kindly re-emphasize that our major contributions are to offer a thorough investigation demonstrating the feasibility of VPG transfer for efficient accessibility of VL-LLMs, and to devise a novel transfer method, VPGTrans. In summary, our contributions are multifaceted:
- We propose a VPGTrans framework, which significantly reduces the cost of building new VL-LLMs to lab-level resources.
- We reveal intriguing insights and findings of the VPG transfer, and shed light on further research on this topic.
- With VPGTrans, we customize two novel VL-LLMs that are open-sourced for the community.
That being said, although not presenting ground-breaking technical novelty, we do contribute with innovations from the technical perspective of our proposed VPGTrans framework:
- We proposed the _projector warm-up_ strategy, which helps achieve transfer with both high efficiency and high efficacy.
- We devised a novel _word converter_ method that accelerates the projector warm-up by over 1.5 times.
- We also proposed _training the projector with an extremely large learning rate_, which we find can further reduce the warm-up cost to 1/3.
Most importantly, we validate the feasibility of VPG transfer with systematic analysis via our VPGTrans system, where the findings help future researchers obtain better practices in VPG transfer for building VL-LLMs.
We sincerely hope you can re-evaluate our contributions and the significance of this work.
---
**Q3: The performance improvement is very obvious. Because the whole model training involves the small LLM, which contains more model parameters.**
**A:** We respectfully disagree. In fact, the **small and large models are trained and tested separately** in our scenario; the scaling advantage you indicate in the comment mainly arises when training and testing a single joint model.
In fact, it is non-trivial to conduct the transfer with high efficiency and efficacy. For example, the direct transfer (_VPG inherit_) even achieves worse performance than training from scratch.
But we think this is a wise question, and thanks for pointing this out. We will strengthen this point in the revision.
---
**Q4: The whole training pipeline is very complex. There are multiple stages for training.**
**A:** Our method actually contains only two stages, which we don't think is complex compared with current VL-LLMs.
Recent works on VL-LLMs such as LLaVA and mPlug-Owl also entail 2 stages of training.
In addition, our source code is made very easy and convenient for practitioners to use.
Thus, there is only a very small cost to use our system and apply it to realistic applications.
---
**Q5: It is better to include the GPU hours and training data of a small model for a fair comparison.**
**A:** Thanks for the suggestion. The training costs are shown as follows:
| Model | Src Model Cost (hours) | Transfer Cost (hours) |
|:-|:-:|:-:|
| BLIP-2 OPT 6.7B | 631.5 | 0 |
| VPGTrans OPT 6.7B (ours) | 459.0 | 59.0 |
| BLIP-2 FlanT5 XXL | 684.0 | 0 |
| VPGTrans FlanT5 XXL (ours) | 435.0 | 32.4 |
The total training data for the small model + transfer is the same as for the larger one.
Compared with training from scratch, our VPGTrans uses even less time to train both the smaller and larger models.
We will add the table in the revision.
---
**Q6: Is it possible to inject the Alignment loss during the tuning projector stage?**
**A:** Thanks for the question, good idea!
We discuss different possible designs for the alignment loss.
The elements involved in the transfer are: (1) source VPG, (2) source LLM, (3) target LLM. The alignment loss can be designed as follows:
- source VPG$\to$source LLM: already aligned.
- source VPG$\to$target LLM: it is worth exploring the **alignment between visual soft prompts and word embeddings**.
Actually, current visual soft prompts (visual features) generated by the VPG are not fully aligned with the word embeddings.
While the cosine similarity is sometimes high, the norms of the two types of embeddings are quite different (Appendix, lines 519-520).
Thus, it will be interesting to explore the influence of alignment with different distance metrics.
- source LLM$\to$target LLM: it is what the word converter does.
We will discuss this in the revision.
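A toy numeric illustration of the norm-mismatch point above (the vectors are made up for illustration, not taken from the paper): two embeddings can point in almost exactly the same direction, giving a cosine similarity near 1, while their norms still differ by an order of magnitude.

```python
# Toy example: high cosine similarity despite very different norms.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

visual_prompt = [0.9, 0.1]    # stand-in for a visual soft prompt
word_embedding = [9.0, 1.0]   # stand-in for a word embedding, 10x the norm

print(round(cosine(visual_prompt, word_embedding), 6))                      # ≈ 1.0
print(round(math.hypot(*word_embedding) / math.hypot(*visual_prompt), 6))  # ≈ 10.0
```

This is why a cosine-based alignment loss alone may not close the gap, and why exploring different distance metrics (as suggested above) is interesting.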
---
Rebuttal Comment 1.1:
Title: We are looking forward to your feedback
Comment: Dear Reviewer#GwBd,
Thanks so much for your great efforts and valuable feedback on our paper.
Your comments are essential to help us improve the quality of our work.
We have carefully addressed your concerns and questions in our responses.
For example, we report the cost of training the source model for a fair comparison,
and we systematically discuss the possibility of incorporating the alignment loss.
We kindly hope that you can take some time to re-evaluate our paper based on our replies.
If you have any further concerns or questions, please do not hesitate to let us know.
We will be happy to address them promptly.
Best Regards,
Paper#1904 Authors | Summary: The paper presents a technique for VPG (Visual Prompt Generator) transferability across LLMs where the transferability can be between LLMs of different sizes (eg: OPT125M -> OPT2.7B) or across LLMs of different types (eg: OPT350M-> Flan T5base). This is achieved using a two stage strategy where a projector is learned by freezing VPG and LLM in the first stage and vanilla finetuning of VPG and projector in the second stage. The proposed technique significantly reduce training time (specifically when transfer from small to large LLMs).
Strengths: 1. The VPG transferability can significantly improve the training time/cost in some cases (from LLM_small->LLM_large) can have many practical applications when training larger LLMs.
2. The proposed technique consistently reduces the training time/cost compared to baseline (TFS) while achieving similar performance
Weaknesses: Novelty: The proposed technique is a standard technique used when fine-tuning a model for a specific task, where the newly added layer (the head, which is similar to the projector here) is initially trained with the backbone frozen; once trained, the whole network (backbone and head) is fine-tuned on the target data. Please see [a, b] for references on standard fine-tuning strategies. The authors extend this to VPG transferability between two LLMs. Although the technique is good for practitioners, I don't see any technical insights in the proposed technique.
Generalization: Since the proposed technique uses a limited vocabulary from the COCO and SBU-Captions datasets [1.4M image-caption pairs] for the word converter, the performance of the target LLM on datasets with a domain gap could be affected. It would be interesting to see the performance of the target LLM (and a comparison with the source LLM) on domain-specific datasets such as the RSVQA [c] and PathVQA [d] datasets.
Robustness: To check the validity of the proposed technique, it would also be interesting to see the effect of transferability on the robustness of the target model [target-LLM compared to src-LLM], which can be done by evaluating on the AdVQA [e], VQA-CE [f], or VQA-Rephrasings [g] datasets.
[a] https://cv-tricks.com/keras/fine-tuning-tensorflow/
[b] https://lightning-flash.readthedocs.io/en/stable/general/finetuning.html
[c] https://rsvqa.sylvainlobry.com/#overview
[d] https://paperswithcode.com/dataset/pathvqa
[e] https://adversarialvqa.org/
[f] https://paperswithcode.com/dataset/vqa-ce
[g] https://facebookresearch.github.io/VQA-Rephrasings/
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: I would like authors to discuss the three points raised above : 1) Novelty, 2) Generalization, [3] Robustness
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and careful review. Your suggestions will definitely help improve our paper. In the following, we present our response to address your concerns. If you feel our responses effectively relieve your concerns, please kindly reconsider your evaluation.
---
**Q1: Novelty: The proposed technique is a standard technique used in fine-tuning... Although, the technique is good for practitioners, I don't see any technical insights in the proposed technique.**
**A:** While it is true that the transfer technique in our work coincides with the existing standard practice of transfer learning, we would argue that our work goes far beyond simply leveraging model transfer itself.
- First, we have made certain technical improvements (simple yet effective) for the scenario of LLM transfer over the existing 'standard transfer learning'. For example, we devise the **word-converter-based initialization** (cf. Sec. 3.2), without which the warm-up of the projector would be 1.5 times slower. Moreover, eschewing complicated architectural design, we believe that being simple to use while achieving prominent efficiency improvements is a merit of our method.
- Second, beyond the technical innovation, our key contribution/novelty lies more in the experimental explorations. We first performed systematic and in-depth analyses to confirm feasibility before deciding to conduct the VPG transfer and propose our method. We note that, without such preliminary explorations, a direct transfer of the VPG would be significantly less effective than ours.
- Third, we present, for the first time, a rich set of intriguing findings/insights and meaningful explanations for the scenario of VPG transfer across LLMs, which will largely pave the way for future explorations of this topic. For example, we show that, for OPT-based VL-LLMs, a VPG trained on a smaller src-LLM typically yields a better transfer result.
- Finally, our contributions to the community are substantial and evident, especially for practical applications in production environments. Our VPGTrans enables building new VL-LLMs at much reduced cost (e.g., 10% of the original cost), making it possible to **build new VL-LLMs with lab-level resources**. The VL-LLaMA and VL-Vicuna presented in our paper are good examples. For startup companies that want to build their own VL-LLMs, our VPGTrans offers a valuable path to **save thousands to millions of dollars**.
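To make the word-converter-based initialization mentioned in the first point a bit more concrete, here is a heavily simplified sketch of the underlying idea as we read it from the rebuttal (the scalar closed-form least-squares fit and all names are illustrative assumptions, not the authors' actual implementation): fit a linear map from source-LLM word embeddings to target-LLM word embeddings, then use it to initialize the projector before warm-up.

```python
# Toy sketch: fit a scalar linear "converter" w minimizing sum((w*s - t)^2)
# between source and target word embeddings (illustrative, not the real method).
def fit_converter(src, tgt):
    # Closed-form least squares: w = <s, t> / <s, s>
    return sum(s * t for s, t in zip(src, tgt)) / sum(s * s for s in src)

src_embeds = [1.0, 2.0, 3.0]   # toy source-LLM word embeddings
tgt_embeds = [2.0, 4.0, 6.0]   # toy target-LLM word embeddings (= 2x source)

w = fit_converter(src_embeds, tgt_embeds)
print(w)  # 2.0
```

In a real setting the converter would be a learned matrix over full embedding tables, but the principle — warm-starting the projector from a cheap source-to-target embedding map — is the same.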
---
**Q2: Generalization: Since the proposed technique uses a limited vocabulary from the datasets COCO, SBU-captions for the word converter, the performance of the target LLM on the datasets with domain GAP could be affected. It would be interesting to see the performance of the target LLM (and comparison with source LLM) on domain-specific datasets such as RSVQA and PathVQA datasets.**
**A:** Thank you for carefully going through our paper. We want to clarify that our model is also trained on millions of web images (similar to BLIP-2's data composition) during **VPGTrans stage-2** (after the word converter training), i.e., the model sees data from multiple domains, which ensures good generalizability across domains.
Moreover, boosting the amount of training data for the word converter is quite cheap and effortless. For example, training on 100M data for 1 epoch takes only **<40 min** on one A100 (80G).
During the rebuttal period, we ran experiments on the domain-specific datasets. The experiments use FlanT5-XL (src) and FlanT5-XXL (tgt):
| Model | Data | PathVQA (yes&no acc. / F1) | RSVQA (acc.) |
| :-------:|:--------:|:-----------------------------:|:--------------:|
| src | 121.6M | 50.6/27.2 | 40.3 |
| tgt | 5.3M | 51.2/26.9 | 41.7 |
| tgt | 9.4M | 51.3/27.0 | 44.0 |
We find that our model achieves higher or comparable performance on these two datasets compared with the source model.
Moreover, we notice that including more data can further improve the VQA results in other domains.
Thanks for indicating this; we will consider adding it to the revision.
---
**Q3: Robustness: To check the validity of the proposed technique, It would also be interesting to see the effect of transferability on Robustness of the target model which [target-LLM compared src-LLM] can be done by evaluating on datasets AdVQA, VQA-CE or VQA-Rephrasing.**
**A:** Thanks for the suggestion, good idea.
We also ran experiments on the datasets you indicated; the results are as follows. Likewise, the experiments use FlanT5-XL (src) and FlanT5-XXL (tgt):
| Model | Data | AdVQA (acc.) | VQA-CE (acc.) | VQA-Reph. (acc.) |
| :-------:|:--------:|:-----------------------------:|:--------------:|:--------------:|
| src | 121.6M | 37.2 | 36.7 | 60.2 |
| tgt | 5.3M | 39.7 | 39.0 | 62.8 |
| tgt | 9.4M | 40.3 | 39.4 | 63.2 |
Compared with the source model, our tgt model achieves better performance on all three VQA datasets, demonstrating its robustness.
This part will be added to the revision; thanks again.
---
Rebuttal Comment 1.1:
Title: We are looking forward to your feedback
Comment: Dear Reviewer#7KJM,
We would like to express our sincere appreciation for your great efforts and valuable feedback on our paper.
Your comments are essential to help us improve the quality of our work.
To address your concerns of generalization and robustness, we compare the source model and target model on RSVQA, PathVQA, AdVQA, VQA-CE, and VQA-Rephrasing datasets.
We kindly hope that you can take some time to re-evaluate our paper based on our replies.
If you have any further concerns or questions, please do not hesitate to let us know.
We will be happy to address them promptly.
Best Regards,
Paper#1904 Authors | Rebuttal 1:
Rebuttal: # General Response to All Reviewers
Dear reviewers,
Thanks for taking the time to write valuable and constructive comments. Your feedback will definitely help us enhance the quality of our paper, and we are committed to incorporating your suggestions in our revision.
Meanwhile, we feel encouraged that the reviewers find our method efficient and effective (Reviewers 7KJM, GwBd, and mi8b) and our experiments solid and comprehensive (Reviewers iPdz and mi8b). Your support means a lot to us!
At this juncture, we would like to re-emphasize the significance of this work.
The AI community has now entered the era of Large Language Models (LLMs), wherein multimodal Vision-Language LLMs (VL-LLMs) have demonstrated a powerful understanding across vision&text modalities, becoming a focal point of future LLM research. However, we recognize that obtaining a VL-LLM can be costly, especially when training from scratch. This has motivated our work: **exploring the model transfer learning approaches to significantly reduce the cost of acquiring VL-LLMs**, such as transitioning from existing smaller models to larger models.
With such background, this work contributes to the following key aspects:
1. We propose a VPGTrans framework, which significantly reduces the cost of building new VL-LLMs (e.g. in only 10% GPU hours) with the help of existing bases.
2. We reveal intriguing insights and findings of the VPG transfer across LLMs, and provide potential explanations to shed light on further research on this topic.
3. With VPGTrans, we customize two novel VL-LLMs (i.e., VL-LLaMA and VL-Vicuna) that are open-sourced for the community.
Although this paper may not present ground-breaking technical novelty (as pointed out by Reviewer #7KJM), we mainly contribute by offering a thorough investigation demonstrating the feasibility of VPG transfer for efficient accessibility of VL-LLMs, and by devising a novel transfer method, **VPGTrans**.
As recognized by Reviewer#mi8b, _'the pilot study serves as a promising starting point, paving the way for further investigations and advancements in the development of MM-LLMs with higher efficiency'_.
Our VPGTrans enables swifter building of new VL-LLMs with lab-level resources, and will thus bring more researchers into this area.
We firmly believe VPGTrans will have a broad impact on future LLM research. Thus, we will release all code and resources upon acceptance.
In response to the reviewers' comments, we have thoroughly reviewed our paper, performed additional experiments, and prepared a comprehensive response.
We will fix all the typos and improve the manuscript according to your comments.
We hope that our paper adequately addresses your concerns.
We kindly request a **re-evaluation of our work** based on the updated information, and look forward to your recognition.
Best regards. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Causes and Effects of Unanticipated Numerical Deviations in Neural Network Inference Frameworks | Accept (poster) | Summary: The authors studied a wide range of hardware platforms, including both CPUs and GPUs, and found that the inference results of ML models are not (bitwise) consistent across platforms and are even non-deterministic on the same platform. The authors identified the causes of these numerical deviations and attributed them to accumulation/aggregation rounding errors due to finite precision and to different convolution algorithms.
Strengths: * The authors have studied a wide range (75) of hardware platforms, including both CPUs and GPUs, which represent the majority of ML inference platforms, and quantified the deviations through EQCs and Remaining Precisions.
* The causes of numerical deviations on different platforms are clearly presented, e.g., on CPUs they are mainly due to precision issues of various forms of parallelism (SIMD/multi-core), and on GPUs additionally due to convolution algorithms.
Weaknesses: * The numerical deviations the authors disclose are well known to the industry, e.g., arithmetic precision issues due to accumulation/aggregation order and various convolution algorithms. What's new to me is that the non-determinism of convs is actually due to runtime variance of microbenchmarks; however, this can be suppressed by forcing frameworks to enable determinism, as in TensorFlow (as suggested by the authors at line 300) or PyTorch, which is common practice when serving models. So I don't think the authors found significant new sources of numerical deviations.
* Though the authors provide a metric, "Remaining Precision", to quantify numerical deviations, it is unclear how it translates to the final inference accuracy/capacity of the model. In the end, unlike traditional software development or high-performance computing, ML applications are more robust to numerical deviations in the neurons, and people are even considering "mortal computing" through analog devices. So while "Remaining Precision" might be insightful to the computer arithmetic community, it doesn't directly reflect the capacity of a hardware/software system for ML applications. Studies bridging the connection between "Remaining Precision" and accuracy might be desired.
* The takeaway/value of this work is not clear for the broader audience of NeurIPS readers. While it reveals the causes of numerical deviations across platforms, and might be valuable to certain domains like security (lines 293-296), it doesn't provide a methodology for a general audience to benchmark/quantify the numerical precision of new hardware/software, e.g., the transformer engines of the H100. For example, users could collect average numerical results of a model from a few mature platforms, run it on a new hardware platform, and compare the remaining precisions, which might reflect the reliability of the new hardware.
* If my understanding is correct, all cases studied are single CPU/GPU and mainly on single floating point precisions; however, in practice, a lot of ML workloads are run in quantized versions (bf16/fp16/int8, etc.) and in a distributed fashion, which also heavily relies on the ML framework/version. These might present first-order deviations, compared to the SIMDs/conv algos studied by the authors.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The remaining precisions stay relatively stable (don't diminish to 0) throughout the entire model (Figure 5), which the authors attribute to activation layers like ReLU/Pooling reducing information; the authors also indicate that EQCs are not related to weight distributions (line 273).
Theoretically, if weights are properly distributed (e.g., with Xavier/He initialization that scales with sqrt(accumulation size)), both the activations and their deviations should follow the same distribution (with similar mean/var) throughout the model, regardless of what activation functions are used. So one question is whether the authors have checked that the weight distributions indeed have such a property. And to verify that the stable remaining precision is indeed due to ReLU/Pooling reducing information, can the authors try Xavier initialization + sigmoid activation (instead of He + ReLU in a typical ResNet), or replace MaxPool with MeanPool, to see if this behavior still holds? This should tell which is the cause of the non-diminishing remaining precision.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have talked about a few in line 315, additionally, as discussed by last point in Weakness, the study is limited to single device and single precision, and these may not be a significant source of deviations compared to distributed systems and quantizations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your careful analysis of our paper!
> "The numerical deviations the authors disclose are well known to the industry [...] So I don't think the authors found significant new sources of numerical deviations."
Clearly, the sources of numerical deviations are known. This is why we do not claim to have identified new sources of numerical deviations. Instead, we offer the so-far most comprehensive evaluation of causes and effects of (known) numerical deviations in CNN inference, spanning a wide range of relevant platforms. The novelty lies, among other things, in associating them to properties under control of the machine learning engineer, such as layer type or activation function.
> "[…] What's new to me is that the non-determinism of convs is actually due to runtime variance of microbenchmarks; however, this can be suppressed by forcing frameworks to enable determinism […]"
Indeed, some frameworks allow users to enable deterministic operations. This disables microbenchmarks and selects the first supported algorithm instead of the fastest one. The selected algorithm, however, can again vary between GPUs, especially of different generations. Thank you for pointing this out. We will clarify this in the final version of the paper.
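For concreteness, the framework switches discussed above look roughly like the following configuration sketch (the exact flags depend on framework version, and enabling them typically trades speed for reproducibility by disabling autotuning/microbenchmarks and non-deterministic kernels):

```python
# Configuration sketch — determinism switches in current frameworks.
# Availability and exact behavior depend on the framework version.

# PyTorch
import torch
torch.use_deterministic_algorithms(True)  # error out on non-deterministic ops
torch.backends.cudnn.benchmark = False    # disable the cuDNN autotuner

# TensorFlow (>= 2.9)
import tensorflow as tf
tf.config.experimental.enable_op_determinism()
```

Note that, as stated above, even with these switches the selected (first supported) algorithm can still differ between GPUs of different generations.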
> "[…] it is unclear how it ["Remaining Precisions"] translates to the final inference accuracy/capacity of the model. […] while "Remaining Precisions" might be insightful to computer arithmetic community, it doesn't directly reflect the capacity of a hardware/software system for ML applications. […] "
Our remaining precision metric is not meant for measuring system-level accuracy. The metric is intended to facilitate analysis and discussion of deviations. It is a refinement of the simple (but crude) EQC metric and is much easier to reason about than, e.g., difference norms and cosine distances.
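One plausible way to operationalize such a per-value metric — this is our illustrative reading, not necessarily the paper's exact definition — is to count the leading mantissa bits on which two float32 results from different platforms still agree:

```python
# Illustrative sketch: "remaining precision" as the number of leading
# float32 mantissa bits on which two results still agree.
import struct

def f32_bits(x):
    # Raw bit pattern of x rounded to IEEE 754 binary32.
    return struct.unpack("<I", struct.pack("<f", x))[0]

def remaining_precision(a, b, mantissa_bits=23):
    if f32_bits(a) == f32_bits(b):
        return mantissa_bits  # bitwise identical
    diff = f32_bits(a) ^ f32_bits(b)
    # Position of the highest differing bit bounds the agreeing mantissa bits
    # (meaningful when sign and exponent agree).
    return max(0, mantissa_bits - diff.bit_length())

print(remaining_precision(1.0, 1.0))        # 23: bitwise identical
print(remaining_precision(1.0, 1.0000001))  # 22: only the last bit differs
print(remaining_precision(1.0, 1.5))        # 0: values diverge immediately
```

Compared with difference norms or cosine distances, a bit-count like this is easy to reason about layer by layer, which is the spirit of the metric described above.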
Moreover, while users are rarely affected *at this point in time*, they might be in the future when ML results are used in other applications that require deterministic and reproducible inputs.
> "[…] might be valuable to certain domains like security (lines 293-296), it doesn't provide a methodology for a general audience to benchmark/quantify the numerical precision of new hardware/software, e.g., the transformer engines of the H100. […]"
Our understanding is that a broad audience is indeed concerned about security and replicability. By providing our infrastructure on GitHub, we enable other researchers to use our metrics and codebase to trace deviations through the model and produce visualizations like Fig. 5.
New hardware, such as the H100 (released just before the submission deadline, and only very recently available on AWS), can be compared to existing hardware for concrete models. As demonstrated in the paper, deviations have multiple causes at various levels of the stack. A single metric that can predict the reliability of new hardware in general may not exist.
> "[…] all cases studied are single CPU/GPU and mainly on single floating point precisions, however, in practice, a lot of ML workloads are run in quantized version (bf16/fp16/int8, etc) and in a distributed fashion […].",
"[…] the study is limited to single device and single precision, and these may not be a significant source of deviations compared to distributed systems and quantizations."
Thank you for pointing this out. The primary goal of this paper is to answer the question if inference is deterministic and to identify causes of non-determinism. For this reason, we chose the approach of analyzing single CPU/GPU machines. Our measurement and orchestration code enables researchers to analyze multi CPU/GPU machines without much effort.
Please recall that quantization to fp16 is covered in Figure 6 of the paper. As bfloat16 and int8 are not directly compatible with our method of converting the models, we were not able to generate results in time for this rebuttal but will include them in the camera-ready version of the paper.
> "[…] if weights are properly distributed (e.g., like Xavier/He initialization that scales to sqrt(accumulation size)) both the activations and their deviations should follow the same distribution […] So one question is whether the authors have checked if the weight distributions indeed have such property.
> […] to verify the stable remaining precision is indeed due to ReLU/Pooling that reduce information, can the authors try Xavier initialization + sigmoid activation (instead of He + ReLU in typical Resnet) or replace MaxPool to MeanPool to see if this behavior still holds."
Thank you for this observation. We ran new experiments to confirm that choosing Sigmoid/MeanPool (Xavier initialization) over ReLU/MaxPool (He initialization) does not change the pattern of remaining precision throughout the model. The results of this experiment are shown in Figures R1 and R2 of the attached PDF. The first and last Sigmoid activation layers decrease the remaining precision, whereas the remaining layers increase or do not affect it. This confirms our reasoning that activation functions simply propagate deviations. The single pooling layer after the first convolution increases the remaining precision for both MaxPool and MeanPool.
We also measure the effect for a small model with a second pooling layer and find that Sigmoid activation reduces the remaining precision in three out of four cases. MaxPool increases the remaining precision for both layers, whereas MeanPool increases it for one and leaves it unaffected for the other layer.
Moreover, Table R1 shows the influence of activations on the remaining precision and Table R2 compares the number of deviations before and after activations for each layer.
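For context on the He vs. Xavier comparison above, a small numpy sketch of how the two schemes scale weight variance (the layer sizes here are arbitrary and chosen for illustration; this is not the experiment code):

```python
import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out = 512, 256

# He initialization (commonly paired with ReLU): Var[w] = 2 / fan_in
w_he = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

# Xavier/Glorot initialization (commonly paired with sigmoid/tanh):
# Var[w] = 2 / (fan_in + fan_out)
w_xavier = rng.normal(0.0, np.sqrt(2.0 / (fan_in + fan_out)),
                      size=(fan_in, fan_out))
```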
Again, thank you for your valuable input. It helped us to improve the quality and clarity of our work!
If you have any further suggestions or questions, please do not hesitate to let us know.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed explanation, in addition to the experiments you ran according to my suggestions. They are highly appreciated.
1. First, comments on the new experiment you ran. Based on the result, my read is that there is no significant difference between ReLU/Sigmoid or MaxPool/MeanPool, and you said,
> This confirms our reasoning that activation functions simply propagate deviations.
if both relu/sigmoid just propagate deviations rather than reducing them, based on your hypothesis in line 206,
> convolutional and other parallel data processing layers tend to introduce deviations.
then we should expect a monotonic decrease of remaining precision. Otherwise, if both relu/sigmoid can reduce deviations, what's the reason behind sigmoid? My take from the experiment is that with proper initialization, the remaining precision would stay relatively constant regardless of the activation/pooling you used. Maybe you can just remove activation/pooling completely with Xavier and see how the remaining precision changes. Without a clear understanding of this, the first rebuttal point,
> associating them to properties under control of the machine learning engineer, such as layer type or **activation** function.
can hardly be justified.
2. Regarding precision. It is true that fp16 data are shown in Figure 6 and the related section; however, if my understanding is correct, the majority of the study is carried out with fp32 and so are the conclusions. Although the same experimental methodology should be applicable to fp16 directly, and potentially similar observations would be found, it should be presented to the readers directly in the paper rather than having readers carry out the experiments themselves. Similar comments apply to a distributed system:
> Our measurement and orchestration code enables researchers to analyze multi CPU/GPU machines without much effort.
Without these data points I can hardly justify that the work has been focusing on the first-order causes of numerical deviations.
3. Regarding the broad impact and audience. Again, I highly appreciate the tremendous amount of platforms the authors have studied and I believe this can serve as a good tech report for many system engineers. I am just not convinced that the insights it reveals are of great interest to the audience of NeurIPS. I think this is a bit subjective and I will leave it to the ACs and other reviewers to make the call.
---
Reply to Comment 1.1.1:
Comment:
> Thank you for your detailed explanation, in addition to the experiments you ran according to my suggestions. They are highly appreciated.
You are welcome and thank you for your response.
Concerning our additional experiments:
> then we should expect a monotonic decrease of remaining precision
It is not straightforward to reason about the monotonicity of remaining precision (and we don't do this in the paper).
First, for any two EQCs, the remaining precision can increase or decrease by simple addition of a bias.
Here is an example for a hypothetical 4-bit mantissa: 0010 vs 0011 -> RP 3; adding bias 0001 we obtain: 0011 vs 0100 -> RP 1.
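The same toy example can be checked mechanically; a small sketch (assuming, as in the example above, that remaining precision counts the leading mantissa bits on which two values agree):

```python
def remaining_precision(a: int, b: int, bits: int = 4) -> int:
    """Count leading bits (MSB first) on which the two mantissas agree."""
    for i in range(bits):
        shift = bits - 1 - i
        if (a >> shift) & 1 != (b >> shift) & 1:
            return i
    return bits

assert remaining_precision(0b0010, 0b0011) == 3          # RP 3
bias = 0b0001
assert remaining_precision(0b0010 + bias, 0b0011 + bias) == 1  # RP 1
```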
Second, remaining precision is a tail metric in Fig. 5, see line 76 in the paper.
Because it is not easy to reason about remaining precision, we included Tables R1 and R2 in the rebuttal and would like to bring them to your attention again.
Table R1 shows a difference in the effect on remaining precision between the ReLU and sigmoid activation functions (both, as always, initialized with the correct distribution).
This supports our statements in lines 209-211.
To avoid the difficulty of interpreting aggregate remaining precision, Table R2 simply counts the number of deviations.
Observe the difference between ReLU and sigmoid in terms of how many deviations they eliminate.
> [...] can hardly be justified.
We hope, with the above explanation, you find the evidence in tables R1 and R2 convincing.
Regarding the floating point precision, we would like to refer to our rebuttal to Reviewer u1sW:
> To address your comment, we extended the [fp16 and fp64] experiment to all three models used in the paper.
> We find that the pattern we report holds across all tested models and samples. [...]
> These results will be included in the supplemental material.
Choosing fp32 as a baseline is still reasonable given that all major platforms use it as default.
> I believe this can serve as a good tech report to many system engineers.
Thank you for seeing value in our research.
> I think this is a bit subjective and I will leave to ACs and other reviewers to make the call.
For this to happen, we would kindly ask you to consider revising your score from one that suggests technical flaws to a neutral one. | Summary: This paper explores why the same code & data can result in different results from a trained neural network on multiple different, or even the same, architectures. Considering CPU, GPU, and algorithmic implementation, the paper isolates several key factors that cause variance in the calculated results, shows which hardware platforms group together and which factors can cause variation on the same hardware, and eliminates other factors as confounders.
This really is a great paper. I could go into more detail spitting the nuances back at the authors, but this is excellent and valuable work that is needed within the ML/DL community.
Strengths: 1. Tackles a core issue and question of reproducibility within the research community
2. Performs an extensive analysis requiring immense work to obtain many different hardware platforms to perform the tests.
3. Identified surprising results on the impact of low-precision in mitigating the issue but in unstable ways.
Weaknesses: 1. There is some missing related work that should be incorporated for scholastic completeness. In particular, the history of numerical differences in ML was documented in "A Siren Song of Open Source Reproducibility" - which covers some older history of related work, but also has two recent works that must also be included: "Problems and Opportunities in Training Deep Learning Software Systems: An Analysis of Variance" and "Randomness In Neural Network Training: Characterizing The Impact of Tooling". Neither work was as thorough in platforms and in nailing down what exactly the problem is as this work, so I see this as no barrier to accepting the paper. But we should give completeness to the work by citing those who have documented this space previously.
2. The work doesn't fully prepare the reader for their journey in the introduction. It would help the reader to summarize the results in the introduction, and then the last paragraph can explain where/how these results will be reached.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Was any experiment performed to alter the depth/size of the network under test, to see if that would impact the probability of a divergence occurring as depth increased?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: There are no limitations worth noting.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for pointing us to additional related work, which we unfortunately overlooked when doing our literature search.
Zhuang et al.’s work on training variance gives valuable insights on the training process as a whole, whereas we focus on pinpoint observations concerning inference. Combining their methodology with our instrumentation is promising future work, and we will indicate it as such.
Pham et al. review the influence of implementation-level variance during training, including the same observations that led to our study of microbenchmarks, on the final performance of the trained models. They also survey how often such variances are mentioned and accounted for in the literature, as well as how prevalent knowledge of such variances is among ML researchers. We will cite their paper in our coverage of microbenchmarks.
Raff et al.’s discussion of the influence of code on reproducibility and replicability focuses on the process of ML research. We share the sentiment that current practices of publishing research code are often insufficient. We will include these points in the discussion and cite the reference.
We will expand our discussion section by the points raised in all three papers to highlight the problem of lacking reproducibility, and in fact the lack of a common definition of this term in the first place.
> "The work doesn't fully prepare the reader for their journey in the introduction. It would help the reader to summarize the results in the introduction, and then the last paragraph can explain where/how these results will be reached."
Thank you for pointing this out. We will restructure the introduction to make it perfectly clear what results will be presented, and in what order.
> "Was any experiment performed to alter the depth/size of the network under test, to see if that would impact the probability of a divergence occurring as depth increased?"
There is no experiment that explicitly cuts, shrinks or enlarges middle layers to investigate the effects. However, the results in Figure 5 were obtained by outputting intermediate layer results, and the results in Figure 4 can be interpreted as varying the size of a single convolutional layer. Both figures show clear trends on how the size affects the number of EQCs as well as the remaining precision.
Our analysis of TensorFlow and its underlying Eigen library tells us that actually cutting the layers would result in the same remaining precision and EQCs as shown in Figure 5 because implementation choice on CPUs depends only on the hardware and not on the model.
On GPUs, cutting layers will affect these metrics, as early layers will take up memory on the GPU and affect the microbenchmarks of subsequent layers.
Unfortunately, the page limit of NeurIPS forced us to aggressively select which questions we answer in the paper. We hope to convince you that our focus on breadth (for a comprehensive coverage of causes) while compromising on depth (concerning potential interactions between causes) is the right choice at this stage of the research.
If you have any further suggestions or questions, please do not hesitate to let us know.
---
Rebuttal Comment 1.1:
Comment: >Unfortunately, the page limit of NeurIPS forced us to aggressively select which questions we answer in the paper. We hope to convince you that our focus on breadth (for comprehensive coverage of causes) while compromising on depth (concerning potential interactions between causes) is the right choice at this stage of the research.
No convincing was necessary, I was just curious. I think the above would be useful for an appendix - I don't think it's conclusive enough to make a hard statement, but it's nevertheless valuable for future work.
I think (7) is still the right score for this paper, but I wholeheartedly encourage my fellow reviewers to raise theirs.
Strengths: ++ Good effort to separate each factor of deviation for evaluation
++ High diversity of CPUs used for the experiments
++ Most of the graphs are informative and hold useful pieces of information
++ EQC is a good metric to measure diversity
Weaknesses: – The abstract and introduction section could be more organized and well written
– Minor writing issues (transitions, cohesion, word usage)
– More description of related work could be utilized to differentiate in detail between the paper and other works in the literature
– Real environments might take multiple factors all at once to generate different inference results. This real-world scenario could be explored in the experiments instead of evaluating each factor separately
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: C1. It would be better to list the contribution of the work in the introduction. Also since this paper is an analysis of cause and effect, it would be better to evaluate some of the mitigation strategies to show effectiveness.
C2. Although the title suggests a general analysis of neural networks, most of the analysis is about layers of CNN or ResNet and ways of calculating convolution. Therefore it would be better to either analyze other neural networks as well or indicate this in the introductory sections of the paper.
C3. It would be better to have a list of CPU flags and corresponding clusters as an appendix. Also, more description on how often certain SIMD flags within a cluster are activated would help the readers understand the importance of SIMD instructions for differences in inference results.
C4. Figure 6 might need more explanation regarding the importance of floating point precision. Also, maybe other neural networks could be tested to see if they follow the same pattern for single precision.
C5. More description about how differently the convolution algorithms do their calculations might extract new insights into how they affect EQCs. Also, some results might confuse the reader, for example, how different explicit and implicit GEMMs are and the reasoning behind a GPU choosing one over the other.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your remarks regarding the organization of sections, writing issues, and description of related work. We will update the revised version of the paper to ensure the contributions of our research are well-articulated.
> "Real environments might take multiple factors all at once to generate different inference results. This real-world scenario could be explored in the experiments instead of evaluating each factor separately."
In addition to our study of isolated influences, our main results in Figure 1 of the paper, as well as Tables 1 and 2 in the supplemental material, in fact present the results of a large-scale validation study using real models in real CPU and GPU environments. We will revise line 92 to explicitly state that these measurements capture the interplay of all influences in real-world scenarios.
> C1. It would be better to list the contribution of the work in the introduction. Also since this paper is an analysis of cause and effect, it would be better to evaluate some of the mitigation strategies to show effectiveness.
We will include a clear and concise description of the main findings in the introduction. We will also highlight that our results of Section 3.4 do evaluate a mitigation strategy (varying the floating point precision) and analyze its effectiveness.
> C2. Although the title suggests a general analysis of neural networks, most of the analysis is about layers of CNN or ResNet and ways of calculating convolution. Therefore it would be better to either analyze other neural networks as well or indicate this in the introductory sections of the paper.
This assessment is fair. We will clarify the scope of our work in the introduction. By publishing the measurement and orchestration code we make it easy for researchers to extend our research to other layer types and models. As an indication that our findings generalize, our experiments for Figure 5 show that batch normalization can introduce deviations.
> C3. It would be better to have a list of CPU flags and corresponding clusters as an appendix.
We have generated a full table including all CPU flags and added it to the supplemental material. (Unfortunately, we cannot provide it here as it would exceed the page limit for the rebuttal document.)
> C3. (cont.) Also, more description on how often certain SIMD flags within a cluster are activated would help the readers understand the importance of SIMD instructions for differences in inference results.
We fully agree with your comment and have in fact made efforts to measure the usage of SIMD instructions in TensorFlow. However, despite numerous attempts at dynamic instrumentation (gdb, rr, valgrind/cachegrind, and TensorFlow's own profiler) and code analysis (of TensorFlow and its underlying Eigen library), we were unable to obtain reliable results due to the complexity of TensorFlow's monolithic architecture. While certainly desirable, we believe that the value of our work does not hinge on this description.
> C4. Figure 6 might need more explanation regarding the importance of floating point precision. Also Maybe other Neural Networks could be tested to see if they follow the same pattern for single-point precision.
We recognize the need for more explanation of Figure 6 and will provide a more detailed description in the final version of the paper. To address your comment, we extended the experiment to all three models used in the paper. We find that the pattern we report holds across all tested models and samples. Due to the 1-page limit of the rebuttal document we cannot include dendrograms, but can confirm that the EQCs follow the same patterns as shown in Figure 6. These results will be included in the supplemental material.
> C5. More description about how differently the convolution algorithms do their calculations might extract new insights into how they affect EQCs.
Due to the page limit we omitted a detailed description of calculations of different convolution algorithms. We do agree that they might provide insights into their effect on EQCs and have therefore included them in the supplementary material along with references to the relevant literature and vendor documentation.
> C5. (cont.) Also, some results might confuse the reader, for example, how different explicit and implicit GEMMs are and the reasoning behind a GPU choosing one over the other.
It seems that our description of the difference between explicit and implicit GEMM in line 153 was imprecise (specifically, "presumably"). Thank you for the comment; we will refer to the literature and clarify that the only difference is that explicit GEMM stores the Toeplitz matrix in memory, whereas implicit GEMM computes it on the fly.
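To illustrate the distinction (a hedged numpy sketch, not cuDNN's implementation): explicit GEMM materializes the im2col/Toeplitz-like matrix in memory and then performs a single matrix multiplication, whereas implicit GEMM produces the same columns on the fly.

```python
import numpy as np

def conv2d_direct(x, k):
    """Valid 2D cross-correlation, computed window by window."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def conv2d_explicit_gemm(x, k):
    """'Explicit GEMM': store the im2col (Toeplitz-like) matrix in memory,
    then reduce the convolution to one matrix-vector product."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    cols = np.stack([x[i:i + kh, j:j + kw].ravel()
                     for i in range(oh) for j in range(ow)])
    return (cols @ k.ravel()).reshape(oh, ow)

rng = np.random.default_rng(1)
x, k = rng.normal(size=(6, 6)), rng.normal(size=(3, 3))
assert np.allclose(conv2d_direct(x, k), conv2d_explicit_gemm(x, k))
```

The two routes are mathematically equivalent, but since their summation orders differ, real implementations can still deviate in low-order bits, which is exactly the kind of effect the paper measures.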
Thank you for bringing these points to our attention.
If you have any further suggestions or questions, please do not hesitate to let us know.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. Many of my concerns have been addressed. I have raised my rating from 4 to 6. | Summary: The paper presents an extensive empirical study on the numerical instabilities for convolutions across different platforms. The findings of why these instabilities occur are interesting and informative. They further cluster them into equivalence classes for ease of explanation.
Strengths: Extensive evaluation on a large number of platforms across both CPUs and GPUs.
Weaknesses: The results and findings are very interesting, but the utility values of the EQCs and the errors they find is not demonstrated. Essentially, how do you use this information to make inference more robust for example?
No results to corroborate line 285. Will these imperfections lead to different model outputs? Some of these errors are tolerable, especially in a classification setting. The authors themselves mention this is rare. Some evidence or an experiment on label flips would be useful.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I find the paper very interesting to read. It is not a research-solutions paper, but an extensive study on numerical stabilities. Even though the research contribution of novel techniques does not exist, I find the community would benefit from the findings the paper reports through extensive experimentation.
Please answer the following questions in the rebuttal.
* Why is Toeplitz based convolution lossy?
* How significant are the loss of precision? Does it affect results in inference? Can this be trained away by fine tuning?
* How do you use the EQCs to better mitigate any inaccuracies in inference?
* Algorithmic choices in GPUs are anyway supposed to give different results, since they are different approximations of the convolution. I find this expected and obvious.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Authors mention the limitations explicitly under discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for carefully reviewing our paper.
> "The results and findings are very interesting, but the utility values of the EQCs and the errors they find is not demonstrated. Essentially, how do you use this information to make inference more robust for example?"
The concrete utility of this work lies in identifying the root causes of deviations as a necessary first step towards reproducibility. Our findings give researchers and ML framework developers concrete starting points for improving robustness. Thank you for asking this question and bringing this lack of framing to our attention. We will clarify our contributions in the introduction.
> "No results to corroborate line 285. Will these imperfections lead to different model outputs? Some of these errors are tolerable specially in classification setting. Authors themselves mention this is rare. Some evidence or experiment on flips would be useful."
We agree that our formulation in line 285 is too general, and will revise it to state that bit-level reproducibility of inference results is a necessary requirement for reproducibility. While our study focuses on inference for comprehensibility, the causes we identify may also explain phenomena observed (but not explained) during training, as shown for instance by [Zhuang et al. "Randomness in neural network training: Characterizing the impact of tooling" (2022) and Pham et al. "Problems and opportunities in training deep learning software systems: An analysis of variance." (2020)].
Experiments on forcing label flips have been described in related work which we cite in line 288.
> "Why is Toeplitz based convolution lossy?"
Our presentation of convolution approaches was imprecise. In fact, convolution with Toeplitz matrices is not lossy. The "lossy" transformations in lines 60 and 63 refer to Winograd and FFT-based convolution, respectively. Thank you for pointing this out, we will revise accordingly.
> "How significant are the loss of precision? Does it affect results in inference? Can this be trained away by fine tuning?"
As stated in line 287, reductions in common model performance characteristics are unlikely.
While incorporating deviations into the loss function to reduce deviations by fine-tuning is conceivable in principle, we deem it impractical with current frameworks. The deviations happen at such a low level that the ML framework does not capture them without the kind of instrumentation we built for the purpose of this research. We thank you for this question and will keep it in mind for future work on this topic.
> "How do you use the EQCs to better mitigate any inaccuracies in inference?"
We introduce EQCs - a metric linked to the effect of numerical deviations - as a tool to understand their causes. Engineers striving for deterministic and consistent results should aim for a single EQC, by restricting the inference platforms to those within the same EQC or by applying mitigation measures that reduce the number of EQCs. Researchers can use EQCs to test new mitigation measures. Thank you for this question; we will elaborate more on the practical use of EQCs in the paper.
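As a toy sketch of how such classes can be computed in practice (the platform names and outputs below are hypothetical; we assume, per the paper's definition, that an EQC groups platforms with bit-identical inference results):

```python
from collections import defaultdict

# Hypothetical serialized inference outputs per platform; platforms whose
# outputs are bit-identical fall into the same equivalence class (EQC).
outputs = {
    "cpu_avx2":   b"\x3e\xaa\xab",
    "cpu_avx512": b"\x3e\xaa\xab",
    "gpu_a":      b"\x3e\xaa\xac",
}
eqcs = defaultdict(list)
for platform, blob in outputs.items():
    eqcs[blob].append(platform)

print(len(eqcs))  # 2 EQCs: the two CPUs agree, the GPU deviates
```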
> "Algorithmic choices in GPUs are anyway supposed to give different results, since they are different approximations of the convolution. I find this expected and obvious."
It is known that algorithm choice can vary on GPUs, and that different algorithms yield different results. However, to the best of our knowledge, we are the first to report the variations in chosen algorithms, given the same models, inputs, and hardware.
As mentioned in line 300, even with "enforced" determinism (through setting determinism-flags in TensorFlow and PyTorch), different algorithms can be chosen based on the algorithms supported by the hardware.
Thank you again for your thorough review and your valuable remarks.
If you have any further suggestions or questions, please do not hesitate to let us know.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns. The authors agree to provide more discussion on the utility of EQCs. That said, I would have liked to see more concrete reasons or experiments to corroborate their claims (e.g., While incorporating deviations into the loss function to reduce deviations by fine-tuning is conceivable in principle, we deem it impractical with current frameworks). I would like to keep my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comment.
> I would have liked to see more concrete reasons or experiments to corroborate their claims (e.g., While incorporating deviations into the loss function to reduce deviations by fine-tuning is conceivable in principle, we deem it impractical with current frameworks).
Of course we respect your judgement.
Perhaps we can take this opportunity to elaborate on why fine-tuning for consistency is beyond current technology and would require much more research.
(In hindsight, calling it "impractical" was probably too brief.)
Recall that most deviations we study can only be observed between different platforms, i.e., different physical machines.
While it is common to measure outside information and incorporate it into the loss function, this is usually done for information that is available on the same machine.
Capturing deviations from different machines is feasible, and our experiments provide the infrastructure to do so.
However, it is not clear how to combine them into a single loss function, and different strategies would have to be evaluated.
Even if a method for calculating "deviation loss" was found, it might result in different weight updates on different platforms.
This is because the same factors that cause deviations during inference are also present during back-propagation.
(We have identified this issue during followup research in a similar direction as your suggestion.)
Moreover, it is far from certain that fine-tuning against deviations on $k$ platforms - if possible at all for $k \gg 2$ - would generalize to the $(k+1)$-th platform.
Another approach would be to come up with a model of the arithmetic processes that cause deviations in order to simulate their occurrence on a single platform.
While such a model would clearly be nice to have, it appears to us that it would require significant new research (and substantial space to document it), including reverse-engineering of closed source GPU software.
We deem that this would go far beyond the scope of our paper.
In summary, we agree that exploring fine-tuning as a mitigation measure is an interesting idea, and see it as a promising direction for future research motivated by our present work.
We are happy to mention and - space permitting - briefly discuss this in the camera-ready version.
Thank you once again for your review and in particular this suggestion. | Rebuttal 1:
Rebuttal: We thank all four reviewers for their insightful feedback and thoughtful suggestions. We are very glad that the suggestions by different reviewers are compatible (and partly overlapping). This enables us to incorporate all of them into the final version of the paper.
Responses to specific questions and remarks are posted under the respective reviews.
The attached PDF document contains the figures and tables of the experiments we ran to address specific questions in the reviews.
Pdf: /pdf/4bae8eadba3358e08a725aa35751cc6afb089899.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Weitzman's Rule for Pandora's Box with Correlations | Accept (poster) | Summary: This paper considers Pandora’s Box problem with correlated values. Previous work gives a 9.22 approximation for this problem. This paper considers two variants, depending on whether the algorithm updates based on exact values or on the event that the value is large, with approximation factors of 5.828 and 4.428, respectively. The latter approach can be extended to the case of unknown distributions that the algorithm has sample access to. An interesting feature of the main algorithmic blueprint is that it is a direct extension of Weitzman’s original rule for this problem (in the independent-value case).
Strengths: This paper makes a major contribution to optimal stopping theory, by giving a clean answer to a very natural and important problem.
Weaknesses: The technical writing can definitely be improved. For example, it is very difficult to follow the histogram argument in the proof of Theorem 3.2 without figures.
(
Some typos/writing notes since there is nowhere else to put them:
- Typo in line 41: “are are”
- I believe the notation (x)^+ is not defined anywhere.
- It’d be nice to do some of the math slower. E.g., in 154, it’d be useful for the reader to write ALG first as an expectation, and then slowly open it up and use the uniform fact, etc
- Typo in line 274: the \geq sign should not be there
)
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Why are both variants considered? It seems like variant 1 is better under every aspect considered (approximation ratio, simpler proof, learnable).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and typos caught; we will add a figure in the final version showing how the histogram method works, to better illustrate the proof.
To answer the question: when we initially started considering variant 2, we were not sure whether it would give a better approximation or not. It is, however, a more natural variant: we do not keep scenarios that are clearly no longer possible (whereas variant 1 does keep them).
As other reviewers pointed out, intuitively we would expect a better factor. However, since the algorithm is a greedy approximation, we soon realized that the factor may not necessarily be monotone in the amount of information given. Variant 2 led us to a generalization of the histogram proof on trees, which might be of independent interest, and therefore we decided to include it as well. Additionally, our analysis is not necessarily tight, and a better factor might still be possible.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply. | Summary: This paper considers the Pandora's box problem with correlated values. In each step, the algorithm chooses an unopened box and observes its value generated from the known distribution. The goal is minimizing the sum of the minimum value among the opened boxes and the total opening cost. The distribution is given as a uniform distribution over finite possible scenarios. This paper shows simple greedy algorithms based on reservation values achieve improved approximation ratios. The existing algorithms for this problem are based on more complicated techniques. This paper also proposes an algorithm that uses samples from the distribution instead of its explicit representation.
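For readers unfamiliar with Weitzman-style reservation values: in the minimization formulation summarized above, the reservation value $\sigma_b$ of box $b$ solves $\mathbb{E}[(\sigma_b - V_b)^+] = c_b$ over the finite scenario distribution. Below is a minimal numerical sketch of that equation; the function name and the instance are illustrative, not taken from the paper.

```python
def reservation_value(scenarios, cost, tol=1e-9):
    """Numerically solve E[(sigma - V_b)^+] = cost for the reservation
    value sigma_b of box b, given a finite scenario distribution as
    (probability, value) pairs. The LHS is continuous and nondecreasing
    in sigma, so bisection applies."""
    def lhs(sigma):
        return sum(p * max(sigma - v, 0.0) for p, v in scenarios)
    lo = min(v for _, v in scenarios)                # lhs(lo) = 0 <= cost
    hi = max(v for _, v in scenarios) + cost + 1.0   # lhs(hi) >= cost + 1 > cost
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if lhs(mid) < cost:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Uniform distribution over three scenarios where box b holds 3, 6, or 7,
# with opening cost 1: the reservation value comes out to 6.
sigma = reservation_value([(1/3, 3.0), (1/3, 6.0), (1/3, 7.0)], cost=1.0)
print(round(sigma, 6))  # 6.0
```

Bisection works here because the left-hand side is 0 at the smallest scenario value and grows linearly past the largest one.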
Strengths: The Pandora's box problem is an interesting problem that has been extensively studied in TCS and algorithmic game theory. Its correlated version is regarded as a noteworthy technical challenge. This paper improves and simplifies the existing results. This is a significant theoretical advancement.
Weaknesses: I have a concern regarding the soundness of the proofs. The proofs sometimes appear informal, making it challenging to verify their validity. For example, the description of the algorithm is not always clear. In Algorithm 2, the opening costs of the already opened boxes are set to 0 (line 8). Their reservation value might then become the minimum, which is selected in lines 4-5. The algorithmic behavior in this case is not clearly specified. This issue is relevant to the analysis around inequalities (4) and (5). The opening cost $c_b$ appears in these inequalities, but it is not constant during the execution of the algorithm.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: What is the formal treatment of the opening cost $c_b$ in the analysis?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes, it is adequate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments; we will clarify that a box can be opened many times during the run of the algorithm.
* Regarding the algorithm’s behavior: a box can be opened many times, so after we open it once (and set its cost to 0) it can be re-opened at a later stage of the algorithm. This is similar to what happens in the independent-case algorithm (Weit79), where we can stop at any point and “go back” to pick a value seen in a previously opened box. The behavior here is the same: we can go back and select a previously opened box at no extra cost.
* The fact that the opening cost becomes 0 is not directly used in the analysis (i.e., in inequalities (4) and (5)). This means that our analysis gives an upper bound on the cost of the algorithm *even if* the algorithm *never* changes the cost of an opened box to 0. That is why the cost appears unchanged in inequalities (4) and (5), yet the analysis still applies to the algorithm: we only need an upper bound, and changing a cost to 0 can only lower the algorithm’s cost.
Therefore, our analysis shows that we can bound the algorithm’s cost even without setting costs to 0. However, setting them to 0 makes it easier to see how the algorithm directly generalizes the independent case (since it mimics the “going back to pick a box opened earlier” behavior).
---
Rebuttal Comment 1.1:
Title: Comment
Comment: Thank you for addressing my concerns in your response. The clarification provided regarding the opening cost is appreciated. To enhance the clarity of this aspect, I recommend considering a revision of the pseudocode and the proofs. Given that there is room for improvement in the writing, my score is still on the borderline.
---
Reply to Comment 1.1.1:
Title: Thanks for comment
Comment: Thank you for reading our reply. We would like to mention that clarifying the opening cost's role does not require a major rewrite and can easily be incorporated into the final version of the paper.
Specifically, as we described in our reply, the cost change from $c_i$ to 0 does not come into the proof at all, so the technical parts/proofs do not need any major changes. Of course, we acknowledge that it is unclear why it is of no consequence to the proof, and what its role in the algorithm is. We intend to add a paragraph discussing
* why it does not appear in the proof (so that someone reading the proof understands why the change to 0 does not appear), and
* its role in the algorithm, while also mildly editing the pseudocode (so that someone reading the pseudocode has no doubt about whether a box can be reopened),
which is an easy change to make for the final version. | Summary: This paper provides an exploration of Pandora's box problem with correlations. The authors innovatively modify the computation of reservation values within Weitzman's algorithm. They further solidify their contribution by proving the approximation ratio of the proposed algorithms under various distribution updating schemes.
Strengths: This paper provides an exploration of Pandora's box problem with correlations. The authors innovatively modify the computation of reservation values within Weitzman's algorithm. They further solidify their contribution by proving the approximation ratio of the proposed algorithms under various distribution updating schemes. The algorithms presented are intriguing and succinct, which I find very appealing.
The whole paper also reads very well, except for the skipping of some technical parts.
Weaknesses: However, I admit that some aspects of the algorithms and their details elude my understanding. I would greatly appreciate further explanations to deepen my comprehension. Specifically, it remains unclear what it means to update the value distribution $\mathcal{D}$ conditional on $V_b > \sigma_b$ versus $V_b = \sigma_b$. Why can we even have these two different updates? I am also uncertain how this conditional updating of $\mathcal{D}$ gives rise to Algorithms 2 and 3.
Providing additional insights into these elements would undoubtedly clarify the understanding for readers like myself and further enhance the value of your work.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: I would greatly appreciate further explanations to deepen my comprehension. Specifically, it remains unclear what it means to update the value distribution $\mathcal{D}$ conditional on $V_b > \sigma_b$ versus $V_b = \sigma_b$. Why can we even have these two different updates? I am also uncertain how this conditional updating of $\mathcal{D}$ gives rise to Algorithms 2 and 3.
Providing additional insights into these elements would undoubtedly clarify the understanding for readers like myself and further enhance the value of your work.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments; we can add the appropriate clarifications in the final version. We answer the question below.
The (correlated) distribution is a set of vectors of size $n$ (described at the end of page 3), where each vector is drawn with some probability. When we open a box and see a value, some scenarios are no longer “possible”, i.e., we know they cannot be the one realized. We illustrate this with the following example. Assume there are 3 such vectors (scenarios).
| | B1 | B2 | B3 |
|:--:|:--:|:--:|----|
| **S1** | 3 | 4 | 7 |
| **S2** | 6 | 4 | 2 |
| **S3** | 7 | 7 | 2 |
The rows in the matrix above are the scenarios, and the columns are the boxes. For example, if scenario S2 is the one realized (i.e. drawn from the distribution) then the values inside boxes B1,B2 and B3 are 6, 4 and 2 respectively. The distribution $D$ is essentially drawing one of the scenarios with some probability.
To see what the conditioning means: assume we open box 1 and we see the value 6 (and assume for the sake of the example that the reservation value of box 1 is $\sigma_1 = 5$).
* Variant 1: we condition on $6 = V_1 > \sigma_1 = 5$, meaning that scenario S1 is not possible anymore (because if S1 had been drawn from D, we would have seen a value less than $\sigma_1 = 5$ when opening the box), and it is removed from the set S the algorithm considers (line 9, Alg 2)
* Variant 2: we condition on $V_1 = 6$, which means that scenarios S1 and S3 are both removed (similarly, because if either of these had been drawn, we would not have seen 6 upon opening the box)
The second way these variants differ is that, due to this conditioning, the solution for the $V_b> s$ variant is “partially adaptive”, meaning that the next box the algorithm opens depends *only* on the *scenarios that remain*. For the $V_b=v$ variant, however, the solution is “fully adaptive” (meaning that the next box opened *depends on the exact value seen*). This is illustrated in Figures 2 and 4 of the appendix (supplementary material), where variant 1’s solution can be represented by a line graph (Fig 2), while variant 2’s solution is a tree (Fig 4).
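To make the two conditioning variants concrete, here is a small illustrative sketch (not the paper's code) that filters the scenario matrix of the example above after opening box B1 and seeing value 6, with reservation value $\sigma_1 = 5$:

```python
# Scenario matrix from the example above: rows = scenarios, columns = boxes B1..B3.
scenarios = {"S1": [3, 4, 7], "S2": [6, 4, 2], "S3": [7, 7, 2]}

def update_variant1(scenarios, box, sigma):
    """Variant 1: condition on V_box > sigma -- keep only scenarios whose
    value in the opened box exceeds the reservation value."""
    return {s: vals for s, vals in scenarios.items() if vals[box] > sigma}

def update_variant2(scenarios, box, value):
    """Variant 2: condition on V_box = value -- keep only scenarios
    consistent with the exact value seen."""
    return {s: vals for s, vals in scenarios.items() if vals[box] == value}

# Open box B1 (index 0), see value 6, reservation value sigma_1 = 5:
print(sorted(update_variant1(scenarios, 0, 5)))  # ['S2', 'S3'] -- S1 removed
print(sorted(update_variant2(scenarios, 0, 6)))  # ['S2'] -- S1 and S3 removed
```

The outputs match the walkthrough above: variant 1 removes only S1, while variant 2 leaves just S2.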
---
Rebuttal Comment 1.1:
Comment: The provided example is highly illuminating and resonates well with me. I would strongly recommend incorporating this example, along with the accompanying reasoning, into the final version of the paper. Doing so would undoubtedly aid in enhancing the readers' comprehension. While I acknowledge that the authors have made substantial efforts in addressing queries, I concur with my fellow reviewers that the paper still requires structural improvements for better readability. Therefore, I am inclined to maintain my current review score. | Summary: This paper studies the problem of Pandora's boxes with the values of the boxes correlated. The authors extend the classical algorithm Weitzman’s Rule for independent values of boxes to the correlated case, and propose new algorithms with better approximation than previous works that are learnable from samples.
Strengths: Novelty: The major novelty of this paper is to extend the Weitzman’s Rule algorithm to the case of correlated box values, which greatly simplifies the problem. Though neither the correlated case nor the algorithm is new, this extension does have its own unique value through its simplicity and improved approximation guarantee.
Quality: This paper's result is sound and the analysis looks good. The results are strong as well.
Weaknesses: Clarity: This paper is not very clear in some places. For instance, it does not give the definition of \epsilon-approximation before first using it, which may make the paper hard for some readers to understand. The proofs are also too condensed, without enough intermediate steps for readers to follow easily.
Significance: This is a minor weakness: the authors need to better justify why the correlated case is important. What is the major motivation in practice, or is it a fundamental theoretical problem that can solve a series of dependent problems?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Justify the major motivation of the correlated case for its practical or theoretical importance.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: As written in Weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments, and we will clarify and define the notions that aren’t currently clear. To answer the question: on the theoretical side, removing the strong assumption of independence generalizes the problem and gives rise to new techniques that may be used to solve similar problems with correlation (e.g., prophet inequalities [1]).
On the practical side, in real life independence is an assumption that usually does not hold. For example:
1. *Housing market*: we are looking to buy a house; we have a list of potential properties in mind and have to decide on some order to visit them, learn more information, and finally buy one. The houses are the boxes; we have to spend time/gas driving to each property (i.e., pay the opening cost) in order to learn the actual price (e.g., the house might need more repairs, so the price is not as low as we thought). The houses’ prices, however, are not independent:
1. Location affects nearby houses the same way (e.g. if a neighborhood is bad, all houses near there will be affected the same way)
2. Having the same property manager (e.g. a potentially dishonest property manager hides the issues with the house, meaning that houses with the same property manager are potentially affected the same way)
2. *Acquiring an item*: we want to buy an item that many different shops sell. We have a prior belief for every shop (e.g. if it’s generally expensive or not) but we have to spend time inquiring about the price. However, some shops will use the same supplier, meaning that if the supplier is low on some part used for the item, these shops will sell it at higher price, compared to the ones using different suppliers.
3. *Job search*: we are a company interviewing candidates for a job position. Each candidate is a box; we have some prior for each (e.g., their CV), but until we spend time interviewing them (i.e., pay the opening cost) we don’t know their exact value (i.e., how closely they match the position). Our goal is to select the best candidate without interviewing too many. Correlations arise since some candidates have common qualifications in their CVs that affect their value (e.g., they graduated from the same university, which happens to offer a program that matches the position very well)
These examples are the motivation that started this area of research in economics (consumer search, housing market, job search), where optimal search problems (like Pandora’s Box) arise [see [2] for more details].
[1] Prophet Inequalities with Linear Correlations and Augmentations [Immorlica, Singla, Waggoner] EC 20
[2] The Economics of Search [Brian McCall, John McCall], Routledge, 2007] | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies the Pandora’s box problem with correlation. The problem is as follows: a decision maker is presented with $n$ boxes to explore, and each box $b_i$ is associated with a hidden value $v_i$ and a known cost $c_i$ that needs to be paid to reveal the value. The values in the boxes are drawn from a known correlated distribution. The decision maker opens the box one after the other in an arbitrary order that may depend on the realized values uncovered and, when it decides to stop, pays the total cost of exploration (the sum of the costs of the boxes it opened) plus the smallest uncovered value. The goal of the problem is to design a strategy that minimizes the cost paid by the agent.
The authors present two algorithms and a learnability result. A first algorithm yields a constant factor approximation (4.428) to the best non-adaptive strategy when the only information the decision maker uses to update its strategy is whether the exploration stopped or not, according to some stopping rule. The second algorithm yields another constant factor approximation (5.828) to the same benchmark in a full update model (the decision maker updates its prior distribution according to the exact realized values). Both approximations improve on the state of the art.
Strengths: The Pandora’s Box problem is an exciting and challenging model for exploration under uncertainty that has received much attention in recent years (see, e.g., the recent papers at STOC and EC). The correlated version of the problem is interesting and overcomes the unnatural assumption of independent valuations made in the original model by Weitzman.
Strengths:
- The algorithms proposed by the authors are uncomplicated and enjoy the desirable property of extending Weitzman’s notion of reservation value. This fills a gap in understanding the correlated version of the problem, as previous works used different techniques, achieving worse results.
- The learnability result is interesting and useful in overcoming the natural limitation of not knowing the underlying distribution. The result complements its analogous for the independent scenario (COLT 21, ‘‘Generalizing complex hypotheses on product distributions: Auctions, prophet inequalities, and Pandora’s problem’’).
Weaknesses: - It is unnatural to obtain weaker results (against the same benchmark) when the decision-maker employs a richer update rule. The main weakness of the paper is not providing a convincing explanation of this fact. Moreover, it is impossible to understand from the main body what technicality allows the improved approximation factor.
- Removing the abstract from the manuscript to fit into the 9 pages limit is a borderline practice.
Minor comments:
- Please be consistent in the use of \citet and \citep. Use \citet when the article cited is part of the sentence (e.g., in the abstract), and \citep otherwise.
- Please explain why the second-to-last display of page 3 implies the last one. Why is it safe to make the $\sum_{s \in A} P_D(s)$ term appear in the numerator without affecting the maximization problem?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why consider the richer update model? It gives a worse approximation guarantee, so the decision-maker should ignore it.
2. Is there any lower bound on the approximation factor? How far are the proposed results from the optimal poly-time algorithm?
3. Please explain why the second-to-last display of page 3 implies the last one. Why is it safe to make the $\sum_{s \in A} P_D(s)$ term appear in the numerator without affecting the maximization problem?
-----------------------
Raised my score after rebuttal
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments; we will fix the citation inconsistencies and add more intuition on the difference between the two algorithm variants. Also, apologies for the abstract: the author who submitted the paper was careless when copying everything into the NeurIPS template. Space was not an issue, since we could have moved more proofs to the appendix. To answer the reviewer’s questions:
1. When we initially started considering the richer update model (Variant 2), we were not sure whether it would give a better approximation or not. It is, however, a more natural variant: we do not keep scenarios that are clearly no longer possible (whereas variant 1 does keep them). Intuitively we would expect a better factor. However, since the algorithm is a greedy approximation, we soon realized that the factor may not necessarily be monotone in the amount of information given. The richer update model led us to a generalization of the histogram proof on trees, which might be of independent interest, and therefore we decided to include it as well. Additionally, our analysis is not necessarily tight, and a better factor might be possible for the richer-updates variant (although even the worse approximation for the richer update variant is close to the lower bound; see below).
2. The best possible approximation for a polynomial-time algorithm is a $4$-approximation; this follows from the Min-Sum Set Cover problem [FLT04], which means our algorithm is almost tight. We have already included a discussion of the lower bound in the Appendix (supplementary material, Section A.3). We will, however, mention it in the main text too, for completeness.
3. The maximization problem is not affected, because we need the maximum over subsets $A\subseteq S$ of the expression to be equal to $0$; i.e., the expression is the following:
Find $\sigma$ that solves $\max_{A\subseteq S} f(A,\sigma) = 0$, where $f(A,\sigma)= \sum_{s\in A} \Pr_{\mathcal{D}}[s] (\sigma_b-v_b^s)-c_b$.
Therefore, by dividing by something positive (i.e., $\sum_{s \in A} \Pr_{\mathcal{D}}[s]$), we still need the numerator to be $0$ for the condition to be satisfied, and this is not affected by dividing by any positive number. Note also that $S$ is not empty (since if it were, the algorithm would have stopped), so the division is well defined.
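This invariance can be sanity-checked numerically. The sketch below (an illustrative instance with made-up probabilities, values, and cost, not from the paper) bisects for the $\sigma$ at which $\max_{A\subseteq S} f(A,\sigma) = 0$, with and without dividing $f$ by $\sum_{s\in A}\Pr_{\mathcal{D}}[s]$, and confirms the root is the same:

```python
from itertools import chain, combinations

# Hypothetical instance: scenario probabilities, box-b values, opening cost.
probs  = [0.5, 0.3, 0.2]
vals   = [3.0, 6.0, 7.0]
cost_b = 1.0

def subsets(idx):
    # All non-empty subsets of the scenario indices.
    return chain.from_iterable(combinations(idx, r) for r in range(1, len(idx) + 1))

def g(sigma, normalize):
    # max over non-empty A of f(A, sigma), optionally divided by sum of probs in A.
    best = float("-inf")
    for A in subsets(range(len(probs))):
        pa = sum(probs[i] for i in A)
        f = sum(probs[i] * (sigma - vals[i]) for i in A) - cost_b
        best = max(best, f / pa if normalize else f)
    return best

def root(normalize, lo=0.0, hi=100.0):
    # g is increasing in sigma (each f is), so bisect for its zero.
    for _ in range(100):
        mid = (lo + hi) / 2
        if g(mid, normalize) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(abs(root(False) - root(True)) < 1e-6)  # True: same root either way
```

The reason is exactly the sign-preservation argument above: dividing by a positive quantity leaves the sign of $f(A,\sigma)$ unchanged for every $A$, so the zero of the maximum does not move.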
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal, and I raise my score accordingly. | null | null | null | null | null | null |
PUe: Biased Positive-Unlabeled Learning Enhancement by Causal Inference | Accept (poster) | Summary: This paper works on PU learning with a selection bias. They propose a weighted risk estimator to solve this problem, and estimate the weight via neural networks. Extensive experimental results validate the effectiveness of the proposed approach.
Strengths: - The studied problem is very important for PU learning.
- The experimental results are good, since the proposed mechanism improves previous PU learning methods under the biased setting.
Weaknesses: - First, I found potential mistakes in the derivation of $\mathbb{E}[\hat{R}_{PUe}(g)]=R_{PN}(g|y)$ in the supplemental materials. In Line 10 of Section 3, the authors use $n_{P}/n$ to replace $\pi$, but I do not think this is valid. Since there are positive examples in the unlabeled data ($s=0, y=1$), the class prior is underestimated. I will reconsider if I am mistaken.
- Second, the major problem of this paper is that it is very similar to [3]. The proposed approach and theoretical results are very similar. For the theoretical results, Theorem 1 and 2 are very similar to Proposition 3 and 4 in [3]. For the methodology, it only changes the MLE method in [3] to a neural network. Therefore, the contribution is limited.
- Third, this work actually works on “single-training-set PU learning”, while uPU and nnPU work on "case-control PU learning". Please see Gong et al. (2021) for more explanations. The authors should discuss the problem here.
- Besides, I also have questions on the weight estimation via a neural network. The paper says that it regards positive examples as "positive" and unlabeled examples as "negative". My concern is that the model output is $p(s=1|x)$, instead of $p(s=1|y=1,x)$. Can the authors explain it?
- It seems that the paper was written in a hurry, since it is very difficult to read. For example, what comes after Line 96? The paper has not yet reached the bar of a scientific paper. I suggest revising this paper in detail.
---
Reference
- Gong et al, Instance-Dependent Positive and Unlabeled Learning with Labeling Bias Estimation, TPAMI 2021.
Technical Quality: 1 poor
Clarity: 1 poor
Questions for Authors: Please see "Weaknesses".
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 1 poor
Presentation: 1 poor
Contribution: 2 fair
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *Q1:* **The class prior is underestimated by using $n_P/n$ to replace $\pi$.**
*A1:* Thanks for the nice concern. We actually use $N_P/n$ to replace $\pi$, that is, the proportion of the total number of positive samples to the total number of samples. Here $n_P=n_L$ is the number of labeled samples, $N_P$ is the number of positive samples, $n$ is the total number of samples, and $\sum_{j=1}^{n_P} \frac{1}{e(x_j^P)}=N_P$. Obviously, $N_P>n_P$. Therefore, the result is reasonable and does not underestimate the class prior.
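The identity $\sum_{j=1}^{n_P} 1/e(x_j^P) = N_P$ holds in expectation; it is the standard inverse-propensity (Horvitz-Thompson) count. A quick simulation with a made-up propensity function (illustrative only, not the paper's code) shows the estimator recovering $N_P$ from the labeled subsample:

```python
import random

random.seed(0)
N_P = 10000   # true number of positive samples (hypothetical)
# Hypothetical propensity e(x): the probability a positive sample gets labeled.
e = [random.uniform(0.2, 0.8) for _ in range(N_P)]
labeled = [i for i in range(N_P) if random.random() < e[i]]
n_P = len(labeled)   # number of labeled samples; n_P < N_P

# Inverse-propensity count over the labeled samples estimates N_P, since
# E[ sum_{labeled j} 1/e(x_j) ] = sum_i e_i * (1/e_i) = N_P.
estimate = sum(1.0 / e[i] for i in labeled)
print(n_P < N_P, abs(estimate - N_P) / N_P < 0.1)  # True True
```

This is why $N_P$ (the inverse-propensity count), not $n_P$, is the right quantity to divide by $n$ when replacing $\pi$.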
*Q2:* **The major problem of this paper is that it is very similar to [3]. It only changes the MLE method in [3] to a neural network.**
*A2:* Thanks. Our work differs greatly from [3] in four aspects. First, the propensity score estimated in [3] is actually unidentifiable, and we stabilize our estimate of the propensity score by relaxing the Local Certainty assumption. Second, a deep learning model is used to estimate the propensity score instead of a linear model, which performs better than logistic regression. Third, we improve the accuracy of cost-sensitive PU learning under biased labeling by coupling normalization with commonly used PU learning methods, which is not involved in [3]. Finally, our method can be generalized to negative classes, and experiments are done (see PUbNe), which is not involved in [3]. The impact is substantial and the contribution is not small.
Since our initial conditional assumptions are similar (based on the four basic assumptions of causal inference: Probabilistic, SUTVA, Consistency, Unconfoundedness), the theoretical results look similar. In fact, our deep learning method is coupled with cost-sensitive PU learning, which differs from [3].
*Q3:* **This work actually works on “single-training-set PU learning”, while uPU and nnPU work on "case-control PU learning".**
*A3:* Thanks for the nice concern. Our method can be implemented in both scenarios.
The observed positive examples are generated from the same distribution in both scenarios. Hence, the learner has access to a set of examples drawn i.i.d. from the true distribution and a set of examples drawn from the positive distribution according to the labeling mechanism defined by the propensity score e(x). As a result, most methods (including uPU and nnPU) can handle both scenarios, but the derivation differs. Our method is coupled with cost-sensitive methods such as uPU, nnPU, and PUbN. Therefore, it can be implemented in both scenarios.
In our paper, we mainly conduct experiments in the single-training-set setting, following previous works.
*Q4:* **My concern is that the model output is $p(s=1|x)$, instead of $p(s=1|x,y=1)$.**
*A4:* Thanks for the nice concern. It is true that the estimate here is $p(s=1|x)$, but we show below that our estimate is reasonable, and this is what allows us to relax the Local Certainty assumption. In fact, we would point out that the method in [3] for calculating the propensity score (PS) maximizes the probability of observing the data, treating both the class posterior and the PS as latent variables. Unfortunately, this does not produce an identifiable PS: there are multiple incorrect propensity scores that are as likely to be returned by this method as the true PS. The estimated PS in the Local Certainty Scenario is one thing, but in the Probabilistic Gap Scenario the PS is unidentifiable without additional assumptions. In the Local Certainty setting, however, the relationship between the observed features and the true class is assumed to be a deterministic function, which is too strong to be realistic. We instead consider a large probabilistic gap, where positive instances that resemble negative instances are less likely to be labeled, i.e., $|P(y=1|x) - P(y=0|x)| > 0.8$ and $P(y=1|x^L) > P(y=0|x^L)$, where L denotes labeled samples. According to the definition of the propensity score, we have the restriction $\sum_i \hat{e}(x_i) = n_P$, but $\hat{e}(x_i^N)>0$ on the negatives results in an underestimated PS on the labeled samples. Normalization improves the estimate.
Consider only the labeled samples. We have $0.9<P(y=1|x_i^L) \leq 1$; obviously, $0.9e(x_i^L) < \hat{e}(x_i^L)\leq e(x_i^L)$. This shows that our estimated PS for a labeled sample is smaller than the true PS. Moreover, the estimated PS of a negative sample is greater than 0, i.e., $\hat{e}(x_i^N)>0$. According to the PS definition, we have $\sum_i \hat{e}_i = n_P$. In the Local Certainty Scenario and in [3], the PS of the negative samples also shares part of the probability mass, so the PS of the positive samples is underestimated. Therefore, we need to use the normalization method to improve the accuracy of our classification algorithm.
$\frac{\frac{1}{e(x_i^L)}}{\sum_{j} \frac{1}{e(x_j^L)}} = A$.
$0.9A < \frac{\frac{1}{\hat{e}(x_i^L)}}{\sum_j \frac{1}{\hat{e}(x_j^L)}} < 10A/9$. This shows that our estimate is relatively stable.
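The stability bound above can be checked numerically. The following sketch (made-up propensities and a hypothetical helper name, not the paper's code) verifies that when $0.9\,e(x) < \hat{e}(x) \leq e(x)$, each self-normalized inverse-propensity weight stays within $(0.9A, 10A/9)$ of the weight $A$ computed from the true propensities:

```python
import random

random.seed(1)
n = 50
e_true = [random.uniform(0.3, 0.9) for _ in range(n)]       # true propensities
e_hat  = [e * random.uniform(0.9, 1.0) for e in e_true]     # 0.9*e(x) < e_hat(x) <= e(x)

def normalized_weights(e):
    # Self-normalized inverse-propensity weights: (1/e_i) / sum_j (1/e_j).
    inv = [1.0 / v for v in e]
    total = sum(inv)
    return [w / total for w in inv]

A     = normalized_weights(e_true)   # the quantity called A above
A_hat = normalized_weights(e_hat)

# Each weight from the underestimated propensities lies within (0.9*A, 10A/9):
ok = all(0.9 * a < ah < (10 / 9) * a for a, ah in zip(A, A_hat))
print(ok)  # True
```

The bound follows because the numerator $1/\hat{e}(x_i^L)$ grows by at most a factor $1/0.9$ while the denominator grows by at most the same factor, so the ratio moves by at most $1/0.9$ in either direction.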
*Q5:* **It seems that the paper was written in a hurry since it is very difficult to read. For example, what's after Line 96?**
*A5:* Thanks for the advice. We will carefully check for spelling, grammar and typographical errors, thoroughly proofread the paper, improve the overall quality, credibility and readability of the paper, and carefully revise it in the final version.
Line 96 should end with a period instead of a colon ":".
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer w1uD,
We sincerely thank you for providing valuable feedback and suggestions.
We have carefully considered your comments and have provided detailed responses addressing each of your concerns. We hope that our revisions have satisfactorily addressed your concerns.
If you have any further questions or uncertainties regarding our work, please reach out to us. We would be more than happy to discuss them with you.
Best,
The Authors
---
Rebuttal Comment 1.2:
Title: Thanks for the rebuttal!
Comment: First, I thank the authors for the rebuttal.
For Q1, my concern is resolved; I apologize for my mistake. However, I still suggest the authors *use visually distinguishable notations* for different symbols. It now seems that the method needs to know **the total number of P data in the unlabeled set**, which is a strong assumption. The authors should state this clearly in the paper.
For Q2, it has partly addressed my concern. From my point of view, the biggest difference is the NN. However, the output is biased (as I explain next).
For Q3, my concern is resolved.
For Q4, now we reach a consensus that the output is **biased**. However, the authors didn't discuss it explicitly in the paper. I think it is a big problem in the manuscript because the authors should rigorously point it out in the paper and then discuss the estimation error or assumptions. In the rebuttal, the authors say that they need several assumptions to reduce the bias. I did some derivations:
$$
p(s=1|x,y=1)=\frac{p(s=1,y=1|x)}{p(y=1|x)}=\frac{p(s=1|x)}{p(y=1|x)}.
$$
It means that the equation holds only when the confidence of the P data is very high and reaches 1. However, I do not think all the P data have an almost-one confidence. I think a more suitable method is to select high-confidence examples and then train the NN.
Besides, I also have a question about the last equation in line 159. I did some derivations and got $E_{p(x|y=1)}e(x)=\frac{n_p}{N_p}$, which indicates that $\sum_{j=1}^{N_p}e(x_j)=n_p$, but I cannot obtain the equation in line 159. Could the authors elaborate on it?
Overall, since part of my questions have been resolved, I will increase my score by 1.
---
Reply to Comment 1.2.1:
Title: Thanks for the reply!
Comment: *Q1:* **It needs to know the total number of P data in the unlabeled set, which is a strong assumption.**
*A1:* In fact, this is a general assumption in cost-sensitive PU learning, such as nnPU; it was omitted due to space constraints. For details, see "Positive-unlabeled learning with non-negative risk estimator" (NeurIPS 2017). There is also a large body of work on estimating $\pi$, e.g., "Learning from corrupted binary labels via class-probability estimation" (ICML 2015), "Mixture proportion estimation via kernel embedding of distributions" (ICML 2016), "Estimating the class prior and posterior from noisy positives and unlabeled data" (NIPS 2016), and "Class-prior estimation for learning from positive and unlabeled data" (Machine Learning, 2017).
*Q2:* **The biggest difference is a NN. However, the output is biased.**
*A2:* Indeed, our biggest contribution is not only the use of an NN, but also the use of regularization techniques coupled with cost-sensitive PU learning to alleviate the general underestimation of propensity scores (PS). For details, see A4.
*Q4-1:* **The output is biased. The authors should rigorously point it out in the paper and then discuss the estimation error or assumptions.**
*A4-1:* Thank you very much for pointing this out. Due to space limitations, we did not rigorously state in the paper that the PS estimates may be biased. We will add this to the final version and state it rigorously. Another reason it was not stated is that [3] does not explicitly state this problem either. The problem can be understood from a simple point of view: unbiasedness requires $\sum_{j=1}^{n}e(x_j)=n_p$ with $e(x_j^N)=0$, but in practice $\hat{e}(x_j^N)>0$, so part of the mass is apportioned away from the positives and $\hat{e}(x_j^P)$ must be underestimated.
"Recovering the Propensity Score from Biased Positive Unlabeled Data" (AAAI 2022) also points out that the PS estimated by [3] is not identifiable. So in addition to the NN, the other major contribution we make in this paper is to use regularization techniques coupled with cost-sensitive PU learning to alleviate the general underestimation of the PS, rather than only using an NN to estimate the PS.
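The mass-apportioning argument above can be made concrete with a toy numerical example (the numbers below are invented purely for illustration): if an estimator leaks probability mass onto negatives while still satisfying $\sum \hat{e} = n_p$, the mass on the positives must shrink.

```python
import numpy as np

e_pos = np.array([0.8, 0.6, 0.4])   # true PS of three positives; e(x^N) = 0
n_p = e_pos.sum()                    # expected number of labeled samples

# an estimator that puts small positive mass on two negatives, but is still
# constrained to satisfy sum(e_hat) = n_p overall
e_hat_neg = np.full(2, 0.1)
e_hat_pos = e_pos * (n_p - e_hat_neg.sum()) / n_p

print(e_hat_pos)                     # every positive's PS is underestimated
```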
*Q4-2:* **I do not think all the P data have an almost-one confidence. I think a more suitable method is to select high-confidence examples and then train the NN.**
*A4-2:* Thank you for the advice. In fact, "select high-confidence examples and then train the NN" is another family of methods in PU learning, such as Self-PU. We added experiments to examine this question; they show that our method outperforms Self-PU on biased annotation sets.
Supplemental experiments on MNIST. The settings are the same as those in Table 1 of the supplementary materials.
|Method|Distribution|ACC|Prec|Rec|F1|AUC|AP|
|-|-|-|-|-|-|-|-|
|LRe | [.65,0,.15,0,.1,0,.07,0,.03,0] | 86.19(0.75) | 92.94(0.64) | 77.89(1.38) | 84.75(0.93) | 88.06(0.93) | 88.72(1.09)|
| nnPUe | .. | 92.45 (1.61) | 90.45 (2.26) | 94.73 (1.24) | 92.53 (1.55) | 92.48 (1.60) | 88.29 (2.43)|
|nnPUe without normalization | .. | 90.95(1.61) | 88.18(2.40) | 94.38(2.74) | 91.13(1.56) | 91.00(1.61) | 85.98(2.25) |
|Self-PU | .. | 90.08(0.47) | 90.08 (0.47) | 89.35 (1.17) | 90.70 (1.73) | 90.00 (0.53) | 85.61 (0.69)|
LRe: Logistic regression estimation of PS for PU learning.
Actually, this is easy to explain: if the covariate $x$ almost determines the label $y$, then all the P data do have almost-one confidence. If this condition is not met, PN classification itself performs poorly, and PN classification (all positives labeled, all remaining samples treated as negatives) is the upper bound for PU classification, so PU classification will certainly be worse. If PN cannot be learned well, PU cannot be learned well either. Therefore, the problems addressed by PU learning are those on which PN classification works well; otherwise the problem discussed may be meaningless. Our method only requires high confidence of the P data, which matches the actual application scenarios of PU learning. | Summary: This paper considers the PU learning problem. Existing cost-sensitive methods often rely on the strong assumption that examples with
an observed positive label were selected entirely at random. This work relaxes that assumption and proposes a novel unbiased method, PUe, based on causal theory.
Strengths: 1. The motivation to relax the Selected Completely At Random assumption is practical and interesting.
2. Generalization theory to guarantee the effectiveness.
3. Excellent experiment performance to guarantee the effectiveness of the algorithm.
Weaknesses: I am very confused about how to estimate the prior $\pi$.
Please conduct experiments with the wild OOD detection method "Training OOD Detectors in their Natural Habitats".
I have not understood where you use causal theory. Could you point out the causal theory you use and clarify what causal assumptions your theorems rely on?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *Q1:* **I am very confused on how to estimate the prior $\pi$.**
*A1:* Thanks. There is a lot of work on estimating $\pi$, e.g., "Learning from corrupted binary labels via class-probability estimation" (ICML 2015), "Mixture proportion estimation via kernel embedding of distributions" (ICML 2016), "Estimating the class prior and posterior from noisy positives and unlabeled data" (NIPS 2016), and "Class-prior estimation for learning from positive and unlabeled data" (Machine Learning, 2017).
Common methods for estimating $\pi$ include kernel methods, kernel logistic regression, mixture proportion estimation, and univariate transforms.
Estimating $\pi$ is not our main contribution. In papers on PU learning loss functions, $\pi$ is usually assumed to be known. Our main contributions are four points. First, we relax the Local Certainty assumption and propose a simple method to estimate the propensity score under the SAR assumption, applying it to PU learning to improve the algorithm's performance. Second, we use deep learning rather than a linear model to estimate the propensity score (PS), which outperforms logistic regression. Third, our PS estimation method can be extended to the case with negative classes (e.g., PUbNe). Fourth, our new algorithm can be simply coupled with most cost-sensitive algorithms to improve performance, as shown in the paper and the supplementary experiments. We will include more analysis and results in the final version.
*Q2:* **Please conduct experiments with wild OOD detection method "Training OOD Detectors in their Natural Habitats".**
*A2:* Thank you very much for your question. Wild OOD detection can also be regarded as a PU problem: $P_{wild}=\pi P_{out}+(1-\pi)P_{in}$.
We separate in-distribution (ID) data and OOD data from the input data using a small number of labeled samples and a large number of wild samples. We can regard the ID data as the P class, the OOD data as the N class, the wild data as the unlabeled data, and the labeled ID data as the labeled data.
Wild OOD detection can therefore be tested and compared with our algorithm. However, the common datasets in previous PU work do not include this OOD task, so we did not initially consider this experiment, and due to time constraints and the additional experiments we did not complete the code refactoring and simulations. We will add this experimental comparison in the final paper to further explore the performance of our algorithm.
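The ID-as-positive correspondence described in A2 could be sketched as follows (the function and variable names are our own, purely illustrative): labeled ID data become the labeled positives, and the wild mixture becomes the unlabeled set.

```python
import numpy as np

def wild_ood_as_pu(x_id_labeled, x_wild):
    """Recast wild OOD detection as a PU dataset.

    Labeled in-distribution (ID) samples play the role of labeled
    positives (s = 1); the wild mixture P_wild = pi*P_out + (1-pi)*P_in
    plays the role of the unlabeled set (s = 0).
    """
    x = np.concatenate([x_id_labeled, x_wild])
    s = np.concatenate([np.ones(len(x_id_labeled), dtype=int),
                        np.zeros(len(x_wild), dtype=int)])
    return x, s
```

Any PU learner can then be run on `(x, s)`, with the learned positive class acting as the ID detector.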
*Q3:* **Could you point out the causal theory you have used and clarify what causal assumptions you have used in you theorems?**
*A3:* Thanks for your question. There are four basic assumptions in causal inference. First, the assignment mechanism must be probabilistic: $0 < P(s=1|x_i, y_i=1) \leq 1$ for positive samples (this can be extended to negative classes, see PUbNe); if the selection probability were 0, the sample could not be accurately determined without additional assumptions. Second, SUTVA: the potential outcome (label value) of any individual does not change depending on whether other individuals are treated. Third, consistency: there is no noise. Fourth, the core assumption on which our work relies: the assignment mechanism must be unconfounded, requiring that all assignment probabilities $P(s_i=1|x_i, y_i=1)$ are free from dependence on the potential outcomes; that is, the allocation mechanism is independent of the potential outcomes.
We further assume that this allocation mechanism depends entirely on the covariates that can be observed in the sample, i.e. $e(x_i)$. Usually, PU learning studies the case $e(x_i)=c$. Here we consider the case where $e(x_i)$ is a non-constant function, that is, the SAR assumption (which does not conform to the SCAR assumption). Under the unconfoundedness assumption, by the theory of causal inference we know
$s_i \perp x_i \mid e(x_i)$, or, equivalently, $e(x_i) = P(s_i=1|x_i, e(x_i), y_i=1) = P(s_i=1|e(x_i), y_i=1)$, and $s_i \perp y_i(0), y_i(1) \mid e(x_i)$, where $y_i(0), y_i(1)$ are the potential outcomes; in PU learning $y_i(0) = y_i(1) = 1$, i.e., the potential outcome is positive regardless of whether the positive sample is selected into the annotation set.
We are actually applying causal reasoning in reverse: we first complete the causal analysis of the treatment (i.e., selection) and the label itself, and then use this known causal analysis, combining the selected and unselected samples, to determine the distribution of positive samples.
Determining the causality: in the PU case, receiving treatment means being selected for labeling, and the outcome is whether the sample itself is positive. Whether a sample is positive is determined when the sample is generated; the act of selecting it for labeling is executed only after the sample is generated.
In causal inference, what happens after the outcome is produced has no effect on the outcome; i.e., the causal effect of selection on the label value is 0. In essence, the individual causal effect is also 0, which ensures that we can estimate the positive probability of samples from both the selected and unselected samples.
In causal inference, we can regard the samples in each block as generated by a completely randomized experiment and estimate the causal effect within each block separately. Given the propensity score, the sample weights can also be adjusted by propensity-score weighting to estimate causal effects. The individual causal effect is 0, and we know the covariate distributions of the labeled and unlabeled samples, so we can estimate the positive probability using the estimated propensity score. Simply put, we want to increase the weight of samples with a low labeling probability and decrease the weight of samples with a high labeling probability; the particular adjustment we use is based on inverse probability weighting in causal inference. | Summary: This paper considers the problem that existing cost-sensitive-based methods often rely on strong assumptions that examples with an observed positive label were selected entirely at random in Positive-Unlabeled (PU) learning. The authors propose a PU learning enhancement (PUe) algorithm based on causal inference theory, which employs normalized propensity scores and normalized inverse probability weighting (NIPW) techniques to reconstruct the loss function, thus obtaining a consistent, unbiased estimate of the classifier and enhancing the model’s performance. Moreover, they investigate and propose a method for estimating propensity scores in deep learning using regularization techniques when the labeling mechanism is unknown. Experiments on three benchmark datasets demonstrate the proposed PUe algorithm significantly improves the accuracy of classifiers on non-uniform label distribution datasets compared to advanced cost-sensitive PU methods.
Strengths: 1. One strength of this paper is its focus on addressing the prevalent issue of bias in the labeled set, which often does not conform to the Selected Completely At Random (SCAR) assumption. This assumption posits that the observed labeled examples are a random subset of the complete set of positive examples. In many real-world scenarios, this assumption does not hold true, leading to biased learning outcomes.
Weaknesses: 1. This paper contains several spelling errors, which can potentially hinder the reader's understanding and overall perception of the work. For instance, on line 131, the term "oftreament" appears to be a typographical error where two words are mistakenly joined. Similarly, on line 149, "porpensity" is likely a misspelling of "propensity," and on line 189, "Apparemtly" should be corrected to "Apparently." Lastly, on line 220, "pi" might be a typo or a symbol that is not clearly defined in the context. These errors suggest a lack of thorough proofreading and can detract from the overall quality and credibility of the paper. It is highly recommended that the authors conduct a careful review of the manuscript for spelling, grammar, and typographical errors before final submission.
2. A significant weakness of this paper lies in its lack of clear explanation and connection between the problem it aims to solve - the bias in the labeled set not conforming to the Selected Completely At Random (SCAR) assumption - and the causal inference method it employs to address this issue. Specifically, lines 123 to 145 appear to attempt to elucidate this connection, but the explanation falls short due to the lack of clear definitions and elaboration on key concepts. This lack of clarity can make it difficult for readers to understand the rationale behind the proposed method and its relevance to the problem.
3. A notable weakness in the experimental section of the paper is the presentation of results in Tables 1 and 2. The current format does not clearly highlight the best-performing algorithm, making it difficult for readers to quickly and intuitively discern which method is superior and by what margin.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: Could the authors provide a more detailed explanation of how the causal inference method addresses the issue of bias in the labeled set not conforming to the SCAR assumption?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 1 poor
Contribution: 2 fair
Limitations: A significant limitation of this paper is its overall clarity and readability.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *Q1:* **This paper contains several spelling errors.**
*A1:* Thank you for your advice. We will carefully check spelling, grammar and typographical errors, thoroughly proofread the paper, improve the overall quality and credibility of the paper, and carefully revise it in the final version. Lines 131, 149, 189 do have errors. In line 220, "pi" should actually be "$\pi$".
*Q2:* **Could the authors provide a more detailed explanation of how the causal inference method addresses the issue of bias in the labeled set not conforming to the SCAR assumption?**
*A2:* Thanks. There are four basic assumptions in causal inference. First, the assignment mechanism must be probabilistic: $0 < P(s=1|x_i, y_i=1) \leq 1$ for positive samples (this can be extended to negative classes, see PUbNe); if the selection probability were 0, the sample could not be accurately determined without additional assumptions. Second, SUTVA: the potential outcome (label value) of any individual does not change depending on whether other individuals are treated. Third, consistency: there is no noise. Fourth, the core assumption on which our work relies: the assignment mechanism must be unconfounded, requiring that all assignment probabilities $P(s_i=1|x_i, y_i=1)$ are free from dependence on the potential outcomes; that is, the allocation mechanism is independent of the potential outcomes.
We further assume that this allocation mechanism depends entirely on the covariates that can be observed in the sample, i.e. $e(x_i)$. Usually, PU learning studies the case $e(x_i)=c$. Here we consider the case where $e(x_i)$ is a non-constant function, that is, the SAR assumption (which does not conform to the SCAR assumption). Under the unconfoundedness assumption, by the theory of causal inference we know
$s_i \perp x_i \mid e(x_i)$, or, equivalently, $e(x_i) = P(s_i=1|x_i, e(x_i), y_i=1) = P(s_i=1|e(x_i), y_i=1)$, and $s_i \perp y_i(0), y_i(1) \mid e(x_i)$, where $y_i(0), y_i(1)$ are the potential outcomes; in PU learning $y_i(0) = y_i(1) = 1$, i.e., the potential outcome is positive regardless of whether the positive sample is selected into the annotation set.
We are actually applying causal reasoning in reverse: we first complete the causal analysis of the treatment (i.e., selection) and the label itself, and then use this known causal analysis, combining the selected and unselected samples, to determine the distribution of positive samples.
Determining the causality: in the PU case, receiving treatment means being selected for labeling, and the outcome is whether the sample itself is positive. Whether a sample is positive is determined when the sample is generated; the act of selecting it for labeling is executed only after the sample is generated.
In causal inference, what happens after the outcome is produced has no effect on the outcome; i.e., the causal effect of selection on the label value is 0. In essence, the individual causal effect is also 0, which ensures that we can estimate the positive probability of samples from both the selected and unselected samples.
In causal inference, we can regard the samples in each block as generated by a completely randomized experiment and estimate the causal effect within each block separately. Given the propensity score, the sample weights can also be adjusted by propensity-score weighting to estimate causal effects. The individual causal effect is 0, and we know the covariate distributions of the labeled and unlabeled samples, so we can estimate the positive probability using the estimated propensity score. Simply put, we want to increase the weight of samples with a low labeling probability and decrease the weight of samples with a high labeling probability; the particular adjustment we use is based on inverse probability weighting in causal inference.
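The reweighting idea above can be sketched minimally as follows (our own illustrative code, not the paper's implementation): each labeled sample is weighted by the inverse of its estimated propensity score and the weights are then normalized, so low-propensity labeled samples are up-weighted.

```python
import numpy as np

def nipw_weights(e_hat):
    """Normalized inverse-probability weights for the labeled samples.

    e_hat: estimated propensity scores e(x_i) of the labeled samples.
    Returns weights summing to 1: samples with a low labeling probability
    are up-weighted, samples with a high labeling probability down-weighted.
    """
    inv = 1.0 / np.asarray(e_hat, dtype=float)
    return inv / inv.sum()
```

A cost-sensitive PU loss would then multiply each labeled sample's loss term by its weight (rescaled appropriately, e.g. by the number of positives).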
According to Rubin's formulas for Weighting Estimators and Blocking Estimators:
$\hat{\tau}^{ht}= \frac{1}{N}\sum_{i=1}^{N} \frac{s_{i} Y_{i}^{P}}{e(x_{i}^{P})} - \frac{1}{N} \sum_{i=1}^{N} \frac{(1-s_i) Y_i^{P}}{1-e(x_i^{P})}$
$\hat{\tau}^{dif}(j)=\frac{\sum_{i:B_{i}(j)=1}s_i {Y_{i}}^{P}}{\sum_{i:B_{i}(j)=1}s_i}- \frac{\sum_{i:B_{i}(j)=1}(1-s_i) {Y_{i}}^{P}}{\sum_{i:B_{i}(j)=1}(1-s_i)}$
Only positive samples are labeled, so $Y^{P}_{i}=1$. Because the causal effect is 0, we have the following equation:
$\frac{1}{N_{p_j}}\sum_{i=1}^{N_{p_j}} \frac{s_{i} Y^{P}_{i}}{e(x_i^{P})} = \frac{1}{N_{p_j}}\sum_{i=1}^{N_{p_j}} \frac{(1-s_i) Y_i^{P}}{1-e(x_i^{P})}$,
where $N_{p_j}$ is the number of positive samples in the $j$th block.
The $j$th block contains the positive samples with a propensity score between $b_j$ and $b_j+\epsilon$, where
$b_j \in [0,1)$, $\epsilon > 0$, and $b_j+\epsilon \leq 1$; $s_i=1$ means sample $i$ is selected into the labeled set.
So we obtain $T_j= \sum_{i=1}^{n_{p_j}} \frac{1}{e(x_i^{L})}=\sum_{i=1}^{N_{p_j}-n_{p_j}} \frac{1}{1-e(x^{'}_{i})}$,
where ${n_{p_j}}={n_{L_j}}$ is the number of labeled samples in the $j$th block and the $x^{'}_{i}$ are the unlabeled positive samples.
$(\frac{b_j}{b_j+\epsilon})N_{p_j} \leq \frac{n_{p_j}}{b_j+\epsilon} < T_j \leq \frac{n_{p_j}}{b_j} < (\frac{b_j+\epsilon}{b_j})N_{p_j}$.
Letting $\epsilon \to 0$, we get $T_j \to N_{p_j}$. So $\sum_j T_j = \sum_{i=1}^{n_{L}} \frac{1}{e(x^{L}_{i})} = N_p$, which is used in the unbiasedness proof in the paper.
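The identity $\sum_{i=1}^{n_L} 1/e(x_i^L) \approx N_p$ can be sanity-checked with a quick Monte-Carlo simulation (the propensity scores below are synthetic, chosen only for illustration): when each positive is labeled with probability $e(x_i)$, the inverse-propensity sum over the labeled set estimates $N_p$.

```python
import numpy as np

rng = np.random.default_rng(1)
N_p = 10_000                              # total number of positives
e = rng.uniform(0.1, 0.9, N_p)            # propensity score of each positive
s = rng.random(N_p) < e                   # Bernoulli selection into labeled set

# Horvitz-Thompson-style sum over the labeled samples recovers N_p on average:
# E[sum_i s_i / e_i] = sum_i e_i / e_i = N_p
ht = (1.0 / e[s]).sum()
print(ht)
```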
*Q3:* **A notable weakness in the experimental section of the paper is the presentation of results in Tables 1 and 2. The current format does not clearly highlight the best-performing algorithm.**
*A3:* Thanks for your advice; we will mark the best results, clearly highlighting the best-performing algorithms so that readers can quickly identify them. We will refine the presentation of the results in the final version of the paper.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer 7yJd:
We sincerely thank you for the review and comments.
We have provided corresponding responses and results, including a more detailed explanation of how the causal inference method addresses the issue, which we've tried our best to cover your concerns.
Please let us know whether your concerns have been well addressed. We would like to further discuss with you if you still have any unclear parts of our work.
Best,
The Authors | Summary: The paper introduces an algorithm for Positive-Unlabeled (PU) learning that tackles the problem of biased labeled data. By utilizing causal inference theory, the algorithm utilizes normalized propensity scores and inverse probability weighting to reconstruct the loss function and achieve an unbiased estimate of the classifier. Compared to existing cost-sensitive PU methods, the proposed PUe algorithm demonstrates significant improvements in classifier accuracy, particularly on datasets with non-uniform label distributions. Additionally, the paper proposes a method for estimating propensity scores in deep learning models when the labeling mechanism is unknown. Overall, the PUe framework effectively addresses selection bias, enhances PU learning, and improves model performance.
-------------------------------
After rebuttal: I would like to increase my rate a little bit. However, I think the novelty part still needs more clarification.
After the author-review discussion: I could increase my rate a little bit for the thorough rebuttal provided by the authors. However, I think the rebuttal provides a lot of information that may not be easily included in the paper. I would like to see a plan on how to incorporate the reviews/rebuttals into the paper.
During the final discussion: The plan has given me more confidence in how the paper can be if we accept it.
Strengths: Importance of the studied problem (biased positive data): The problem of studying biased positive data is highly significant and practical. In many real-world scenarios, the selection of positive samples is often biased, while existing methods typically assume that the selection of positive samples is completely random. The proposed PUe algorithm in this paper addresses this issue and effectively handles biased labeled data, filling a research gap in the field of PU learning.
Well-written: The paper is well-written, with clear descriptions of the problem background, method principles, and experimental design and results. The authors introduce causal inference theory and utilize techniques such as normalized propensity scores and inverse probability weighting to provide a viable solution, which is presented in a clear and concise manner.
Experimental results demonstrate the effectiveness of the method: The paper provides experimental evidence of the proposed PUe algorithm's effectiveness.
Weaknesses: Lack of novelty: One of the limitations of the paper is that it may lack novelty. Similar work, such as the paper "Recovering the Propensity Score from Biased Positive Unlabeled Data" presented at AAAI 2022, may have explored similar ideas and methods earlier. It is important for a research contribution to demonstrate novelty and differentiate itself from existing approaches in the field.
Limited causal inference aspect: While the paper is motivated by causal inference, it falls short in incorporating substantial causal elements. The approach of reweighting samples based on sampling probabilities is straightforward and may not fully capture the complexity of causal relationships. To strengthen the causal inference aspect, the paper could have explored additional methods or frameworks that explicitly address causal assumptions and mechanisms.
Lack of significant contribution: The primary contribution of the paper should lie in the proposed method for estimating propensity scores. However, the approach of using model predictions as weights for propensity scores is not fundamentally different from classical self-supervised methods. Although a regularizer is introduced, it may not provide a meaningful improvement. Additionally, the process of normalizing the estimated propensity scores could have been done separately after learning the scores, making the proposed regularizer less essential or impactful.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Comparison with "Recovering the Propensity Score from Biased Positive Unlabeled Data" presented at AAAI 2022: A direct comparison between the paper and the mentioned work would require a thorough analysis of both papers.
Significance of causal inference in the context of the proposed work: Although the paper mentions causal inference as a motivation, it is valid to question the significance of causal inference in this work. If the process of generating biased data is explicitly defined and the proposed loss function can be straightforwardly derived from it, the direct application of causal inference may not be evident. It is important to consider whether the paper effectively incorporates causal inference methods or frameworks that address causal assumptions and mechanisms beyond straightforward reweighting.
Comparison with self-supervised methods like selfPU: Evaluating the performance of the proposed approach compared to self-supervised methods like selfPU would require a detailed analysis and comparison of the methodologies and experimental results from both approaches.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *Q1:* **Comparison with paper in AAAI 2022.**
*A1:* Thanks for raising this concern. The AAAI article introduces and analyzes PS estimation separately in the Local Certainty and Probabilistic Gap scenarios, where the PS is identifiable. There are four points regarding the novelty and contribution of our paper. First, the Local Certainty assumption is relaxed and the algorithm's performance is improved. Second, we use deep learning rather than a linear model to estimate propensity scores (PS), outperforming logistic regression. Third, our PS estimation method can be extended to the case with negative classes (e.g., PUbNe). Fourth, our new algorithm can be simply coupled with most cost-sensitive algorithms to improve performance, as shown in the paper. We will include more analysis and results in the final version.
1. Our paper relaxes the Local Certainty assumption and does not require the relationship between the observed features and the true class to be a deterministic function. When $x$ has a large probabilistic gap to $y$ and positive instances that resemble negative instances are less likely to be labeled, our algorithm still holds, i.e. $|P(y=1|x) - P(y=0|x)| > 0.8$ and $P(y=1|x^L) > P(y=0|x^L)$, where $L$ denotes the labeled samples.
When the Local Certainty method of the AAAI paper is used to estimate the PS, the restriction $\sum \hat{e} = n_p$ together with $\hat{e}(x_i^N) > 0$ on the negative samples results in an underestimated PS on the labeled samples. Normalization improves the estimate.
Consider only the labeled samples. We have $0.9 < P(y=1|x_i^L) \leq 1$, so clearly $0.9\,e(x_i^L) < \hat{e}(x_i^L) \leq e(x_i^L)$. That is, our estimated PS for a labeled sample is smaller than the true PS. Moreover, the estimated PS of a negative sample is greater than 0, i.e. $\hat{e}(x_i^N) > 0$, while by the PS definition $\sum \hat{e}_i = n_p$. In the AAAI Local Certainty scenario, the negative samples therefore absorb part of the probability mass, and the PS of the positive samples is underestimated. This is why we use normalization to improve the accuracy of our classification algorithm.
Let $A = \frac{1/e(x_i^L)}{\sum_{j} 1/e(x_j^L)}$ denote the normalized weight under the true PS. Then
$0.9A < \frac{1/\hat{e}(x_i^L)}{\sum_j 1/\hat{e}(x_j^L)} < \frac{10A}{9}$,
which shows that our normalized estimate is relatively stable.
2. In the Probabilistic Gap scenario, we do not restrict the PS to be linear; deep learning is used for the estimation. Regularization prevents overfitting, and normalization alleviates the underestimation of the PS. We added experiments on MNIST using logistic regression (LR) to estimate the PS, showing that LR is inferior to deep learning.
|Method|Distribution|ACC|Prec|Rec|F1|AUC|AP|
|-|-|-|-|-|-|-|-|
|LR | [.65,0,.15,0,.1,0,.07,0,.03,0] | 86.19(0.75) | 92.94(0.64) | 77.89(1.38) | 84.75(0.93) | 88.06(0.93) | 88.72(1.09)|
| nnPUe | .. | 92.45 (1.61) | 90.45 (2.26) | 94.73 (1.24) | 92.53 (1.55) | 92.48 (1.60) | 88.29 (2.43)|
|nnPUe w/o normalize | .. | 90.95(1.61) | 88.18(2.40) | 94.38(2.74) | 91.13(1.56) | 91.00(1.61) | 85.98(2.25) |
|Self-PU | .. | 90.08(0.47) | 90.08 (0.47) | 89.35 (1.17) | 90.70 (1.73) | 90.00 (0.53) | 85.61 (0.69)|
*Q2:* **Significance of causal inference in the context of the proposed work.**
*A2:* Thank you for the question. We apply a simple method because the causal structure analyzed here is simple; indeed, our main contributions lie elsewhere (see our answer to the first question). We will explore more sophisticated approaches in future work.
We use causal reasoning in reverse here. In general, one studies the average causal effect of a treatment. Here, we first complete the causal analysis of the treatment (i.e., selection) and the label itself, and then use this known causal structure, together with the selected and unselected samples, to determine the distribution of positive samples.
In this paper, receiving the treatment means that a sample is selected for labeling, and the outcome refers to whether the sample itself is positive. Without detection noise, whether a sample is positive is determined when the sample is generated, and selection happens afterwards. In causal inference, events occurring after the outcome is produced cannot affect it; hence the causal effect of selection on the label value is 0, and the individual causal effect is likewise 0.
Based on this simple causal structure, we reweight the samples to estimate each sample's probability of being positive.
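To illustrate the reweighting idea on synthetic data (a hedged sketch with hypothetical propensities, not the paper's algorithm), inverse-propensity weighting of the labeled positives recovers the true positive count in expectation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Synthetic population: 30% positives, each positive labeled with a
# sample-dependent propensity e_i (selection bias).
y = (rng.random(n) < 0.3).astype(float)
e = rng.uniform(0.2, 0.8, size=n)   # propensity P(s=1 | x, y=1)
s = (rng.random(n) < e) * y         # only positives can be labeled

# IPW estimate of the number of positives: sum of 1/e_i over labeled samples.
n_pos_hat = np.sum(s / e)

# Unbiasedness: the estimate is close to the true positive count.
assert abs(n_pos_hat - y.sum()) / y.sum() < 0.05
```

Each labeled positive counts as $1/e_i$ samples, compensating for the fact that positives with low propensity are under-represented in the labeled set.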
*Q3:* **The primary contribution of the paper should lie in the proposed method for estimating PS.**
*A3:* It is indeed meaningful for us to introduce a regularizer here. The regularization method addresses the overfitting problem, while the normalization method addresses the underestimation of the PS; they handle different problems.
The ablation experiment (in the table above) shows that performance degrades when the regularizer is removed: without it, the PS estimation overfits easily, with the estimated PS close to 1 on the labeled set and close to 0 on the unlabeled set. The regularization method smooths the estimated PS, while the normalization method corrects the underestimation of the PS. We also ran additional experiments demonstrating that the model's performance deteriorates under the nnPUe algorithm when regularization is removed.
*Q4:* **Comparison with selfPU.**
*A4:* According to the AAAI paper, the PS cannot be identified without making certain assumptions about the data; however, by the formula given in our answer to the first question, it can be estimated approximately. The self-supervised method does not account for this. The results in the table above show that our scheme outperforms Self-PU on biased-label datasets.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer nitz:
We sincerely thank you for the review and comments.
We have provided corresponding responses and results, in which we have tried our best to address your concerns. With the extra experiments conducted during the rebuttal period, the advantage of our approach becomes clearer. Some missing comparisons have been provided and will be added to the final paper.
Please let us know whether your concerns have been well addressed. We would like to further discuss with you if you still have any unclear parts of our work.
Best,
The Authors
---
Rebuttal Comment 1.2:
Title: Re
Comment: Thank you for your thorough response addressing my concerns. I appreciate your efforts to clarify several points. However, I would like to seek more clarification or details on novelty before I am confident to change my score.
In light of my concern about novelty, could you provide additional analysis or evidence that clearly distinguishes your method from existing approaches, highlighting the aspects that set your work apart? Specifically, while your response sheds light on the novelty of your contributions compared to the AAAI 2022 paper, I would appreciate more specific examples illustrating how your relaxation of the Local Certainty assumption, deep learning-based PS estimation, extension to negative classes, and coupling with cost-sensitive algorithms uniquely differentiate your approach from existing methods.
Regards,
---
Reply to Comment 1.2.1:
Comment: *Q1&2:* **Examples of relaxation of the Local Certainty assumption and deep learning-based PS estimation**
*A1&2:* We merge the two issues here since they are closely related. According to the AAAI 2022 paper: "In the Local Certainty scenario, we assume the relationship between the observed features and the true class is a deterministic function $f:X\rightarrow Y$, where X is the feature space and Y = {0, 1}, while allowing the propensity score to be an arbitrary probability function." This assumption is too strong. As discussed in our reply above, our method only requires $|P(y=1|x) - P(y=0|x)|>0.8$ and $P(y=1|x^{L}) > P(y=0|x^{L})$ for the estimated propensity score to be used well for sample classification (where L denotes a labeled sample, to distinguish it from the positive class).
An obvious example: on MNIST, the accuracy of existing algorithms exceeds 90% but cannot reach 100%. So the propensity score is not actually allowed to be an arbitrary function; in effect it is assumed to be linear. According to our experimental results, however, the probability function fitted with deep learning is better than the linear estimate. A 6-layer MLP is used for the MNIST dataset, a 13-layer CNN for CIFAR-10, and a ResNet-50 for the Alzheimer dataset.
Supplemental experiments on MNIST. The settings are the same as those in Table 1 of the supplementary materials.
| Method | Distribution | ACC (%) | Prec. (%) | Rec. (%) | F1 (%) | AUC (%) | AP (%) |
|-|-|-|-|-|-|-|-|
|LRe | [0.65,0,0.15,0,0.10,0,0.07,0,0.03,0] | 86.19(0.75) | 92.94(0.64) | 77.89(1.38) | 84.75(0.93) | 88.06(0.93) | 88.72(1.09)|
| nnPUe | [0.65,0,0.15,0,0.10,0,0.07,0,0.03,0] | 92.45 (1.61) | 90.45 (2.26) | 94.73 (1.24) | 92.53 (1.55) | 92.48 (1.60) | 88.29 (2.43)|
LRe: Logistic regression estimation of propensity scores for PU learning.
Here's our theoretical explanation of this example.
$e(x)=P(s=1|x,y=1)=\frac{P(x|s=1)P(s=1)}{P(y=1|x)P(x)}$
When $|P(y=1|x) - P(y=0|x)|>0.8$ and only labeled samples are considered, we have $0.9<P(y=1|{x_i}^{L}) \leq 1$. Obviously, $0.9\,e({x_{i}}^{L}) < \frac{P(s=1)P({x_i}^L|s=1)}{P({x_i}^L)}=\hat{e}({x_{i}}^{L})\leq e({x_i}^{L})$. This shows that our estimated propensity score for a labeled sample is smaller than the true propensity score. Moreover, the estimated propensity score of a negative sample is greater than 0, i.e., $\hat{e}({x_{i}}^{N})>0$, and by the propensity score definition we have $\sum \hat{e}_i = {n_p}$. In the AAAI Local Certainty scenario, the propensity scores of the negative samples also absorb part of the probability mass, so the propensity score of the positive samples is underestimated. Therefore, we use the normalization method to improve the accuracy of our classification algorithm: $\frac{0.9\frac{1}{e({x_{i}}^{L})}}{\sum_{j} \frac{1}{e({x_{j}}^{L})}} < \frac{\frac{1}{\hat{e}({x_{i}}^{L})}}{\sum_{j} \frac{1}{\hat{e}({x_{j}}^{L})}} <\frac{\frac{1}{e({x_{i}}^{L})}}{0.9\sum_{j} \frac{1}{e({x_{j}}^{L})}}$. This shows that our estimate is relatively stable.
---
Summary: This paper noted that existing Positive-Unlabeled (PU) learning methods assumed that positive samples are selected entirely at random, ignoring the prevalent selection bias in real-world PU problems. To overcome this limitation, the authors proposed a PU learning enhancement (PUe) algorithm based on causal inference theory. They adapted sample re-weighting methods, specifically inverse propensity weighting (IPW), to PU learning, assigning a weight to each positive sample. Based on the weights, they obtained a new unbiased PU risk estimator. They also proposed a method, compatible with deep networks, to estimate the weights when the labeling mechanism is unknown.
Strengths: * The problem is interesting and valuable. The adaptation of causal inference to PU learning is sound and original to the best of my knowledge.
* The resulting algorithm is unbiased, which is a very nice result (but I quickly checked the proofs and have some concerns).
* The algorithm is simple and easy to follow.
* The empirical studies well support the method's superiority with a clear margin.
Weaknesses: * I'm a bit puzzled by the fact that the risk estimator is unbiased.
$R_{PN}$ in current version should be $\hat{R_{PN}}$ for notation consistency, while $R_{PN}$ denotes the expected risk. Then if $\mathbb{E}[\hat{R_{PUe}}(g)] = R_{PN}(g)$, we say $\hat{R_{PUe}}$ is unbiased.
While in this paper, the authors proved $\mathbb{E}[\hat{R_{PUe}}(g)] = \hat{R_{PN}}$ (Eq.11), which looks a little weird.
* The examples given by the authors give readers a very clear idea that biased selection is likely to be present in the PU problem, but it seems to me that Figure 1 doesn't present a very clear picture of the setting discussed in the paper. Wouldn't it be more telling if all the blue dots were centered, like, in the upper left corner? But if so could ePU also have difficulty learning good classification boundaries, since this bias could also seriously mislead the estimation of $e$ (step 1 in Figure 2)?
* $s$ should be introduced in Eq.4.
* In line 113, "it does not depend on...", "it" here seems very vague.
* Tables 1 and 2 are too dense a presentation of the results, and I would suggest labeling the best results more obviously.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see the main review.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: *Q1:* **$R_{PN}$ should be $\hat{R_{PN}}$. Eq.11 looks a little weird.**
*A1:* Thank you for pointing out the issue. It is true that the expression forms here are not uniform. The empirical risk should be represented by $\hat{R_{PN}}(g|y)=\frac{1}{n}\sum_{i=1}^{n}[y_iL(g(x_i),+1)+(1-y_i)L(g(x_i),-1)]$ and the expected risk should be represented by $R_{PN}(g|y)=E_{P(x,y)}(L(g(x),y))=E_i(\hat{R_{PN}}(g|y))$. Obviously, the expectation of empirical risk equals the expectation of expected risk. Using the two-stage expectation $\mathbb{E}_i[\mathbb{E}_s]$, we can obtain that the risk estimation of our algorithm is unbiased.
$\mathbb{E}_i[\mathbb{E}_s(\hat{R_{PUe}}(g))]=R_{PN}(g|y)$. This does not affect the conclusion; we will add it to the final version of the paper.
*Q2:* **If all the blue dots were centered, could PUe learn good classification boundaries? Or is it possible that the bias can mislead the estimation of $e$ (Step 1 in Figure 2)?**
*A2:* Thank you for your question. The scenario you describe differs from the setting of our paper and contradicts the Probabilistic Assignment assumption in causal inference. If all the blue dots were concentrated, say, in the upper left corner, then without additional assumptions machine learning could not learn good classification boundaries, and the unlabeled positive classes could not be correctly classified. Our scenario requires that every positive sample be labeled with positive probability, even if small; that is, Probabilistic Assignment requires $0<P(s=1|x,y_i=1)<1$ for all samples.
The scenario you hypothesize, which we denote as partial positive, has a propensity score of 0 for some positives. For example, in odd-vs-even digit classification, never selecting the digit 8 for labeling is similar to your assumption. By the no-free-lunch principle, no single algorithm can classify 8 correctly on the same data under both labelings: if algorithm A correctly classifies 8 when the classes are [0, 2, 4, 6, 8] vs. [1, 3, 5, 7, 9], it cannot also correctly classify 8 when they are [0, 2, 4, 6] vs. [1, 3, 5, 7, 8, 9]. Since 8 is never labeled, it is impossible to determine whether 8 is positive or negative without additional assumptions. In our scenario, as long as the probability of 8 being labeled is positive, however small, the accuracy of the final algorithm improves, as shown by the results in the paper.
*Q3:* **$s$ should be introduced in Eq.4.**
*A3:* Sorry for the unclear statement. $s=1$ here means that the sample was selected into the annotation (labeled) set; $s=0$ indicates that it was not. We will clearly define this symbol in the final version of the paper.
*Q4:* **In line 113, "it does not depend on...", "it" here seems very vague.**
*A4:* Thank you for your correction. Here "it" refers to the labeling mechanism; this is indeed unclear, and we will refine it in the final version of the paper.
*Q5:* **Tables 1 and 2 are too dense a presentation of the results, and I would suggest labeling the best results more obviously.**
*A5:* Thanks for your advice. We will mark the best results in the final version of the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. You have addressed all my concerns. | null | null | null | null | null | null |
---
Title: Efficient Uncertainty Quantification and Reduction for Over-Parameterized Neural Networks
Paper Decision: Accept (poster)
Summary: This paper proposes an epistemic uncertainty assessment framework that comes with statistical coverage guarantees and low computational cost for over-parameterized neural networks. The approach removes procedural uncertainty using one auxiliary network. In addition, methods are provided to construct confidence intervals using a small number of retrained networks in different problem settings.
Strengths: The authors introduce a Procedural-Noise-Correcting (PNC) predictor, which uses one auxiliary network trained on the data but with all labels set to zero (instead of ±1). This mimics the variability coming from the training procedure, leveraging asymptotic results from NTK theory. The idea is simple and intuitive and appears to work on some toy experiments.
Weaknesses: Experiments are only provided on low-dimensional toy problems.
In addition, for the coverage studies, only 40 samples are used which results in relatively large binomial errors on the reported coverage properties. This issue isn't discussed at all. It would be better to both increase the number of samples and to report the results in a way that respects the uncertainty due to finite samples.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How wide is wide enough that the NTK theory used is approximately valid?
Have you looked into studying more realistic problems (even if still only simulated)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Some discussion should be provided on limitations. For example, how large does the NN need to be?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer for the careful reading and valuable feedback. We address the reviewer's concerns below.
Q1. (Toy problems) To fully evaluate our approaches, we conducted additional experiments on the real-world dataset. For additional experiments, please refer to our global response to all reviewers and our discussions there.
Q2. (Selection of repetition count) The 40 here represents 40 experimental repetitions: essentially, we regenerate the entire datasets 40 times and re-run our approaches on the new datasets to generate 40 confidence intervals. We also tried more repetitions, such as 80: the plot of the coverage rate and interval width against the number of experimental repetitions is shown in the attached PDF file (in our global response to all reviewers). As shown, 40 repetitions is sufficiently large to guarantee a robust evaluation of the coverage rate and interval width.
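For reference, the statistical fluctuation of a coverage estimate based on finitely many repetitions can be sketched with the binomial standard error (the numbers below are illustrative, not results from the paper):

```python
import math

def coverage_se(p_hat: float, n_reps: int) -> float:
    """Binomial standard error of an empirical coverage estimate."""
    return math.sqrt(p_hat * (1 - p_hat) / n_reps)

# With 40 repetitions at a nominal 95% level, one standard error is about
# 3.4 coverage points; with 400 repetitions it shrinks to about 1.1 points.
se_40 = coverage_se(0.95, 40)
se_400 = coverage_se(0.95, 400)
assert 0.03 < se_40 < 0.04
assert 0.010 < se_400 < 0.012
```

This quantifies how much of an observed coverage gap could be attributable to the finite number of repetitions alone.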
Q3. (Width in NTK) There are some rough lower bounds of the width in the literature so that the NTK holds approximately; See, e.g., [*], [**].
Essentially, they show that the width should be at least a polynomial function of the sample size as well as other constants intrinsic to the data. In the experiments, we choose a width of 32$\times$ the sample size (main paper) or 128$\times$ the sample size (Appendix) to reflect this dependence. Note that the bounds derived from theory are rough and typically too large to implement in practice, so we balance them against our actual computational budget.
Q4. (Experiments) For additional experiments on real-world datasets, please refer to our global response to all reviewers and our discussions there.
[*]: Arora, Sanjeev, et al. "On exact computation with an infinitely wide neural net." Advances in neural information processing systems 32 (2019).
[**]: Du, Simon, et al. "Gradient descent finds global minima of deep neural networks." International conference on machine learning. PMLR, 2019.
---
Rebuttal Comment 1.1:
Comment: I think you missed the point about the coverage estimates. Any finite number of repetitions used to estimate the coverage incurs a binomial error on the coverage estimate. In the paper you ignore this and label examples that are 1 sigma from the desired asymptotic value as undercoverage. I would suggest representing that better and doing more repetitions to reduce the uncertainty.
Nevertheless, based on the other replies, I've raised my score by one.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate the reviewer for increasing our score, and we apologize for not adequately addressing the issue of coverage estimation accuracy. We now see the reviewer's point. In the final version, we will 1) increase the number of experimental repetitions from 40 to at least the hundreds, and 2) report a confidence interval for the coverage using a binomial or sample-proportion confidence interval. Both of these will better represent the comparison results.
---
Summary: The paper focuses on uncertainty quantification, specifically on estimating the statistical range of predictions made by a deep neural network (DNN). This is achieved through the use of a novel DNN called Procedural Noise Correcting (PNC), which is capable of estimating the variability inherent in the training process, particularly the variability arising from different initializations of the DNN. In order to obtain a more accurate estimate of the idealized DNN, denoted as $h^*$, the authors employ the Neural Tangent Kernel (NTK) and train the PNC DNN. To construct the confidence interval for the PNC, they utilize either batching or bootstrap methods.
Strengths: In this paper, the authors introduce a remarkable deep neural network (DNN) named PNC, which effectively addresses uncertainty quantification. The presentation of the paper is well written, although there are a few missing notations. However, the supplementary materials provided by the authors greatly enhance the overall completeness of the paper. The problem tackled by the authors is intriguing, and to the best of my knowledge, I have not come across any other papers that address this particular issue using a similar approach.
Weaknesses: There are several instances of missing notation in the paper, which make it challenging to comprehend. I'm curious about the authors' decision to consider only the random initialization of network parameters when discussing procedural uncertainty. Why did they not take into account random batches or non-deterministic backpropagation? Algorithm 1 is also difficult for me to understand: if only $\bar{s}$ is predicted, how can $\hat{h}_n^*(x)$ be obtained? Is it necessary to apply the NTK? I did not see it explicitly mentioned in Algorithm 1.
Regarding the experimental protocol, I believe it could benefit from a more robust approach. It might be valuable to compare the proposed method to other approaches, such as:
[1] Blundell, Charles, et al. "Weight uncertainty in neural network." International conference on machine learning. PMLR, 2015.
[2] Maddox, W. J., Izmailov, P., Garipov, T., Vetrov, D. P., & Wilson, A. G. (2019). A simple baseline for bayesian uncertainty in deep learning. Advances in neural information processing systems, 32.
[3] He, Bobby, Balaji Lakshminarayanan, and Yee Whye Teh. "Bayesian deep ensembles via the neural tangent kernel." Advances in neural information processing systems 33 (2020): 1010-1022.
Additionally, I find it intriguing to include experiments on additional datasets such as MNIST, CIFAR-10, and CIFAR-100, not limited to regression tasks but also for classification purposes. It would provide valuable insights into how the proposed DNN performs in the presence of epistemic uncertainty. Why not evaluate its behavior by applying out-of-distribution (OOD) data?
The authors highlight that PNC involves minimal computation; however, they only apply their technique to relatively simple datasets. This raises questions about whether it truly qualifies as a low-budget technique.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see Weaknesses
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors' paper lacks emphasis on the societal impact of their work. Typically, I wouldn't expect authors in the field of Uncertainty Quantification to delve into this aspect. However, I do feel that the claims made in the paper are overly assertive. For instance, on line 29, the paper mentions general uncertainty quantification, yet it only focuses on regression tasks. Additionally, the claim of working with Overparametrized Neural Networks seems exaggerated, considering that the DNN employed in the study consists of just two fully connected layers. It would be advisable for the authors to exercise caution when making such claims.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer for the careful reading and valuable feedback. We address the reviewer's concerns below.
Q1. (NTK and algorithms) Most of the essential notations are introduced in the statement of Proposition 3.1 and at the beginning of Section 3.1, while we defer the details related to NTK in Appendix C due to the page limit. Regarding the missing notations, we would appreciate it if the reviewer could give more guidance.
NTK is the essential component of our theory, which provides the theoretical foundation to establish the statistical properties of all our algorithms, including justifying Algorithm 1. Therefore, we need the NTK theory to derive the statistical guarantee of Algorithm 1.
The major difference between our work and previous literature is that our UQ framework has simultaneous $\textit{statistical coverage guarantee}$ and $\textit{low computation cost}$. Some Bayesian approaches are fast-to-implement empirically but do not (or have not been shown to) enjoy the statistical coverage guarantee that ours does. The major theoretical difficulty in neural networks, compared with classical models such as linear regression, is that neural networks are non-convex and have many global minima, making standard statistical theory impossible to be applied. The recent NTK theory gives a statistical characterization of neural network training, which provides a possible foundation for theoretical UQ. With this, our theoretical framework can characterize the uncertainty from the random initialization, build connections between UQ for neural networks and UQ for kernel-based regression, and develop large-sample asymptotic theory, which takes the very first step in the UQ framework with statistical coverage guarantee, even only considering random initialization. It is more appealing to incorporate random batches or SGD in the framework, but then it becomes unclear how to obtain statistical coverage guarantees, which is left as our future research direction.
$\bar{s}$ is not a “prediction”; it is the average output of the initial network (without using data or training). $\bar{s}$ is used to generate the procedural-shifted labels in Step 2, which are then used to train the second network. $\hat{h}^*_{n}(x)$, the output of Algorithm 1, is the difference between the two neural networks from Steps 1 and 2, respectively.
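For intuition, the subtraction structure described here can be loosely sketched as follows. This is a simplified illustration, not the paper's exact Algorithm 1: the procedural-shifted labels are taken as all zeros, NTK width conditions are ignored, and `MLPRegressor` is a stand-in for the paper's networks.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(256, 2))
y = np.sin(X[:, 0]) + np.cos(X[:, 1])  # toy regression target

seed = 7  # shared seed so both networks start from the same initialization

# Step 1: train a network on the real labels.
net_main = MLPRegressor(hidden_layer_sizes=(256,), random_state=seed,
                        max_iter=500).fit(X, y)

# Step 2: train an auxiliary network, with the same initialization, on
# procedural-shifted labels (all zeros in this simplified sketch).
net_aux = MLPRegressor(hidden_layer_sizes=(256,), random_state=seed,
                       max_iter=500).fit(X, np.zeros(len(y)))

# PNC-style output: subtract the auxiliary network so the procedural
# noise contributed by the shared random initialization cancels.
def pnc_predict(X_new):
    return net_main.predict(X_new) - net_aux.predict(X_new)

preds = pnc_predict(X[:5])
assert preds.shape == (5,)
assert np.all(np.isfinite(preds))
```

The key design choice is that both trainings share one initialization, so the auxiliary network isolates the initialization-induced component that the subtraction removes.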
Q2. (Compare with Bayesian approaches) We cited these works and discussed them in Section 1: “While powerful, these approaches nonetheless possess inference error that could be hard to quantify, and ultimately finding rigorous guarantees on the performance of these approximate posteriors remains open to our best knowledge.” The major difference between our work and these works is that our UQ framework has a statistical coverage guarantee. Some Bayesian approaches are fast-to-implement empirically but do not (or have not been shown to) enjoy the statistical coverage guarantee that ours does. For the illustration in our experiments, DropoutUQ is probably the most cited (well-known) work in deep-learning-based UQ, and could be a representative of these Bayesian methods without statistical guarantees.
Q3. (Classification) NTK is the essential component of our theory, providing the theoretical foundation to establish the statistical properties of our algorithms. In previous literature, NTK is generally employed for mean-square-error (MSE) loss, e.g.,
Du, Simon, et al. "Gradient descent finds global minima of deep neural networks." International conference on machine learning. PMLR, 2019.
Lee, Jaehoon, et al. "Wide neural networks of any depth evolve as linear models under gradient descent." Advances in neural information processing systems 32 (2019).
as well as other references for NTK in our paper. The major technical obstacle for NTK in classification, compared with regression, is that the softmax operation in the network's final layer and the cross-entropy-type loss do not admit an explicit statistical characterization of neural network training; therefore, most NTK-based theories consider the MSE loss. Studying statistical guarantees for classification neural networks would be an interesting but different direction, which we recognize as future research.
Q4. (Computation) First of all, we have added more experiments on “simulated” real-world datasets to validate our performance; please refer to our global response to all reviewers and the discussion there. We also want to emphasize that our rigorous theory holds for any $m'\ge 2$ in PNC-enhanced batching and $R \ge 1$ in PNC-enhanced cheap bootstrap, meaning that our approaches can construct asymptotically exact-coverage confidence intervals using as few as two PNC procedures (four network trainings in total) without additional overhead. In terms of running time, this is as low as 4 times the standard network (point-prediction) training time.
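The batching construction referred to here can be sketched generically as a standard batching t-interval; the two batch estimates below are placeholder numbers, not the paper's results:

```python
import statistics
from scipy.stats import t

def batching_ci(batch_estimates, alpha=0.05):
    """Confidence interval from a handful of batch estimates:
    mean +/- t_{m-1, 1-alpha/2} * s / sqrt(m)."""
    m = len(batch_estimates)
    mean = statistics.fmean(batch_estimates)
    s = statistics.stdev(batch_estimates)  # sample std (ddof=1)
    half = t.ppf(1 - alpha / 2, df=m - 1) * s / m ** 0.5
    return mean - half, mean + half

# With m' = 2 batches the t quantile is large (about 12.71 at the 95% level),
# so the interval is wide, but it remains asymptotically exact.
lo, hi = batching_ci([1.02, 0.98])
```

The wide quantile at $m'=2$ is the price of using only two batches; the point of the PNC-enhanced version is that each batch estimate is cheap to produce.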
Q5. (Terminology) We understand the reviewer’s concern and would definitely remove or clarify the overclaims in the paper (e.g., “general” uncertainty quantification, etc). Nonetheless, we should point out that “Overparametrized” used in deep learning theory, by convention, means the width is sufficiently large, while the depth can be as small as 2, e.g.,
Li, Yuanzhi, Tengyu Ma, and Hongyang R. Zhang. "Learning over-parametrized two-layer neural networks beyond NTK." Conference on learning theory. PMLR, 2020.
Li, Yuanzhi, and Yingyu Liang. "Learning overparameterized neural networks via stochastic gradient descent on structured data." Advances in neural information processing systems 31 (2018).
NTK theory and our theory are applied to neural networks with any depth. In experiments, we use a two-layer neural network for illustration. So, this terminology appears fine to us, but we would clarify it in the paper as suggested by the reviewer.
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
I extend my gratitude for your response. After revisiting the paper alongside your answer, I find that the content has become notably clearer.
Regarding your responses to **points 1-4**, I find myself in agreement with the explanations provided by the authors.
As for **point 5**, I hold a differing perspective from what the authors refer to as a 'Deep' Neural network. While I may not entirely align with the terminology used, acknowledging the variation in nomenclature within the field, I am open to conceding this point.
In light of the clarifications and persuasive arguments presented, **I am pleased to convey my decision to revise my initial review.**
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate the reviewer's support and the increased score. We are also glad to hear that our response made the contents notably clearer. Regarding the terminology, we will clarify it in the final version to avoid overclaims in the paper, as kindly suggested by the reviewer.
---
Summary: The authors focus on the task of uncertainty quantification for neural networks.
They contribute (i) a procedure to remove procedural uncertainty, an uncertainty that arises due to
randomness in the training procedure, and (ii) an approach to cheaply construct confidence intervals with asymptotic coverage guarantees. The proposal is evaluated on two small synthetic experiments.
Strengths: - The proposed approach offers a cheap (compared to ensemble models) and principled approach to remove procedural uncertainty and construct confidence intervals.
- The paper is well written.
Weaknesses: The main weakness of the paper is its empirical evaluation, which consists of a single simple data set, a sum of sine functions, evaluated with small data sets (up to 1024) in very low dimensional spaces (up to eight dimensions). Any kind of scalability guarantee is not given/evaluated, which is concerning, especially since the theory assumes access to the full training data (l118).
A second synthetic function (sum of exponentials and squared covariates) is evaluated in the appendix, again without too many details.
### Minor
- Throughout the paper, the term _extremely_ is used repeatedly (extremely fast, extremely time-consuming, extremely few) instead of providing proper quantitative numbers (if only in the order of magnitudes, or relative to other approaches).
- Prop 3.4 $\pi_n$ is only defined in the appendix
- When submitting to NeurIPS, please follow the NeurIPS style guide, i.e., place your captions above tables, no vertical lines in tables, etc.
- The regularizing $\lambda$ is fixed to $\lambda \equiv 0.1^{10}$, i.e., essentially zero? Why keep it at all?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The evaluation is limited to a single function in a very low-dimensional setting. Can the authors provide a further experimental evaluation with respect to the scalability of the proposed approach?
- Wrt the paragraph around l174-l193. I am not sure I understand this. $\bar s \equiv 0.2$, or $\bar s \equiv 0$, gives us a constant target, that is independent of the input, i.e., $\hat \phi_{n, \theta^b}'(\cdot)$ should learn to be constant, giving us $\bar \phi_{n,\theta^b} \equiv 0.2$, i.e., step 2 of Algorithm 1 becomes irrelevant. Which part am I misunderstanding?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Discussion on limitations and broader impact are missing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer for the careful reading and valuable feedback. We address the reviewer's concerns below.
Q1. (“extremely”) Thanks for pointing this out. We agree that “extremely” should be clarified. In the revised version, we will avoid using “extremely” but provide more concrete numbers to elaborate our point. For instance, we will change,
“Given these computational bottleneck, we will utilize light-computation resampling alternatives, including batching and the so-called cheap bootstrap method, which allows valid confidence interval construction using extremely few model retrainings.”
to
“Given these computational bottleneck, we will utilize light-computation resampling alternatives, including batching and the so-called cheap bootstrap method, which allows valid confidence interval construction using as few as one additional model retraining.”
Q2. ($\pi_n$) We will add this definition to the main body.
Q3. (Format) We will revise the format of our tables accordingly.
Q4. (Regularization) Our theory applies to any regularization greater than zero, regardless of how small it is. We introduce it in our theory so that the eigenvalues of the regularized NTK Gram matrix are bounded away from zero and thus its inversion can be computed stably. In practice, the regularization is typically very small or simply zero, e.g., in
Lee, Jaehoon, et al. "Wide neural networks of any depth evolve as linear models under gradient descent." Advances in neural information processing systems 32 (2019).
We maintain a small but nonzero regularization in the experiment to match our theoretical results.
Q5. (Experiments) We understand this concern. We have carried out additional experiments on more sophisticated problems using “simulated” real-world data. Please refer to our global response to all reviewers.
Q6. (Constant label) Even if every training point has label 0, the output network is not a zero-constant network: the major difference between neural networks and classical models such as linear regression is that neural networks are non-convex and have many local/global minima. Starting from an initialization parameter $\theta^b$, gradient descent will find a global minimum that is very close to $\theta^b$ (but not $0$ in general, although $0$ is one of the global minima). This phenomenon has been characterized by NTK theory and lazy training, e.g.,
Du, Simon, et al. "Gradient descent finds global minima of deep neural networks." International conference on machine learning. PMLR, 2019.
Chizat, Lenaic, Edouard Oyallon, and Francis Bach. "On lazy training in differentiable programming." Advances in neural information processing systems 32 (2019).
Another way to see this is that in Proposition 3.1, if we plug in $\textbf{y}=0$ (zero-label on all training data), we will not get a zero-constant network. Instead, the output network depends on the initialization network. This phenomenon also highlights the need to consider random initialization as an important uncertainty component in the UQ task.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarifications. I keep my score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much. We appreciate the reviewer for your reading and reply. | Summary: In the paper, authors present a new approach to quantify and mitigate a specific aspect of epistemic uncertainty in model predictions. They identify and quantify "procedural variability," a type of epistemic uncertainty that arises from noise in the training process. Based on the Neural Tangent Kernel (NTK) theory, the authors introduce a "procedure-noise-correction" method. They offer theoretical support and experimental evidence to justify their approach.
Strengths: Generally, the majority of research papers propose methods that quantify the overall epistemic uncertainty, but only a few discuss the estimation and evaluation of its different components. The process of splitting epistemic uncertainty into different parts, while not a novel concept, remains significantly under-studied. Therefore, the problem addressed by this paper is promising and brings potential value for further studies.
The strengths of this paper are:
1) The authors built a mathematical framework for their proposed method.
2) The authors' approach to quantify and remove procedural variability is indeed novel and opens new perspectives for future works.
3) Combined with the PNC-Enhanced batching, the presented approach seems practical, and practitioners in the field may find it beneficial.
4) The supplementary material is verbose and resembles a textbook, providing a comprehensive guide to the details of derivations, proofs, and certain theoretical aspects like the NTK.
Weaknesses: There are some aspects of this paper I consider as weaknesses:
1. Narrative of the Paper: Despite its overall good organization and clarity, Section 3 in the main document is challenging to understand. Reorganizing this section could enhance the paper's readability. Some propositions could be moved to the Appendix, freeing up space for the experimental section (see Weakness 2). Adding illustrative elements, such as a schematic diagram of the dataset construction and overall architecture, could further improve the readers' understanding.
2. Experiments: The set of experiments presented appears overly simplified and unrepresentative. Even though the multi-layer perceptron (MLP) models with two layers and varying unit numbers technically qualify as over-parameterized for d=1,2,4,8, the example is overly simplistic. The only experiment uses a sinusoidal signal, and the y-label variance of 0.001 isn't explained or justified. The paper could be enriched by the inclusion of generally accepted benchmarks, such as regression benchmarks containing UCI datasets [1]. Moreover, it is clear from your results that an ensemble of 5 models substantially outperforms an ensemble of 2 models, and training these ensembles isn't overly computationally demanding considering the complexity of data and models. Hence, comparing with larger ensembles, say 50 or 100 members, could provide more insightful results.
3. The initial part of the paper seems to promote the enhancement of **uncertainty quantification** by disentangling epistemic uncertainty into different components, but the actual evaluation of this uncertainty is lacking. The experiment appears to improve the **optimization process** rather than the quantification of uncertainty. An experimental suggestion would be to select a measure for epistemic uncertainty and apply it to both in-distribution (validation split) and out-of-distribution data. Computing the uncertainty measure for each data point across independently trained models several times and comparing the resulting ROC-AUC with the one obtained through your method could demonstrate whether the elimination of procedural variability truly improves the estimates of epistemic uncertainty.
4. There is no code available, so I can not check the reproduction of the experiments.
[1] Gal, Y., and Ghahramani, Z. "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning." International Conference on Machine Learning. PMLR, 2016, pp. 1050-1059.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Can the method be somehow adapted for classification?
Edit: I would like to thank the authors for their answers during rebuttal period.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: As I can see, this paper has no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer for the careful reading and valuable feedback. We address the reviewer's concerns below.
Q1. (Experiments) For additional experiments on real-world datasets, please refer to our global response to all reviewers and our discussions there. We hope this alleviates your concern that our data set is overly simplistic (and insufficiently motivated). For larger ensembles, the emphasis of our paper is that our PNC predictor only requires training 2 networks. Therefore, a fair comparison with the same running time is against 2 networks in the deep ensemble approach, and our approach clearly outperforms it. Deep ensembles with more networks (e.g., 5 or 50) do not change our conclusion even if they perform similarly to ours, since they require much longer running times.
Q2. (Evaluation) Our work focuses on the quantification of $\textit{epistemic uncertainty}$, which refers to the errors coming from the inadequacy of the model or training data noises. This is different from $\textit{aleatoric uncertainty}$ (the randomness in the conditional data distribution). We provided a discussion on the difference between them in Appendix A. In particular, epistemic uncertainty is typically used on in-distribution data, while detecting out-of-distribution data generally requires aleatoric uncertainty or predictive uncertainty that is different from our study scope, even though this is certainly an interesting future direction for us to explore.
When speaking of the “improve the optimization process” experiment, our understanding is that the reviewer probably means Table 2. However, we also have Table 1 for evaluation of our epistemic uncertainty quantifier. We conduct the evaluation of epistemic uncertainty statistically via the task of constructing confidence intervals, which is our task at the beginning of Section 4 (Table 1). More precisely, we aim to construct an interval that can cover the ground-truth value $h^*(x)$ with high probability with respect to the training randomness. To evaluate the confidence interval performance, we repeat our experiments 40 times to obtain 40 confidence intervals and check their coverage rate. For 95% confidence level, the coverage rate of 40 confidence intervals should be around or larger than 95%. Our performance appears competitive in this task as well.
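The coverage check described in the previous paragraph can be sketched in a few lines. This is a toy illustration of our own, with a textbook normal-theory interval for a sample mean standing in for the paper's PNC-based intervals; the point is only the evaluation loop, i.e., regenerating the data 40 times and counting how often the interval covers the ground truth.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value, n, reps = 1.0, 100, 40
z = 1.959963984540054           # 97.5% standard-normal quantile

hits = 0
for _ in range(reps):
    # Regenerate an independent training set each repetition.
    sample = rng.normal(loc=true_value, scale=1.0, size=n)
    half = z * sample.std(ddof=1) / np.sqrt(n)
    lo, hi = sample.mean() - half, sample.mean() + half
    hits += (lo <= true_value <= hi)

coverage = hits / reps          # should be around the nominal 95% level
```

For a 95% confidence level, a well-calibrated procedure yields an empirical coverage near or above 0.95 over the 40 repetitions.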
Q3. (Code) We will definitely make our code publicly available. In the meantime, Section E.1 (Experimental Details) provides our experimental details and configurations.
Q4. (Classification) The theoretical developments needed for regression tasks and classification tasks appear different and not easily adaptable. NTK is the essential component of our theory, providing the theoretical foundation to establish the statistical properties of our algorithms. In previous literature, NTK is generally employed for mean-square-error (MSE) loss, such as
Du, Simon, et al. "Gradient descent finds global minima of deep neural networks." International conference on machine learning. PMLR, 2019.
Lee, Jaehoon, et al. "Wide neural networks of any depth evolve as linear models under gradient descent." Advances in neural information processing systems 32 (2019).
He, Bobby, Balaji Lakshminarayanan, and Yee Whye Teh. "Bayesian deep ensembles via the neural tangent kernel." Advances in neural information processing systems 33 (2020): 1010-1022.
as well as other references for NTK in our paper. The major technical discrepancy related to NTK between regression tasks and classification tasks is that the softmax operation in the network's final layer and the cross-entropy-type loss used in classification do not admit an explicit statistical characterization of neural network training. Therefore, most NTK-based theories consider MSE loss. It would be an interesting but different direction to study statistical guarantees for classification neural networks, which we recognize as a future research direction.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed answers!
I am quite confused with the answer to Q2.
In particular:
``epistemic uncertainty is typically used on in-distribution data, ... detecting out-of-distribution data generally requires aleatoric uncertainty''.
Can you please elaborate on what precisely you mean?
If aleatoric uncertainty is captured by $\pi_{Y \mid X}$, does not it mean that aleatoric uncertainty makes sense only for that $X$ which came from the in-distribution? Otherwise not clear what should be the distribution over $Y$ for out-of-distribution $X$.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer for your reading and reply. We apologize for not adequately explaining the comment on the out-of-distribution detection. Let us elaborate on this point a bit more. There are two different scenarios for detecting out-of-distribution data, as follows:
1) The goal is to check whether the joint distributions are identical, i.e., whether $(X_{test}, Y_{test})$ is from the same distribution as $(X_{train}, Y_{train})$. To this end, we can estimate the aleatoric uncertainty $\pi_{Y \mid X}$ and check whether $Y_{test}$ falls into the highest density region of $\pi_{Y \mid X_{test}}$ [1]. If the answer is no, then with high probability, $Y_{test}|X_{test}$ is out-of-distribution, and thus $(X_{test}, Y_{test})$ is also out-of-distribution. This approach considers the conditional distributions of Y given X.
This is the scenario we referred to in the rebuttal, which is indeed related to our problem setting. In particular, our problem aims to quantify the uncertainty in the obtained model $\hat{h}(x)$ against the target best model $h^*(x)$ (with a fixed $x$). We do so by providing a confidence interval around $h^*(X_{test})$ (not $Y_{test}$), because we focus on the quantification of epistemic uncertainty. On the other hand, $Y_{test}$ typically contains noise in regression tasks (i.e., the aleatoric uncertainty is not zero), and to carry out the out-of-distribution detection scheme, the construction of the highest density region needs to consider the aleatoric uncertainty.
2) The reviewer probably means the second scenario that has been widely studied in classification: The goal is to check whether the marginal distributions are identical, i.e., whether $X_{test}$ is from the same distribution as $X_{train}$ (typically given some conditions, such as for a certain class). To our knowledge, this task generally requires estimating the distribution of $X_{train}$ or its features (extracted from the neural network), and then checking whether $X_{test}$ fits. For instance, [2,3] use information from the likelihood function based on the estimated density function of $X_{train}$ from generative models, [4,5] check the energy score or softmax score from the distribution of the final layer's logits, [6] aggregates the p-values from the distribution of intermediate layers’ features. These works are related to quantifying the distribution of $X_{train}$ or its features (logits can also be viewed as condensed features), while our work has a different focus: The epistemic uncertainty is regarding the response $\hat{h}(x)$ while $x$ is a fixed test point (not random), and the randomness is from the training data and training procedure (different in-distribution training data and training procedures lead to different responses $\hat{h}(x)$).
References:
[1] Hyndman, Rob J. "Computing and graphing highest density regions." The American Statistician 50.2 (1996): 120-126.
[2] Nalisnick, Eric, et al. "Do Deep Generative Models Know What They Don't Know?." International Conference on Learning Representations. 2019.
[3] Grathwohl, Will, et al. "Your classifier is secretly an energy based model and you should treat it like one." International Conference on Learning Representations. 2020.
[4] Liu, Weitang, et al. "Energy-based out-of-distribution detection." Advances in neural information processing systems 33 (2020): 21464-21475.
[5] Hendrycks, Dan, and Kevin Gimpel. "A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks." International Conference on Learning Representations. 2017.
[6] Haroush, Matan, et al. "A Statistical Framework for Efficient Out of Distribution Detection in Deep Neural Networks." International Conference on Learning Representations. 2022. | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewers for their careful reading and valuable feedback.
In this global response, we present additional experiments that apply our approach to more challenging problems than those in the paper, via “simulated” real-world datasets. In the following, we first discuss why synthetic datasets are employed in our experiments and then provide our experimental results and rationale.
The major reason for using synthetic datasets in our experiments is that precise evaluation of a confidence interval requires the following two critical components, which are nevertheless not available in real-world data: 1) We need to know the ground-truth regression function, while the label in real-world data typically contains some aleatoric noise. 2) To estimate the coverage rate of the CI, we need to repeat the experiments multiple times (40 times in our paper) by repeatedly regenerating new independent datasets (from the same data distribution). In practice, we cannot regenerate new real-world datasets.
Thus, to address the reviewer’s concern about experiments on larger, more realistic and preferably real-world data, we provide a way to mimic an evaluation of the CI on real-world datasets as follows.
Step 1. Select a subset of the real-world data $(x_i, y_i)$ ($i \in \mathcal{I}$) and a test data point $(x_{k}, y_{k})$ ($k \notin \mathcal{I}$).
Step 2. For each experimental repetition j, add some “artificial” independent noise on the label to obtain a “simulated” real-world dataset $(x_i, y_i+\epsilon_{i,j})$ ($i \in \mathcal{I}$) where $\epsilon_{i,j}$ are all independent variables. Construct CI based on this “regenerating” training data $(x_i, y_i+\epsilon_{i,j})$ ($i \in \mathcal{I}$) and evaluate the coverage on $(x_{k}, y_{k})$.
In the above setting, $y_{k}$ becomes the “true” mean response of $x_{k}$ without aleatoric noise, and $\epsilon_{i,j}$ represents the only data variability. Again, as discussed above, precise evaluation is impossible for real-world data; the above procedure, although not exact, is the best we can do to provide a comparable evaluation.
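The two-step protocol above can be sketched as follows. This is a minimal illustration of our own, with a ridge regressor standing in for the neural network and a simple batching interval standing in for the paper's PNC-enhanced batching CI; all names, sizes, and constants are ours, invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: fix a "real-world" dataset and hold out one test point (x_k, y_k).
n, d = 60, 3
X = rng.normal(size=(n, d))
beta = rng.normal(size=d)
y = X @ beta                         # y_k acts as the true mean response
x_test, y_test = X[-1], y[-1]
X_tr, y_tr = X[:-1], y[:-1]

def fit_predict(Xt, yt, xq, lam=1e-3):
    """Ridge regression stand-in for the trained network."""
    A = Xt.T @ Xt + lam * np.eye(Xt.shape[1])
    return xq @ np.linalg.solve(A, Xt.T @ yt)

def batching_ci(Xt, yt, xq, m=5):
    """Batching CI: split data into m batches, fit each, t-interval."""
    idx = np.array_split(rng.permutation(len(yt)), m)
    preds = np.array([fit_predict(Xt[i], yt[i], xq) for i in idx])
    t = 2.776                        # t-quantile, df=4, 97.5% (hard-coded)
    half = t * preds.std(ddof=1) / np.sqrt(m)
    return preds.mean() - half, preds.mean() + half

# Step 2: repeatedly inject artificial label noise, rebuild the CI, and
# count how often it covers the noiseless test response y_k.
hits = 0
for _ in range(40):
    eps = rng.normal(scale=0.1, size=y_tr.shape)   # artificial data noise
    lo, hi = batching_ci(X_tr, y_tr + eps, x_test)
    hits += (lo <= y_test <= hi)
coverage = hits / 40                 # should be near the 95% nominal level
```

Since the held-out label carries no aleatoric noise by construction, the empirical coverage over the 40 repetitions directly mimics the CI evaluation described above.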
Using this setting, we conduct experiments on real-world benchmark regression datasets from UCI datasets: Boston, Concrete, and Energy. These results are shown in the Table in our attached PDF file.
From the results, we can see that our approaches also work well for these “simulated” real-world datasets. They provide accurate confidence intervals that satisfy the coverage requirement of the confidence level. In contrast, DropoutUQ does not have such statistical guarantees. Therefore, our approaches not only work for synthetic datasets but also scale well to benchmark real-world datasets.
Pdf: /pdf/7973a849f5b01501d9c0840e5a2636aedb466f5c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Explaining Predictive Uncertainty with Information Theoretic Shapley Values | Accept (poster) | Summary: The paper aims to define Shapley values to decompose the conditional distribution of the outcome so to attribute the uncertainty of the outcome to individual variables. This aims to extend prior work, which has mainly focused on explaining the change in the conditional mean of the outcome.
Strengths: The manuscript highlights how methods for explaining the conditional variance can have highly different results from those for explaining the conditional mean. This is nicely shown in the simulation results in Section 6.3 and Figure 3. The authors also nicely highlight how methods for explaining the conditional variance can be useful in many contexts, such as for model calibration.
Weaknesses: 1. There are a number of other papers that have studied how to explain the conditional variance of Y given X using Shapley values. For instance, Williamson and Feng 2020 use Shapley values to determine how well individual features predict Y using some user-defined scoring function V. If this scoring function is set to the deviance/KL divergence, then the resulting Shapley values are highly similar to those discussed in this paper. Moreover, this paper discusses how to efficiently perform statistical inference that sidestep computational issues that arise when considering every possible variable subset. The authors should put the current manuscript in context, given such existing works.
2. The link between conditional independencies and the information theoretic Shapley values defined in this paper follows very directly from how the Shapley values were defined. I don't see why the authors state that this connection is "deep" or "subtle". Perhaps the authors can clarify why this connection is far from obvious.
3. The proposed Shapley value is characterizing properties of the oracle distribution. Given the framing of the paper, the true target for statistical inference should be the Shapley value with respect to the oracle distribution. However, the result in Theorem 5.1 does not account for the uncertainty in estimating the oracle distribution and performs inference for a different target (seems to be the mean of the Shapley value of the estimated distribution). Moreover, it is not clear to me what the use case is for marginal coverage guarantees like that presented in Theorem 5.1. (Conditional coverage guarantees are ideal, but I agree that is unattainable.) I suggest better motivating (i) why are we doing statistical inference and (ii) what the target of statistical inference is.
4. This framework seems most relevant for explaining the uncertainty of either a continuous outcome or a multi-class outcome. For binary outcomes, the mean-variance relationship suggests to me that the results from using more typical Shapley values will be highly similar to those using this new definition. However, many of the experiments in Section 6 deal with binary outcomes. First, the authors should compare their results to those from using Shapley values for explaining the conditional mean. Second, the author should consider additional experiments dealing with other types of outcomes, to really highlight the utility of their proposed method.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: N/A
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors discuss some limitations of their method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank TWY9 (henceforth R4) for their close reading and insightful feedback.
We respond to specific issues below.
W1. We thank R4 for pointing us to Williamson & Feng (2020), a reference we had not previously come across. Their SPVIM method is an elegant frequentist alternative to the Bayesian procedure we cite in the original manuscript. However, we dispute the claim that “a number of other papers…have studied how to explain the conditional variance of Y given X using Shapley values”–we are unaware of any such works, and would appreciate the reference(s) that R4 has in mind. The cited W&F paper, for instance, does *not* explicitly do this. Rather, W&F provide a procedure for performing inference using various global measures of “predictiveness”. Though their method could in principle be applied to games that model conditional variance or information theoretic quantities, the authors do not study or implement any such examples.
R4 is correct to point out that existing inference procedures could be applied to our information theoretic Shapley values, a point we also acknowledge in Sect. 5. However, this is orthogonal to our main goals in this paper. First, we propose a number of novel information theoretic Shapley values, which we motivate by their ability to describe conditional dependencies beyond the first moment and their applicability in settings where labels are unavailable. Second, we show that conformal inference can be used to test whether the set of local Shapley values for a single feature tends to concentrate around zero. This is different from testing whether a single local or global feature attribution is significantly far from zero. Of course, one could in principle evaluate global importance by running $n$ tests using a local inference procedure, or compute a global measure upfront instead of aggregating local ones. The former is inefficient by comparison, while the latter requires a global value function, as opposed to the local alternatives we consider here. We have amended the manuscript to better contextualize our contribution and acknowledge the work of W&F.
W2. R4 objects that Thm. 4.5 is “obvious”. We are somewhat sympathetic to this complaint with regards to result (a), which we include primarily for completeness. As for results (b) and (c), we must respectfully disagree with R4’s judgment. That context-specific independence (CSI) is sufficient but not necessary for zero marginal payoff, and that counterexamples to the necessity claim under CSI are measure zero, came to us as something of a surprise. These results rely on carefully constructed examples and an algebraic lemma from the 1970s. If these points were immediately obvious to R4 upon reading the definitions, then we commend R4 for their intuitive command of information and measure theory. We have amended the abstract to avoid all talk of “deep connections”, but are comfortable with the claim in Sect. 4 that our results establish a “somewhat subtle link between conditional independencies and information theoretic Shapley values.”
W3. We thank R4 for pressing us to better motivate our inference procedure in Thm. 5.1 and to clarify our statistical target. First, we reiterate that our primary goal is to define and study a number of local attribution methods based on information theoretic value functions. As a secondary benefit, we may aggregate local attributions to test whether Shapley values concentrate around zero, which is evidence of a globally uninformative predictor. This is useful and efficient when local attributions have already been computed. However, if a researcher’s only goal is to test global importance, then they are better off using global measures upfront.
Our statistical targets in Thm. 5.1 are the upper and lower extrema of *the oracle band*. Let $P_j$ be the true distribution of Shapley values for feature $j$, i.e. $\phi(j, \mathbf{x}) \sim P_j$, with corresponding quantile function $Q_j$. Then the oracle band for level $\alpha$ is defined as $C^*_j(\alpha) := \big[Q_j(\alpha / 2), Q_j(1 - \alpha / 2)\big]$. This band is *optimal* in the sense that it has the shortest length among all bands with valid coverage. Our previous version of this theorem relied on an implicit symmetry assumption to compute split conformal bands. Though symmetry holds for uninformative features, it may not hold in general and is inessential in practice. A more general solution is to report the empirical quantiles as estimated on $\mathcal{I}_2$. Our modified result therefore reads: $\mathbb{P} \big( \phi(j, \mathbf{x}^{(n+1)}) \in \big[\hat{Q}_j(\alpha / 2), \hat{Q}_j(1 - \alpha / 2)\big] \big) \geq 1 - \alpha$. We have updated our coverage experiments accordingly (PDF, Table 1).
R4 inquires about sources of error for these estimates. We identify three: (i) estimating the target function $h$ from finite samples; (ii) sampling values for out-of-coalition features; and (iii) sampling coalitions. Convergence rates as a function of (i) and (ii) are entirely dependent on the selected subroutines. With consistent methods for both, conformal prediction bands are provably close to the oracle band (see Lei et al., 2018, Thm. 8). We have conducted new experiments to empirically evaluate consistency under a range of conditions (see PDF, Fig. 1). As for (iii), we defer to the results of W&F, who show that with efficient estimators for (i) and (ii), and an extra condition on the minimum number of subsets, sampling $m = \Theta(n)$ coalitions is asymptotically optimal, up to a constant factor.
W4. We thank R4 for the suggestion to compare our results against those of standard Shapley values to better understand how the two relate (PDF, Fig. 2). As expected, standard Shapley values for binary classification problems are highly predictive of information theoretic Shapley values. The relationship is somewhat less salient in the multiclass setting, as illustrated by our 10-class MNIST experiment (PDF, Fig. 2B).
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal
Comment: Please let us know if there is anything more we can do to address reviewer comments or concerns. If we have resolved all major issues raised in the initial review, then we invite R4 to revise their score upward :) | Summary: I have previously reviewed this paper at another conference where, after the reviewers' discussion, it was a borderline reject; below, my review is enriched with the discussions from that previous conference.
This work proposes calculating a modified Shapley value where the coalitional game represents the entropy of the predictive distribution.
A high-scoring feature will increase the entropy, and a low-score feature will decrease it.
Sections 4 and 5 describe the properties of their value function, along with a couple of alternative value functions. They discuss how their value function connects to the notion of information gain, a decomposition of uncertainty into aleatoric and epistemic sources, and a PAC-like technique for testing if a feature's mean Shapley value is near zero.
Various experiments are performed to illustrate use cases and validate correctness.
Strengths: - The idea of using KL divergence and Entropy in combination with Shapley Values to attempt to quantify the individual importance of features is novel and potentially very useful. Not many papers have delved into the topic of attributing uncertainty to input features.
- Nice job deriving properties from the two main coalitional games they discussed
- Extensive set of tests given.
Weaknesses: In the final discussion, the main reasons to reject were the following:
- Developing metrics to verify the correctness of their results
- Making their implementation consistent with the methods section.
- Experimental Limitations: the experimental design is based on papers from Computer Vision, where pure covariate shift is identifiable.
- Some missing discussions about related work
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Since the paper was previously rejected and no major modifications have happened, why would the authors expect the paper to be accepted this time?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer yQq8 (henceforth R3) for taking the time to read and comment on our manuscript (again!). We learned a great deal during the review process for ICML, incorporating reviewer feedback to improve the manuscript in numerous ways—an improvement widely acknowledged by reviewers, who revised their scores upward by 2-3 points on average. We strenuously object to the charge that “no major modifications have happened” since that submission. In fact, Sect. 4 was rewritten in its entirety, with major updates to our theoretical results (including the introduction of context-specific independence structures that went unmentioned in the previous manuscript). We added numerous references to our literature review, including further discussion on related methods such as LossSHAP and SAGE, and amended the experiments to more clearly implement our theory, computing explanations for both epistemic and aleatoric uncertainty under a range of function classes and sampling procedures, in addition to our recently added convergence experiments. We hope R3 will agree that this manuscript has come a long way since we submitted an initial draft to ICML some six months ago.
We reply to R3’s critiques as laid out in the “Weaknesses” section below:
W1. We appreciate the suggestion, also made by other reviewers, to include more quantitative experiments to evaluate the soundness of our method. Toward that end, we have conducted a simulation experiment in which ground truth Shapley values can be calculated in closed form to see how our estimation procedures fare under a range of data generating processes and imputation methods for out-of-coalition feature values. Echoing results from other XAI studies (Olsen et al., 2023), we find that no single method dominates throughout. However, we tend to converge on true information theoretic Shapley values when sample sizes are large and dependencies between features are well modeled. See our comment to all reviewers above for more details.
W2. Better aligning our methods and experiments sections is one of the major changes we have made to the manuscript since the initial ICML submission. This includes expanding the experiments to cover cases of both epistemic and aleatoric uncertainty (as opposed to just total uncertainty), and comparing different methods for reference distribution sampling across different base algorithms. We would be open to adding new experiments to better align these sections if R3 has specific suggestions in mind.
W3. We are somewhat confused by the claim that “the experimental design is based on papers from Computer Vision.” Sect. 6 includes just a single image data example—from the canonical MNIST—in addition to examples from natural language processing (Fig. 1B), and numerous tabular datasets (Figs. 2, 3). We have made a deliberate effort to include a diverse set of experiments spanning structured and unstructured data types, as well as classification and regression examples. We fail to see how our experimental design is limited to computer vision applications.
W4. We have attempted to cover a wide body of literature in this manuscript, including papers on explainable AI, uncertainty quantification, information theory, and application areas such as active learning, covariate shift detection, feature selection, and classification with reject option. R2 specifically complimented our “thorough literature review”. Our bibliography spans some 80+ references, compared to 55 references in the earlier version of our work that was reviewed at ICML. In any event, we are happy to expand Sect. 2 or Sect. 7 with extra references if R3 has any specific titles in mind.
We hope this rebuttal goes some way toward addressing R3’s concerns, and kindly request that they consider revising upward :)
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal
Comment: Please let us know if there is anything more we can do to address reviewer comments or concerns. If we have resolved all major issues raised in the initial review, then we invite R3 to revise their score upward :) | Summary: This paper introduces a method for explaining the uncertainty in predictions made by DNNs. The authors extend the Shapley values from explaining the value of DNN outputs to explaining the uncertainty of the DNN outputs. Then, the authors demonstrate the close relationship between these Shapley values and the conditional independencies between inputs and outputs. Additionally, the authors provide a theoretical bound to support the coverage of the explanation obtained by the proposed method.
Strengths: 1. The paper is well-organized and easy to follow. The authors also provide a thorough literature review.
2. The authors prove some propositions and theorems to support the reliability of the proposed definition of Shapley values in DNNs.
3. It is good that the authors recognize the presence of both epistemic and aleatoric uncertainty in the prediction of DNNs.
Weaknesses: 1. The motivation behind investigating the Shapley values of inputs towards the uncertainty of predictions is not convincing. While the proposed information-theoretic Shapley value reveals which variables contribute to prediction uncertainty, it fails to explain which variables are responsible for correct/incorrect predictions. The original Shapley values defined on the model output can provide this information. On the other hand, the results in Figure 1 do not demonstrate the superiority of the proposed information-theoretic Shapley value. This is because the features that increase predictive uncertainty can also be considered as pixels with negative Shapley values toward the correct prediction. Therefore, the information-theoretic Shapley value does not offer significant additional insights or information.
2. How should I understand the Shapley values defined on the value functions $v_{KL}$ and $v_{CE}$? When using these value functions, the output $v(N)$ is always 0.
3. Is the proposed information-theoretic Shapley value a local explanation for individual samples or a global explanation over all training samples? If it is a local explanation, what does the coverage of the explanation represent? If it is a global explanation, how is the value function defined over different samples?
4. In Figure 2, many perturbed inputs are assigned negative Shapley values for predictive uncertainty, which is perplexing. In my understanding, a negative contribution to the predictive uncertainty indicates that this feature is discriminative. However, the perturbed features are typically expected to be non-discriminative.
5. There is a lack of quantitative experiments to validate the accuracy of the proposed Shapley values in estimating the uncertainty/variance of the prediction. Figure 3(A) just provides qualitative results and Figure 3(B) focuses solely on the ranking of features. I suggest the authors conduct a quantitative experiment to show that the proposed method could precisely explain and estimate the variance of the prediction.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Further discussion is needed to clarify the motivation and advantages of explaining predictive uncertainty.
2. It is suggested that the authors perform a quantitative experiment to demonstrate the ability of the proposed method to accurately explain and estimate prediction variance.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have discussed the limitation of the method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer DQBN (henceforth R2) for their time and feedback. We were pleased to see that R2 found our paper “well-organized and easy to follow”, but would like to clarify several points of potential confusion that were raised in the “Weaknesses” section.
W1. R2 objects that our method “fails to explain which variables are responsible for correct/incorrect predictions.” This is true – and by design. R2 appears to be conflating “uncertainty” (our target) with “error” (not our target). A method such as LossSHAP (Lundberg et al., 2020), which seems closer to what R2 has in mind, would explain the pointwise loss, e.g. the squared residual of each prediction in a regression example. However, this requires labels. As noted throughout the manuscript, *our method is unique in handling cases where labels are unavailable*. This can occur, for instance, in clinical medicine or online advertising, where outcomes are delayed or expensive to collect. As we point out in Sect. 4, there is a close relationship between our proposed games and LossSHAP (using likelihood-based loss functions). Specifically, our games represent average LossSHAP values as we marginalize over $Y$ with respect to different conditional distributions. This is the best we can do without access to labels. Our experiments in covariate shift (Sect. 6.2), which can also be used to explain the selections of an acquisition function for an active learning algorithm, illustrate the utility of our information theoretic Shapley values in cases where LossSHAP is impossible to compute.
Beyond simply removing the need for labels, a key motivation for explaining uncertainty rather than error is that throughout many areas in machine learning there exist methods that are based on uncertainty, and we may need to explain their behavior. A few examples:
Many active learning algorithms select their queries based on (different notions of) uncertainty, and one may want to know “why did our active learning algorithm query those instances?”
Many out of distribution (OOD) detection algorithms make their OOD assessments based on some notion of uncertainty, and one may want to know “why did our OOD method assess this instance to be out of distribution?”
Our methods are uniquely well-suited to handle such questions.
W2. We are unsure why R2 is under the impression that $v_{KL}$ and $v_{CE}$ are invariant functions that map all inputs to zero. This is false. Quoting from Sect. 4: $v_{KL}(S, \mathbf{x})$ “can be interpreted as $-1$ times the excess number of bits one would need on average to describe samples from $Y \mid \mathbf{x}$ given code optimized for $Y \mid \mathbf{x}^S$.” The cross entropy value function has a similar interpretation, and is indeed equivalent up to an additive constant. In other words, these value functions measure the information gap between two conditional distributions for $Y$: one based on the complete vector $\mathbf{x}$, and another based on the partial vector $\mathbf{x}^S$. Large values can occur when informative features are excluded from $S$. To take an extreme example, consider a case in which $H(Y \mid X) = 0$, $H(Y) > 0$, and $S = \emptyset$. Then for all $\mathbf{x}$, we have $v_{KL}(S, \mathbf{x}) = v_{CE}(S, \mathbf{x}) = H(Y)$. In general, these value functions can be made arbitrarily large by letting $Y$’s prior entropy tend toward infinity while its local posterior entropy tends toward zero.
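To make the extreme case above concrete, here is a tiny worked check (our own toy numbers, not from the manuscript): a uniform binary prior has $H(Y) = 1$ bit, and a deterministic posterior recovers that full bit as the KL gap between the two distributions.

```python
import math

def entropy(p):
    """Shannon entropy in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl(p, q):
    """KL divergence KL(p || q) in bits."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

prior = [0.5, 0.5]       # H(Y) = 1 bit
posterior = [1.0, 0.0]   # H(Y | x) = 0: fully determined by x

# With S = empty set, the information gap between prior and posterior
# equals the full prior entropy: KL(posterior || prior) = H(Y) = 1 bit.
gap = kl(posterior, prior)
```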
W3. Our proposed value functions compute exclusively *local* Shapley values, as we emphasize in several places throughout the manuscript, e.g. when contrasting against Covert et al.’s (2020) SAGE method in Sects. 2 and 4. However, we acknowledge that this may complicate the interpretation of Thm. 5.1 without further elaboration. Our coverage guarantee pertains not to an individual Shapley value $\phi(j, \mathbf{x})$ but rather to the set of all Shapley values for a single feature $X_j$. As we note in the manuscript, the motivation for bounding the spread of this set is that “These results can inform decisions about feature selection, since narrow intervals around zero are necessary (but not sufficient) evidence of uninformative predictors.” In other words, we aggregate local attributions to draw a global inference. This connects with the conditional independence results of Thm. 4.5, which license an interpretation of the marginal payoff function as a conditional dependence measure. We have updated Sect. 5 to make this point more explicit. See also our reply to R4, who raises some questions about this.
W4. Negative Shapley values in this game correspond to feature values that decrease predictive uncertainty relative to a data-dependent baseline (i.e., the prior entropy $H(Y)$). Extremely low values following perturbation indicate over-confidence in the corresponding prediction, a common case of miscalibration under covariate shift (see, e.g., Ovadia et al., 2019). Identifying the sources of such miscalibration can inform decision making. We have added some text to Sect. 6.2 to clarify this point.
W5. We appreciate R2’s suggestion to include a quantitative experiment to demonstrate our method’s ability to accurately explain predictive uncertainty. We have conducted a simulation in which this quantity can be calculated in closed form, and estimated the corresponding Shapley values using a range of different pipelines for imputing out-of-coalition feature values. See the comment to all reviewers above for more details on this experiment.
References:
-Lundberg et al., 2020: https://www.nature.com/articles/s42256-019-0138-9
-Covert et al., 2020: https://proceedings.neurips.cc/paper/2020/file/c7bf0b7c1a86d5eb3be2c722cf2cf746-Paper.pdf
-Ovadia et al., 2019: https://proceedings.neurips.cc/paper_files/paper/2019/file/8558cb408c1d76621371888657d2eb1d-Paper.pdf
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their response, but some concerns still remain.
Weakness 1. I'm clear about the difference between explaining uncertainty and explaining the error/model output. My concern is that explaining the model output usually provides more information than just explaining the uncertainty. Although there are some cases where the label is unknown, explaining the output value of each category may also help people understand the inference of the model.
Weakness 2. When taking $v_{KL}$ and $v_{CE}$ as the value function, the output $v(N)$ given all input features is always zero, because it computes the distance between two identical distributions ($p(y|x)$). Thus, my question is, what does the Shapley value in this scenario mean? Using the traditional reward function (maybe the model output or loss), the Shapley value refers to how much contribution each input feature has to the output score/loss. But with $v_{KL}$ and $v_{CE}$ as the value function, I don't know how to understand the Shapley value.
Weakness 4. What does the miscalibration refer to? The authors did not answer why the randomly perturbed features were explained as discriminative features (leading to the over-confidence of the prediction).
---
Reply to Comment 1.1.1:
Title: Re: Official Comment by Reviewer DQBN
Comment: We thank R2 for taking the time to reply to our rebuttal. We respond to their new comments below.
W1. Explaining individual predictions provides information at a different level of detail. Existing tools are already well-suited to these applications. However, as we note in our rebuttal, current methods may not be useful in other areas that rely on uncertainty evaluation such as active learning and OOD detection.
Though one could in principle inspect SHAP explanations for each candidate label to probe for sources of uncertainty, this becomes infeasible with more than two or three classes and is unnecessarily indirect.
Moreover, focusing on the first conditional moment ignores higher order information, as we explain in our motivating example at the top of Sect. 4 where $X, Z \sim \mathcal{U}(0, 1)^2$ and $Y \sim \mathcal{N}(X, Z^2)$. Standard SHAP values will assign zero importance to $Z$ here, although the variable does in fact provide information about the conditional distribution of $Y$. In summary, our proposed Shapley values *complement* rather than *supplant* existing alternatives.
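The motivating example can be checked by simulation (a quick sketch under our own choice of conditioning values: we fix $X = 0.5$ and compare a small and a large value of $Z$):

```python
import random
import statistics

random.seed(1)

def draw(z, n=20000):
    """Sample Y | X = 0.5, Z = z where Y ~ N(X, Z^2)."""
    return [random.gauss(0.5, z) for _ in range(n)]

low_z = draw(0.1)
high_z = draw(0.9)

# The conditional mean is unchanged by Z (so a mean-based attribution
# sees no role for it) ...
mean_gap = abs(statistics.mean(low_z) - statistics.mean(high_z))

# ... but Z clearly drives the spread, i.e. the predictive uncertainty.
spread_ratio = statistics.stdev(high_z) / statistics.stdev(low_z)
```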
W2. Apologies for misunderstanding R2’s original comment – we see now that what R2 calls $N$ is what we call $[d]$.
Often in Shapley style games, the value function has positive range with $v(\emptyset) = 0$. However, as R2 correctly observes, zero is the *ceiling* rather than the *floor* for $v_{KL}$ and $v_{CE}$. That is, these value functions have *negative* range, with $v_{KL}(\emptyset, \mathbf{x})$ representing the negative KL-divergence between prior and posterior distributions for $Y$.
As we explain in Prop. 4.1 and the surrounding text, resulting marginal payouts $\Delta_{KL}(S, j, \mathbf{x})$ represent the information gained by $X_j = x_j$ when we already know feature values for coalition $S$, assuming the true target distribution is $Y \mid \mathbf{x}$.
As we explain in Prop 4.2 and the surrounding text, resulting Shapley values represent the contribution in bits to the KL-divergence between prior and posterior distributions.
W4. Miscalibration in this context means that the model is over-confident in its prediction. Perturbing a feature can lead to overconfidence if, for instance, the model infers a monotonic relationship between $X$ and $P(Y \mid X)$ during training and then sees an unusually high value for $X$ at test time.
If random perturbations decrease predictive uncertainty, we infer that (a) the feature in question must be one the model relies on for making predictions, otherwise we would see no change in output; and (b) the model is extrapolating with too much confidence to anomalous regions of the feature space. | Summary: Authors aim to explain uncertainty in model outputs by adapting Shapely value framework. In specific, authors try to explain predictive distributions through the lens of entropy, cross-entropy, KL and information gain. Authors offer some theoretic interpretation of the Shapley values adapted with these metrics and, through experiments, show the efficacy of the entropy based variant of Shapely values.
Strengths: 1. I would like to thank the authors for presenting their work in a very thoughtful manner. I find most of the paper to be clear and intuition which follows propositions and theorems to be very helpful. I like how the paper introduces theory to offer intuition and makes some simplifications to render the method more practical.
2. With raising importance in uncertainty quantification in ML, the work presented in this paper is very relevant to the community.
3. The paper not only presents theoretical analysis of the modified variants of the Shapley values but also offers empirical evidence of how their proposed method works beyond the assumptions made in the theoretical analysis.
Weaknesses:
I might be missing something, but I am unable to understand the specific details of the experiments by reading the main paper and the appendix (even the supplementary code didn't have much documentation). For example, I couldn't find the details of the ensemble method used. What is the size of the ensemble (`B` in Sec 5)? Which type of ensemble [5] is being used? What was the batch size, and how many training epochs were run?
I think there is a line of work on uncertainty estimation and its interpretation which is missing and might be useful. (A subset of these papers might be useful: [1], [2], [3], [4])
[1] From Predictions to Decisions: The Importance of Joint Predictive Distributions (https://arxiv.org/pdf/2107.09224.pdf)
[2] Epistemic neural networks (https://arxiv.org/pdf/2107.08924.pdf)
[3] Evaluating High-Order Predictive Distributions in Deep Learning (https://proceedings.mlr.press/v180/osband22a/osband22a.pdf)
[4] Marginal and Joint Cross-Entropies & Predictives for Online Bayesian Inference, Active Learning, and Active Sampling (https://arxiv.org/pdf/2205.08766.pdf)
[5] Ensembles for Uncertainty Estimation: Benefits of Prior Functions and Bootstrapping (https://arxiv.org/pdf/2206.03633.pdf)
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. In Section 6, even through the problems were explained in some detail, I find Section 6 and Appendix C to be lacking in details of methods used and their hyperparameters. Can you please include these details?
2. Some parts of the paper refer to active-learning experiments in the experiments section, but it was not obvious to me how the experiments in Section 6 are related to active learning. Can you clarify this?
3. In Section 5, authors present computation of entropy (h_t) with a single point estimate and then computation of aleatoric entropy (h_a) with an ensemble of particles. It is not clear how h_t is estimated when we use an ensemble of particles. Can you clarify this?
4. There is reference to script D in line 219. I don't think this was introduced before or referenced later. It would be useful to replace this with an appropriate term.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Please refer to the weakness section.
I am willing to increase my score if authors provide more details about their experiments either in the main paper or the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank reviewer 2QPT (henceforth R1) for their attentive comments and overall positive assessment. We were especially pleased to find that R1 considered our presentation “very thoughtful” and judged “the paper to be clear and intuition which follows propositions and theorems to be very helpful.” We appreciate R1’s suggestion to review other methods for computing and interpreting predictive uncertainty in machine learning. We have incorporated these references into Sect. 2 (Related Work) and Sect. 7 (Discussion).
R1 raises several important questions, which we address below.
Q1. We agree that the presentation in Sect. 6 and Appx. C could be more thorough, and have amended the text to include previously excluded details. In particular, for our image data example, we train a neural network with the following model architecture: (1) A convolutional layer with 10 filters of size 5x5, followed by max pooling of size 2x2, ReLU activation, and a dropout layer with probability 0.3. (2) A convolutional layer with 20 filters of size 5x5, followed by a dropout layer, max pooling of size 2x2, ReLU activation, and a dropout layer with probability 0.3. (3) Fully connected (dense) layer with 320 input features and 50 output units, followed by ReLU activation and a dropout layer. (4) Fully connected layer with 50 input features and 10 output units, followed by softmax activation. We train with a batch size of 128 for 20 epochs at a learning rate of 0.01 and momentum 0.5. For Monte Carlo dropout, we do 50 forward passes to sample $B = 50$ subnetworks. This information has now been added to Appx. C.2. The XGBoost ensembles vary in size from $B = 50$ to $B = 100$, with details provided in Sect. 6.
Q2. We thank R1 for pointing out that it may not be immediately clear how our experiments in Sect. 6 connect to the active learning (AL) setting we mention at several points throughout the text. The connection is strongest in Sect. 6.2 on covariate shift. Following several other authors (e.g., Sugiyama et al., 2007; Quiñonero-Candela et al., 2009), we see this task as basically continuous with AL, since an effective acquisition function will tend to prioritize anomalous samples that appear to be drawn from a different distribution than the training data. These will naturally have high epistemic uncertainty. It is common in the AL literature to use epistemic uncertainty to select which instances to query for labels. This is inherently the case in Query-by-Committee based methods, and has recently become more common also in the uncertainty sampling literature (e.g., Nguyen et al., 2022). We have rewritten the text in this subsection to clarify the connection between these tasks.
Q3. We thank R1 for observing that our discussion in Sect. 5 is potentially ambiguous with respect to the role of individual basis functions $f^b, b \in [B]$ in defining the total entropy $h_t$. This depends on how we construct the ensemble predictor $f_y(\mathbf{x}) := p(y \mid \mathbf{x})$. The simplest method—widely used in, e.g., random forests and deep ensembles—is to average over the basis functions: $f_y(\mathbf{x}) := B^{-1} \sum_{b=1}^B f^b(\mathbf{x})$. By contrast, in boosting models, predictions represent the sum (rather than the average) of basis model outputs. For more details on aggregating basis functions for probabilistic predictions in random forests, see (Malley et al., 2012); for more on deep ensembles, see (Lakshminarayanan et al., 2017); and for details on uncertainty quantification in gradient boosting, see (Malinin et al., 2021). We have expanded the text in Sect. 5 to include these references and resolve the ambiguity identified by R1.
Q4. The reference distribution $\mathcal{D}$ was introduced in Sect. 3, specifically the subsection on Shapley values. We have amended the text in Sect. 5 to remind readers of this connection.
We hope our comments have resolved all of R1’s concerns. If so, we invite them to revise their score upward :)
References:
-Sugiyama et al., 2007: https://jmlr.org/papers/v8/sugiyama07a.html
-Quiñonero-Candela et al., 2009: https://mitpress.mit.edu/9780262545877/dataset-shift-in-machine-learning/
-Malley et al., 2012: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3250568/
-Lakshminarayanan et al., 2017: https://proceedings.neurips.cc/paper_files/paper/2017/file/9ef2ed4b7fd2c810847ffa5fa85bce38-Paper.pdf
-Malinin et al., 2021: https://openreview.net/pdf?id=1Jv6b0Zq3qi
-Nguyen et al. 2022: https://link.springer.com/article/10.1007/s10994-021-06003-9
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for addressing the points I raised. As mentioned in my review, I am increasing the score to accept. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their thoughtful comments and constructive feedback. Working through our replies has deepened our understanding of the material and helped us further refine our manuscript.
We reply to all individual reviewer comments below. However, we open with one general note here, since it came up in multiple reviews. We appreciate the suggestion to include a quantitative experiment that demonstrates our method’s ability to accurately explain predictive uncertainty. We have designed a simulation experiment, loosely inspired by Aas et al. (2021), in which Shapley values under the entropy game have a closed form solution, and tested a range of methods for estimating these quantities in regression examples (see attached PDF, Fig. 1). As with previous work on Shapley value computation, the main considerations to keep in mind here are (i) the accuracy of the conditional entropy estimator (analogous to the base model in more classic XAI settings); (ii) the accuracy of the imputation method for estimating out-of-coalition feature values; and (iii) the coalition sampler. For this experiment, we simulate multivariate normal data with $d=4$ predictors and a Toeplitz covariance matrix with variable autocorrelation. The conditional log-variance of $Y$ is given by a linear function of $\mathbf{X}$. We estimate this conditional variance using multiple regression on the log squared residuals of a linear model for $\mathbb{E}[Y \mid \mathbf{X}]$, and exhaustively enumerate candidate coalitions (generally feasible with $d$ on the order of ~10). For imputation schemes, we compare KernelSHAP, maximum likelihood, copula methods, empirical samplers, and ctree (see Olsen et al., 2023 for definitions of each and a benchmark study of conditional sampling strategies for Shapley values). We note that this is strictly more difficult than the standard case, since $\text{Var}[Y \mid \mathbf{X}]$ is not directly observed but must be estimated from the data. No single method dominates throughout, but we show that under some combinations, we are able to converge on the true information theoretic Shapley value with existing imputation pipelines. 
This demonstrates both the accuracy of our method and its modularity with respect to reference distribution samplers. We thank the reviewers for pressing us on this point and encouraging us to add this valuable experiment to Sect. 6.
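For readers who want the flavor of exhaustive coalition enumeration, here is a generic exact-Shapley sketch on a toy additive game (our own example, not the authors' estimation pipeline; with $d = 4$ the $2^4$ coalitions are trivially enumerable):

```python
from itertools import combinations
from math import factorial

def shapley(d, v):
    """Exact Shapley values by exhaustive coalition enumeration (2^d subsets)."""
    phi = [0.0] * d
    for j in range(d):
        others = [p for p in range(d) if p != j]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Standard Shapley weight |S|! (d - |S| - 1)! / d!
                w = factorial(len(S)) * factorial(d - len(S) - 1) / factorial(d)
                phi[j] += w * (v(set(S) | {j}) - v(set(S)))
    return phi

# Toy additive game: v(S) is the sum of member weights, so the Shapley
# value of each player equals its own weight.
weights = [1.0, 2.0, 3.0, 4.0]
v = lambda S: sum(weights[i] for i in S)
phi = shapley(4, v)
```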
The attached PDF also includes some revised or expanded experiments, in reply to particular reviewer comments (see below).
References:
-Aas et al., 2021: https://www.sciencedirect.com/science/article/pii/S0004370221000539
-Olsen et al., 2023: https://arxiv.org/abs/2305.09536
Pdf: /pdf/48c6b79e630d76f21183ed984d0e87470a98e353.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence? | Accept (poster) | Summary: The paper introduces a study of pre-trained visual encoders for Embodied AI, and produces, among others, a new pre-trained ViT-L encoder (VC-1) using Masked Auto-Encoding on a dataset combining an expanded collection of egocentric video and ImageNet. VC-1, when adapted for agents solving 17 task types covering dexterous manipulation, locomotion, mobile manipulation, and navigation, performs best in average. The curated set of 17 task types is presented as a new benchmark named CortexBench. The results show that no visual encoder is universally dominant, and that just scaling the pre-training dataset size and diversity does not imply uniform improvements across benchmarks, refuting previous extrapolated results. Further experiments on real hardware are also included, showing the improvement in comparison with the best performing existing Masked Visual Pre-training visual encoder.
Strengths: - Thorough study of pre-trained visual encoders for Embodied AI tasks across different pre-training schemes.
- Provides a reference model with high performance (especially after adaptation) across all considered task types.
- Thorough experimental section and introduction of the very relevant CortexBench benchmark.
- The paper reads really well.
Weaknesses: - In an ideal world, it would have been wonderful to have found a single pre-trained model that could be used without adaptation with high performance across all tasks, but the actual outcome is thoroughly justified in the paper. One open question is whether, with larger and more diverse datasets, a single pre-trained model could eventually be the absolute winner before adaptation, but I agree this is out of scope for this paper.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Line 370 states "MAE adaptation does not help TriFinger performance, this is likely due to the mere 300 real robot images available for adaptation.". However, fine-tuning seems to do a good job in this case, contradicting the general consensus derived in Section 6. Is there an intuitive explanation?
- Why is prompting/conditioning VC-1 not considered as an adaptation method in the study? It might be a viable not-so-costly alternative to fine-tuning.
- Is the advantage of R3M indicating that known architectures might be far from optimal for the goal of having a single visual encoder able to feed all EAI tasks or is the scale of the pre-training datasets considered in the study the most likely reason?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The paper adequately discusses the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking time to review our paper and sharing your thoughts. Please find responses to your questions/suggestions below. Please let us know if any additional clarifications are needed.
> Line 370 states "MAE adaptation does not help TriFinger performance, this is likely due to the mere 300 real robot images available for adaptation.". However, fine-tuning seems to do a good job in this case, contradicting the general consensus derived in Section 6. Is there an intuitive explanation?
There are broadly two reasons why a frozen visual encoder may improve via fine-tuning on task data: 1) closing the domain gap, 2) moving from task-independent features to task-specific features. MAE adaptation does the former, BC fine-tuning does the latter. Which among the two is more important could be task-dependent. Our results suggest that the task-specific fine-tuning was more important than closing any domain gap for TriFinger. We will add this discussion to the paper.
> Why is prompting/conditioning VC-1 not considered as an adaptation method in the study? It might be a viable not-so-costly alternative to fine-tuning.
It is unclear to us how to produce prompts for the tasks in CortexBench, as there is no standard approach that we are aware of. However, we’d be happy to add more comments if the reviewer could provide more details about the prompts they had in mind.
> Is the advantage of R3M indicating that known architectures might be far from optimal for the goal of having a single visual encoder able to feed all EAI tasks or is the scale of the pre-training datasets considered in the study the most likely reason?
We're not sure we fully understand the question. We believe you're asking what are important ingredients to training a general purpose visual encoder for EAI (architecture, pre-training objective, dataset diversity, dataset size, etc.). The short answer is that we believe all of these matter, and finding the best combination is an open research problem. Our work shows that pre-training dataset diversity definitely matters, and that (thus far) a ViT backbone slightly outperforms other backbones (on average).
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: I would like to thank the authors for the detailed responses to the reviewers' questions. I reopen some questions that I think were misunderstood or unclear in my original review.
- The first unclear question was regarding fine-tuning alternatives:
> It is unclear to us how to produce prompts for the tasks in CortexBench, as there is no standard approach that we are aware of. However, we’d be happy to add more comments if the reviewer could provide more details about the prompts they had in mind.
I forgot to add the corresponding lines in my original review, but I actually referred to Lines 297-298 in the original manuscript:
> We use a broad definition of adaptation [50], which, in the context of large pre-trained foundation models, can take several forms from simple prompting [51] ...
Apologies if I am mistaken, but this line of research is not discussed further (or explicitly discarded) for the rest of the study, so it might have been worth adding one line explaining why this mentioned option is not (or cannot be) considered, despite its potential advantages/interest in other domains.
- The second unclear question was about the potential advantages of R3M compared to VC-1 and possible impact in upcoming models. The original manuscript (Lines 288-291) states:
> When compared to R3M, VC-1 demonstrates superior performance on average and on 4 of 7 benchmarks (Figure 4). R3M outperforms VC-1 on Adroit, MetaWorld and DMControl benchmarks. It is unclear whether this gap is caused by the different training objective, pre-training dataset, or backbone.
In recent years, transformers have been widely used as the standard architecture across a large (and extremely varied) range of domains, principally due to their advantageous scalability properties. However, it was impressive to see R3M (whose architecture is based on ResNet) perform so well in comparison to the transformer-based VC-1. Without being very concrete in my question (and setting aside other important factors like the ones correctly mentioned by the authors in their response), I meant to start a discussion on the importance of research into new architectures that could hopefully bring the best of both worlds (inductive biases and scalability) in a principled way. In any case, I am happy with the provided response.
---
Reply to Comment 1.1.1:
Title: Thank you for the clarifications
Comment: Thank you for the clarifications. These are great points and we address both below.
> it might have been worth adding one line explaining why this mentioned option [prompting] is not (or cannot be) considered
Agreed. The PVRs studied in this work were not originally designed for few-shot prompting. Thus, adding prompting to these methods remains an open research question. We will add a note to the paper.
> I was meaning to start a discussion on the importance of research of new architectures that could hopefully bring the best of both worlds (inductive biases and scalability) in a principled way. In any case, I am happy with the provided response
We agree! We believe research on new architectures is a fruitful direction to pursue, and is well motivated by the results of R3M and VC1 on CortexBench. We will discuss this interesting observation in the paper. | Summary: • This paper presents the largest and most comprehensive empirical study of pre-trained visual representations for Embodied AI based on proposed CORTEXBENCH.
• This paper focuses on studying the effect of pre-training data size and diversity for PVRs, further proposing VC-1.
• This paper shows that task- or domain-specific adaptation of VC-1 leads to substantial gain by E2E fine-tuning or MAE adaptation.
Strengths: • This paper proposes a comprehensive benchmark CORTEXBENCH for pre-trained visual representation evaluation of embodied intelligence, which is very valuable and meaningful.
• This paper studies the effect of pre-training data size and diversity for PVRs, which is novel and insightful.
Weaknesses: • There may be some typos in the experiments, such as ‘Trifinger by+7.4 (72.7 → 80.1)’ in line 343; according to the table, it should be Trifinger (71.7 → 80.6). Please check carefully.
• Although I think this work has been done well, there are still many questions that could be explored: based on ViT and the same data, which is better, MIM or time-contrastive video-language alignment? Is ResNet better than ViT for few-shot imitation learning?
• This paper only attempts to adapt VC-1, but there are no corresponding experiments for R3M and CLIP.
• I strongly recommend including a concise introduction to multimodal pre-trained visual representations in the related works section, citing relevant sources such as [1-5]. This is crucial as multimodal representation learning now plays a pivotal role in numerous vision tasks.
[1] Instruction-Following Agents with Multimodal Transformer. [2] Language-Driven Representation Learning for Robotics. [3] Open-World Object Manipulation using Pre-trained Vision-Language Models. [4] Pave the Way to Grasp Anything: Transferring Foundation Models for Universal Pick-Place Robots. [5] LIV: Language-Image Representations and Rewards for Robotic Control. Among them, [3] and [4] explicitly adapt the middle-level output of the pre-trained model through larger policy models, which also utilize the raw image as input.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: • I'm curious how you choose specific tasks under each benchmark. Does this follow some basis?
• I'm curious why you did not model temporal information in the policy model, for example by adding temporal attention in ViT, especially when a large amount of video data is used for pre-training.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: • Although the evaluation of this work is already very comprehensive, I believe that 17 tasks are not sufficient. I found that even for the same benchmark, different tasks exhibit significant differences.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking time to review our paper and sharing your thoughts. Please find responses to your questions/suggestions below. Please let us know if any additional clarifications are needed.
> Add a concise introduction to multimodal-based pre-training visual representation in the related works section, citing relevant sources such as [1-5].
Thanks! We would be happy to add a discussion of these works to the paper.
> Although I think this work has been done well, there are still many questions that could be explored: based on ViT and the same data, which is better, MIM or time-contrastive video-language alignment? Is ResNet better than ViT for few-shot imitation learning?
These are good questions that we are also interested in. However, given the scale of experiments required to answer the research questions posed in sections 4, 5, and 6, these additional questions fell beyond the scope of this work. We speculate that using temporal information for learning visual representations (as done in time-contrastive learning) will be useful, and might be a good direction to pursue in future work.
> Although the evaluation of this work is already very comprehensive, I believe that 17 tasks are not sufficient.
The tasks included in CortexBench were chosen to cover a wide range of embodied AI (EAI) applications, and are of substantial interest to the EAI community as evidenced by the body of work that uses and reports on these tasks. That said, we agree that additional tasks could be added in future studies. As discussed in L633-L636 in the Appendix, "in proposing a benchmark, we sought to find a balance between task diversity and the computational resources required for evaluation. However, new and challenging benchmarks in embodied AI, such as those presented in [55], continue to emerge and may merit inclusion in future studies to track progress in this field.”
> I'm curious how you choose specific tasks under each benchmark. Does this follow some basis?
Yes, we selected tasks following the precedent and trends seen in recent works that have studied each of the individual benchmarks. For example, the MetaWorld tasks were selected “following the evaluations in [7]” (L157), the DMC tasks “following [6]” (L160), and in Adroit “we study the two hardest tasks” (L149-150).
> I'm curious why you did not model the temporal information in policy model.
Most of the downstream policies do model temporal information either through frame stacking (Adroit, MetaWorld, DMC) or by using a recurrent neural network (ObjectNav, ImageNav, Mobile Pick). We will make this more clear in the paper.
> Typos in experiments
Thanks for pointing this out. We will correct these typos. | Summary: This paper focuses on the analysis of large-scale pre-trained vision models for interactive tasks. The authors curate CortexBench which includes tasks covering dexterous manipulation, navigation, and also scene-level interaction as the testing benchmark. Further experiments were made on testing existing pre-trained vision models on this task. The authors also control the dataset type, the scale used in pertaining and proposed their VC-1 model pre-trained in an MAE fashion which shows superior performance compared to baselines. Additional analysis were made showing the effectiveness of egocentric images for interaction and also gap between existing pre-trained vision models and the desired "visual cortex" representation.
Strengths: This paper conducts solid experiments on testing existing pre-trained visual representations (PVR) on their own curated interaction benchmark CortexBench. Both the evaluation benchmark and released checkpoints (as promised by the authors) will be extremely beneficial for embodied AI research. The overall analysis is technically sound with proper conclusions made, pointing to future directions for PVR studies.
Weaknesses: One concern of this paper is that, except for the conclusion that pre-training with egocentric frames is beneficial for interactive EAI tasks, I do not see signature conclusions that could be drawn from the current analysis. After reading this manuscript, I do not feel that the question in the paper title has been properly addressed:
- First, there is no task analysis for the curated tasks in CortexBench. For a proper visual cortex, what capabilities should we expect? Are these tasks equally important? Are there any relationships between these tasks?
- Second, since the current VC-1 model still does not universally dominate on every task, what is missing on the training/model side? Is it missing, for example, the value conditioning in VIP or some embodied experience?
- Third, as a pre-trained visual representation, what should we expect from VC-1 in addition to its capabilities on CortexBench? Should we expect the model to at least have some sort of objectness, as DINO does? Can this pre-trained model be used in more general vision settings?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weakness section for questions.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have properly addressed the limitations of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking time to review our paper and sharing your thoughts. We have addressed your concerns below. Please let us know if any additional clarifications are needed.
> First, there is no task analysis for the curated tasks in CortexBench.
Because CortexBench is curated from existing benchmarks (Sec. 3.1) a detailed analysis of each task can be found in the respective papers. The tasks span a wide spectrum of EAI. They vary in visual complexity, task complexity, task domain, skills required, agent embodiments, and how policies are trained. In that sense we have a good coverage of various EAI problems.
> Are these tasks equally important?
In this work we do not take a position on which tasks are more or less “important” – that is for the community to decide. However, all of the tasks are widely studied in the community as evidenced by the citation counts for each benchmark: Adroit [12] (801 citations), MetaWorld [11] (648 citations), DeepMind Control [10] (420 citations), TriFinger [13] (43 citations), Habitat [14] (931 citations), and Habitat 2.0 [15] (277 citations).
> Are there any relationships between these tasks?
The overarching relationship is that these tasks have been deemed important and interesting by the EAI community as evidenced by the body of work that has gone into studying each of the tasks separately. In this work, we use the collection to draw new conclusions about existing and new PVRs.
At a lower level, there are relationships in the way goals are specified and the policy learning algorithms that have led to SoTA performance in prior work as illustrated in Table 1. Finally, there are commonalities in the behaviors required. For example, ObjectNav, ImageNav, and Mobile Pick all require navigation. We will add this to the discussion.
> VC-1 does not universally dominate; what is missing? Is it missing for example value condition in VIP or some embodied experience?
Great question! Value conditioning or embodied experience may help and would be quite interesting to explore.
Additionally, the visual embedding in VC-1 is for a single frame, so most of the downstream policies offload the environmental representation construction to a generic RNN or frame stacking. There are several avenues to consider for improving representations including using videos/temporal axis, 3D spatial priors, objectness, etc. We will add these future directions to the discussion section.
> What should we expect from VC-1 in addition to its capabilities in CortexBench? Should we expect the model to at least have some sort of objectness as DINO? Can this pre-trained model be used in more general vision settings?
We expect that VC-1 will be useful in other EAI tasks because of its effectiveness on the diverse range of tasks in CortexBench. Additionally, we provide some preliminary analysis of the visual features in appendix A.10, where we find, for example, that after adaptation the visual attention focuses more on task-relevant parts of the image such as task-relevant objects. In general, we are excited for the community to use our open-sourced models on additional tasks and settings.
---
Rebuttal Comment 1.1:
Title: Post-rebuttal response
Comment: The authors' responses have clarified some of my concerns. I still feel that more analyses are needed to make this benchmark design more meaningful and complete (instead of selecting the most common or popular existing EAI tasks). Therefore, I'm keeping my original rating as borderline accept; however, I will not argue for rejection. | Summary: This paper evaluates different pre-trained visual representations w.r.t. their capability to serve as foundation models for Embodied AI. To do so, they compile a dataset (CortexBench) with 17 simulated EAI tasks. The intuition here is that a single pre-trained visual representation which would perform well across all tasks could be considered an Artificial Visual Cortex. The paper collects several egocentric video datasets starting with Ego4D, sequentially adding datasets to create different model variations trained on different training datasets. Their proposed artificial visual cortex (VC-1) is trained on all datasets (ViT-L trained with Ego4D + MNI). They also propose adaptations to VC-1 to improve its capability to perform well across different datasets. Finally, a proof-of-concept hardware experiment uses VC-1 to learn policies with imitation learning for few-shot manipulation tasks.
Strengths: 1. Exhaustive training with lots of variations with architectures and increasingly growing dataset.
2. Evaluation conducted on a large scale dataset with a diverse set of EAI tasks.
3. Writing: The paper is easy to read and follow. The figures do a good job of illustrating the dataset and the experiments.
Weaknesses: 1. No data contribution: CortexBench is a compilation of existing datasets, and it's not clear what new it brings to the table beyond what's already out here. From the introduction, it initially seemed as though it's a newly proposed benchmark in this paper. However, it seems to be a set of already existing datasets. Why is it being claimed/renamed as a new dataset?
2. No new findings:
- The first negative result is that specialized pre-trained models are better - that is, in-distribution models perform better than out-of-distribution ones (lines 42-45). This is well known in the machine learning community, and thus, is not a new finding.
- Lines 70-85 can be summarized as saying: More data is better, but OOD data is a problem. This too is a well known property of deep learning vision models.
- Naively scaling size and diversity: The paper does not study data diversity explicitly, but rather only
3. Data diversity and Dataset size are not controlled: When the new datasets are added to the foundation models, are the number of images held constant? I didn't see this detail in the paper. Thus, it is unclear what helped - dataset size, or diversity? To control data diversity independently, a control would be to reduce number of images per dataset as # datasets is increased to isolate the effect of size and diversity.
4. The adaptive VC-1 does not really work well: of the datasets with prior results in Table 5, VC-1 does better for only 2 task classes. Without adaptation too, it could be trained to do well on one of these task classes alone. Thus, the adaptation took it to performing well from 1 task to 2 task classes. Furthermore, there are no error bars in these results which makes it hard to evaluate the reliability of these findings given the stochasticity of the models.
5. Claim too grandiose: Calling a pre-trained visual recognition model an artificial cortex is a very far fetched claim on several levels of abstraction. Firstly, the cortex does much more than EAI: so testing it only on a small set of functionalities of the cortex means there's a mismatch at a behavioral level between the cortex and the artificial cortex. Secondly, by not comparing with any neural data, there is a mismatch at the mechanistic level as well. A more objective representation of the work done here is that this paper investigates --- Can a single pre-trained representation perform well across a diverse set of EAI tasks?
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please refer to weaknesses above for questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking time to review our paper. We believe there may be some major misunderstandings about our submission, and we would like to take this opportunity to clarify.
> No data contribution. From the introduction, it initially seemed as though it's a newly proposed benchmark in this paper. Why is it being claimed/renamed as a new dataset?
We do not claim a data contribution. As stated in the intro, "we *curate* CortexBench" and we cite the sources in L34-35; section 3 describes details. A name is used to succinctly refer to the curation in the exposition and to the associated software.
Our contributions are stated upfront in abstract L16-20: "Overall, this paper presents no new techniques but a rigorous systematic evaluation, a broad set of findings about PVRs (that in some cases, refute those made in narrow domains in prior work), and open-sourced code and models (that required over 10,000 GPU-hours to train) for the benefit of the research community."
Furthermore, we agree with reviewer aWtL: "Both the evaluation benchmark and released checkpoints (as promised by the authors) will be extremely beneficial for embodied AI research."
> specialized pre-trained models are better - that is, in-distribution models perform better than out-of-distribution ones (lines 42-45).
This interpretation is not in line with our findings. None of the PVRs considered in this work (except in Section 6) were trained on images from the evaluation tasks or simulators, and thus are all out-of-domain by definition. What we're seeing is that previous work had specialized in evaluating PVRs on a small set of tasks and shown strong results, but those results do not carry over to a wide distribution of tasks, which is a new finding not known in prior work. There is a broader question here of “what exactly is a domain”, but that is beyond the scope of this work (we use standard definitions).
> Lines 70-85 can be summarized as saying: More data is better, but OOD data is a problem.
Not quite. As pointed out above, all training data we use is OOD, so what this finding is saying is that it is difficult to know a priori which data should be included or excluded from the pre-training set; and without our experiments, we would not know about the existence of this problem. As we say in L84-85: "[our] broad evaluation refutes a naive extrapolation of the positive scaling trends observed in prior work on robot manipulation [8]."
> Data diversity and Dataset size are not controlled: When the new datasets are added to the foundation models, are the number of images held constant? I didn't see this detail in the paper.
Yes, the training dataset size is held constant to support claims about data diversity. The number of images per dataset is provided in Table 3 on page 5 and a more detailed breakdown is provided in Table 6 in the Appendix.
Specifically, comparisons between the **Ego4D+M** and **Ego4D+N** training datasets – both containing 3.5M frames – support the dataset diversity claims. As stated on L288 “[Ego4D+N] is similar in size to Ego4D+M (3.5M frames) but is more diverse because it contains a larger proportion of navigation data than the manipulation-centric datasets Ego4D and Ego4D+M.”
> of the datasets with prior results in Table 5, VC-1 [adapted] does better for only 2 task classes.
It is not clear why nearly half the experiments in Table 5 are being ignored in making this claim. To recap, Table 5 presents results on 7 simulation benchmarks ("task classes") and 2 hardware tasks. Prior published results exist for 5 out of 9 benchmarks because as we describe in the abstract, intro, and experiments, prior work has focused on narrow task domains; our contribution is to systematically benchmark existing PVRs on this broad set of tasks (which is what Table 2 and row 2 in Table 5 provide).
When all tasks are considered, VC-1 adapted (via MAE adaptation in low-shot IL or via fine-tuning in large-scale RL) outperforms VC-1 (without adaptation) on all 7 benchmarks. Additionally, as stated on L351-354, “Furthermore, on MetaWorld, DMControl, and TriFinger, VC-1 with MAE adaptation (row 6 [in Table 5]) is comparable with the best known results (SoTA) and the best results from previous sections (rows 1 and 2 [in Table 5]). Similarly, on ImageNav and Mobile Pick, VC-1 with E2E fine-tuning (row 5 [in Table 5]) matches or exceeds the best results.”
> No error bars in Table 5.
Thank you for this suggestion. We have added error bars to Table 5. We find that the error bars for the adapted versions of VC-1 (Table 5 rows 5 and 6) are very similar to the error bars we report for VC-1 in Table 4 row 11. Accordingly, all of the conclusions drawn in the paper remain unchanged. | Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful comments. We want to begin with a few high-level observations.
There appears to be consensus that our work presents “a large scale dataset with a diverse set of EAI tasks” (nrzP) – i.e., CortexBench. Several reviewers believe CortexBench will be “extremely beneficial for embodied AI research” (aWtL), is “valuable and meaningful” (5TRU), and is “very relevant” (on5Q). Similarly, there appears to be agreement that we perform a “thorough study” (on5Q) with “exhaustive training with lots of variations” (nrzP).
However, one reviewer disagrees with the others on the novelty and the degree of surprise in the findings from our study.
On one hand, several reviewers find that we present experiments that are “novel and insightful” (5TRU) and “technically sound with proper conclusions made, pointing to future directions for PVR studies.” (aWtL). On the other hand, one reviewer expresses that the findings are “well known in the machine learning community” (nrzP). The disagreement here lies in the perceived newness of our findings rather than the technical substance of our research.
We contend that these are precisely the kinds of papers that should be presented at a conference like NeurIPS because they provoke thought, stimulate discussion, and provide rigorous benchmarking that forms the foundation for future work. In our case, we present a rigorous empirical study that pushes the boundaries of understanding in the field of Embodied AI.
If one of the reviewers is skeptical about certain aspects of our work, that's part of the scientific discourse—we can let the broader community and future research be the judge of the long-term impact.
Below, we address specific concerns raised by each reviewer to provide further clarity and address any misunderstandings. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Bayesian Optimization with Cost-varying Variable Subsets | Accept (poster) | Summary: The paper introduces a novel Bayesian Optimization algorithm for the case where we select a subset of variables to query at each iteration. Furthermore, each subset will incur a different _cost_. Examples of this include: control of soil nutrients in farming, advanced manufacturing, and targeting specific subgroups for ad revenue. Previous work has focused on optimizing across subsets, but does not consider the cost. Because that method is based on Thompson Sampling, it is not simple to extend to the cost-varying setting.
The problem setting considers the case where we can choose a subset of inputs (a control set) and its corresponding values; the remaining inputs are then sampled randomly from a _known_ distribution. This setting differs from multi-fidelity problems because the lack of information comes from randomness in the inputs not selected as part of the control set, not from querying a cheaper approximation. Our objective is to find the control set and its corresponding values that maximize the expected value of the function (with respect to the randomness in the remaining inputs).
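For concreteness, the expected-value objective for a fixed control set can be approximated by simple Monte Carlo over the known input distribution (an illustrative sketch; `expected_value` and `sample_inputs` are hypothetical names, not the paper's API):

```python
import numpy as np

def expected_value(f, control_idx, control_vals, sample_inputs, n_samples=2000, seed=0):
    """Monte Carlo estimate of E[f(x)] when x[control_idx] is fixed to
    control_vals and the remaining inputs are drawn from the known input
    distribution (sample_inputs(rng) returns a full input vector)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        x = np.asarray(sample_inputs(rng), dtype=float).copy()
        x[control_idx] = control_vals  # overwrite the controlled subset
        total += f(x)
    return total / n_samples
```

For example, with two uniform inputs and the first one fixed to 1.0, `expected_value(lambda x: x[0] + x[1], [0], [1.0], lambda rng: rng.uniform(0.0, 1.0, size=2))` should approach 1.5.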
The algorithm is based on the upper confidence bound (UCB) acquisition function, and mainly works in four stages:
(a) We find the maximum expected UCB across all valid input subsets
(b) We build a set of possible control sets that have a maximum value $\epsilon_t$ close to the maximum from (a)
(c) We build a set that includes all the control sets with minimal cost from (b)
(d) Finally, from the remaining possible control sets, we choose the set with the maximum expected UCB to query
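As a concrete illustration, the four stages could be sketched roughly as follows (a toy example; the control sets, costs, grid search, and the quadratic stand-in for the GP posterior UCB are all hypothetical, not the paper's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-variable problem; uncontrolled variables are drawn from N(0.5, 0.2).
GRID = np.linspace(0.0, 1.0, 21)
CONTROL_SETS = {0: (0,), 1: (1,), 2: (0, 1)}  # controllable variable indices
COSTS = {0: 1.0, 1: 1.0, 2: 5.0}

def ucb(x):
    # Stand-in for the GP posterior UCB mu_{t-1} + sqrt(beta_t) * sigma_{t-1}.
    return -((x[:, 0] - 0.3) ** 2) - ((x[:, 1] - 0.7) ** 2)

def expected_ucb(i, x_ctrl, n_mc=512):
    # Monte Carlo estimate of E[u([x^i, X^-i])] under the known distribution.
    X = rng.normal(0.5, 0.2, size=(n_mc, 2))
    for k, var in enumerate(CONTROL_SETS[i]):
        X[:, var] = x_ctrl[k]
    return ucb(X).mean()

def best_for_set(i):
    # Grid search over the controlled values (the paper uses L-BFGS-B).
    if len(CONTROL_SETS[i]) == 1:
        cands = [(g,) for g in GRID]
    else:
        cands = [(g, h) for g in GRID for h in GRID]
    vals = [expected_ucb(i, c) for c in cands]
    j = int(np.argmax(vals))
    return vals[j], cands[j]

def select_query(eps):
    best = {i: best_for_set(i) for i in CONTROL_SETS}
    g_t = max(v for v, _ in best.values())                    # stage (a)
    s1 = [i for i, (v, _) in best.items() if v + eps >= g_t]  # stage (b)
    c_min = min(COSTS[i] for i in s1)
    s2 = [i for i in s1 if COSTS[i] == c_min]                 # stage (c)
    i_t = max(s2, key=lambda i: best[i][0])                   # stage (d)
    return i_t, best[i_t][1]

# With a tight epsilon only the full (expensive) control set survives stage
# (b); with a loose epsilon a cheap single-variable set wins at stage (c).
print(select_query(eps=0.01)[0])  # expect the full control set (id 2)
print(select_query(eps=0.50)[0])  # expect a cost-1 control set
```

This makes visible how $\epsilon_t$ trades off cost against optimism: a larger tolerance admits cheaper control sets into the candidate pool before the cost filter is applied.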
Note that an important hyper-parameter sequence, $\{ \epsilon_t \}$, has been introduced. Four theoretical results are then presented. Theorem 4.1 shows that the algorithm is no-regret for appropriately chosen $\beta_t$ and a sub-linear $\sum_{t = 1}^T \epsilon_t$. Proposition 4.2 provides an alternative condition under which the no-regret property holds. Lemma 4.3 and Theorem 4.4 then show that the bound depends on the variance of the input distributions, with larger variance allowing for more efficient exploration. They further show that by choosing an $\epsilon$-schedule that increases the number of times cheaper control sets are played, the bound can be made tighter. The $\epsilon$-schedule is useful for theoretical analysis, but in practice it is dropped in favour of a simpler idea: fix the number of plays for each cost group and, at each iteration, choose the lowest-cost group that still has plays remaining; once all cost groups have been exhausted, the algorithm reverts to the standard version with $\epsilon_t = 0$.
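The explore-then-commit bookkeeping described above could be sketched as follows (a minimal hypothetical illustration, not the paper's code): each cost group gets a fixed budget of exploration plays, the cheapest unexhausted group is used first, and once every budget is spent the algorithm reverts to $\epsilon_t = 0$.

```python
def etc_schedule(plays_per_cost, total_iters):
    # plays_per_cost: {cost of a group: number of forced exploration plays}
    remaining = dict(plays_per_cost)
    schedule = []
    for _ in range(total_iters):
        open_costs = [c for c, n in remaining.items() if n > 0]
        if open_costs:
            c = min(open_costs)               # cheapest unexhausted cost group
            remaining[c] -= 1
            schedule.append(("explore", c))
        else:
            schedule.append(("commit", None))  # standard run with eps_t = 0
    return schedule

print(etc_schedule({1.0: 2, 5.0: 1}, total_iters=5))
# [('explore', 1.0), ('explore', 1.0), ('explore', 5.0), ('commit', None), ('commit', None)]
```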
Experiments are carried out on four benchmarks: a 3D GP sample, Hartmann3D, a plant growth simulator, and a simulator built from the airfoil self-noise dataset. Experiments show that when there are subsets with small or moderate costs, the proposed algorithms have an advantage over the Thompson Sampling baseline. When costs are high, performance drops when the number of plays per cost group is fixed, but if made adaptive the performance is still competitive. In addition, the simple baselines perform well when the control sets are not subsets of each other (and therefore they are not constantly querying the most expensive one). Finally, the results show that having a larger variance for the random inputs tends to give better regret, though this also appears to be problem- and algorithm-dependent.
Strengths: Originality: The authors propose an algorithm that tackles a problem not fully considered in Bayesian Optimization, where there are subsets of inputs that can be queried and each subset has a different cost. They also include a thorough theoretical analysis of the algorithm's regret, and show empirical evidence of its effectiveness. Although I have not seen this specific problem solved before, there are other works that could perhaps tackle it (see more later).
Quality: The proposed algorithm is sound, combining Bayesian Optimization ideas that are known to work well. The theoretical analysis is very good and complete, exploring the effect of different aspects of the problem (e.g., the $\epsilon$-schedule and the variance). The empirical analysis is perhaps limited to a few examples, but they cover a varied range of costs and variances, which is important.
Clarity: Generally the paper is very clear and well written. Important equations are well explained, and the implications of all theoretical results are made clear. Figure 1 is very good at explaining the problem setting and helping understand the notation. Empirical results are explained and analyzed thoroughly and clearly. Perhaps a few paragraphs of the paper could be rewritten, and the exposition of the results improved (see later).
Significance: The problem is significant to the wider scientific community, and relevant examples are given. However, it seems limited by certain assumptions, and perhaps giving more detail on the examples would help (see later).
Weaknesses: - The paper seems highly relevant to Causal Bayesian Optimization [1, 2, 3], where they (a) choose a subset of inputs while the rest are observed from the environment, and (b) consider the cost of the varying subsets. In particular, they do not assume that the input distributions are known, but instead that they follow an unknown causal structure, which appears to be a more general case. Indeed, it could be argued that all motivating examples mentioned by the paper fall under this umbrella (i.e., there is an underlying causal structure). An empirical comparison against their baselines would be very strong, but at least a discussion of where the settings differ should be included.
- Following on from the previous point, the assumption that the input distributions are known and independent of the chosen values for the control set appears to be very restrictive and limits the application of UCB-CVS. I assume the distributions could be estimated from data, so perhaps there are extensions of the proposed method for this case. Based on this, the applicability of the method appears to be far more restricted than claimed, as it only applies when the distributions are known.
- The theoretical analysis is perhaps the strongest part of the paper; however, in the end the authors opted for a rather arbitrary way of selecting which subsets to query (i.e., by defining cost groups and selecting an allocation for each). The link between the theoretical results and the new heuristic should be made clearer. In addition, the heuristic appears to me to be sub-optimal, especially in the case where an expensive subset leads to clearly sub-optimal observations but we keep querying it anyway due to the requirement to query each cost group a certain number of times.
Minor weaknesses:
- I found the first paragraphs of section 4 (lines 115 to 138) to be a little convoluted and difficult to read. I understand there is a lot of notation to introduce, but perhaps it would benefit from a small re-write. Figure 1 does an excellent job of summarizing it all and was very helpful.
- I found the plots in Figure 2 to be a little difficult to read: the plots are small and pixelated, and the font sizes could be made bigger (there seems to be a lot of repetition in the titles, so perhaps things could be condensed to use fewer words and make space for larger fonts).
- Only using 10 runs per example seems a little low; it would be better to use more.
[1] Aglietti, Virginia, et al. "Causal bayesian optimization." International Conference on Artificial Intelligence and Statistics. PMLR, 2020.
[2] Branchini, Nicola, et al. "Causal entropy optimization." International Conference on Artificial Intelligence and Statistics. PMLR, 2023.
[3] Tigas, Panagiotis, et al. "Differentiable Multi-Target Causal Bayesian Experimental Design." ICML, 2023.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - I saw in the appendix that you included some obvious naive baselines; in particular, EI-PSQ-per-cost seemed to give stronger results than the baselines compared against in the main paper. Why was it not included?
- We are seeking to maximize the expected value of the function under randomness from the non-control inputs. Can you imagine a scenario where low variance would also be important? If so, would the method be compatible with a simple scalarization (i.e., $i^*, \mathbf x^* = \arg\max \left[ \mathbb{E}(f(\mathbf x)) - \alpha \mathrm{Var}(f(\mathbf x)) \right]$) to solve the problem?
- The algorithm's performance seems to be very sensitive to the choice of the $\epsilon$-schedule. Is there any more guidance on how to choose it? Indeed, the ETC variants appear to perform very differently depending on the task.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations are addressed in the appendix: the theoretical assumptions may not hold in practice, and the assumption that the input probability distributions are independent and fixed may be violated (this is considered by Causal BO). The authors also mention poor scaling when the number of control sets is large.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and effort spent writing this highly detailed and insightful review. Allow us to answer some of your concerns:
**Weakness 1: relation to Causal BO.** You are correct in pointing out that Causal BO as formulated in Aglietti et al. (2020) is a more general case. Specifically, our setting is a case in which there are no 'non-manipulative' variables and the causal DAG is such that all input variables have no parents and are parents of the output variable. Nevertheless, we believe our focus on this special case has value as it allows us to derive useful theoretical results that do not exist thus far for the completely general setting. To the best of our knowledge, the three works you cited and many other Causal BO works do not have regret bounds, with the exception of "Model-based Causal Bayesian Optimization" (Sussex et al., 2023). This work is also a special case of Aglietti et al. (2020), and has little overlap with our work as it does not consider costs of control sets or explicit probability distributions over input variables. This work and our work are products of the general principle that focusing on special cases enables the derivation of non-trivial theoretical results which may be very difficult to derive for the general case (supported by the lack of theory for the general case thus far). We believe that our work is sufficiently general to be useful for practical scenarios (where the full Causal BO apparatus may be unnecessary), and is also a stepping stone towards theory for the general case. We agree with you that the relation of our work to the Causal BO literature should be discussed and will certainly include this discussion in the final paper.
**Weakness 2: assumption of known distributions.** We think this assumption is a reasonable one for three reasons:
1. There are many scenarios in real life in which the distribution of some random variable of interest is known or at least can be estimated accurately from historical data. For instance, in our example of ad revenue maximization or crowdsourcing, where the query variables describe human demographics such as country of origin or income level, the platform being used would have access to a large amount of user demographics data.
2. This is a standard assumption made in this line of work due to the difficulty of saying or doing anything useful with too few assumptions. Many prior BO works that cater to uncontrollable random variables make this or similar assumptions; for instance, the cited Causal BO work (Aglietti et al., 2020) assumes the presence of a dataset from an observational study. Other works that make the assumption of known distributions include BO for expected values (Toscano-Palmerin & Frazier, 2022), risk-averse BO (Cakmak et al., 2020; Nguyen et al., 2021a;b), and distributionally robust BO (Kirschner et al., 2020; Nguyen et al., 2020; Tay et al., 2022).
3. As you have intuited, it is possible to modify the setting to one in which the distributions are not known _a priori_ but are estimated in the course of the optimization process. Previous BO works that have done this include the 'data-driven' setting (Hayashi et al., 2022; Kirschner et al., 2020). Since the extension has already been developed in the literature, accounting for this procedure in our algorithm and theory within the constraints of a conference paper incurs additional complications that distract from our work's primary conclusions for little novelty. We will include a discussion on this extension along with references to these previous works in the final paper.
**Weakness 3: choice of heuristic.** The heuristic was chosen based on the interpretation of Theorem 4.4 that an $\epsilon$-schedule that increases the number of times cheaper control sets are played can reduce the MIG term of the most expensive control set $m$, which implies that if $c_i \ll c_m$ for all $i$, we can reduce overall regret by playing cheaper control sets a greater number of times. The ETC heuristic is a method of enforcing these plays, whereas trying to define an $\epsilon$-schedule directly may result in cheaper control sets not being played at all if the chosen values of $\epsilon$ are too small, and knowing the range of good values of $\epsilon$ _a priori_ is impossible since $f$ is unknown. We agree with you that the heuristic (with a fixed number of plays for each control set) is sub-optimal if there is an expensive sub-optimal control set, and Theorem 4.4 reflects that as well: taking $m$ to be optimal, if there is a sub-optimal $i$ such that $c_i \approx c_m$, then the decrease in the overall regret is not as large as if $c_i \ll c_m$ for all $i$. To overcome this, instead of a fixed number of plays for each control set, we suggest in the paper choosing the number of plays adaptively with the cost, and our ETC-Ada variant in the experiments suggests that an $\mathcal O(c_i^{-1})$ number of plays for each control set $i$ generally performs well across many different settings and is a robust choice for practitioners. We will make this justification clearer in the final paper.
**Questions:**
1. We did not plan to include these obvious naive baselines in the main paper as we hypothesized that they would not work well due to the reasons explained in Appendix B, and they were not required to arrive at our main conclusions in the Experiments sections. We agree it would be interesting to see how EI-PSQ-per-cost performs in all settings. We will run this baseline in all settings and include the results in the main body in the final paper.
2. Certainly, there have been many BO works that aim to maximize some risk-sensitive objective such as mean-variance as you mentioned (Iwazaki et al., 2021), or others such as VaR/CVaR (Nguyen et al., 2021a;b). We suspect it will not be too difficult to replace the expected value in our proposed method with these alternate objectives.
3. See discussion on ETC-Ada in Weakness 3.
We hope the clarifications above can improve your opinion of our work.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you for the detailed response. While I still believe that the paper would benefit from an empirical comparison to Causal BO, I agree that (i) the current setting allows for theoretical analysis and this should not be understated, and (ii) a discussion of the Causal BO literature should at least be included. I have also been convinced that the assumption of known distributions, while limiting, is perhaps reasonable in many applications. The link between theory and the choice of heuristic is clearer to me now, and while the sensitivity to the choice of $\epsilon$-schedule remains, I agree that the adaptive variant provides a somewhat robust option.
I seem unable to update my score at the current time, but based on the above, I will upgrade my recommendation by one point when possible. | Summary: The submission studies the problem of Bayesian optimization (BayesOpt) where we can choose to control a subset of the decision vector while the other variables are randomly selected.
Unlike previous works on contextual BayesOpt and BayesOpt with uncertain input, this paper leaves the choice of which variables are set to the user, and accounts for the associated cost of controlling a given set of variables.
As far as I know, this is the first work to tackle this problem.
The authors then proposed an Upper Confidence Bound (UCB) style algorithm that seeks the subset of variables that yields the highest expected upper bound on the objective function's value while incurring the lowest cost.
Theoretical guarantees are derived for the proposed algorithm, showing the algorithm achieves sub-linear cumulative regret.
Numerical experiments show that variants of the proposed algorithm perform competitively against two baseline policies.
Strengths: The submission tackles an original problem that is motivated by many real-world scenarios.
The paper is well-written, and the exposition is clear.
The theoretical results are explained well and are shown to be helpful in interpreting results from numerical experiments.
Numerical experiments on the other hand show strong performance from the proposed algorithm.
Weaknesses: I don't have many complaints about the paper.
The authors can consider including the other baselines shown in Appendix B in Sec. 5 instead of simply referring to them.
The authors can also consider including a simple baseline that randomly selects the subset of variables to control for at each iteration.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The work assumes that the uncontrolled variables are independently sampled from their respective distributions, but I imagine there are scenarios in which the decision variables aren't conditionally independent. Could the proposed approach tackle this scenario too? Perhaps the correlations between known dependent variables could be modeled somehow.
- My understanding is that we assume the costs $c_i$ associated with controlling given subsets are known _a priori_. Do the authors have any comment on how much relaxing this assumption would complicate what we have? I imagine Line 5 in Algorithm 1 might be harder to implement.
- How was $4$ chosen in the **ETC-Ada** variant's $4/c_j$ threshold? Should this be the same constant in all settings?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discuss the difficulty of setting the $\varepsilon$-schedule in Sec. 4.3 and recommend an explore-then-commit strategy.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time and effort spent writing this review. We answer your questions below:
**Questions**:
1. Certainly, as long as the joint distributions are known and induce conditional expectations that can be computed or approximated via Monte Carlo sampling. Algorithm 1 will likely work well under such conditions. The only parts that will be affected are the theoretical results Lemma 4.3 and Theorem 4.4 as they assume independence in order to relate the variance of the individual distributions governing each variable to the regret bound. Some more involved analysis will be required to say something useful in the general case when the distributions may not be independent.
2. You are correct that Algorithm 1 cannot be used effectively if the costs are not known. For example, Algorithm 1 would never play a control set $i$ if there was another control set $j$ that was both cheaper (i.e., $c_j < c_i$) and had a higher expected UCB score (i.e., $\mathbb E[u_{t-1}([\mathbf x^i, \mathbf X^{-i}])] < \mathbb E[u_{t-1}([\mathbf x^j, \mathbf X^{-j}])] $). Without knowing the costs, such comparisons between control sets cannot be made. Fortunately, this assumption of known costs is mild as they are problem-specific constants usually known in practical scenarios, or at least can be found with little difficulty.
3. $4$ was an arbitrarily chosen constant. In general, we wished to convey that an $\mathcal O(c_j^{-1})$ threshold is likely to work well across different sets of costs and is a robust choice for practitioners that keeps the number of hyperparameters to a minimum. We will make this clearer in the final version of our paper.
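The $\mathcal O(c_j^{-1})$ allocation mentioned in the answer could be sketched as follows (a hypothetical illustration; the function name and example costs are ours, and the constant $4$ is the tunable choice from the authors' experiments):

```python
import math

def adaptive_plays(costs, const=4.0):
    # Number of forced exploration plays for each control set j, scaling as
    # O(1/c_j): cheap sets are explored more, expensive sets at most a few times.
    return {j: math.ceil(const / c) for j, c in costs.items()}

print(adaptive_plays({0: 1.0, 1: 2.0, 2: 8.0}))  # {0: 4, 1: 2, 2: 1}
```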
If you have any further questions, please let us know and we will be happy to answer them.
---
Rebuttal Comment 1.1:
Title: thanks
Comment: I thank the authors for their response. I will keep my score as is, though I do have two quick notes:
- I would not make the (in my opinion strong) claim that having known costs is a mild assumption or costs can be determined easily. In many applications, we might be faced with a very complex cost surface that's not easily learned. However, I'm completely fine with the method being able to handle only scenarios with known costs, as I do agree many applications fall under this setting.
- Interesting point about the constant $4$. Did you try other constants? If not, it might be good to include some discussion on this. | Summary: This work introduces a new black-box function optimization setting where only a collection of the subsets of variables can be optimized while the values of the complement variables for each set are randomly sampled. The authors propose a new BO framework based on GP-UCB, called UCB-CVS, to solve this optimization setting.
Strengths: 1. The problem setting is new and hasn’t been fully investigated before.
2. The algorithm proposed in this work is clearly described and the authors also provide theoretical guarantees.
3. Experimental results show good performance of this new method.
Weaknesses: 1. The extent of the problem setting's relevance and interest to the Bayesian Optimization (BO) community is not apparent to me in this work, despite its novelty. The authors evaluate their methods using two real-world datasets, namely plant growth and airfoil self-noise. However, the control sets, cost values, and probability distributions are still simulated by the authors themselves. Consequently, they have not demonstrated a real-world problem that can be effectively addressed within this setting using the new UCB-CVS algorithm.
2. I am not convinced that the terms in the cumulative regret $R_{T}$ should be multiplied by the cost $c_{i_t}$. The objective function presented in line 138 does not incorporate the cost coefficient, and it seems natural for the cumulative regret $R_{T}$ to also exclude it. Moreover, this would align better with the single-iteration regret mentioned in line 190. The cost information is already encompassed within the value of $T$: for a fixed budget $C$, if we consistently select high-cost sets, $T$ will be small, resulting in insufficient observations; conversely, if we consistently choose low-cost sets, $T$ will be large, but the optimization procedure is more likely to converge to sub-optimal solutions. Therefore, there is no necessity to introduce an additional cost coefficient for penalization purposes.
3. The font size of Figure 2 is too small to be read.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. UCB formula below line 161, why the subscript of $u$, $\mu$ and $\sigma$ is $t-1$ while $\beta$ is $t$?
2. Proposition 4.2, is this result true with high probability or always true?
3. Equation (2), is $M_i$ here the same as $M_i$ in Lemma 4.3?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors don't specifically discuss the limitations of this work. The authors can add a paragraph in their manuscript to discuss the limitations of their work based on the reviewers' feedback.
No significant negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to read and review our paper. Let us address some of your concerns:
**Weakness 1, on application to real-world problems**: We have attempted to demonstrate the general applicability of our algorithm by using multiple cost sets. As explained in Sec. 5, while these cost sets may (at first glance) seem arbitrary, it is the algorithms’ relative performance across these cost sets rather than the absolute performance on a single cost set that allows us to understand the conditions under which particular algorithms perform better or worse. Real-world problems will come with their own cost sets defined by real-world constraints. If the real costs can also be categorized in a similar relative way like the above cheap, moderate, and expensive cost sets, then the results are expected to be similar.
Unfortunately, we do not have the means (e.g., equipment, budget, domain experts, etc) to conduct real-world experiments and obtain these problem-dependent quantities that correspond exactly to some real-world problem. We hope you will understand the limitations we work with.
**Weakness 2, on cost term in cumulative regret**: We believe that this is a difference of perspective: our perspective of the cumulative regret $R_T$ is that an algorithm should try to minimize the bound on $R_T$ for any given value of $T$, which is the original definition of regret in conventional BO that we have tried to adhere to for familiarity. Your redefinition prescribes the following: an algorithm should try to minimize the bound on the average regret $R_T/T$ for any given value of budget $C$. Since the average regret is an upper bound on the simple regret $\min_{t\leq T} r_t$, this definition is perfectly valid as well, though the analysis required to reach the conclusions of this work regarding the impact of different costs may be significantly different from ours and may be more difficult. We believe that our regret notion is more amenable to analysis. Furthermore, even though the objective below line 138 does not incorporate costs, the maximizer (i.e., $\mathbf x^{i_t}$ s.t. $r_t = 0$) coincides for both our regret notion and your proposed regret notion, hence our regret notion is also a suitable choice given that objective.
We opted to make the cost terms explicit in $R_T$ also in order to avoid misinterpretations such as the following: suppose that 1) the regret incurred in a single iteration were defined as $r_t = \mathbb E[{f([\mathbf x^{i^*}, \mathbf X^{-i^*}])}] -\mathbb E[{f([\mathbf x^{i_t}, \mathbf X^{-i_t}])}]$ without the cost term; 2) there are two control sets $j$ and $k$ such that $j$ is much cheaper than $k$, i.e., $c_j \ll c_k$; and 3) that the learner is in the initial exploration stage in the BO procedure such that the partial queries $\mathbf x^{j}$ and $\mathbf x^{k}$ at each iteration are more or less random and $\mathbb E[{f([\mathbf x^{j}, \mathbf X^{-j}])}] \approx \mathbb E[{f([\mathbf x^{k}, \mathbf X^{-k}])}]$. Then, under the above redefinition of regret, the cumulative regret incurred in this exploration stage is the same whether the learner exclusively used control set $j$ or control set $k$. However, clearly, the learner is well-advised to use $j$ for exploration rather than $k$. Under the traditional interpretation that $r_t$ is a measure of how 'bad' the decision taken at iteration $t$ is, making the cost terms explicit naturally incorporates the notion that the penalty for sub-optimal plays is lower if the play was cheap, while also penalizing using the entire budget on sub-optimal plays, regardless of whether those plays are cheap or expensive.
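To make the scenario above concrete, here is a toy numerical illustration (the numbers are hypothetical, chosen only to show the point): during exploration both control sets incur the same expected gap, so the un-weighted regret cannot distinguish them, while the cost-weighted notion penalizes spending the budget on the expensive set.

```python
c_j, c_k = 1.0, 10.0   # control set j is much cheaper than k
gap, T = 0.5, 20       # per-iteration expected gap during exploration, iterations

unweighted = gap * T         # identical whether j or k is played exclusively
weighted_j = c_j * gap * T   # cost-weighted regret if only j is played
weighted_k = c_k * gap * T   # cost-weighted regret if only k is played

print(unweighted, weighted_j, weighted_k)  # 10.0 10.0 100.0
```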
**Questions**:
1. We adopt the notation from Chowdhury and Gopalan (2017). The reasoning is that, at iteration $t$, before deciding the control set and partial query, the learner only has access to the dataset of observations up to time $t-1$. $\mu_{t-1}$ and $\sigma_{t-1}$ are the GP posterior mean and standard deviation given the dataset up to time $t-1$. The sequence $\\{\beta_t\\}_{t=1}^T$ is an algorithm parameter and is fully known at all iterations.
2. Thank you for pointing out this omission, the result is true with high probability. We will correct the proposition to include this in the final paper.
3. Yes, there may be some confusion here as Equation (2) is part of Lemma 4.3. Equation (2) puts a lower bound on $M_i$ that depends on the variances of the probability distributions involved.
**Limitations**:
We have a section discussing limitations in the Appendix.
We hope our response above has adequately addressed your concerns. Please reach out to us if you have any further questions.
---
Rebuttal Comment 1.1:
Comment: I want to thank the reviewers for their response. I have some follow-up comments.
1.
I want to clarify that for real-world problems, it does not mean that the authors should do experiments in the real world. Ideally, however, the authors should give some explanation of why the simulated real-world problems are close to real-world settings. As the authors say in their response, "If the real costs can also be categorized in a similar relative way like the above cheap, moderate, and expensive cost sets, then the results are expected to be similar.", indicating that the authors don't clearly know whether the cost settings (as well as other settings such as the number of control sets and the probability distributions) are close to real-world settings.
However, I also admit that "why this problem is important?" is always a question that will not have a standard answer. Therefore, as reflected in the score, overall I have a positive impression of this work. I also want to know what other reviewers think of this point, as I notice, not just I have some concerns about the real-world settings in the manuscript.
2.
If I understand correctly, the authors point out that one difference between my definition of regret and theirs is that their definition does not have a fixed budget $C$ constraint. If so, then I don't see why, in the case that the authors construct, we would want to use $j$ rather than $k$. If we don't have a budget constraint, it doesn't matter whether we use a cheaper set or a more expensive set. I think the reason the authors need cost coefficients in the cumulative regret is that the cost constraint is missing from the definition, so some other means is needed to incorporate this information into the regret.
Besides, when we talk about cumulative regret, we usually care about sublinear regret, i.e., whether $R_{T}/T$ goes to $0$ as $T$ goes to infinity. Therefore, in that constructed case, during the initial exploration stage, I don't think $k$ is in any meaningful sense "worse" than $j$, because the choice will not affect the convergence behavior.
While I am not very convinced that the authors' definition is more natural and closer to the original definition of regret in conventional BO, I agree that their regret notion is more amenable to analysis and it is still an interesting setting. | Summary: The authors study the problem of Bayesian optimization with cost-varying variable subsets (BOCVS) where in each iteration, the learner chooses a subset of query variables and specifies their values while the rest are randomly sampled. Each chosen subset has an associated cost. The authors analyze how the availability of cheaper control sets helps in exploration and reduces overall regret.
Strengths: The paper is with good structures and is well-written. The problem is interesting and potentially has wide applications. The authors provide a profound theoretical analysis. The experimental results show the benefits of the proposed method.
Weaknesses: Experimental results on a real-world application could make the paper stronger.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
1. How should the control sets be chosen? How should their values be set?
2. The examples in the experiment section are based on simulators. A real-world example could strengthen the paper.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time spent reading our paper and writing this review.
**Questions**:
1. As part of the problem specification in Sec. 4, the learner is given a collection $\mathcal I \subseteq 2^{[d]}$ of control sets indexed by $1, 2, \ldots, m \coloneqq |\mathcal I|$. These are the control sets from which the learner must pick one at each iteration of the BO process. As described in Sec. 4.1, Algorithm 1 chooses a control set and the partial query at each iteration as follows:
1. Compute $g_t \coloneqq \max_{(i, \mathbf x^i) \in [m]\times \mathcal X^i} \mathbb E[u_{t-1}([\mathbf x^i, \mathbf X^{-i}])]$, i.e., the expected UCB of the best control set and best partial query given that control set. Practically speaking, this expectation and all expectations are approximated to arbitrary accuracy using Monte Carlo sampling. All acquisition maximizations are performed with L-BFGS-B with random restarts as is standard for BO acquisition maximization.
2. Collect every control set $i$ that fulfills the condition $\max_{\mathbf x^i \in \mathcal X^i} \mathbb E[u_{t-1}([\mathbf x^i, \mathbf X^{-i}])] + \epsilon_t \geq g_t$ into a set $\mathcal S_1$. These are the control sets whose best expected UCB are $\epsilon_t$-close to the best value $g_t$.
3. Further reduce this set $\mathcal S_1$ to $\mathcal S_2$ by retaining only the control sets with the lowest cost. This step ensures that Algorithm 1 uses cheap control sets when available.
4. Finally, it chooses the control set from $\mathcal S_2$ with the largest expected UCB. This step ensures that, in the case of multiple control sets having the same cost after Step 3, Algorithm 1 chooses the 'best' among them. The partial query is then the one that maximizes the expected UCB given this control set.
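The four-step selection rule above can be sketched in a few lines. The expected-UCB values and costs below are hypothetical stand-ins for the Monte Carlo estimates and control-set costs described in the rebuttal; this is our reading of the procedure, not the authors' code:

```python
def select_control_set(best_ucb, costs, eps_t):
    """Sketch of the four-step selection rule. best_ucb[i] stands in for
    the Monte Carlo estimate of max over x^i of E[u_{t-1}([x^i, X^{-i}])];
    costs[i] is the cost of control set i. Returns the chosen index."""
    g_t = max(best_ucb)                                            # Step 1
    s1 = [i for i, u in enumerate(best_ucb) if u + eps_t >= g_t]   # Step 2
    c_min = min(costs[i] for i in s1)                              # Step 3
    s2 = [i for i in s1 if costs[i] == c_min]
    return max(s2, key=lambda i: best_ucb[i])                      # Step 4

# Set 2 is eps_t-close to the best set 0 but cheaper, so it is chosen.
print(select_control_set([1.00, 0.70, 0.96], [5.0, 1.0, 2.0], eps_t=0.05))  # → 2
```

Note how Step 3 breaks ties toward cheap sets only among the near-optimal candidates, which is what lets the algorithm exploit cheap control sets for exploration without sacrificing the UCB objective.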
2. We wish to point out that the plant and airfoil experiments are based on real-world datasets. The simulators are built on these real-world datasets in an attempt to be as close to a real-world experiment as reasonably possible. Unfortunately, we do not have the means (e.g., equipment, budget, domain experts, etc.) to conduct real-world experiments. We hope you will understand the limitations we work with.
We hope we have answered your questions satisfactorily. If you have any other questions that you wish for us to address, please let us know and we will be glad to respond further.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the clarification, and my questions have been addressed. I keep the score unchanged. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Learning the Efficient Frontier | Accept (poster) | Summary: The authors propose a novel deep learning method, NeuralEF, to tackle large-scale constraint convex optimization problems by formulating them as sequence-to-sequence (SEQ2SEQ) problems. The key idea is to learn the complex relationship using an attention mechanism between financial conditions and optimal portfolio allocation, which is often a time-consuming task in portfolio management. They demonstrate the efficacy of NeuralEF on the Efficient Frontier problem in finance, a problem known for its non-continuous and highly variable nature. In the experiments, the authors utilize a synthetic dataset with varying conditions. The results illustrate NeuralEF's ability to effectively provide accurate portfolio allocations, all the while significantly reducing the computational resources required for large-scale simulations.
Strengths: The originality and advancement of this paper lie in the proposed NeuralEF approach, which formulates a convex optimization problem as a SEQ2SEQ problem. The paper contributes to the development of the field by applying SEQ2SEQ modeling, originally a tool used in natural language processing, to a completely different domain of portfolio optimization. The concept of learning the mapping from financial conditions to portfolio allocation through deep learning has the potential to be applied to a variety of other optimization problems.
Weaknesses: - This work seems to operate under the assumption that expected returns and volatilities are known with certainty. This might be a limiting factor when considering the application of this model in real-world scenarios, where such accurate estimation is challenging.
- The study lacks a rigorous evaluation on real-world financial data. While the synthetic data-based experiments might provide insights into the model's capabilities, the applicability of the model to real-world scenarios remains unclear without empirical validation on real datasets.
- It's also worth noting that modern portfolio approaches, such as risk-based portfolios, often rely less on expected returns due to the inherent difficulty in their estimation. Without comparative studies against such methods, it might be challenging to evaluate the practicality and superiority of NeuralEF.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - Could the authors clarify how the NeuralEF model handles uncertainties in expected returns and volatilities? These are critical aspects in the field of asset management, and their treatment could significantly impact the model's performance and applicability.
- Could the authors consider evaluating the model using real-world financial data? Such evaluations could give more insight into how the model would perform in practice and help to assess its real-world applicability.
- How would the proposed model compare with other modern portfolio strategies, such as risk-based portfolios, that don't rely heavily on expected returns due to the difficulty in their accurate estimation?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: - The paper does not address the assumptions made about the certainty of expected returns and volatilities. These assumptions could significantly impact the model's performance and applicability in real-world settings where such values are typically estimated with uncertainty.
- The practical utility of the proposed model remains somewhat uncertain without empirical validation using real-world financial data and without comparison with other portfolio approaches like risk-based portfolios, which are often used due to their reduced reliance on expected returns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # W(1) & Q(1):
Both EQ(3) and NeuralEF rely on known expected returns, volatilities, and correlation matrices for optimization, yet they can be used in tandem with methods for handling uncertainty, such as Monte Carlo (MC). In the global author response, we provide two practical examples illustrating the relevance of NeuralEF to practical applications and how we can account for input uncertainty. However, it's important to emphasize that our paper's focus is on accelerating the solution of EQ(3). NeuralEF is not a generalized framework for other portfolio-based methods such as the Minimum Variance Portfolio (EQ(1)), the Risk Parity Portfolio, etc. Nonetheless, extending our work to encompass diverse portfolio allocation problems is intriguing, as these applications can be reformulated as convex optimization problems. NeuralEF effectively accelerates solving the Markowitz model [0], a cornerstone of modern portfolio theory with multiple financial applications. We provide two examples of application in the global author response in the section "Example of real-life applications" and would be happy to mention one of them in the introduction and related work of a revised version of the manuscript.
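As a point of reference for readers unfamiliar with the Markowitz model [0], a minimal textbook computation (the closed-form unconstrained minimum-variance portfolio) is sketched below. This is an illustrative example only, not the paper's EQ(3), whose exact constraints are not reproduced here:

```python
import numpy as np

def min_variance_weights(cov):
    """Closed-form unconstrained minimum-variance portfolio:
    w = inv(Cov) @ 1 / (1' inv(Cov) 1). A textbook Markowitz-style
    computation, NOT the paper's constrained EQ(3)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)  # avoids forming the explicit inverse
    return w / w.sum()

cov = np.array([[0.04, 0.0],   # hypothetical 2-asset covariance matrix
                [0.0, 0.01]])
print(min_variance_weights(cov))  # → [0.2 0.8]
```

With uncorrelated assets, the weights are inversely proportional to the variances, so the lower-variance asset receives the larger allocation.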
# W(2) & Q(2):
Assessing our model's performance on real financial data is useful for "what-if" scenario evaluations. We opted for synthetic data testing for the following reasons: (1) Real-world financial data exhibit biases, conforming to established financial narratives, which undermines unbiased evaluation due to limited coverage of alternative financial events: if we sample two assets in the same period, they both evolved under the same macro dynamics. (2) Variances in historical data length, correlation matrix computation, and volatility measures introduce complexities into allocation testing by altering the optimization inputs, posing challenges for unbiased assessments of NeuralEF. Our synthetic data tackles these complexities by simplifying historical data selection and risk measures, while also encapsulating extreme real-world scenarios from [3-4]. It covers measures of up to 200% volatility and return, providing a comprehensive outlook on model accuracy across all scenarios, including recent market crashes. We also target discontinuities by having 84% of the test data with optimization inputs near the discontinuity region. We can provide further details on these aspects in Section 4, with additional details about our synthetic data generation process, as suggested in a different reviewer response.
# W(3)& Q(3):
As pointed out in W(1), NeuralEF is not meant to be a trading strategy, nor to solve equations other than EQ(3) by itself, and we can state this more clearly in the introduction and the abstract. A comparative study against different modern portfolio approaches on historical data, for instance, would allow us to answer which strategy works best, but this goes outside the scope of our paper. We provide a comparative study of NeuralEF against EQ(3) directly, to assess whether the acceleration achieved by NeuralEF is sufficiently accurate, which we showed in Table 3 and Figure 6. Additional details of this study are provided in the global author response within the section "Ablation study w/ and w/o DGAR", demonstrating its speed and accuracy advantages. The practicality and superiority of NeuralEF over running EQ(3) then become a question of (1) additional throughput, (2) ease of use of this methodology, (3) accessibility, and (4) the environmental cost of running the calculation.
[0] https://en.wikipedia.org/wiki/Markowitz_model
[1] Gatheral, Jim, and Antoine Jacquier. "Arbitrage-free SVI volatility surfaces." Quantitative Finance 14.1 (2014): 59-71.
[3] Mazur, Mieszko, Man Dang, and Miguel Vega. "COVID-19 and the March 2020 stock market crash. Evidence from S&P1500." Finance Research Letters 38 (2021): 101690.
[4] Andersen, Torben G., et al. "The distribution of realized stock return volatility." Journal of financial economics 61.1 (2001): 43-76.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for taking the time to address my questions and for providing additional experiments. I appreciate the effort and clarification provided. After reading all reviews and rebuttals, I would like to keep my score. | Summary: The paper presents a sequence-to-sequence (SEQ2SEQ) deep learning model, called NeuralEF, to solve the Efficient Frontier (EF) problem in portfolio optimization. The authors reformulate the EF problem as a SEQ2SEQ task and train a deep neural network (DNN) to approximate the convex optimizer. The proposed method aims to provide a fast and accurate approximation of the EF while handling variable-length inputs and robustly respecting linear constraints. A Dynamic Greedy Allocation Rebalancing (DGAR) module is introduced to ensure that the model's forecasts remain within the feasible domain and respect the constraints.
Strengths: The paper proposes a novel deep learning-based approach to tackle the EF problem, leveraging the powerful capabilities of SEQ2SEQ models.
The DGAR module is an interesting addition that ensures the feasibility of the solutions generated by the DNN.
Weaknesses: Lack of innovation in employing sequence-to-sequence models for convex optimization problems:
While the paper presents the NeuralEF method as a novel approach for solving convex optimization problems using sequence-to-sequence models, it is important to acknowledge that existing research has already explored similar ideas. For instance, sequence-to-sequence models have been applied to combinatorial optimization problems, and transformers have been used for embedding [1,2]. To strengthen the paper, the authors should discuss the novelty of their approach in light of these existing works and emphasize the unique contributions of their method to the field.
Inadequate demonstration of the effectiveness of the DGAR module:
The effectiveness of the Dynamic Greedy Allocation Rebalancing (DGAR) module, which is a key component of the NeuralEF method, is not thoroughly demonstrated or compared with the model's performance without it. This raises questions about the impact of the DGAR module on the model's performance and its ability to handle discontinuous behavior and diverse constraint types. To address this weakness, the authors should provide a more detailed analysis of the effectiveness of the DGAR module, including an ablation study or comparison with the model without DGAR.
References:
[1] Pomo: Policy optimization with multiple optima for reinforcement learning. Advances in Neural Information Processing Systems 33 (2020)
[2] Learning to iteratively solve routing problems with dual-aspect collaborative transformer. Advances in Neural Information Processing Systems 34 (2021)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1 The formatting template used in this paper is incorrect; it should follow the NeurIPS 2023 guidelines instead.
2. Insufficient justification for the claimed speedup and practical significance:
The paper claims a significant speedup of the NeuralEF method over the baseline optimizer, but this speedup is not thoroughly justified. Furthermore, the practical significance of the speedup is not well addressed, considering the potential trade-offs between solution quality and training requirements. The authors claim that their NeuralEF method is 396 times faster than the baseline optimization method. However, if this speedup takes into account the 25x acceleration achieved by using a GPU, as mentioned in line 116, the actual speedup of the NeuralEF method compared to the baseline technique is approximately 15.84 times (396 / 25). It is important to consider whether this speedup is significant enough to justify the use of the proposed method over traditional techniques, given that this network is already trained using 1 billion samples. Moreover, there are concerns about the solution quality of the NeuralEF method, which might be worse than the baseline method when calculating the speedup.
3. Lack of comprehensive comparison with other optimization techniques:
As shown in Table 3, the accuracy for Asset case 12 is below 84% across all three metrics. Without any comparison to other methods, it is difficult to determine whether this level of accuracy is good, and whether it is sufficient for practical applications.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Inadequate demonstration of the effectiveness of the DGAR module:
The effectiveness of the Dynamic Greedy Allocation Rebalancing (DGAR) module, which is a key component of the NeuralEF method, is not thoroughly demonstrated or compared with the model's performance without it. This raises questions about the impact of the DGAR module on the model's performance and its ability to handle discontinuous behavior and diverse constraint types. To address this weakness, the authors should provide a more detailed analysis of the effectiveness of the DGAR module, including an ablation study or comparison with the model without DGAR.
Poor out-of-domain generalization:
The NeuralEF method exhibits poor performance when applied to out-of-domain examples, which limits its potential applicability in real-world scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # W(1):
[1-2] and our work explore input-set optimization, each with distinct yet interconnected goals. Combinatorial optimization tackles discrete choices for optimal arrangements, while the EF addresses continuous portfolio-weight selection for a risk-return equilibrium. PPO [2] and POMO [1], powerful RL algorithms, could potentially train NeuralEF for EF optimization, as they can for problems where retrieving training labels is difficult. However, unlike [1-2], the choice between RL and supervised training (ST) hinges on experimental insights, given EF's low label-retrieval cost compared with the NP-hard problems addressed in those studies. POMO and PPO do not guarantee the satisfaction of hard constraints, a gap we bridge with DGAR. Uniting and evaluating these methods for convex optimization, such as the EF, is interesting as a way to explore how to expedite optimization problems with heterogeneous constraints, though the contexts favoring RL or ST remain unclear. We will acknowledge these nuances in our conclusion and references.
# W(2) and limitations:
We refer to our global author response, section "Ablation study w/ and w/o DGAR", to address this point, where we further elaborate on DGAR's effectiveness. On OOD generalization, training NeuralEF on a broader input domain beyond Table 1 can encompass more extreme return and volatility scenarios. For cases with assets having over 200% returns or volatilities (unlikely in reality), NeuralEF can revert to EF optimization. Hence, NeuralEF's OOD accuracy limitation applies only in such scenarios, which, considering [3-4], did not occur across multiple financial regimes, including recent market crashes.
# Q(1):
We used the template on Overleaf authored by the NeurIPS Program Committee. If there are any formatting issues, please point them out explicitly, and we will gladly fix the errors.
# Q(2):
We revamped throughput experimentation, conducting ablation studies on: (1) preprocessing at inference under certain conditions, (2) DGAR's inference impact, (3) batch size vs. multi-threaded EF execution, and (4) comparisons with Torch-EF and NeuralEF*. Results validate NeuralEF's substantial speedup over the baseline optimizer. New achievable throughput data further confirms acceleration for both methods, offering practical trade-off insights. Including these results in the paper clarifies speedup metrics and prevents oversimplification of the X-fold improvement mention, allowing for a more nuanced understanding:
a) Pytorch-EF achieves lower throughput than NeuralEF on an equivalent GPU, necessitating larger batches to surpass the concurrent multi-threaded CPU baseline throughput. Employing a GPU-accelerated approach for many independent synchronized MC simulations is beneficial, though often infeasible due to an insufficient number of parallel simulations. For instance, on an A100 GPU, Pytorch-EF requires 1.2 million optimization batches to reach its maximum throughput of 111479 evals/sec, whereas NeuralEF achieves 343760 evals/sec with 80k optimizations and 401641 evals/sec without cleaning requests.
b) In fig.1 of the attached PDF, we observe the impact of preprocessing and DGAR on achievable throughput, even on relatively affordable hardware. Efficient EF computation can be further scaled using varied frameworks: CPUs optimized for matrix multiplication, like Intel Xeon Platinum 8480+, ideal for small requests (<1K), and GPUs for at least 1K requests, justifying the proposed method over traditional approaches. Capital derivative pricing, employing MC simulations of EF for single valuation, meets the GPU requirement with simulations often consuming substantial compute time. Accelerating them with minimal constraints (e.g., GPUs) yields significant time gains.
# Q(3):
Assessing accuracy below the 84% level is nuanced, since these constraints are not explicitly enforced in NeuralEF. In safety-critical scenarios demanding strict adherence, NeuralEF suits cases without a ζmax requirement or Vtarget deviations. For MC simulations targeting E[R], E[V], and E[x], NeuralEF is suitable. These three metrics measure an optimality gap, where NeuralEF predictions might not be the most "optimal" solution (similar to [1]). Ranking precision responds to ε deviations between assets, often with limited practical impact. To our knowledge, no other work robustly handles heterogeneous linear constraints for the EF, making method comparisons a study of accuracy-constraint trade-offs. We address this trade-off in part in the global author response within the section "Ablation study w/ and w/o DGAR".
[3] Mazur, Mieszko, Man Dang, and Miguel Vega. "COVID-19 and the March 2020 stock market crash. Evidence from S&P1500." Finance Research Letters 38 (2021): 101690.
[4] Andersen, Torben G., et al. "The distribution of realized stock return volatility." Journal of Financial Economics 61.1 (2001): 43-76.
---
Rebuttal Comment 1.1:
Comment: Thank you for clarifying the effectiveness of the DGAR component and the novelty of your method. I find this discussion quite helpful, as it provides a clearer understanding of the paper's contributions.
Regarding the formatting template, I noticed that on the first page, the footnote indicates 2022. I believe the correct template should generate a PDF with a footnote displaying 2023.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. We're pleased to learn that the clarification on the DGAR component has contributed to a better understanding of our paper's contributions and the novelty of our approach. We believe that integrating the discussion and the figures from the rebuttal into the allowed additional page for the paper-ready version will strengthen the paper.
Specifically, we propose to accentuate the following to provide an adequate demonstration of the effectiveness of the DGAR module and highlight the innovation of our approach for the EF convex optimization problems:
[A]: Display the results from the model's ablation study exemplifying the impact of model component on throughput (Fig. 1), DGAR scalability (Fig. 2c), an example of discontinuities for volatility (Fig. 2b), and highlighting the effect of DGAR on accuracy by displaying the distributional accuracy of the model with and without DGAR either within the appendix or in the additional page if space allows.
[B]: We will acknowledge the PPO [2] and POMO [1] methods, elucidating their relevance to our paper in both the related works section and the conclusion with discussion made in our answer.
[C]: We will provide insights into the limitations of NeuralEF's OOD accuracy in practical scenarios, clarifying that such constraints are rare occurrences in practical scenarios as stated in our original response.
[D]: We will enhance the discussion on the synthetic data generation in the experimental section, specifying that we employed an MC sampling scheme to uniformly cover the entire domain of table 1. Additionally, we will emphasize that our synthetic datasets mimic real-life distributions for volatility and correlation inputs, encompassing rare and extreme scenarios targeting optimization discontinuity areas (84% of the samples having at least 2 optimization inputs within ε proximity of each other). If space permits, we will further elaborate on the rationale for utilizing synthetic data to evaluate our model, as addressed in the response to reviewer "oNAD."
[E]: We will comment on the optimality gap stated in section "Q(3)" of our answer to your review and elaborate on how our approach compares to the ground-truth optimal result found with eq(3). We will highlight that NeuralEF is sufficient for practical applications in cases without strict requirements on ζmax and Vtarget deviations and is easily applicable in MC simulation settings targeting E[R], E[V], and E[x], based on the result from fig 4 in the paper. We will discuss the nuance of the optimality-gap metrics for the volatility and the class constraints (both with respect to their sensitivity and their strict use in practical applications), highlighting that these metrics are sensitive to an ε deviation. To enhance this discussion, we can plot the distributional error on the total class allocation and on constraint violations as a function of absolute error per number of assets and classes. We can do the same for the volatility constraint if you deem it to be required.
[F]: We will provide a real-case application of NeuralEF highlighting a domain of application where our model would have an impact by using the first example stated in the global author response in sec "Example of real-life applications".
[G]: We will comment further on the speedup of our method using the results shown in fig1 of the attached PDF to account for multiple references standpoint (GPU vectorized implementation, vs single-thread process vs multi-thread process) based on the point mentioned in section "Q(2)". This should allow for a more nuanced discussion in the experimental result that justifies the use of the proposed method instead of traditional techniques, i.e. in cases where multiple concurrent evaluation of eq(3) are needed.
Regarding the formatting issue, we sincerely appreciate your keen eye for detail in pointing it out. We can correct this in a revised paper-ready version. Before writing this response, we tested that our manuscript can be easily ported to the 2023 template (https://www.overleaf.com/latex/templates/neurips-2023/vstgtvjwgdng) and we confirm that it is compatible: our current manuscript still fits within the nine-content-page limit, including all figures and tables, without the suggestions stated above.
Please let us know if there are any other required additions to improve the score of the paper. | Summary: This paper considers the problem of solving efficient frontier (EF), a fundamental resource allocation problem where one has to find an optimal portfolio maximizing a reward at a given level of risk. Traditionally, this optimization is solved by quadratic optimization techniques.
This paper introduces NeuralEF, a sequence-to-squence formulation, attempting to make fast neural approximations of EF. The purpose is to robustly forecast the result of the EF convex optimization problem with respect to heterogeneous linear constraints and variable number of optimization inputs.
By experimental results, the proposed NeuralEF is a viable solution to accelerate large-scale simulation.
Strengths: - The paper is well-written with clear notations and definitions.
- In the problem of solving for the EF, obtaining robust and smooth results is a challenge given its discontinuous nature. The proposed method sheds light on how data-driven methods can help smooth the result in an organic way.
- The methodology, especially the mathematical programming techniques, including formations in eq. 4-10 and the solver implementation details in Sec 4.2 are clear.
Weaknesses: - Out-of-sample result is not discussed. It would be interesting to check the out-of-sample results in a distributional manner, to highlight the behavior bias of the proposed method in practical cases.
- Out-of-domain generalization is briefly discussed in Sec. 4.3. However, the EF depends on the correlation regime and on shocks, so it makes sense to discuss the solution accuracy under different regimes.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - What behavior bias characteristics does the proposed NeuralEF have?
- In Sec. 4.1, what does the solution accuracy look like in different regimes?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: no concern.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # W(1-2)& Q(2):
Out-of-sample results were not discussed in detail, mostly due to the page limit. Safe use of NeuralEF could be ensured by training it on a larger domain than the one shown in Table 1 (most likely over a longer time period and with more data) if a larger input domain is needed. Otherwise, one can revert to the base optimization in the rare cases where the input goes out of domain, e.g., where assets go above 200% returns or 200% volatilities, both highly unlikely. With respect to correlation regimes and shocks, we can identify one area where NeuralEF's accuracy would degrade significantly: a regime where assets oscillate in the inflection area of the optimization, i.e., when two attractive assets have returns/volatilities ε-close to each other and/or near 0. If the returns are of the same sign and ε-close to 0, then the scale-invariance trick mentioned in 4.3 can be used to mitigate potential model forecast failures. Because NeuralEF has been trained on synthetic data, we haven't observed behavior bias arising directly from particular regimes, aside from the behavior shown in fig. 6, where there are non-continuous changes in allocation in EQ(3). We show in fig. 4c) the distributional error on the OOD data, similar to fig. 4 in the paper. We can also measure the optimality gap of the constraints and provide it in the appendix of a revised version of the manuscript, if desired.
# Q(1):
Regarding DGAR's behavioral characteristics, there is a possibility that DGAR propagates errors to other assets due to a bad selection of the K most important assets, if the constraints allow it to do so. For instance, consider the case of three assets where the true allocation is x=[0.5, 0.3, 0.2], the estimated allocation before DGAR is x^=[0.2, 0.7, 0.3], the maximum asset allocation is xmax=[0.5, 0.6, 1.0], and we have a full allocation setup c1max=αmax=αmin=1 with all assets belonging to the same class. Then DGAR will clip the allocation at the first step, giving x'=[0.2, 0.6, 0.3], and we will get the ordering K=[1,2,0], which will force the allocation back to x'''=x''=[0, 0.6, 0.4]. This allocation has a higher MAE of 0.33 vs. 0.27 for the original prediction. We haven't spotted areas of the input domain with more errors when forecasting x', but we note that some of the larger errors arise from this issue. Hence, it would be interesting to extend this approach so that DGAR can account for its confidence in the allocation or learn an optimal ordering policy.
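The numerical walk-through above can be reproduced with a small sketch. The clip-then-greedy-refill procedure below is our reading of the example, not the paper's exact DGAR implementation, and the ordering rule (rank by clipped allocation) is an assumption:

```python
import numpy as np

def dgar_sketch(x_hat, x_max, total=1.0):
    """Hypothetical sketch of the rebalancing discussed above: clip the
    raw forecast to per-asset caps, rank assets by clipped allocation
    (the ordering K), then greedily refill up to `total`, largest first."""
    x_clip = np.minimum(x_hat, x_max)   # first step: clip to the caps
    order = np.argsort(-x_clip)         # K: most important assets first
    out = np.zeros_like(x_clip)
    remaining = total
    for i in order:
        out[i] = min(x_max[i], remaining)
        remaining -= out[i]
    return x_clip, order, out

x_clip, order, out = dgar_sketch(np.array([0.2, 0.7, 0.3]),
                                 np.array([0.5, 0.6, 1.0]))
# The greedy refill lands on [0, 0.6, 0.4]: MAE ~0.33 against the true
# allocation [0.5, 0.3, 0.2], worse than the ~0.27 MAE of the raw forecast.
```

The sketch makes the failure mode concrete: once asset 0 is ranked last, the refill exhausts the budget before reaching it, even though the true allocation puts the most weight there.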
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. They have addressed my questions.
I'd appreciate it if the authors can explicitly mention the discussed concerns and limitations (W(1-2), Q(2), Q1) in a final version.
It is optional to demonstrate the optimality gap of the constraint in the appendix of a revised version of the manuscript.
I will maintain my rating. | Summary: In this paper, the authors propose NeuralEF, a deep neural network (DNN) approach that approximates what is known as the ``efficient frontier'' (EF) in economics. To this end, they use a stacked transformer encoder architecture, as well as pre-processing steps (such as ordering assets) and a greedy algorithm (DGAR) that uses dynamic programming to come up with an asset allocation. The main idea behind NeuralEF is to formulate the EF optimization problem as a sequence-to-sequence (SEQ2SEQ) problem. This allows them to use self attention inside a large language model (LLM) framework. The authors claim that this allows them to understand the relationships between the optimization inputs. They provide experimental results using NeuralEF on a convex optimization problem from generated data.
Strengths: 1. The paper is on an interesting topic: applying self-attention and LLMs to economics problems. This is important because many economic problems cannot be solved in high dimensions with standard tools and analysis from economics.
2. Despite using a DNN, the authors are conscientious of their carbon footprint and compute resource usage. This makes their proposed approach more environmentally friendly as well as accessible to those with limited compute resources.
3. The authors wrote a vectorized implementation to solve multiple EF problems at once. Their proposed methodology is computationally efficient.
4. The authors augment existing DNN approaches by introducing separate modules that perform pre-processing and a dynamic greedy allocation module to respect constraints.
Weaknesses: 1. Some claims by the authors seem not to be substantiated. For example, they claim that their approach allows them to understand the relationships between the optimization inputs. Yet, there is little discussion of these relationships in the paper, and it is also unclear how their experimental results shed light on this area. The claim that their approach is robust to discontinuities resulting from large switches in assets also appears unsubstantiated.
2. NeuralEF has many moving parts and thus is difficult to understand and to use. It is unclear what the contribution of each component is. There are some details missing in the explanation of how NeuralEF works (e.g. how the initial allocation is chosen).
3. Beyond being able to leverage recent advances in LLMs, it is unclear why a SEQ2SEQ encoding of the problem makes sense in the context of solving the EF problem. Intuitively, multiple portfolios are equally efficient and there is no natural ordering of the assets in general.
4. The significance of the experimental results is unclear to me because the data generation process does not appear to be explained in the paper.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. What is an intuitive explanation of why it makes sense to sequentialize the assets? There is no natural ordering in a portfolio of assets and so, would you need to worry about making your solutions invariant to reordering?
2. What is the contribution of the main components of NeuralEF to the overall performance? For example, how much does the DGAR component affect the accuracy attained and the compute time needed? How much does the preprocessing contribute to the overall performance?
3. How does DGAR scale to a larger number of assets? Even with $O(N\log N)$ complexity, this part seems costly for a sizeable portfolio.
4. Why include a class constraint when DGAR seems to just ignore it?
5. Does the result of NeuralEF depend on the initial asset allocation chosen?
6. How is the data generated? What is the function used to generate the data? What kind of noise is added to the data?
7. Why does the training set need to be so big (i.e. a billion examples)?
8. Why is the data generated for the experiments convex? Are there convexity assumptions or requirements for NeuralEF?
9. How does NeuralEF and the results in the paper contribute to understanding the relationship between optimization inputs to the EF problem?
10. The authors claim that discontinuities due to switching between assets do not impact the resulting portfolio return. Is this robustness to discontinuities also observed for the resulting volatility of the portfolio?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # W(1)&Q(9):
We meant that self-attention helps the model understand the relationships between inputs. The relationships underlying EF are well understood. An expansion in the appendix can detail these relationships: e.g., assets with higher expected returns and lower risks yield higher weights, leading to allocation shifts with small changes in returns or volatility around 0; the correlation matrix influences diversification benefits, shifting the efficient frontier; constraints like maximum asset allocation impact the domain of the solution by truncating the attractivity of certain assets (as in Fig. 3), etc. Regarding robustness in the presence of discontinuities, Figures 5-6 of the paper and Figure 2(b) of the global author response demonstrate a concrete example of our model in the presence of discontinuities. The test set was also designed to incorporate these discontinuities, with 84% of the samples having at least 2 optimization inputs within ε proximity of each other. Thus, we believe that Table 3 of the paper accounts for the challenging optimization region, because most of the test samples lie within the discontinuity region of the EF problem. We can point out this detail in the appendix.
# W(2)&Q(2):
We redirect the reviewer to the global author response in sec. "Ablation study w/ and w/o DGAR:" for answers to the reviewer’s questions and an explanation of the contribution of each part. We also provide an ablation study of the components motivating their contribution both in terms of accuracy and throughput.
# W(3)&Q(1):
Employing SEQ2SEQ to approximate EF has a dual rationale: EF's optimization inputs grow linearly with N (except the correlation, which has quadratic growth), and ordering optimization inputs on a standardized-length basis (as illustrated in Fig. 2 of the paper) allows us to treat the inputs as a set. Scaling-wise, solving a SEQ2SEQ problem is more resource-efficient than running the optimization itself. Since portfolio assets lack inherent ordering, any permutation of input tokens and output allocations is equivalent. Ensuring NeuralEF's invariance to reordering would enhance inference speed. However, we found that ordering tokens within a sequence based on asset returns with [1.1] accelerates training convergence and boosts accuracy, aligning with insights from https://arxiv.org/abs/1511.06391.
# W(4)&Q(6-7):
We used an MC sampling scheme to cover the entire domain (Table 1) uniformly. We made all synthetic datasets mirror real-life distributions for the volatility and correlation inputs, including rare and extreme scenarios such as those discussed in W(1), and explicitly targeted the area of the optimization where the discontinuities are. We can present examples of the generated data in the appendix. Figure 2 of the attached PDF displays the impact of training-set size on accuracy. We have observed that (1) smaller sets have lower forecast quality at inflection points and (2) larger sets enhance robustness in a wide variety of scenarios. Both small and large datasets can still achieve good MAE, but more data helps model robustness. Given the relatively low cost of generating training data, we went with ~1B samples.
# Q(3) & Q(4):
DGAR doesn't encounter scaling problems, yet the number of inputs sent to the GPU may be challenging. For instance, with 1K assets, the dimensions to be sent reach a total of ~503k (mostly correlation inputs). Although there is no bottleneck in terms of complexity to solve the 1k-asset portfolio under Eq. (3), DGAR's throughput varies with asset count (10, 100, 1000, 10000) and batch sizes (1 to 20000), as depicted in Fig. 2(c). Regarding constraints, we haven't found a way to make DGAR respect both the maximum class and maximum asset allocation constraints. To address this in the evaluation, we introduced metrics to measure whether the class constraints and the volatility constraint were respected, allowing a direct comparison against the convex optimization of Eq. (3) and shedding light on our approach's optimality gap. We are happy to point out this detail in Sec. 4.
# Q(5):
NeuralEF solves eq(3) without needing an initial allocation. Future work may consider an initial allocation and transaction costs with rebalancing constraints.
# Q(10):
Fig. 2(b) in the attached PDF illustrates the impact of inflection points, using the same example as Fig. 6 of the paper, where increasing the 10th asset's returns causes a jump in the allocation and affects the resulting volatility. The optimal portfolio's volatility shifts with varying volatility levels, oscillating between 0.340% and 0.374%, which is below the 95th quantile in terms of accuracy.
---
Rebuttal 2:
Comment: Many thanks to the authors for their detailed response and additional experiments. I appreciate the clarification on why self-attention is needed. I will increase my score as I found the discussion and additional studies in the general response on DGAR and the preprocessing components helpful.
---
Rebuttal Comment 2.1:
Comment: Thank you again for your thoughtful feedback and consideration of our response and additional experiments. Your decision to increase the score is greatly appreciated. | Rebuttal 1:
Rebuttal: First, we would like to thank all reviewers for their appreciation of our paper and their valuable questions and comments. We answer all questions and provide new experimental results enhancing both the clarity of the paper and the experimental section. We would be happy to address all these aspects by stating the element of our responses in a revised paper-ready version and in the appendix to improve the paper.
# Summary of points and questions
We summarize the main points of our response from the questions and limitations stated in the review:
- Clarification on the EF problem, relationships between inputs.
- Clarification on the synthetic data generation and rationale for both training and test
- Relation with other RL works
- Ablation study of the NeuralEF components on accuracy and scaling and robustness on discontinuities
- Revamped throughput experimentation showing a more detailed picture of the speedup
- Optimality gap discussion & soundness of the experimental section w.r.t to accuracy of NeuralEF in real-case application and likelihood of in domain vs OOD accuracy.
Some of the questions and weaknesses mentioned express similar concerns. We have decided to discuss them in the global response to all reviewers to avoid repeating points across reviews. We explicitly mention in the official comment of each review whether a weakness (W) or a question (Q) is answered in this section.
# Ablation study w/ and w/o DGAR and preprocessing components:
NeuralEF comprises three parts. [1] preprocesses by [1.1] sorting requests by asset returns, [1.2] reducing optimization-input ambiguity (Appendix B), and [1.3] projecting for SEQ2SEQ. [2] is a Bidirectional Encoder Representations from Transformers (BERT) model that solves EF as SEQ2SEQ, with [3] DGAR enforcing the constraints. [1] and [3] support accurate forecasting. [1.1] and [1.2] are optional for training if accounted for in the training data, but they help at inference to ensure NeuralEF's sorting requirement and to clean requests of some optimization-input ambiguities. Fig. 3 in the attached PDF illustrates an example of an ambiguity: the 10th asset is associated with class (c2), of which it is the only member. By changing the values of the $\\zeta_{c_2}$ and $x\_1^{\\text{max}}$ constraints so that they become ambiguous relative to one another, we show how accuracy deteriorates. This motivates the cleaning module. Figure 3(a-b) in the attached PDF displays the accuracy of NeuralEF models (w/ and w/o DGAR) trained for 5 days at a fixed learning rate of 5.5e−5, revealing better MAE with DGAR. DGAR also exhibits a reduced optimality gap in constraint adherence (Figure 4(a-b)), further motivating its use. The throughput in Fig. 1 shows that the cleaning steps are computationally inexpensive at inference.
Due to the page limit and the time constraints of the author response period, a comprehensive ablation study considering various training data types (sort-returns/shuffled-assets; clean/no-clean) and the usage of the preprocessing components during training cannot be presented, both for training and for data generation. This would require us to generate ~3B points and train multiple variants of the model for 5 days each to be consistent with the other models presented in the ablation study. Note that the throughput reported in Fig. 1 does not account for model optimizations such as [A], which could increase the throughput of the NeuralEF approach further.
# Example of real-life applications:
We provide concrete application in real-world scenarios where NeuralEF can be used:
Ex.1) Pricing financial derivatives tied to an asset basket, which requires estimating E[R], E[V], and E[x] from optimal portfolios to value options at different maturity times. Through Monte Carlo (MC) simulation using geometric Brownian motion, input uncertainty across time can be considered. Asset spot prices "S" can be simulated using the formula Sᵢ₊₁ = Sᵢ · exp((interest_rateᵢ − σᵢ²/2) · Δt + σᵢ · √(Δt) · Zᵢ), where Δt is the time difference between steps i+1 and i, and Zᵢ is a standard normal sample. This setup generates diverse paths under varying conditions, accommodating events like structural breaks in financial crashes. Calibration of the volatility aspects can use option data and SSVI methods known in quantitative finance [1].
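The path-generation formula in Ex.1 can be sketched as follows; this is a minimal illustration with constant rate and volatility, and the parameter values are made up:

```python
import math
import random

def gbm_path(s0, rate, sigma, dt, n_steps, rng):
    """Simulate one geometric-Brownian-motion spot path:
    S_{i+1} = S_i * exp((r - sigma^2 / 2) * dt + sigma * sqrt(dt) * Z_i),
    where Z_i is a standard normal sample."""
    path = [s0]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        path.append(path[-1] * math.exp((rate - sigma ** 2 / 2) * dt
                                        + sigma * math.sqrt(dt) * z))
    return path

# One-year daily path for a single asset (illustrative parameters)
path = gbm_path(s0=100.0, rate=0.03, sigma=0.2, dt=1 / 252,
                n_steps=252, rng=random.Random(0))
```

Repeating this over many seeds and per-asset parameters yields the diverse scenario paths mentioned above.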
Ex.2) Quantifying the impact of forecast uncertainty on portfolio allocation: NeuralEF can be applied to historical data with multiple forecast degradations of the ground truth to see how errors in the estimation of the optimization inputs impact portfolio performance relative to the most optimal allocation. This setup allows one to back-test existing and proposed trading strategies reliant on expected returns and volatilities, highlighting suboptimal areas.
We are committed to incorporating the answers to your questions and comments into the manuscript. This will significantly strengthen the paper, ensuring that we (1) highlight the novelty of NeuralEF, (2) show its impact on portfolio allocation settings and its potential impact on other optimization problems with heterogeneous constraints; and finally (3) make the evaluation convincing and accessible, both in terms of resources needed and reproducibility.
[A] https://dl.acm.org/doi/10.1145/3575693.3575702
Pdf: /pdf/a83b39af1aaeef21efe890b2bf32f4a5ef625ecb.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Mask Propagation for Efficient Video Semantic Segmentation | Accept (poster) | Summary: This paper presents a mask propagation method, MPVSS, for video semantic segmentation (VSS). MPVSS uses Mask2Former to obtain mask predictions and queries from the key frame. Then, a motion encoder extracts pixel-wise motion from the key frame and its adjacent frame. The queries from the key frame are used to decode query-wise motion features. A flow head generates query-based flow maps, and binary mask predictions for the adjacent frame are generated by applying these flow maps to the binary mask predictions of the key frame. Semantic segmentations of non-key frames are produced by matrix multiplication between the key frame's query classification output and the binary mask prediction for the adjacent frame. A detailed ablation study validates each component. Comparisons between MPVSS and state-of-the-art methods on VSPW and Cityscapes demonstrate good performance and efficiency of MPVSS.
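The matrix-multiplication step described at the end of this summary can be sketched as follows; this is a NumPy illustration of generic Mask2Former-style semantic inference (the shapes and toy inputs are made up, and it is not the submission's code):

```python
import numpy as np

def semantic_map(class_logits, mask_logits):
    """Combine per-query class scores (Q x C) with per-query binary mask
    logits (Q x H x W) into a per-pixel semantic prediction (H x W)."""
    e = np.exp(class_logits - class_logits.max(axis=-1, keepdims=True))
    class_probs = e / e.sum(axis=-1, keepdims=True)         # softmax over classes
    masks = 1.0 / (1.0 + np.exp(-mask_logits))              # sigmoid per pixel
    scores = np.einsum('qc,qhw->chw', class_probs, masks)   # class score per pixel
    return scores.argmax(axis=0)                            # winning class per pixel

rng = np.random.default_rng(0)
seg = semantic_map(rng.normal(size=(100, 19)),    # e.g. 100 queries, 19 classes
                   rng.normal(size=(100, 8, 8)))  # low-res masks for illustration
```

For non-key frames, the same combination is applied with the key frame's class scores and the warped binary masks.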
Strengths: This paper is well-written and easy to follow.
Motivation leveraging motion estimation to reduce the computation of VSS is sound.
The proposed method of propagating the key frame’s query to the motion features is interesting and novel.
The experiments are well-designed, and the results are convincing.
Weaknesses: The key frames are selected at fixed intervals. As a result, the method may not fully address the mentioned redundancy problem.
The efficiency of the proposed method should be discussed in more detail. For instance, MPVSS uses FlowNet to extract motion features between two frames instead of relying on a backbone network to extract frame-wise features.
In Fig. 3(b), the y-axis label "TLOPs" seems to be a typo.
There is not much information about training time or about any particular training strategies used in the experiments.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see Weaknesses
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper adequately addresses limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thanks for your constructive comments and we address your questions as follows.
**Q1.** The method may not fully address the mentioned redundancy problem as the key frames are selected at fixed intervals.
**A1.** First, we acknowledge that employing fixed key-frame intervals only results in a partial reduction of redundancy, which is a limitation of the current mask propagation framework. Secondly, a more effective approach would involve utilizing dynamic key-frame selection. However, this may introduce additional model parameters and potentially elevate computational costs due to the decision-making process for key-frame selection. In the future, we will consider exploring the implementation of dynamic key-frame selection to address the temporal redundancy challenge more comprehensively.
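The fixed-interval scheme under discussion can be sketched as follows; the stand-in callables are hypothetical placeholders for the real segmentor and propagation module, not the authors' implementation:

```python
def process_video(frames, segment, propagate, interval=5):
    """Fixed-interval key-frame scheduling: run the full segmentor every
    `interval` frames and propagate its prediction to the frames in between."""
    preds, key_pred, key_frame = [], None, None
    for t, frame in enumerate(frames):
        if t % interval == 0:                    # key frame: full segmentation
            key_pred, key_frame = segment(frame), frame
            preds.append(key_pred)
        else:                                    # non-key frame: cheap propagation
            preds.append(propagate(key_pred, key_frame, frame))
    return preds

# Toy demo with stand-in callables (real ones would be Mask2Former + flow module)
schedule = process_video(range(7),
                         segment=lambda f: ('seg', f),
                         propagate=lambda pred, kf, f: ('prop', f),
                         interval=3)
```

A dynamic variant would replace the `t % interval == 0` test with a learned or content-based decision, which is exactly where the extra decision-making cost mentioned above would enter.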
**Q2.** Efficiency of the proposed method should be discussed in more detail.
**A2.** We provide detailed computational costs for a single key frame and a single non-key frame of the proposed MPVSS in Table E. As mentioned in Lines 43-44, the main computational demand lies in computing a high-resolution pixel embedding. Exploiting the strong semantic correlation between consecutive video frames, we propose to propagate accurately predicted masks from a key frame to its subsequent non-key frames. This incurs only the cost of the flow module and the mask propagation process, yielding the accuracy-efficiency trade-off.
Table E. Computational cost in terms of FLOPs (G) for a single key frame and a non-key frame that uses different backbones. We use input resolutions of 480x853 and 1024x2048 for VSPW and Cityscapes, respectively.
| Dataset | | R50 | R101 | Swin-T | Swin-S | Swin-B | Swin-L |
| ---------- | ------------- | ----- | ----- | ------ | ------ | ------ | ------ |
| VSPW | Key-frame | 110.6 | 141.3 | 114.4 | 152.2 | 223.5 | 402.7 |
| | Non-key frame | 21.0 | 21.0 | 21.0 | 21.0 | 21.0 | 21.0 |
| Cityscapes | Key-frame | 529.9 | 685.5 | 543.6 | 730.1 | 1057.0 | 1911.3 |
| | Non-key frame | 84.0 | 84.0 | 84.0 | 84.0 | 84.0 | 84.0 |
**Q3.** Training time and training strategies.
**A3.** We list the training hours in Table F. The training strategies for the proposed MPVSS have been outlined in Lines 168-171 and Lines 263-264. During training, we sample a pair of frames, i.e., a key frame and a non-key frame. We run the segmentation network on the key frame, and a set of query-based flow maps is estimated by the proposed flow module, which is then used to obtain the warped mask embedding for the non-key frame. We apply the loss functions of Mask2Former: a classification loss on the class embeddings and a binary mask loss on the warped mask embeddings. Bipartite matching is performed between the warped mask embeddings and the ground-truth masks. Subsequently, the loss gradients are propagated backward through the model to update the proposed flow module.
Table F. Training time (Hours) on VSPW dataset. Experiments are conducted with a batch size of 16 on 8 NVIDIA V100 GPUs.
| Backbone | R50 | R101 | Swin-T | Swin-S | Swin-B | Swin-L |
| ----------- | ---- | ---- | ------ | ------ | ------ | ------ |
| Mask2Former | 11.4 | 12.3 | 10.3 | 12.0 | 12.3 | 15.0 |
| MPVSS | 9.3 | 11.5 | 10.2 | 11.1 | 12.2 | 13.4 |
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal and remain with my initial rating. | Summary: An efficient mask propagation framework for VSS, called MPVSS, is proposed in this paper.
MPVSS runs a strong query-based image segmentor on sparse key frames and warps the predictions to non-key frames by generating segment-aware flow from a newly designed flow estimation module.
With the proposed query-based flow, MPVSS achieves the performance of SOTA with a better efficiency.
Strengths: 1. MPVSS captures segment-level information between video frames, which provides better propagation results than the pixel-level correspondence information used in previous work.
2. Compared with vanilla optical flow, query-based flow provides smooth and clear compensation for non-keyframes.
3. The proposed method achieves SOTA performance with significantly reduced computational cost through extensive experiments on standard benchmarks.
4. MPVSS delivers a consistently higher level of video consistency than its base network.
Weaknesses: 1. typo: line 10: wrapped -> warped.
2. The proposed mask propagation framework is identical to that of DFF [1], Accel [2], and CoVOS [3], while the proposed query-based flow requires further experimentation to demonstrate its effectiveness. More specifically:
- In Table 2, it is difficult to say whether the query-based flow is better than Accel [2] or DFF [1], because the MPVSS uses Mask2Former for key-frames segmentation. In order to prove the advantage of propagation using query-based flow over the propagation module in Accel and DFF, the keyframe segmentation network should be unified.
- In Table 3a, the query-based flow does bring improvement, but it also introduces additional network parameters. It's hard to say whether the improvement is due to the increase in parameters.
3. Since propagation relies only on the previous key frame, I wonder how occlusion will negatively affect the propagation results. There is literature using bi-directional motion vectors for mask propagation [3], which shows that occlusion has a strong effect on the propagation results, in particular causing ghosting effects; however, this point is not discussed in the paper.
4. A measurement on the FPS should be provided.
[1] Zhu, Xizhou, et al. "Deep feature flow for video recognition." CVPR. 2017.
[2] Jain, Samvit, Xin Wang, and Joseph E. Gonzalez. "Accel: A corrective fusion network for efficient semantic segmentation on video." CVPR. 2019.
[3] Xu, Kai, and Angela Yao. "Accelerating video object segmentation with compressed video." CVPR. 2022.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the Weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank you for your valuable feedback and address your questions as follows.
**Q1.** Fair comparison between DFF and Accel by unifying the key frame segmentation network.
**A1.** Thanks for your advice. We integrate DFF [78] and Accel [20] with Mask2Former to conduct fair comparisons on the Cityscapes dataset. Compared with Mask2Former-DFF, we achieve higher mIoU scores and lower GFLOPs. The proposed MPVSS also surpasses Mask2Former-Accel on Swin-B by 0.3% with only about 1/3 of the GFLOPs.
Table C. Performance comparisons between DFF and Accel.
| Methods | Backbone | mIoU | GFLOPs | #Params (M) | FPS |
| :---------------: | :-----------: | :--: | :----: | :---------: | :---: |
| Mask2Former-DFF | R101 | 77.1 | 457.4 | 101.7 | 7.14 |
| Mask2Former-DFF | Swin-B | 79.9 | 525.3 | 145.7 | 6.09 |
| Mask2Former-Accel | R101+R50 | 78.9 | 594.8 | 145.7 | 5.78 |
| Mask2Former-Accel | Swin-B+Swin-T | 81.4 | 680.1 | 193.1 | 4.40 |
| MPVSS | R101 | 78.2 | 204.3 | 103.1 | 12.55 |
| MPVSS | Swin-B | 81.7 | 278.6 | 147.0 | 9.54 |
**Q2.** It's hard to say whether the improvement is due to the increase in parameters.
**A2.** Compared with methods that warp masks by optical flow, our *Query-based* flow introduces an additional ~1.4M parameters, which can be considered negligible. Additionally, in Table D, we measure the number of parameters of the flow modules listed in Table 3(b) of the main paper. Using *Query-learned flow* already achieves better performance than optical flow with fewer parameters, which shows that the improvement mainly comes from the effectiveness of the module design.
Table D. Performance comparisons for different flow designs. We report the number of parameters of different flow modules, and mIoU scores.
| Flow design | #Params (M) | Swin-T | Swin-B |
| :--------------: | :---------: | :----: | :----: |
| Optical flow | 38.68 | 38.4 | 52.1 |
| Query-random | 37.56 | 38.1 | 50.9 |
| Query-learned | 37.56 | 39.2 | 52.4 |
| Query-for-OF | 40.08 | 38.5 | 52.3 |
| Query-based flow | 40.08 | 39.9 | 52.9 |
**Q3.** I wonder how occlusion will have a negative effect on the propagation results, especially the ghosting effect on VSS.
**A3.** Thanks for your valuable feedback. Indeed, we have observed instances of failure due to occlusion challenges when applying our proposed mask propagation framework to VSS datasets. For example, scenarios in which a car moves behind a tree can lead to inaccuracies in mask propagation, primarily due to the reliance on mask predictions from preceding key frames.
Nevertheless, the occurrence of ghosting effects seems to be less pronounced when our network is applied to VSS datasets in comparison to VOS. One potential explanation for this could be the smoother transitions between scenes that are commonly observed in VSS datasets. In contrast, the ghosting effects encountered in VOS datasets often stem from delayed responses to rapidly moving objects. Additionally, our query-based flow, which integrates segment-level information, might offer more stable and robust mask propagation compared to using uni-directional optical flow.
Moreover, we acknowledge that the integration of bi-directional motion vectors encoded within compressed videos in method [A] currently exceeds the scope of our work. Implementing bi-directional mask propagation appears promising, even though it might introduce additional model parameters and computational costs. This is deserving of further exploration, and we regard it as a potential area for future research. We will add a comprehensive discussion of the occlusion challenges in the revised version of our work.
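The backward-warping step underlying both optical-flow and query-based propagation can be sketched as follows; this is a generic NumPy bilinear warp, not the authors' implementation, and the zero fallback for out-of-frame samples is one source of the occlusion artifacts discussed above:

```python
import numpy as np

def warp(mask, flow):
    """Backward-warp a 2-D mask with a per-pixel flow field (H x W x 2,
    flow[..., 0] = dx, flow[..., 1] = dy) using bilinear sampling.
    Samples falling outside the frame contribute zero."""
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    sx, sy = xs + flow[..., 0], ys + flow[..., 1]   # source coordinates
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    wx, wy = sx - x0, sy - y0                       # bilinear weights
    out = np.zeros((h, w), dtype=np.float64)
    for yy, xx, wgt in [(y0, x0, (1 - wx) * (1 - wy)),
                        (y0, x0 + 1, wx * (1 - wy)),
                        (y1 := y0 + 1, x0, (1 - wx) * wy),
                        (y1, x0 + 1, wx * wy)]:
        ok = (xx >= 0) & (xx < w) & (yy >= 0) & (yy < h)
        out[ok] += wgt[ok] * mask[yy[ok], xx[ok]]
    return out
```

With zero flow the mask is returned unchanged; with a uniform shift, the pixels that sample from outside the frame come back empty, which is precisely where ghosting and occlusion errors accumulate when a moving object reveals unseen background.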
**Q4.** Information on FPS should be provided.
**A4.** We measure the FPS of different methods on a single NVIDIA V100 GPU. The results are provided in Table A and Table B in the general response. We will update the corresponding tables in the revised version.
We will fix the typos in the revised version.
**Reference:**
[A] Xu, Kai, and Angela Yao. "Accelerating video object segmentation with compressed video." CVPR. 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications, I have raised score to 5. I am interested in the fact that you claimed that query-based optical flow can provide more stable and robust mask propagation than unidirectional optical flow. I look forward to the discussion and comparison between unidirectional OF, query-based flow and bidirectional OF.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer LSXE,
Thanks for your feedback and suggestions! We feel glad to address your questions and appreciate the constructive reviews for improving our work.
For further investigation between uni-directional OF, query-based flow, and bi-directional OF, we run experiments on Cityscapes.
**Query-based flow vs. uni-directional optical flow.** As discussed in Lines 353-356 of the main paper, the proposed query-based flow is more robust in capturing long-term temporal changes than unidirectional optical flow.
**Bi-directional optical flow vs. uni-directional optical flow.** As shown in Table G, bi-directional optical flow outperforms uni-directional optical flow in terms of mIoU scores as it fuses accurate semantic maps from two key frames.
**Query-based flow vs. bi-directional optical flow.** The proposed query-based flow achieves better mIoU scores than bi-directional optical flow with fewer FLOPs per non-key frame. To look further into the segmentation performance, we visualized the qualitative results and found that bi-directional optical flow is ineffective when the two warped masks are not well aligned due to inaccurate *pixel-wise* flow prediction. In contrast, the proposed query-based flow provides clear boundaries and better compensation when warping masks for irregular or small semantic segments, thanks to its *segment-aware* flow estimation, compared to using uni-directional or bi-directional optical flow. Because of the limitations on text-only responses during the discussion phase, we are unable to display visualizations here. We will include more quantitative and qualitative results for better illustration in the revised version.
Table G: accuracy comparison (mIoU) for uni-directional optical flow, bi-directional optical flow and the proposed query-based flow. We also report computational efficiency in terms of FLOPs for each single non-key frame.
| Backbone | Uni-directional OF | Bi-directional OF | Query-based flow |
| :-----------: | :----------------: | :---------------: | :--------------: |
| Swin-T | 79.6 | 80.2 | 80.7 |
| Swin-B | 80.5 | 81.1 | 81.7 |
| **FLOPs (G)** | 47.3 | 89.6 | 84.7 |
Best regards,
Authors of #6943 | Summary: This paper presents an approach for video semantic segmentation (VSS) by focusing on the aspect of efficiency. The authors propose a novel mask propagation framework that is built upon a computationally intensive query-based image segmentor called Mask2Former[1]. Instead of processing every frame, the framework leverages the image segmentor to process only the key frames. To generate masks for non-key frames, a novel query-based flow map estimation module is introduced, which predicts optical flows to warp the masks from key frames. The experimental results demonstrate that the proposed framework achieves competitive performance, with only a minimal drop of approximately 1% to 2% when compared to the baseline [1]. Notably, the proposed method achieves these results with significantly reduced Floating Point Operations (FLOPs).
[1] B. Cheng, I. Misra, A. G. Schwing, A. Kirillov, and R. Girdhar. Masked-attention mask transformer for universal image segmentation. In CVPR, pages 1290–1299, 2022.
Strengths: 1. The proposed method demonstrates competitive performance on two widely recognized benchmarks, showcasing only a slight decline in the mIoU score, ranging from 1% to 2%. These results are achieved while significantly reducing computation costs (FLOPs).
2. The novel query-based flow estimation module introduced in this paper surpasses traditional pixel-wise optical estimation methods by producing better flow maps. This advancement holds great promise for the field of transformer-based flow estimation.
Weaknesses: Please see my comments in the Questions section below.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. It would be inappropriate to claim that the proposed method (MPVSS) "achieves SOTA performance". From Table 1 and Table 2, it becomes evident that MPVSS yields slightly lower mIoU scores compared to its baseline counterpart (Mask2Former). Furthermore, Mask2Former already attains SOTA performance when compared to other methods presented in the table, so the primary contribution of MPVSS lies in its efficiency. Please kindly rectify these statements to accurately reflect the contributions of MPVSS.
2. It would be insightful to delve further into the reasons why the learned query from the mask generation branch (Q^k_O) enhances flow estimation. While the paper briefly touches upon this topic in a single sentence (line 220 - line 222), it would be great to include visualizations or attention maps that demonstrate the gain of the learned query on flow estimation.
3. In line 240 - line 242, two flows (query-based flow and pixel-based flow) are stacked to generate the final flow predictions. To gain a better understanding of the effect of the query-based flow, it would be helpful to visualize the query-based flow and pixel-based flow separately. Additionally, it would be beneficial to clarify whether the first row (i.e., Optical flow) in Table 3 (b) refers to the "pixel-based flow".
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thorough review and constructive questions.
**Q1.** It would be inappropriate to claim that the proposed method (MPVSS) "achieves SOTA performance".
**A1.** Thanks for your advice. We will rephrase our contribution as "achieving SOTA accuracy and efficiency trade-offs".
**Q2.** Why the learned query from the mask generation branch ($Q^k_O$) enhances flow estimation.
**A2.** (1) Previous studies in DETR-like frameworks explore the initialization of decoder queries and demonstrate that providing a good prior for the decoder query leads to faster convergence and better performance. (2) Additionally, as mentioned in Lines 220-222, we utilize the learned queries to enable each flow query to extract motion information for each segment appearing in the key frame. (3) We offer a visual comparison of utilizing the learned queries $Q^k_O$ through the flow maps and the attention maps derived from the motion decoder. The visualization results are provided in **Figure A** in the general response PDF. As shown in rows (f)(g) and rows (c)(h) in **Figure A**, with $Q^k_O$ initialization, the associated attention maps focus on particular regions of the motion map, leading to improved flow maps for the corresponding segments.
**Q3.** Visualization of the query-based flow and pixel-wise flow.
**A3.** We provide the visualization in **Figure A** in the general response PDF. The query-based flow and pixel-wise flow complement each other: the query-based flow captures the overall motion of a specific segment, while the pixel-wise flow adds more detailed movement at a fine-grained level, especially at segment edges. This combination enhances the stability of the final flow maps for mask propagation.
**Q4.** Whether the first row (i.e., Optical flow) in Table 3 (b) refers to the "pixel-based flow".
**A4.** Yes, "Optical flow" in the first row of Table 3(b) refers to the method warping mask predictions using pixel-based flow estimated by an optical flow model.
---
Rebuttal Comment 1.1:
Title: Reviewer 41Ay
Comment: Dear Reviewer 41Ay,
Could you please read the author's rebuttal and other reviews, and indicate whether your comments have been addressed? Thank you.
Best, AC | Summary: The paper is to develop an approach to video semantic segmentation via propagating the segmented mask in key frames to non-key frames. Experiments were conducted on several databases with various comparisons.
Strengths: Focusing on improving the computational efficiency for video semantic segmentation;
The paper is well-written.
Weaknesses: It seems that the work lacks novelty. In my understanding, the key idea is to compute the mask on key frames and then propagate the segmentation result to non-key frames in order to reduce the computation cost. However, in many previous video analysis works, doing heavy computation on key frames and then applying the result to non-key frames is quite natural.
The segmentation in key frames is performed by using an existing method, Mask2Former [7].
The flow estimation between frames is quite standard: it uses FlowNet [11] for motion encoding and a transformer-based approach [7] for decoding, rather than presenting a new method.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See my comments in the Weakness part.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: limitation is talked in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thanks for taking the time to review our paper and we address your questions as follows.
**Q1.** Questions about the novelty of the proposed method.
**A1.** As recognized by Reviewers oxr6, 41Ay, and cyWY, the primary innovation of this research lies in a **novel and efficient mask propagation framework**. This framework combines a *potent query-based image segmentor* with a *highly capable query-based flow estimator*, which together allow accurate propagation of mask predictions from key frames to non-key frames.
As discussed in Lines 43-47 and Lines 57-64, applying a strong query-based image segmentor to the task of VSS is highly computationally expensive. To reduce the computational cost, current approaches rely on optical flow to propagate features from the key frame to other non-key frames, but they still suffer from performance degradation due to the limitations of pixel-to-pixel optical flow estimation. In this context, the proposed mask propagation framework and the query-based flow module design are non-trivial.
Moreover, the motivation and design of the proposed flow module differ from established models like FlowNet or Transformer-based optical flow models. As discussed in Lines 57-74, traditional optical flow estimation focuses on pixel-level correspondences between adjacent frames. In contrast, our flow module employs learned queries from the key frame to aggregate motion information **at the segment level** for each pair of adjacent frames. This approach captures **segment-specific movement** over time, resulting in enhanced accuracy and consistency for mask propagation in the task of VSS.
---
Rebuttal Comment 1.1:
Title: Reviewer 8jSp
Comment: Dear Reviewer 8jSp,
Could you please read the author's rebuttal and indicate whether it has changed your opinion? Currently, you are the only one with a reject rating, so your opinion is very important. Thank you.
Best,
AC | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their valuable comments.
### Novelty
Most reviewers recognize the novelty of our method.
*"The combination of segmentation and optical flow in a single model is a novel approach, particularly considering the use of query-based flow maps."* (Reviewer oxr6)
*"The novel query-based flow estimation module introduced in this paper surpasses traditional pixel-wise optical estimation methods..."* (Reviewer 41Ay)
*" The proposed method of propagating the key frame’s query to the motion features is interesting and novel. "* (Reviewer cyWY)
### Promising results
All reviewers agree that our method provides a competitive accuracy and efficiency trade-off.
*"The proposed method achieves commendable performance on standard datasets such as vspw and cityscape."* (Reviewer oxr6)
*"The proposed method demonstrates competitive performance on two widely recognized benchmarks... These results are achieved while significantly reducing computation costs (FLOPs)"* (Reviewer 41Ay)
*"The proposed method achieves SOTA performance with significantly reduced computational cost through extensive experiments on standard benchmarks."* (Reviewer LSXE)
### Accuracy and efficiency trade-off
In Table A and Table B, we add the number of parameters for the methods listed in Table 1 and Table 2 of the main paper. FPS is measured on a single NVIDIA V100 GPU over 3 repeated runs. Compared with other methods, the proposed MPVSS achieves a favorable accuracy-efficiency trade-off. We will include Table A and Table B with the number of parameters and FPS in the revised version.
Table A. Performance comparisons with state-of-the-art methods on VSPW dataset.
| Methods | Backbone | mIoU | WIoU | VC$_8$ | VC$_{16}$ | GFLOPs | #Params (M) | FPS |
| :------------: | :------: | :--: | :--: | :----: | :-------: | :----: | :---------: | :---: |
| Deeplabv3+[4] | R101 | 34.7 | 58.8 | 83.2 | 78.2 | 379.0 | 62.7 | 9.25 |
| UperNet[56] | R101 | 36.5 | 58.6 | 82.6 | 76.1 | 403.6 | 83.2 | 16.05 |
| PSPNet[71] | R101 | 36.5 | 58.1 | 84.2 | 79.6 | 401.8 | 70.5 | 13.84 |
| OCRNet[65] | R101 | 36.7 | 59.2 | 84.0 | 79.0 | 361.7 | 58.1 | 14.39 |
| SegFormer[58] | MiT-B2 | 43.9 | 63.7 | 86.0 | 81.2 | 100.8 | 24.8 | 16.16 |
| SegFormer | MiT-B5 | 48.9 | 65.1 | 87.8 | 83.7 | 185.0 | 82.1 | 9.48 |
| CFFM-VSS[50] | MiT-B2 | 44.9 | 64.9 | 89.8 | 85.8 | 143.2 | 26.5 | 10.08 |
| CFFM-VSS | MiT-B5 | 49.3 | 65.8 | 90.8 | 87.1 | 413.5 | 85.5 | 4.58 |
| MRCFA[51] | MiT-B2 | 45.3 | 64.7 | 90.3 | 86.2 | 127.9 | 27.3 | 10.7 |
| MRCFA | MiT-B5 | 49.9 | 66.0 | 90.9 | 87.4 | 373.0 | 84.5 | 5.02 |
| Mask2Former[7] | R50 | 38.5 | 60.2 | 81.3 | 76.4 | 110.6 | 44.0 | 19.44 |
| | R101 | 39.3 | 60.2 | 81.3 | 76.4 | 141.3 | 63.0 | 16.90 |
| | Swin-T | 41.2 | 60.1 | 82.5 | 77.6 | 114.4 | 47.4 | 17.13 |
| | Swin-S | 42.1 | 62.6 | 84.5 | 80.0 | 152.2 | 68.9 | 14.52 |
| | Swin-B | 54.1 | 63.1 | 84.7 | 79.3 | 223.5 | 107.1 | 11.45 |
| | Swin-L | 56.1 | 70.3 | 86.6 | 82.9 | 402.7 | 215.1 | 8.41 |
| **MPVSS** | R50 | 37.5 | 59.0 | 84.1 | 77.2 | 38.9 | 84.1 | 33.93 |
| | R101 | 38.8 | 59.0 | 84.8 | 79.6 | 45.1 | 103.1 | 32.38 |
| | Swin-T | 39.9 | 62.0 | 85.9 | 80.4 | 39.7 | 114.0 | 32.86 |
| | Swin-S | 40.4 | 62.0 | 86.0 | 80.7 | 47.3 | 108.0 | 30.61 |
| | Swin-B | 52.6 | 68.4 | 89.5 | 85.9 | 61.5 | 147.0 | 27.38 |
| | Swin-L | 53.9 | 69.1 | 89.6 | 85.8 | 97.3 | 255.4 | 23.22 |
Table B. Performance comparisons with the VSS methods on Cityscapes.
| Methods | Backbone | mIoU | GFLOPs | #Params (M) | FPS |
| :---------: | :------: | :--: | :----: | :---------: | :---: |
| FCN[37] | R101 | 76.6 | 2203.3 | 68.5 | 2.83 |
| PSPNet[71] | R101 | 78.5 | 2048.9 | 67.9 | 2.88 |
| SegFormer[58] | MiT-B1 | 78.5 | 243.7 | 13.8 | 20.7 |
| SegFormer | MiT-B5 | 82.4 | 1460.4 | 84.7 | 7.20 |
| CFFM-VSS[50] | MiT-B0 | 74.0 | 80.7 | 4.6 | 15.79 |
| CFFM-VSS | MiT-B1 | 75.1 | 158.7 | 15.4 | 11.71 |
| MRCFA[51] | MiT-B0 | 72.8 | 77.5 | 4.2 | 16.55 |
| MRCFA | MiT-B1 | 75.1 | 145 | 14.9 | 12.97 |
| Mask2Former[7] | R50 | 79.4 | 529.9 | 44.0 | 6.58 |
| | R101 | 80.1 | 685.5 | 63.0 | 5.68 |
| | Swin-T | 82.1 | 543.6 | 47.4 | 5.41 |
| | Swin-S | 82.6 | 730.1 | 68.7 | 4.31 |
| | Swin-B | 83.3 | 1057.0 | 107.0 | 3.26 |
| | Swin-L | 83.3 | 1911.3 | 215.0 | 2.11 |
| **MPVSS** | R50 | 78.4 | 173.2 | 84.1 | 13.43 |
| | R101 | 78.2 | 204.3 | 103.1 | 12.55 |
| | Swin-T | 80.7 | 175.9 | 114.0 | 12.33 |
| | Swin-S | 81.3 | 213.2 | 108.0 | 10.98 |
| | Swin-B | 81.7 | 278.6 | 147.0 | 9.54 |
| | Swin-L | 81.6 | 449.5 | 255.4 | 7.24 |
Pdf: /pdf/a0016a080f68357e2b660f5d5b2c2edf6dd2e340.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper investigates an important and fundamental task: video semantic segmentation (VSS). While image semantic segmentation has received significant research attention, VSS has been relatively overlooked due to limited datasets and computational resources. This paper makes a valuable contribution to the field by proposing a method that combines a segmentation network with a flow network. Unlike traditional flow networks, the proposed approach employs query-based flow maps that correspond to each segment. Experimental results demonstrate the effectiveness of the method on two widely-used datasets, namely vspw and cityscape.
Strengths: 1. The paper addresses an important but relatively underexplored task.
2. The combination of segmentation and optical flow in a single model is a novel approach, particularly considering the use of query-based flow maps.
3. The proposed method achieves commendable performance on standard datasets such as vspw and cityscape.
Weaknesses: 1. The paper lacks specific details regarding the training of the flow modules. For example, what is the loss function used to train the query-based flow? How are the flow networks (encoder/decoder) initialized? Do you use pretrained weights for these modules? Also, more visual examples of the query-based flow maps should be given.
2. The rationale behind utilizing both pixel-wise flow (F^{PF}) and query-based flow (F^{QF}) to generate query-based flow maps is not well-justified. In my opinion, pixel-wise flow is the flow map for every pixel while the query-based flow is only for specific segments. Equation 4, which concatenates both flows, may make the query-based flow not focus on the segments, but on all the pixels.
3. It would be helpful to include information on the number of parameters for all the methods listed in Table 1. Additionally, it would be beneficial to specify the frames per second (fps) achieved by the mask2former and mpvss methods using a single GPU for inference.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weakness
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please see the weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive comments. We address all the questions below:
**Q1.** Training details for the proposed flow module, e.g., loss function used to train query-based flow, initialization of the flow network (encoder/decoder), utilization of pretrained weights. More visual examples of query-based flow maps should be provided.
**A1.** We have described the training details for the proposed MPVSS, which cover the training process for the proposed flow module, in Lines 168-171 and Lines 263-264. (1) Loss function: during the training phase, the classification loss and binary mask loss are employed on the class embeddings and warped mask embeddings, respectively, after performing bipartite matching between queries and the ground-truth masks, as in Mask2Former. Subsequently, the loss gradients are propagated backward through the model to update the proposed flow module. (2) Initialization of the flow network: for the motion encoder, we utilize the weights of the FlowNet encoder pre-trained on the synthetic Flying Chairs dataset; the motion decoder and flow head are randomly initialized. We will make the training details clearer in the revised version.
We provide more visualization of query-based flow maps in **Figure A** in the general response PDF.
**Q2.** Justification of the rationale behind utilizing both pixel-wise flow and query-based flow to generate flow maps.
**A2.** First, as mentioned in Lines 237-239, we use the pixel-wise flow to refine the query-based flow and generate the final flow maps for each segment. Second, to better illustrate the rationale for utilizing both pixel-wise flow and query-based flow, we provide separate visualizations of each in **Figure A**. The query-based flow and pixel-wise flow are complementary: the query-based flow captures the overall motion of a specific segment, while the pixel-wise flow adds more detailed movement at a fine-grained level. By fusing the two kinds of flow, the final flow maps are more stable for mask propagation.
**Q3.** Information on the number of parameters and FPS.
**A3.** We add the number of parameters and FPS for each method in Table A and Table B in the general response. Compared with other methods, the proposed MPVSS achieves both higher FPS and higher mIoU scores with different backbones on both datasets, demonstrating a favorable accuracy-efficiency trade-off. We will include Table A and Table B with the number of parameters and FPS in the revised version.
---
Rebuttal Comment 1.1:
Title: Reviewer oxr6
Comment: Dear Reviewer oxr6,
Could you please read the author's rebuttal and other reviews, and indicate whether your comments have been addressed? Thank you.
Best, AC
---
Rebuttal Comment 1.2:
Comment: Thanks for your response. My concerns have been addressed. I would like to keep my initial rating: weak accept. | null | null | null | null | null | null |
Language Model Alignment with Elastic Reset | Accept (poster) | Summary: This paper proposes a new approach for fine-tuning language models (LMs) with RL in order to achieve a good trade-off between maximizing reward and minimizing drift from the initial model, which is typically undesirable since it results in losing some capabilities while acquiring others. The approach consists in resetting the model to an exponentially moving average (EMA) of itself, while also resetting the EMA model to the initial one. The method is simple and demonstrates good performance on three different settings with varying degrees of complexity, including RLHF on top of LLaMA-7B for a QA task.
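The reset scheme summarized above (periodically reset the online model to its EMA, then reset the EMA to the initial model) can be sketched in a few lines. This is an illustrative reconstruction from the summary only; the function names, plain-dict parameter representation, decay, and reset interval are our own assumptions, not the authors' code:

```python
import copy

def ema_update(ema, online, decay=0.99):
    """In-place EMA of parameters: ema <- decay * ema + (1 - decay) * online."""
    for k in ema:
        ema[k] = decay * ema[k] + (1.0 - decay) * online[k]

def elastic_reset_step(step, online, ema, init, reset_interval=1000):
    """One step of Elastic Reset bookkeeping (parameters as plain dicts).

    Every `reset_interval` steps: online <- ema, then ema <- init.
    """
    ema_update(ema, online)
    if step > 0 and step % reset_interval == 0:
        for k in online:
            online[k] = ema[k]               # reset online model to its EMA
            ema[k] = copy.deepcopy(init[k])  # reset EMA model to the initial model
```

In a real RLHF loop the dicts would be model state dicts and the RL (e.g., PPO) update would happen between calls; this sketch only captures the reset logic described in the summary.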
Strengths: 1. The proposed approach is simple and effective.
2. The evaluation is quite thorough considering multiple settings and thus demonstrating the generality of the approach.
3. The paper is well written and the problem setting is clearly stated.
4. The problem setting is of high importance to the community. The authors point out a major limitation with existing RLHF methods and propose a method for improving upon it.
Weaknesses: 1. The main limitation of this study is the evaluation protocol used to assess the results. The model's performance is evaluated by the same reward model used to train it. However, it is well known that LMs fine-tuned with RL(HF) are prone to overoptimizing their reward (models), overfitting and actually performing badly when evaluated by humans (which is a more robust metric and ultimately what we care about for many applications). While I understand that performing human evaluations can be expensive, it is very difficult to assess the validity of these results otherwise. It could be the case that Elastic Reset is a more powerful optimization approach that overoptimizes the reward better than PPO i.e. can obtain high reward during training but this performance doesn't actually transfer well to unseen prompts when evaluated by humans. At the very least, I suggest using a different reward model for evaluation such as a different base model (of similar size) trained on the same data or the same model trained on a different dataset such as the summarization data from [1] or the HHH dataset from [2]. You could also hold out part of your data and train a separate reward model with a different base on it in order to bring it more in-domain.
2. Can you include experiments with varying reset intervals for the all tasks? It's important to know how sensitive the model is to this parameter and better understand if similar / same values work across different tasks. Do you have any suggestions or insights for selecting this hyperparameter for new tasks?
3. It would also be interesting to see how the results change if only some of the parameters are reset e.g. the last few layers as it typically done when fine-tuning LLMs.
References:
[1]. Stiennon et al. 2020, Learning to summarize from human feedback.
[2]. Bai et al. 2022, Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why don't you show results with the same KL coefficients for PPO and Elastic Reset in Figure 5?
Do smaller values of KL not work at all for PPO? What about larger values for Elastic Reset?
2. You mention that Elastic Reset may also work without the KL penalty but you never test this hypothesis. Can you include an experiment with Elastic Reset and KL coefficient of 0? For a fair comparison, it would also be interesting to compare with PPO without the KL penalty, which I assume will perform much worse.
3. Figure 6 is rather noisy, can you include a smoother version in the appendix to better see how the two methods compare with each other?
4. In Figures 5 and 6, the light blue line can barely be seen on printed paper, can you please use a different / stronger color?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the authors clearly state the limitations of their study at the end of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review, we are glad you find the approach simple and effective, the writing clear, and the problem of high importance to the community. We agree that drift is a major limitation for RLHF methods and wish to address it robustly.
**Testing with another Reward Model**
We agree that evaluation is key and find the proposed evals interesting, but hope to demonstrate why our protocol is already robust to reward-hacking. Testing with models trained on other reward datasets (e.g., HHH) doesn’t make sense for any of our tasks since they optimize very different rewards (StackExchange helpfulness means good coding, which is not the same as HHH helpfulness, i.e., friendliness / general QA; summarization is just human-preferred summaries). Testing with a new reward model trained on held-out examples is possible, but has not been done in previous literature and would make our results deviate from existing baselines (by using a different training reward model without held-out examples).
We think our protocol is robust to reward hacking as
1. The Pareto graph measure is robust
A method that does better on the Pareto graph must achieve higher reward while staying closer to the original model (lower KL), which, by definition, is reducing drift. By staying closer to the original model, it is more likely to transfer well, to maintain more of the original model’s ability, and to avoid over-optimization (reward hacking).
2. External tests for IMDB and StackLLaMA show robustness
We evaluate on a set of separate test prompts for the IMDB task (Table 2) and demonstrate that our method outperforms the baselines there, transferring well to unseen prompts. On StackLLaMA, we use a separate coding benchmark HumanEval that should be correlated with good StackExchange answers, but is completely separate from our reward model. We find better coding performance with the Elastic Reset-trained model, clearly showing that it has learned better coding, not reward hacking.
**Varying Reset Interval**
We provide new results on IMDB varying the reset interval from 10, 17, 26, and 35 epochs, keeping everything else constant. We plot new results in Figure 3 in the additional pdf attached in the global response at the top. As shown in the pareto curve, Elastic Reset is robust to choice of reset interval, we simply chose 17 to fit two resets into 50 IMDB epochs.
Sadly, we could not run this for StackLLaMA as it takes too long for this rebuttal period.
**Elastic Reset vs PPO with same KL**
As shown in Figure 5c, PPO diverges quickly with KL 0.01, and does even worse with smaller KL coefficients.
**Experiments without KL**
We ran both PPO and Elastic Reset without KL penalty in IMDB and plot results in Figure 2 in the additional pdf. PPO diverges quickly but Elastic Reset without KL actually performs comparable to PPO’s best result with KL.
**Cleaner Figure 6**
We have added smoothing with a rolling mean of window size 10 but could not include it in the additional pdf due to lack of space. We will add this to the appendix for the final paper.
**Partial Resetting**
We agree this idea is interesting and include it as possible future work in Section 8.
**Light Blue is Too Light**
We have made the lightest blue stronger/darker for the final paper, please see the graphs in the additional pdf.
---
Rebuttal Comment 1.1:
Title: Post-Rebuttal Review
Comment: I thank the authors for their detailed answers to my questions and appreciate that they ran additional experiments as suggested, which seem to support the claims in the paper.
Regarding the reward model used for evaluation, I agree with the authors that the experiments support some degree of generalization, particularly the ones on StackLLaMA. However, I was not convinced by this statement "Testing with a new reward model trained on held-out examples is possible but has not been done in previous literature and would deviate our results from existing baselines (by using a different training reward model without held-out examples)." First of all, more and more papers are using multiple models for evaluation ([1], [2], [3] to name a few) so I expect this to soon become standard practice, and in any case it provides a more robust evaluation. What I'd propose is to use two different reward models, the one that you already have which would allow easy comparison with existing literature, and another one that would strengthen the robustness of your evaluations.
I also agree with reviewer tYGJ's comment that evaluating the approach on more challenging tasks beyond the IMDB one would further strengthen the paper and make the results more convincing and relevant to practical applications.
In conclusion, I am willing to increase my score to 6 conditioned that the authors commit to including evaluations using another reward model to ensure the results are robust to this choice.
References:
[1]. AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback. Dubois et al. 2023.
[2]. LIMA: Less Is More for Alignment. Zhou et al. 2023.
[3]. Secrets of RLHF in Large Language Models Part I: PPO. Zheng et al. 2023
---
Reply to Comment 1.1.1:
Title: Results with more Reward Models
Comment: We followed the reviewer's suggestion and trained two more reward models exactly as our first, but with different random seeds. We evaluate LLaMA 7B after supervised finetuning zero-shot, and after RLHF with PPO and Elastic Reset as in Table 3. We measure the change in reward from the initial model's reward and show the average and standard error across the seeds. We also add the change in perplexity for reference.
| | $\Delta$ Reward $\uparrow$ | $\Delta$ Perplexity $\downarrow$ |
| --- | --- | --- |
| Zero-shot | 0.00 | 0 |
| PPO | 0.81 $\pm$ 0.06 | 0.19 |
| Elastic Reset | 0.96 $\pm$ 0.09 | 0.14 |
We agree this is a more robust eval and thank the reviewer for the suggestion. We would like to note the novelty of this approach as all three papers the reviewer referenced were released after the NeurIPS deadline and none of them evaluate over multiple trained reward models. LIMA and AlpacaFarm evaluate with OpenAI APIs not trained reward models, though AlpacaFarm does use multiple prompts to approximate multiple annotators. Zheng et al (2023) do train two reward models but they are used separately for separate languages (English and Chinese) as far as we can tell. | Summary: This paper aims to address the problem of language drift issue during RLHF (reinforcement learning with human feedback), which is also known as alignment tax and reward hacking. The problem is that during RLHF process, the model can "overfit" to the given suboptimal rewards while forgetting some important skills such as linguistic capabilities. The authors proposed a method named Elastic Reset that periodically reset the online model with an exponentially moving average (EMA) of its previous checkpoints. The authors use a few tasks to showcase the effectiveness of the proposed Elastic Reset method on top of GPT-2 and LLaMA-7B, and shows that it is better than vanilla PPO with a reduced KL penalty.
Strengths: The method is rather simple and easy to implement. It just needs to save the parameters of previous checkpoints during training, so that every $n$ steps one can compute an EMA and merge it into the current model.
The empirical results suggest that the proposed method is effective on the selected tasks and datasets, outperforming the simple PPO and NLPO baseline methods.
Weaknesses: - The method itself is not particularly novel. Using a reset mechanism and an EMA of model parameters to mitigate overfitting is rather common, although it might be a novel application in RLHF.
- The "drift" problem is not very clearly defined and formulated. The motivation is not well justified. Many descriptions are references while there is no concrete experiments and case studies to show the problem of drifting clearly.
- I believe the key issue that this paper wants to address is essentially the same to many continual learning problems -- learning new knowledge while not forgetting the acquired skills. Therefore, many CL methods such as experience replay, regularization, EWC (Elastic weight consolidation), should also be applicable. But none of them is mentioned in the paper. The authors focused too much on the RLHF literature, using newly invented terms, however, ignored that the key challenge can be formulated with existing problem setup and can be addressed by existing techniques.
- The selected tasks and datasets are quite narrow. The experiments on GPT-2 and even smaller models are also not that convincing, in that RLHF is rarely used on such LMs. I suggest the authors replace those experiments with more commonly used datasets to support the claims.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Line 219, why do you "greatly reduce the KL coefficient $\beta$"? From the description, it seems that you want to amplify the drift issues, but it seems choosing different $\beta$ can significantly influence both PPO and Elastic Reset. How do you decide which $\beta$ to use when comparing your method and the other baselines? Do you have to decide this before you see the real test data?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors use Section 8 to describe the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad the reviewer finds our method simple and effective. We hope the following clarifies the method and importance of our work.
**Novelty**
We are not aware of any method that resets to an EMA to counter overfitting, and we can’t find any method that also resets the EMA model as we do with Elastic Reset. We would be happy to cite others and explain how our method compares if the reviewer can give references.
**Definition and Importance of “Drift”**
We define drift as “performance degradations of a language model as a result of RL finetuning and improvement on a reward objective”, see [Lazaridou et al (2020)](https://arxiv.org/abs/2005.07064) for a full taxonomy of drifts. The phenomenon also known as “alignment tax” is generally agreed to be a major downside of RLHF finetuning [(Askell et al, 2021)](https://arxiv.org/abs/2112.00861) and many previous works that we cite have demonstrated the issue in toy tasks [(Lee et al, 2019)](https://arxiv.org/abs/1909.04499) to real-world RLHF chatbots [(Bai et al, 2022)](https://arxiv.org/abs/2204.05862).
**Connection to Continual Learning**
We agree there are many links to CL. We focus on RLHF-specific terms, known methods, and benchmarks to make our work most useful to the many RLHF projects currently in progress. We are happy to add a summary of CL connections to our related work as well as other references the reviewer may have:
The pretrain-then-RL-finetune setup with the goal of maintaining pretrained knowledge can be seen as a two-step, RL-specific instance of continual learning and therefore language drift has links to catastrophic forgetting (McCloskey & Cohen, 1989). There is a clear similarity between mitigation methods: rehearsal (Robins, 1995) or experience replay (Rolnick et al, 2019) is equivalent to multitasking with the pretraining objective (Lowe* et al., 2021) and weight-update regularization (Kirkpatrick et al., 2017) has similarities to KL regularization (Jaques et al, 2019).
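The analogy drawn above between weight-space and distribution-space regularization can be made explicit (standard textbook forms, not equations taken from the paper):

```latex
% EWC penalizes each weight's movement away from its pretrained value \theta^*_i
% (F_i is the Fisher information for weight i):
\mathcal{L}(\theta) = \mathcal{L}_{\text{task}}(\theta)
    + \lambda \sum_i F_i \,(\theta_i - \theta^{*}_i)^2
% KL-regularized RLHF instead penalizes movement of the output distribution
% away from the pretrained policy \pi_0:
R(x, y) = r(x, y) - \beta\, \mathrm{KL}\!\left[\pi_\theta(\cdot \mid x)\,\middle\|\,\pi_0(\cdot \mid x)\right]
```

Both add a penalty for drifting from the pretrained model; EWC does so per-weight, while the KL penalty does so on the policy's outputs.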
**Are Tasks Appropriate?**
The IMDB task is the most popular RLHF benchmark, used as a test-bed for many papers including the original work [(Ziegler et al, 2019)](https://arxiv.org/abs/1909.08593). We use the standard version from the GRUE benchmark [(Ramamurthy et al, 2022)](https://arxiv.org/abs/2210.01241) because it includes a strong, tuned PPO baseline and allows us to demonstrate trends and do quantitative evaluations that are reasonable given our compute.
**Choosing Hyperparameters**
Following the GRUE benchmark, we chose all our hyperparameters on the validation set of the IMDB and test on the test set (Table 2). StackLLaMA was too expensive to tune so we simply applied what we knew from IMDB for our hyperparameters.
**Elastic Reset uses smaller KL coefficient**
Because PPO drifts so much, it requires a very high KL coefficient and does not work well otherwise. Elastic Reset accounts for drift by resetting, so if you use a high KL coefficient, resets will not be useful within IMDB’s 50 epochs as there will not be much drift to counter. We choose an optimal, smaller KL so that there is more drift for Elastic Reset to counter.
We compare baselines against our own method and choose the best KL coefficient for each within a fixed compute budget, i.e. 50 IMDB epochs in the GRUE benchmark. In Figures 5c,d we also show that Elastic Reset works across a wide range of smaller KL coefficients where PPO doesn’t, and all of them outperform PPO’s best setting. We find that Elastic Reset even works without KL and is competitive with PPO+KL's best setting (see Figure 2 in the additional pdf included in the global response at the top).
---
Rebuttal Comment 1.1:
Comment: Nice explanation and I have raised my score accordingly. Thanks!
---
Rebuttal 2:
Title: Please Review Rebuttal
Comment: Can the reviewer please read and respond to the rebuttal? We have run new experiments that we believe address all the noted issues with the paper and may merit an increase in score. If there are any outstanding issues, we would like the chance to respond before the discussion period is over. Thank you. | Summary: This paper proposes Elastic Reset, a simple technique for countering language drift and reward model overfitting when optimizing a language model policy against some communicative reward via reinforcement learning, as is done in RLHF. The idea of Elastic Reset is to periodically reset the trained model to an exponentially-weighted moving average (EMA) between the initial pretrained model and the online model trained since the last reset.
The authors demonstrate on a variety of RLHF tasks from relatively simple (pivot translation) to relatively substantial (StackExchange QA) that Elastic Reset seems to be a simple way to mitigate language drift that outperforms other approaches, including the commonly-used KL penalty in RLHF. They propose a nice way of interpreting the tradeoff by using a "pareto frontier graph", which plots methods' downstream task reward and measure of language drift on x/y axes, similarly to ROC curves, and demonstrate that for some given compute budget, Elastic Reset dominates existing baselines like REINFORCE and PPO with KL penalties.
Overall there's a lot I like about this paper, but there are some issues I have with the experimental setup that prevent me from unconditionally recommending the paper for acceptance. If my concerns are clarified or resolved, I am willing to update my score and look forward to the author response.
Strengths: - A simple technique that appears to give gains across a variety of RLHF benchmarks at varying scales (from translation to stackexchange Q/A). I am impressed by the breadth of experiments in this paper. It seems this could be one of those simple tricks that practitioners find useful and widely deploy in future RLHF pipelines (but time will tell whether the results are robust enough).
- Good comparison to other sensible baselines, with some sensible ablations on the IMDB mock sentiment task, but some areas for improvement here (see Weaknesses)
- I like the Pareto figures, reminiscent of ROC curves, which demonstrate the tradeoff between task performance and language drift and how to identify an optimal method under a given practitioner's constraints. I agree with authors that this is the right way to think about language drift, though I have some points of confusion (see first Weakness)
Weaknesses: ## Unclear how robust elastic reset under different compute constraints
- The IMDB and StackLLaMA experiments demonstrate something subtly worrying in my view: they show that there are portions during training where elastic reset *doesn't* help over comparable baselines. For example, in Figure 4 a/b, training prior to the first two resets demonstrates a "spiking" behavior where the model overfits to the reward and has higher language drift than the other two methods. It is only after the 2nd reset, until the end of training, that the reset causes task reward to rise higher than existing methods.
- This feels "lucky" to me, and it is hard to measure how robust elastic reset is under different compute constraints. For example, let's say we only have enough compute to train the model up to (but not after) the 2nd reset, i.e. epoch ~33 in Figure 4. It seems like in this case we would *not* prefer to use the elastic reset model, since it seems to still be in the regime of overfitting, and it is only after the 2nd reset that things start to look better.
- A similar concern exists for StackLLaMA where there is not a significant difference between Elastic reset and PPO, until around ~500-600 epochs, when there suddenly is (and perhaps there's even a regime in 400-500 where Elastic Reset is overfitting).
- To address this concern, it seems like the pareto graphs, and in general the performance of elastic reset, need some notion of compute budget to more accurately assess when it is appropriate to use elastic reset. For example, what does the pareto frontier graph look like if we only consider training up to about epoch 33 in Figure 4? (i.e. before the 2nd reset)? Is elastic reset still the preferred choice? Would changing the number of resets change anything? (Perhaps I'm misinterpreting the pareto graphs here).
- A more detailed analysis of how many resets are needed and how robust elastic reset is to the timing of resets would partially alleviate these concerns (see next point)
## Could use more carefully controlled baselines
- The baselines could be clearer. If my understanding is correct, the *only* way in which elastic reset diverges from traditional RLHF is by periodically resetting the model to be the EMA of the past n model steps. As authors say, this is a strength of the method, but the comparisons tend not to directly compare to a model with/without elastic reset. For example, considering Elastic reset with 3 resets during training, the sensible baseline is to compare to the exact same method (e.g. REINFORCE), with the same hyperparameters, just with 0 resets during training. But the baselines in the paper seem to always have a slight confounding factor, e.g. for the pivot translation task L152, a KL penalty for Elastic Reset is added on top of the REINFORCE baseline that does not seem to be present in vanilla REINFORCE; same for L219 on top of PPO (the beta parameter is reduced, and the decision to let the PPO model "drift more" is not well justified); only the StackLLAMA experiment in Section 7 appears to stay consistent (L280), but the improvement of elastic reset here is not as clear.
- More generally it would be better if, instead of a "1 vs many" comparison where a single implementation of elastic reset is compared to a single implementation of PPO and REINFORCE baselines, a "paired" comparison strategy was adopted, where for each method and specification of hyperparameters for each baseline, authors measure the effect of elastic reset on top (as authors say, it seems simple to just add this onto any arbitrary alignment technique, even not necessarily RLHF). Does Elastic Reset improve consistently over other RLHF methods, keeping hyperparameters consistent? Of course, it doesn't have to improve all the time, but knowing when it helps and when it doesn't (because maybe the baseline method by itself keeps language drift under control) would be nice.
- Again, Figure 5c is lacking some context. The Elastic Reset figure is built on PPO with beta = 0.01. I appreciate the KL ablation but does Elastic Reset dominate PPO across all KL penalty values kept constant? Otherwise what is the reason for setting the KL penalty lower for elastic reset?
- Again, to address this concern, I would love an investigation of how the number of resets affects performance, and any guidelines for choosing a set number of resets, in the same style as the nice ablations in Figure 5. Authors state that it is difficult to find a heuristic for how often to reset (L307), but graphs showing the effect on performance of different reset timescales and/or compute would be enormously informative, and help address the first main weakness I outlined above.
## Mathematical connection/intuition as to why elastic reset differs from distillation and/or KL
- I get the feeling there are some mathematical connections to both KL penalty and alternating RL/distillation algorithms that are not fully explored in the paper. I haven't thought too deeply, but, for alternating RL/pretraining methods (S2P; Lowe et al., 2021) and iterated distillation, one can think of the distillation/SL phase as precisely doing a sort of model reset to the initial pretrained model via gradient descent to minimize KL between the online model and the initial model. For RLHF with KL penalties, we can also view this as a sort of Bayesian inference, computing a model average of the prior pretrained model and the posterior (online) model trained via RL ([Korbak et al., 2022](https://arxiv.org/abs/2205.11275)). The number of steps taken is a hyperparameter that, if chosen carefully, results in a model after distillation that is likely some average of the online model and the initial pretrained model, as in elastic reset. If elastic reset is a simpler or more efficient way of doing the same thing as such a distillation phase, this is a great thing, but it's not quite clear to me now. Could authors clarify any differences between distillation and elastic reset? In particular, why might we expect elastic reset to be **more** performant than alternating RL/distillation or a KL penalty, given that they seem to have the same motivation (which might even be mathematically formalized)?
## Minor
- L99 "Elastic Reset takes inspiration from both of these"—it's not quite clear what "both" refers to.
- Some qualitative examples of model outputs in the appendix could potentially be nice to have in the paper, perhaps even identifying what model outputs look like at different points on the pareto frontier curve for different training methods.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Main questions in Weaknesses section. Some more minor questions:
- L100 "EMA on CPU"—are authors sure this is more efficient? If EMA is on CPU, don't you need a full model copy from GPU to CPU for each step? This seems expensive, and suggests that Elastic Reset would also benefit from keeping the EMA on the GPU. (You could theoretically keep the pretrained model used for KL penalties on the CPU and copy over, but this would significantly slow things down)
- Authors could add some intuition (or experiments) that discuss how important is it that the reset resets to an **exponentially-weighted** moving average? Why not just a literal model average between the new model and the pretrained model?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review! We are glad you find our method simple but effective and tested against sensible baselines across a breadth of experiments. We also believe pareto curves are the right way to think about language drift and hope that our work increases their adoption as a standard in RLHF evaluation.
**Compute Constraints**
Our method does require 2 resets to outperform baselines but notably uses the exact same compute as the baseline to achieve these results. This is in line with all other works using resetting that frequently require many resets to outperform baselines.
We also agree that compute is an interesting axis of comparison so we’ve created a new graph where we plot each run between resets as its own pareto curve so practitioners can choose at which reset to stop (see Figure 1 in additional pdf attached in the global response). Thank you for the suggestion. A user with less compute can also make resets more frequent to get lower reward but less drift (see Figure 3 in additional pdf).
**Baselines**
We always compare to the exact model without Elastic Reset (i.e. 0 resets)
- KL Penalty (Reinforce + KL) in Translation Game
- PPO (includes KL Penalty) in IMDB
- PPO (includes KL Penalty) in StackLLaMA
**Fair baselines use different KL coefficients**
Because PPO drifts so much, it requires a very high KL coefficient and does not work well otherwise. Elastic Reset accounts for drift by resetting, so if you use a high KL coefficient, resets will not be useful within IMDB’s 50 epochs as there will not be much drift to counter. But Elastic Reset can still work with that large KL coefficient: simply train for more than 50 epochs to get more drift and reset less frequently (e.g. every 50 epochs instead of every 17).
For a fair comparison, we limit both runs to the same compute. So following all previous work, we choose the best KL coefficient for each baseline and our own method within a fixed compute budget i.e. 50 epochs for IMDB following the [GRUE benchmark](https://rl4lms.apps.allenai.org/grue). In Figures 5c,d we also show that Elastic Reset works across a wide range of smaller KL coefficients that PPO doesn’t and all of them outperform PPO’s best setting.
**Choosing Number of Resets**
On IMDB, we tested resetting 1, 2, 3, 5, and 10 times but found no improvement over 2. We hope the new reset-pareto graph described above helps choose the number of resets and allows future work to demonstrate the tradeoff.
**Elastic Reset Intuition**
Distillation is one possible way to view the EMA and so Elastic Reset can be seen as a compute-efficient alternative to Iterated Learning (which alternates learning and distillation). We can see that an EMA model drifts less than its online model (see Appendix D.5) which could imply it is an effective distillation for RLHF.
Another reason why Elastic Reset works is that it doesn’t reset the value function. So after each reset, the model starts with a better initialization (the EMA) but also a better value function that can more directly point towards high reward (see Appendix D.4). This also helps explain the improvements after each reset in Figure 1 in the additional pdf.
**EMA on GPU vs CPU**
It is indeed easier to keep EMA on GPU and we do this for our experiments. If memory is tight, it is possible to do EMA updates every $n$ steps instead of every step and therefore the GPU to CPU copy is less frequent. On IMDB, we found updating the EMA every 100 steps achieves similar results to updates every step.
**Weighted average instead of EMA**
We use an EMA since it is a simple and efficient update with each gradient update and our own experiments suggest it mitigates drift (Appendix D.5). Simple weighted averages are an interesting idea and will likely work but we leave it for future work as we don’t expect any improvements or advantages over EMA.
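To make the EMA-versus-literal-average contrast concrete with a toy scalar (my sketch, not the paper's implementation; the checkpoint values and decay are hypothetical): an EMA weights recent checkpoints exponentially more heavily, whereas a literal average weights every checkpoint, including the earliest, uniformly.

```python
def ema_of(values, decay=0.5):
    # Fold an exponentially-weighted moving average over a checkpoint sequence.
    ema = values[0]
    for v in values[1:]:
        ema = decay * ema + (1 - decay) * v
    return ema

def uniform_average(values):
    return sum(values) / len(values)

checkpoints = [0.0, 1.0, 2.0, 3.0]    # hypothetical scalar weights over training
ema_w = ema_of(checkpoints)           # 2.125: biased toward recent checkpoints
avg_w = uniform_average(checkpoints)  # 1.5: treats all checkpoints equally
```

Which bias is preferable presumably depends on how quickly the online model drifts, which may explain why no advantage is expected from a uniform average over the EMA.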
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thanks to authors for the response. Upon looking at Figure 4a/b closely it seems like there are indeed still regimes where under 1 or 2 resets, doing elastic reset allows one to get higher reward with less drift. Figure 1(c) in the supplement is helpful for illustrating this point and would be useful to include in the supplementary material.
The other responses addressing questions regarding elastic reset intuition, KL penalties, etc are also helpful.
I'm raising my score to a 6.
---
Rebuttal 2:
Title: Please Review Rebuttal
Comment: Can the reviewer please read and respond to the rebuttal? We have run new experiments that we believe address all the noted issues with the paper and may merit an increase in score. If there are any outstanding issues, we would like the chance to respond before the discussion period is over. Thank you. | Summary: Finetuning language models with reinforcement learning (RL) using human feedback, i.e. RLHF, has emerged as a promising paradigm for aligning large language models (LLMs) to human preferences. Though RLHF has shown promising results when training models such as ChatGPT, RL has some inherent drawbacks. Merely optimizing a reward model trained on human preferences can degrade performance, known as reward hacking, alignment tax, or language drift in the literature. This paper argues that the standard way of addressing this is insufficient and proposes a new idea. In particular, the authors propose Elastic Reset, a technique to reset the model weights to address reward hacking.
Strengths: Addressing reward hacking is an age-old problem, and many solutions have been proposed. The strength of this paper is that the solution proposed has several key benefits:
- Easy to implement: The authors implemented their idea on top of several existing RLHF frameworks and showcased the benefit of Elastic Reset to existing frameworks RL algorithms.
- Written Presentation: The author's presentation of the proposed algorithm, explanations, and ablations studies were justified and thoroughly explained.
- Experiments: The authors presented several experiments across three very different domains and showcased the benefit of their proposed method.
- Clear Definition of Language Drift: Often language drift is not clearly defined, but this paper gives very clear examples to showcase language drift issues.
Weaknesses: Although the proposed algorithm is straightforward to implement and has shown good results across three challenging datasets, the approach has several weaknesses.
- Resetting on-policy vs. off-policy: Resetting weights is typically done with off-policy algorithms, because the policy has trouble exploring when the replay ratio is increased. For on-policy algorithms, to deal with exploration we typically add a temperature parameter, an entropy loss coefficient, or some other exploration bonus. Instead, in the on-policy case, the authors use resets to keep the policy close to the original policy so it does not experience language drift, which seems like the opposite of the original intent.
- Second reset: The first reset, in which the EMA is reset to the initial model, means that you are still searching within an epsilon ball around the initial model. I am unsure why this would be much better than a KL divergence penalty, which explicitly enforces this.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How did you decide when to reset in the first stage? Given stage one, how did you decide when to reset in the second stage?
- Why did you include the supervised learning (SL)+PPO results in your experiments? Training RL from scratch is known to be a difficult task. Most RLHF success stories warm start the RL model using SL. The results in the paper seem to use the RL with PPO results from the GRUE benchmark. But SL+RL results are always stronger.
| Algorithm | Sentiment Score | Perplexity |
|--|--|--|
| PPO + SL | 0.626 | 35.045 |
| NLPO + SL | 0.611 | 33.82 |

Is the proposed approach not compatible with RL+SL?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review! We’re glad you found our problem compelling and clearly defined, our method easy to implement, and our experiments thorough in demonstrating the benefit of our method.
**On-policy Elastic Reset vs prior work off-policy Reset**
A major difference between our work and prior work in resets is that we reset to an EMA whereas prior works (like Nikishin, Schwarzer et al, 2022) reset to a new random initialization. Resetting to random can indeed help with exploration, but we don’t think that is the benefit of our method. Instead we believe that
1. After a reset, the value function is maintained and provides a better direction, see Appendix D.4 for an in-depth summary. Supporting this, we find that each run after a reset is better than the last as shown in Figure 1 in the additional pdf at the top.
2. The EMA model drifts less than the online model because it smoothes out the gradient steps, see Appendix D.5. Just using an EMA is too slow, though, and it is difficult to achieve high reward with an EMA alone. Iteratively resetting to an EMA strikes a good balance.
**Elastic Reset vs KL Divergence**
KL simply pulls all gradients towards the pretrained model. Not all change from the pretrained model is bad: as argued by [Gao et al (2022)](https://arxiv.org/abs/2210.10760), there are good gradient directions to move in, even outside the epsilon ball. Elastic Reset stays close to the model, but the EMA also seems to provide an inductive bias against drifting in non-linguistic directions (see Appendix D.4).
**Hyperparameter Choices**
We chose all our hyperparameters on the IMDB validation set and then applied them to StackLLaMA. We tested resetting 1, 2, 3, 5, and 10 times but found no improvement over 2. We tested resetting the EMA model 2x and 4x less and more frequently but found no improvement over the same reset schedule.
**SL + PPO**
We do include SL + PPO for the appropriate tasks (Translation Game and StackLLaMA). We don’t do SL+PPO for IMDB following the GRUE benchmark that found stronger results without SL [(Ramamurthy et al, 2022)](https://arxiv.org/abs/2210.01241).
---
Rebuttal 2:
Title: Please Review Rebuttal
Comment: Can the reviewer please read and respond to the rebuttal? We have run new experiments that we believe address all the noted issues with the paper and may merit an increase in score. If there are any outstanding issues, we would like the chance to respond before the discussion period is over. Thank you.
---
Rebuttal Comment 2.1:
Comment: I want to thank the authors for performing additional experiments.
**On-policy Elastic Reset vs prior work off-policy Reset**
Thank you for the explanation.
**Elastic Reset vs KL Divergence**
Thank you for the explanation.
**Hyperparameter Choices**
Thank you for the explanation.
**SL + PPO**
In the GRUE benchmark (table 3), we see that PPO+Supervised has a sentiment score of 0.626 and a perplexity score of 35.049, whereas PPO has a sentiment score of 0.602 and a perplexity score of 33.816. Furthermore, table 2 shows that RL+Supervised is always better than Supervised, which is not the case for RL without supervised warm starting.
[1] IS REINFORCEMENT LEARNING (NOT) FOR NATURAL LANGUAGE PROCESSING: BENCHMARKS, BASELINES, AND BUILDING BLOCKS FOR NATURAL LANGUAGE POLICY OPTIMIZATION by Ramamurthy et al. 2023
---
Reply to Comment 2.1.1:
Title: IMDB Clarifications
Comment: For the IMDB benchmark, we do use the strongest baseline "PPO" and it is warm-started. Elastic Reset outperforms both PPO and Supervised+PPO and we'd like to clarify the differences between the latter methods.
The initial model for the PPO baseline is GPT-2 after supervised finetuning on IMDB. This is exactly what we're doing on our other tasks as well and guarantees the lowest perplexity for our initial model under the data distribution. What Ramamurthy et al (2023) call "Supervised + PPO" further finetunes that initial model on just the positive examples (and therefore already drifts from the distribution of *all* movie reviews).
"Supervised + PPO" achieves higher reward than PPO but also has much more drift, i.e. higher perplexity. In Ramamurthy et al's main paper it is unclear which model is better, but in the Appendix (Table 5, an ablation at the same target KL) PPO outperforms Supervised + PPO. With the default hyperparams, we run an extra experiment to achieve higher reward and perplexity with PPO by training it 2x longer (100 epochs). We find it achieves the same perplexity as Supervised + PPO does in 50 epochs, but higher reward. Elastic Reset trained similarly 2x longer outperforms both methods.
| | Sentiment $\uparrow$ | Perplexity $\downarrow$ |
|--|--|--|
| PPO (appendix KL inf) | 0.838 $\pm$ 0.061 | 41.897 $\pm$ 1.806 |
| Supervised + PPO (appendix KL inf) | 0.796 $\pm$ 0.004 | 42.916 $\pm$ 1.716 |
| PPO (main paper) | 0.602 | 33.816 |
| Supervised + PPO (main paper) | 0.626 | 35.049 |
| PPO trained 2x longer (ours) | 0.730 $\pm$ 0.002 | 35.093 $\pm$ 0.2 |
| Elastic Reset trained 2x longer (ours) | **0.736 $\pm$ 0.01** | **34.722 $\pm$ 0.6** |
We hope this clarifies our results, please let us know if you have any questions. | Rebuttal 1:
Rebuttal: We've run some extra experiments in response to reviewer comments, please find three figures attached:
1. Plotting each reset of Elastic Reset separately
2. PPO and Elastic Reset with best KL vs without KL (coefficient 0)
3. Elastic Reset over a range of reset frequencies
Pdf: /pdf/a2cb2b8e120cfce8f61973d032883d0de612c1d1.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Towards A Richer 2D Understanding of Hands at Scale | Accept (poster) | Summary: This paper builds a large-scale hand-object interaction dataset named Hands23 that contains 257K images, 401K hands, 288K objects, and 19K second objects, and combines different image sources (VISOR, EPIC-Kitchens, COCO, Internet articulation and novel videos). Compared to previous hand-object interaction datasets, it includes fine-grained contacts, detailed hand-grasp labels, second objects, and object segmentation. The resulting dataset covers a wide range of hand-object interactions in both ego and exo views. Based on the dataset, they show that a baseline model (Mask-RCNN) can perform well on Hands23 and also generalizes better to other datasets (than previous work). Thus, Hands23 provides a useful resource for the community to study hand-object interactions with applications from computer vision to robotics.
Strengths: 1) The proposed dataset is large and covers a wide distribution of hand-object interaction scales in both first and third-person views.
2) The data are richly annotated: hand bounding boxes and segmentation (standard), contact (following [43]), in-contact object (following [53]), secondary object (novel), and hand grasps (following the Cutkosky taxonomy). The authors innovatively combined these to provide a more comprehensive annotation than previous work at scale, with fine-grained labels such as second objects, detailed contacts, and hand grasps.
3) Ethics, privacy, and demographics were particularly well addressed: faces were obfuscated for privacy, and the demographics and realism of the data were analyzed.
4) The proposed baseline model can predict all of the rich interaction information and shows good generalization ability to the Ego4D dataset.
5) This is a comprehensive paper with excellent Supp. Materials.
Weaknesses: 1) One of the main advantages of the proposed dataset is the “second object” defined as an object in contact via a handheld tool. However, the motivation for introducing the second object and the corresponding possible applications are not as well discussed.
2) This paper shows zero-shot generalization to the Ego4D dataset. However, Ego4D only contains the first-person view, which is less challenging due to the relatively fixed hand scale and few (hand) occlusions. Therefore, it would be good if the authors could also show zero-shot generalization to third-person data.
3) While I do think the paper makes significant advances (thus my score), it is not higher, as one would hope for even richer annotations.
4) One more minor comment: the authors state that “We only annotate non-crowd instances with at least 10^3 pixels of area.” While that makes sense, it would also be good to annotate crowded areas (just as is done for the pose estimation data in COCO).
Regarding relations to prior work I have two minor suggestions:
- This paper gives fine-grained contacts annotation ({touch, hold} and {tool, container, neither}). However, some previous works in human object interaction (e.g., Discovering Human Interactions with Large-Vocabulary Objects via Query and Multi-Scale Detection) provide even more detailed object and contact classes (e.g., unlocking door). Therefore, the advantage of the current annotation type could be further discussed.
- Line 29: “we extend this to a richer vocabulary that distinguishes touching, holding and using, and includes grasps” Here, GRAB [56] should also be mentioned as it does something similar. Perhaps one should also state that EPIC kitchen has a much broader vocabulary?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Overall the paper is clear, but I do have a few suggestions/questions:
- Object labels in the figures are not too self-evident (e.g., ‘ObjC, NP-Fin’ in Figure 1, etc.)
- The caption for Table 2 does not mention the metric (the text does, but would be good to add to caption)
- In the caption of Table 2 you state: “no other dataset produces good results on Hands23.” Yet, 100DOH gives a reasonable 78.5. Likely the only difference is, thus, scale? What if you train on a fraction of Hands23 that has as many images as 100DOH?
- Last but not least, I would find it really useful to get an assessment of the accuracy of the human labelers.
Overall, this is a strong paper, and indeed I was wondering why they did not include Ego4D in the dataset. This comment clarified it: “We were unable to annotate Ego4D due to strict restrictions on data redistribution, but can use self-training to integrate Ego4D and our data” That’s great, but could already be mentioned earlier and I think Ego4D and a few other datasets (e.g. Grab, Discovering Human Interactions with Large-Vocabulary Objects via Query and Multi-Scale Detection) should be in Table 1.
In the Supp Mat it is stated that the data format for the release is not clear yet. I think it would be ideal to follow, e.g., COCO or Epic kitchen for consistency in the literature.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations:
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed feedback which we believe will improve the paper.
**Q1: Could you provide more motivation for second-objects?**
**A1:** We will include a more explicit and centrally organized discussion of second objects if accepted. Since humans use tools to accomplish tasks, second objects are needed for a complete and rich understanding of hand interactions. Consider, for instance, a hand holding a knife. Without the second object, one doesn’t know whether the action is cutting bread, transferring chopped garlic to a pan, scraping something off a cutting board, or even brandishing the knife. Indeed, knowing this distinction is important for connecting to narrations: if one is drilling into a table, the second object is the part that is described as the direct object, rather than the drill.
**Q2 Does the model generalize well on 3rd person data?**
**A2:** Indeed, our motivation for the diversity of data used is so that the model has a good chance of generalizing well to 3rd person data. As noted, we have some partial generalization experiments testing on other datasets; these, however, test only hand detection. We will add generalization to other 3rd person datasets if accepted. In the meantime, we show some results on the SAM dataset in the figure pdf. We selected SAM because it is diverse, recent, and high quality. Figure 4 shows two rows of results on SAM; the top is selected and the bottom is random. Our method generalizes fairly well on unseen 3rd person data.
**Q3: What about even richer annotations?**
**A3:** We highly welcome suggestions for more labels to further enrich Hands23. We will additionally ensure that it is possible for the community to add more annotations.
**Q4: In COCO, why only annotate non-crowded instances with at least $10^3$ pixel area?**
**A4:** We originally did this because we thought that the crowds of COCO (often next to baseball fields or tennis courts) were too small for hands to be reliably understood and parsed. We note that $10^3$ pixels is under $32^2$, which likely leaves each hand being just a pixel. However, we will re-examine this design decision to see if there are many visible hands amongst crowd instances.
**Q5: Suggestions about prior work, discussion of Ego4D**
**A5:** Thank you. We will add a discussion about these works and add them to the table. Briefly, we think that our less fine-grained annotations can be obtained very easily and provide a great first initial parse of the interaction. We see these works as complementary, and think the community benefits from having data at different levels of granularity and scale. We will additionally move the discussion of Ego4D and difficulty annotating it earlier in the paper.
**Q6: Paper suggestions**
**A6:** Thank you. We will fix these if accepted.
**Q7: Is the performance gain of Hands23 primarily due to scale?**
**A7:** Thanks for this great question. To check whether scale really matters, we trained the hand detection model using the same amount of data (90K images) as the 100DOH trainval set. The resulting model obtains 83.4 AP (Hands23 with 90K images), which is close to the original model’s 85.6 AP (full Hands23) and higher than 100DOH’s 78.5 AP. This experiment suggests that not only scale but also the diversity of the dataset matters.
Furthermore, when evaluating on subsets of the dataset, the 100DOH model often does as well as the Hands23 model on some subsets, but its performance is substantially worse on others. For instance, on VISOR, New Videos, and Articulation, Hands23 is better by 2.3, 2.3, and 2.6 AP, respectively; on COCO, Hands23 is better by 17.6%. We note that this only tests hand detection; we have often found that other aspects of performance struggle (e.g., 100DOH is known to struggle with egocentric hand side, as reported on the project website).
**Q8: Will the dataset format match prior work?**
**A8:** Yes, we intend to have the dataset format match prior work so that it is as easy as possible for the community to use.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clear answers and experiments addressing my comments! I was and remain very positive about this paper. | Summary: This paper introduces a hand-object interaction dataset, Hands23, providing rich labels for hand images (segmentation and bounding boxes of the left hand, right hand, object, and second object; types of contact, grasp, and touch). It labels three existing datasets (COCO, EPIC-KITCHENS VISOR, and Internet Articulation), as well as self-collected videos. Based on the new dataset with rich labels, the paper uses an RCNN to train a multi-task model, which performs and generalizes well.
Strengths: - [Size and rich labels] The presented dataset is much larger than existing datasets and provides segmentation and bounding-box labels of hands and objects.
- [Experiments] The model trained on Hands23 for hand or object detection shows good generalization performance. It outperforms models trained on other datasets.
- [Annotation] The annotation procedure is elaborated in great detail in the supplementary.
Weaknesses: - [Multi-task] The paper highlights the importance of rich labels for hands and introduces a multi-task framework (Fig. 3). It would be interesting to provide more insights on the interaction of the different sub-tasks. For example, do the auxiliary labels affect hand segmentation or bounding boxes? What are the benefits of richer labels?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - [Action] As Hands23 consists of a large number of videos, what do you think about action labels?
- [SAM] The segmentation labels are generated with the help of Segment Anything (Line.142). I am wondering if the paper could provide some observations or insights when working with Segment Anything or even other large models to generate labels.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yeah. The authors have addressed the limitations in the conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review and answer the questions below.
**Q1: What are the benefits of multi-task training? Does it hurt the performance on segmentation?**
**A1:** The benefit of the multiple tasks is primarily the richer predictions, which we envision will enable various applications such as robotics, hand-grasp analysis, and parsing video datasets for understanding hands or finding demonstrations.
Once losses are scaled properly, we think that multitask training does not hurt performance much. We also trained a version consisting only of 3-class (hand, first, and second object) instance segmentation without extra heads. The performance was similar. Hands were slightly better (+3.7 detection, +3.5 segmentation), first objects were slightly worse (-1.2 detection and segmentation), and second objects were slightly worse (-1.7 detection, -0.4 segmentation). We believe this variance is within the range of random seeds and the intrinsic minor differences when comparing two setups, but will provide a more detailed analysis if accepted.
**Q2: What do you think about action labels?**
**A2:** We hope that our model will help with action understanding. We do not have action labels for the new videos subset of Hands23, but we chose VISOR precisely since it is connected to the excellent EPIC-KITCHENS action labels. Beyond labeling the dataset, we do believe that the output of our system could help serve for action recognition since the state of hands, first objects and second objects often are informative for action recognition.
**Q3: How does SAM perform on predicting masks?**
**A3:** We will include additional information about SAM if accepted, since we did find a number of interesting observations while both using SAM and trying to develop our SAM-like system.
Briefly, we found that SAM generally performs well and robustly when provided with bounding box prompts. There are some limitations and cases where more failures emerge. One is that SAM has difficulty deciding where the wrist ends for hand bounding boxes and can produce jagged masks at the wrist. Another is that SAM can have trouble with thin objects or cases where multiple objects appear in the same box; SAM occasionally segments another object rather than the intended one. However, overall, as shown by our analysis at L238, SAM produces quite good results. These are cases where failures tend to be more frequent, rather than situations where SAM systematically fails.
We also want to point out Section D in the appendix, where we discuss SAM compared to our own similarly structured system. We compare the proposed model’s performance between using masks from SAM and our own internal mask prediction system. The bounding box detection performance is largely identical, but segmentation is much better using SAM.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. The authors solved my concerns. I agree that the paper's novelty is insignificant, but I also appreciate the effort to collect and annotate such large-scale data related to interaction. Thus, I still tend to borderline accept. | Summary: The paper presents a method and dataset for hand-object understanding. Specifically, the dataset provides bounding box and segmentation annotations for (1) hands, (2) objects that are in contact with hands, and (3) second objects touched by tools. The annotation also includes contact and grasp type for second objects touched by tools. Overall, there are annotations for ~260K images spanning four different datasets. The segmentation masks are automatically extracted using the off-the-shelf method Segment Anything.
Strengths: - The proposed dataset has images from both first-person and third-person views. This can help develop hand interaction detection methods that generalize well on different domains.
- The dataset has images from other popular benchmarks like EPIC-KITCHENS and COCO. This could be beneficial to the community to develop and train unified methods for human understanding.
- The authors demonstrate the benefits of data through cross-dataset experiments.
- The proposed dataset annotates richer contact vocabulary, such as distinguishing between touching vs holding.
Weaknesses: The proposed architecture does not convince me fully. There are a few major concerns.
- Currently, the method first detects three 'object' classes: hand, object, and second object. These classes are detected independently. However, there is a clear correlation between these three classes. For instance, the object's location in contact with the hand depends on the location of the hand. Also, the location of the second object depends on the location of both the hand and the first object. However, in the current method, this is not reflected; the three classes are detected independently.
- In L217-L218, every hand is matched to an object with the highest interaction score. This excludes the possibility that a hand can contact more than one object. For example, a hand can hold a knife and the chopping board at the same time. The current model design does not reflect this.
- When detecting object and second-object interaction, does the model also consider hand-object interaction? For example, consider object A (first object) and object B (second object). When inferring the interaction score between the two objects, will the model consider if object A was in contact with the hand? If object A was not in contact with the hand, then object B cannot be the second interaction object. Is there any such explicit modeling?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please see the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review of the paper and respond to their three questions and weaknesses below.
**Q1: Does the method model the correlation between three classes?**
**A1:** Thank you for the great question. It is true that our model treats the three classes independently in terms of formulation, but by using a shared backbone, we believe that the model is capable of internally and implicitly modeling the relationships between the three classes. We choose to frame the problem as independent to make the problem amenable to existing object detection machinery. Moreover, the relationship is detected based on features from both objects as well as the relative location of paired bounding boxes (more details at line 211).
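As a rough illustration of the pairwise formulation described above (features from both objects plus the relative location of the paired bounding boxes), the geometric part of such an association input is commonly encoded as normalized box offsets. The sketch below is our own illustrative assumption of what that encoding could look like, not the authors' released code:

```python
import math

def relative_box_encoding(box_a, box_b):
    """Encode box_b's position and size relative to box_a.

    Boxes are (x1, y1, x2, y2). The output (dx, dy, dw, dh) is the
    standard normalized-offset encoding used in detection heads; a
    pairwise contact classifier would typically concatenate this with
    ROI features from both boxes before feeding an MLP.
    """
    wa, ha = box_a[2] - box_a[0], box_a[3] - box_a[1]
    dx = (box_b[0] - box_a[0]) / wa
    dy = (box_b[1] - box_a[1]) / ha
    dw = math.log((box_b[2] - box_b[0]) / wa)
    dh = math.log((box_b[3] - box_b[1]) / ha)
    return (dx, dy, dw, dh)
```

An identical pair encodes to all zeros, so the classifier sees relative geometry rather than absolute image coordinates.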
To verify our hypothesis of the implicit modeling of object relationships, we provide qualitative results from videos in Figure 2 showing all detected objects above a low threshold (0.8 for hands, 0.3 for first and second objects) without association inference. The *before* image shows the object without the hand and the *after* image shows the hand in contact. In the before example, the object is not detected or detected with a lower confidence score. In the after example, when the hand appears, the object is detected or its score increases. This suggests that the model is implicitly using the other object categories during its detection.
**Q2: Can the model capture cases of one hand in contact with multiple objects?**
**A2:** Thank you for the thoughtful comment about cases of a hand in contact with multiple objects. In data annotation, we annotated a single hand/object contact to make data annotation scalable. However, our model is, in principle, able to handle the multiple contact case because it predicts contact state for each pair of hand and object as an independent classification (i.e., the objects do not compete in the prediction). This is unlike the Shan et al. formulation, which explicitly precludes multiple contacts by using a single offset vector.
If accepted, we will include a more thorough analysis and investigate additional annotations. In the meantime, we did a small experiment to see if the model has this capacity (since during training, it is identifying which object is annotated as in contact). Instead of keeping only the object with the highest in-contact score, we modify the inference algorithm to keep all predicted in-contact objects with a high contact score (threshold = 0.8) and low IoU with other in-contact objects (IoU threshold = 0.2). In Fig. 3, we show examples of multiple-object contact: (a) a hand holding a brush and touching the surface of a phone case, (b) a hand grabbing food and putting it into a bowl, (c) a hand holding two beans, and (d) a hand holding a comb and a strand of hair. In other cases, like (e), the model finds the object with two bounding boxes because the object is split by occlusion. There are some failure cases too.
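The modified inference step described above (keep every in-contact candidate above the contact-score threshold while suppressing high-overlap duplicates) can be sketched as a simple greedy filter. The thresholds mirror the rebuttal's description, but the function and data shapes are our illustrative assumptions:

```python
def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def keep_contacted_objects(candidates, score_thresh=0.8, iou_thresh=0.2):
    """candidates: list of (box, contact_score) pairs for one hand.

    Greedily keep every candidate whose contact score clears the
    threshold and which does not heavily overlap an already-kept object,
    instead of keeping only the single top-scoring object.
    """
    kept = []
    for box, score in sorted(candidates, key=lambda c: -c[1]):
        if score < score_thresh:
            continue
        if all(box_iou(box, k) < iou_thresh for k, _ in kept):
            kept.append((box, score))
    return kept
```

This is essentially non-maximum suppression applied across in-contact objects rather than within a single object class.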
**Q3: Does the model consider hand and first-object interaction before deciding on first-object and second-object interaction?**
**A3:** The inference algorithm does indeed consider interactions between the hand and first-object when detecting interactions between the first and second object. In our inference, the contact association is always starting from a hand. The inference algorithm will only consider first-object and second-object contact if the first object is in contact with a hand.
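The association order described here (a hand must be detected first; a second object is only considered once its first object is in contact with a hand) can be sketched as a short chain. All names and data shapes below are our illustrative assumptions, not the released implementation:

```python
def associate(hands, hand_obj_contact, obj_obj_contact):
    """Resolve contact associations starting from hands.

    hands: list of hand ids.
    hand_obj_contact: {hand_id: first_object_id or None}.
    obj_obj_contact: {first_object_id: second_object_id or None}.
    A first object is emitted only if some hand contacts it; a second
    object is emitted only if its first object was itself emitted.
    """
    first_objects, second_objects = set(), set()
    for h in hands:
        obj = hand_obj_contact.get(h)
        if obj is None:
            continue  # hand not in contact: no first object from this hand
        first_objects.add(obj)
        second = obj_obj_contact.get(obj)
        if second is not None:  # tool contact only from an emitted first object
            second_objects.add(second)
    return first_objects, second_objects
```

Note how an object-object contact pair is ignored entirely unless its first object is reachable from a hand, matching the constraint in the answer above.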
---
Rebuttal Comment 1.1:
Comment: I thank the authors for clarifying my questions. I am satisfied with the authors' responses regarding **Q2** and **Q3**. However, I am still concerned regarding **Q1**. The proposed method should explicitly model the correlation between the three object classes. Currently, the three classes are detected independently. Although the proposed method uses a shared backbone, the backbone is primarily used as a feature extractor and therefore does not model relationships between three classes. Given that the task is to detect hands, an object in contact with the hand, and a second object in contact with the first object, the proposed method should model such relationships. However, I do not see any such formulation here. Therefore, I also agree with Reviewer ufdR's concerns regarding limited technical contribution and keep my original rating borderline reject.
---
Reply to Comment 1.1.1:
Comment: We are glad that our responses to Q2 and Q3 resolved the previous concerns. We thank the reviewer for additional feedback that can let us more directly discuss the reviewer’s concerns about explicit and implicit modeling.
**Is there any explicit modeling in the framework?**
Although the detection framework models the detections independently, we do want to point out that the association component (and thus the full detection system) does explicitly model interactions between hands, first objects, and second objects. In particular, the network is not only looking at top detections to find first objects and second objects, but instead, the contact association MLPs model the in-contact relationships. In the inference process, a hand must be detected first; if the hand is predicted as object-contact with a particular object, only then is that object produced as a detection. Similarly, a first object must exist in order for a second object to be associated with it and therefore detected.
**Is the backbone just feature extraction?**
We understand the reviewer’s comment about the backbone being just feature extraction, and this is certainly true for many detection models. However, in our case, we would point out two particular details that, in our view, make the backbone not a generic feature extractor but instead the part of the network that does most of the work:
- First, the backbone is a 101-layer deep ResNeXt-101. By the time the data has reached the detection heads, most of the computational work and processing has been done – the MLPs are a handful of layers and depend on the backbone having already understood the object categories.
- Second, while the backbone is ImageNet pretrained, it has been further trained for 400K iterations with a batch size of 16, seeing >6M training images (and thus ~6B proposals), and therefore being shaped by the detection task and the auxiliary MLP heads. To correctly recognize that an object is a second object, the network needs to implicitly spot hands and a tool in use. If the network generically finds potential second objects (as opposed to objects that are currently contacted by a tool), it incurs a high loss because these are not labeled as second objects. Indeed, our figures in the rebuttal suggest that the backbone is doing this.
With a sufficiently deep network, we believe that such modeling is possible, much like how GPT-4 is able to model fairly complex grammar structures implicitly through sufficient complexity.
**What’s the technical contribution?**
In our view, the strongest technical contribution of this paper (as well as many other papers) is not in algorithmic or generic ML machinery, but rather in problem formulation, extensive work on dataset creation, and analysis. Please see our response to ufdR for a more extensive discussion. | Summary: This work presents a framework that outputs rich interaction information for hands in everyday interactive scenarios, together with a large dataset that supports the model. The framework is based on the standard RCNN object detection mechanism and has a simple structure that can be easily extended to incorporate additional information for interactive hand understanding.
Strengths: - A large dataset of human hand interactions is proposed, which provides rich and high-quality data annotation that facilitates the understanding and prediction of human hand behavior.
- A framework for hand-interaction understanding, with a simple structure that is easy to extend and conducive to directly supplementing more interaction information.
- Additional categories of how human hands interact with objects are classified and understood.
Weaknesses: - The understanding of human-object interaction is still in the form of classification mapping, which amounts to a large and tedious classification scheme and lacks simplicity.
- The framework is based on the object detection approach, which may be less applicable for large scenes or scenes that are messy.
- It could be regarded as an incremental supplement to the work of Shan et al., and is not innovative enough. (Dandan Shan, Jiaqi Geng, Michelle Shu, and David F Fouhey. Understanding human hands in contact at internet scale. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9869–9878, 2020.)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The narrative of the article is clear and there is little doubt.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 1 poor
Limitations: A large number of 3D hand-object interaction state estimation methods already exist in the community, which provide a more fine-grained analysis of contact. The prominent problems with current 2D-level interactions are robustness to small and multiple targets, but these are subsumed into the Limitations section of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their review. We respectfully disagree on a number of key points that we will describe below.
We believe that the reviewer is suggesting that 3D hand-object reconstruction is sufficient to solve the task we tackle, and so we will respond to this first before responding to the three weaknesses.
**Q1: Can 3D hand-object reconstruction solve hand-object interaction understanding?**
3D hand-object reconstruction does provide an alternate fine-grained characterization, but it often does not generalize well to realistic data such as internet videos, and often only works on a fixed set of objects with given 3D object models or a limited set of (often hand-held) objects. In the real world, many objects can be in contact with hands. Some objects, like large fridges and tables, may be larger than the full frame, occluded, and impossible to reconstruct. Others may be tiny and too numerous to have reliable reconstructions, such as the many small pieces of hardware used while assembling furniture. Sometimes hands have self-contact with the body. Further, tools interact with objects as well, which is not captured by existing hand-object reconstruction methods, to the best of our knowledge.
Our method finds this interaction in realistic data, possibly with many people and without needing a fixed set of objects in advance. In fact, 2D models such as the ones we use have been used to localize hands and objects for 3D methods, such as Frankmocap which uses the Shan detector to work on isolated hands. This is, of course, in addition to a wide variety of other tasks including action recognition, interactive object understanding, and robotics manipulation learning.
As a demonstration of the difficulty of the data, we show two existing interaction models on the hands23 test set in Figure 1 in the figure response. As shown, none of the methods generalize well.
**W1: The interaction formulation is large and tedious and lacks simplicity**
**A1:** We respectfully disagree with this characterization. The method is simple enough to fit into one page, and we provide a few extra classes that are directly usable, as shown by the application examples such as Figure 6 and supplement section A.3. These analyses let us understand the diverse/rich types of hand/object/second-object interaction that exist. We genuinely do not understand what is meant by tedious. If the reviewer is referring to the pairwise interaction classification, we note that this is simple and can likely be further accelerated with various tricks.
**W2: Are detection-based frameworks less applicable to large and messy scenes?**
**A2:** We respectfully disagree. We precisely chose to formulate the problem as object detection/instance segmentation because we want to tackle large, messy scenes with unknown numbers of objects. Indeed, we show results on COCO, which has many scenes with multiple people and in fact, this is where we show a large improvement over Shan et al. in hand association (74.2 vs 55.9 F1 for 6+ hands). Detection is the standard approach for handling an unknown number of objects, and without a concrete alternative, we’re at a loss to respond.
**W3: “It could be ragarded as an incremental supplement for the work of Shan et al, not innovative enough.”**
**A3**: We strongly disagree with this characterization. The paper introduces (a) a richer space of labels, including first and second object contact, which is not provided at this scale in any other dataset, as well as grasps and fine-grained contact annotations; (b) annotations for 250K images with clear copyright status and data protections; (c) a method to produce the rich set of labels that improves over Shan et al., especially for complex scenes; (d) various smaller contributions such as SAM-enabled segmentations and an analysis of potential biases in detection performance. It is true that there are similarities to Shan et al., which is because the core idea works well and has seen substantial use in many downstream tasks. Building on this past work is a strength, not a weakness.
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's response. From their reply, it seems that the authors are rather dissatisfied and believe that I might not have fully comprehended or appreciated their work.
The core idea of this paper is that the authors believe it is plausible to model the interaction state of hands and objects using bounding box associations, and even to extend this concept to predicting future interactions involving a second object. However, in my perspective, such a model might not be effectively suitable for complex scenarios. This viewpoint is echoed to some extent by the authors themselves in the "Limitations" section, particularly when bounding box regression errors are prominent. It's not that I dismiss the potential of a simple model outright, but rather that the oversimplified nature of bounding box descriptions (or the extracted features) completely discards crucial information such as hand poses – which is the underlying reason behind my inquiry about 3D hand poses.
The model and methodology proposed by the authors are akin to attempting to discern intricate details of an object from a heavily blurred image. It's not that it's inherently impossible, but rather that it defies conventional wisdom. Of course, I speculate that we could make educated guesses aided by large datasets:-)
While I do acknowledge the commendable objectives the authors aim to achieve, after thoroughly considering their response, my conviction remains unchanged: I still think this paper lacks **technical contributions**. In fact, the use of bounding boxes to represent interaction states has already been explored in previous work, and the attempt in this paper to introduce second-object prediction doesn't significantly alter this paradigm.
The main contribution of this paper, it seems, is the dataset. Nonetheless, I remain unconvinced that this in itself meets the bar of NeurIPS.
Consequently, regarding this work, I do not change my score as reject.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the detailed description of their concerns. We appreciate these additional details since they have helped us better understand the concerns about the paper and the reviewer’s perspective. Since the discussion period is short, we understand that we likely cannot have another back-and-forth, but do want to provide a response for the reviewer before the period ends.
We’ve attempted to summarize the concerns into questions to help explain what we believe the concern is and to organize the discussion. We apologize if we misinterpret and do not mean to speak on the reviewer’s behalf.
**Q1: Does it make sense to model hand-object interaction using 2D bounding boxes, especially in complex scenarios, and compared to hand poses and other such representations?**
In short, we think that reliable 2D information about hands and objects can have a substantial impact, and we believe that the value of 2D bounding boxes can be seen by their use in downstream applications.
While it may defy conventional wisdom for some in the community, we do believe that others in the vision community and further afield find the output valuable. To get a sense, we looked through papers that cite the Shan et al. paper. In the past week, two Arxiv papers [A,B] have directly used the Shan et al. model in downstream tasks: using all of its boxes [A] and using it as a way of finding the object in contact when a semantic detector cannot find an object mentioned in a caption [B]. Indeed, researchers in a wide variety of fields have used the Shan et al. model, for instance in robotics [31,60,68], action understanding [15,20], and post-stroke rehabilitation [58]. By providing a fuller understanding of contact via second objects (see Q1 for N4EC for a succinct summary of why the second objects are needed), we think our proposed dataset, labels, tasks, and model can have even more impact.
The reason why these downstream tasks use the box model is that boxes are often sufficient for tasks like video pretraining, tracking held objects, finding a held object for reconstruction, etc. Indeed, because box-based systems only produce a box, such systems can work reliably on the wide variety of objects that will appear in daily life and in a wide variety of images, even without a 3D model and when the object is hard to name (the use case for [B]). 3D hand pose estimation has become far more reliable, in part because hands can be collected in great quantities and are a single (albeit deformable) object for which a basis can be learned. In contrast, hand-held objects and tool-contacted objects are far more varied and suffer from a long tail.
We agree with the reviewer that in the long-run, it is good to have a more detailed analysis of hands, objects, and interactions such as 3D hand poses, contact (such as what is being captured in-lab), and forces. At the same time, we think that multiple levels of richness provide value: SMPL does not make pedestrian detection obsolete since one may be able to spot a pedestrian reliably even if the reconstruction would be too difficult.
In order to get to the richer representations that we all want, we see boxes as a necessary first stage. For hand reconstruction, the first thing that needs to be known is where the hand is, and whether it is right-vs-left. The proposed system provides this, plus auxiliary information that may help with understanding or categorizing interaction. We likewise hope that future reconstruction systems will be able to parse the rich 3D interaction shown on the left of Figure 1 of the paper. However, we suspect that such systems will first need a detection system to provide where the objects are and what is connected with what (for instance, to determine which contact losses to impose in reconstruction). Providing such detection information reliably is itself a challenge and requires data, labels, and models.
Title: Official Comment by Authors (1/2) | Rebuttal 1:
Rebuttal: We put all figures mentioned for each response in this PDF.
Pdf: /pdf/1bea5d9cd691af03ceb3507de0463b45baa52dfc.pdf | NeurIPS_2023_submissions_huggingface | 2023 |
AI for Interpretable Chemistry: Predicting Radical Mechanistic Pathways via Contrastive Learning | Accept (poster) | Summary: The paper proposes a set of methods for modeling chemical reactions that involve radicals during the reaction process. The authors first introduce the current landscape of chemical reaction datasets, based primarily on the USPTO, and discuss the USPTO's shortcomings in terms of reaction interpretability and its inability to showcase reactions involving multiple steps. Next, the authors briefly describe the RMechDB dataset, which does contain pathways with radicals, followed by a description of their methods. The methods are based on OrbChain, which provides a standardized way of describing chemical reactions with radical and arrow pathways. After that, the authors introduce their predictive methods, including two-step prediction, plausibility ranking, contrastive learning with a reaction hypergraph, and text-based sequence-to-sequence models. The authors conduct experiments for all the aforementioned methods to further understand their capability in accurately describing radical reaction pathways on the RMechDB dataset. The performances of the different methods vary across the different settings of the conducted experiments. The authors then provide a pathway search example, a further description of their package, and a conclusion.
Strengths: The paper has the following strengths:
* Originality: The papers provides a new perspective on chemical reaction modeling that involves radicals and is also more interpretable for classically trained chemists.
* Quality: The paper describes and analyzes four different and relevant methods for the radical modeling problem and provides clear motivations for their importance.
* Clarity: The paper motivates the problem it addresses quite well and describes the necessary background.
* Significance: Expanding the capabilities of machine learning models to provide more interpretable reaction models with more steps in the reaction process could have significant impact on various chemistry related problems.
Weaknesses: The paper could be further improved:
* Providing a clearer description of the context of the results related to the original problem the authors motivated. How well do the described methods provide more interpretability to chemical reaction modeling? How do the metrics the authors measure relate to that original premise? [quality, clarity, significance]
* The authors only briefly describe pathway search, but provide little context for what their results mean. What does a recovery rate of 60% imply? What does the reaction tree look like, and how interpretable is it? [clarity]
* The authors refer the reader to the appendix very often, which I think contains a lot of significant information needed to fully understand the experimental results. I recommend putting more of that information in the main paper. [quality, clarity, significance]
* The authors only provide a brief description of the RMechDB dataset, and it is unclear whether that paper had any modeling methods the authors could compare their proposed methods to. Further clarification on this would be helpful. [clarity]
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * Could you clarify if the RMechDB paper provided any modeling methods?
* Would it be possible to provide more context for the results, ideally in the figure and tables themselves, to further understand the experiments? How good does a Top1, Top2, Top5 score need to be practically useful, for example?
* Is it possible to have text-based methods also express the intermediate steps in reactions involving radicals? Why or why not?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors do not provide a detailed discussion on limitations. A discussion on limitations would make the paper stronger.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the constructive comments from the reviewer. We have addressed most of their comments in the revised version of the manuscript.
**Comments on weaknesses**
* We agree that the description of the pathway search experiment is too succinct, especially as we believe that the ability to do pathway searches is one of the main contributions of this work. To address this point, we have now updated the manuscript by including a better explanation of the pathway searches. Within the revised Appendix, we also include the complete list of 100 pathways tested together with the information on whether the intended target was recovered or not (60% of the time). In addition, we are adding a one-page pdf that contains an example of a pathway search, with a complete visualization of the reaction tree leading to the target product.
**Answers to the questions**
* The RMechDB article (reference 25) introduces a data set with a corresponding database and web server for radical mechanistic reaction steps. It does not involve any reaction modeling or prediction. However, as a service to the machine learning community, RMechDB comes with a standard train-test data split that can be used by anyone to compare different methods. We have added this information to Section 3 of the revised manuscript.
* All the tables report the top N metrics (e.g. N=1, N=2, N=5, N=10) for the corresponding prediction in particular for single-step prediction. These are the standard metrics in the field (Schwaller, Philippe, et al 2019, Coley, Connor W., et al 2017, Irwin, Ross, et al. 2022, Tu et al 2023). As an intuitive example, the significance of the top N metric for single-step prediction (Table 4) is best understood in the context of the pathway (sequence of single steps) search problem. For instance, with a pathway search of length 3, a top 5 metric of 90% results in a 73%=(90%)^3 probability of recovering the target with a tree with a branching factor of 5 provided the pathways considered are congruent with the training set. Additional results are presented in Figure 3 where we assess the robustness of the predictors with respect to reaction type (left) and reactant size (right) using the top-5 metric, which are often important considerations in practical applications. For instance, atmospheric reaction predictions require a predictor to be robust with respect to the six radical reaction types described in the paper, as all six types occur frequently in the atmosphere. We have clarified all these points in the revised version.
* Regarding the reviewer’s question in the third bullet above, the short answer is yes, with a caveat regarding the size of the training datasets. The RMechDB dataset consists of 5500 reactions only, which is in general too small for training a large text model from scratch. However, as stated in Section 4.4, we are able to use a pre-trained model (pre-trained on the US PTO dataset) and fine-tune it using the RMechDB dataset. This Molecular Transformer (Schwaller, Philippe, et al 2019) approach is capable of predicting radical mechanistic reactions and provides intermediate byproducts. However, the existing text-based models do not predict orbital interactions (i.e. arrow codes). Furthermore, the pretraining is done on the reactions in the US PTO data set which are often non-balanced and report only the major product, resulting in biased predictions.
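The back-of-the-envelope arithmetic in the top-N discussion above (a 90% top-5 metric per step giving roughly a 73% chance of recovering a 3-step pathway) can be checked directly. The sketch below simply makes the rebuttal's independence-across-steps assumption explicit:

```python
def pathway_recovery_prob(top_n_acc: float, pathway_length: int) -> float:
    """Probability that every step of a pathway is recovered, assuming each
    single-step prediction succeeds independently with probability top_n_acc."""
    return top_n_acc ** pathway_length

# 90% top-5 accuracy per step over a 3-step pathway
print(round(pathway_recovery_prob(0.90, 3), 3))  # 0.729, i.e. ~73%
```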
**Comments on limitations**
* We agree with the reviewer that the limitations of our work are not sufficiently addressed. In the revised version of the manuscript, we have added a section discussing the limitations of our reaction predictor. In particular, we itemize the limitations in the following three categories:
1. Limited training data: RMechDB, as the first public database of radical mechanistic reactions, includes only 5500 mechanistic reactions. Although these reactions are carefully curated, developing more advanced machine learning models such as LLMs typically requires datasets that are several orders of magnitude larger.
2. Limited range of radical chemistry: RMechDB is focused on textbook reactions and atmospheric phenomena. This focus may affect the generalization capability of the proposed predictor in completely different areas that involve radical chemistry.
3. Limited search methods: Here, we used breadth-first search to search and expand the pathway tree of reactions. Although this approach guarantees the exploration of all the possibilities within the reaction tree, more intelligent search methods [Agostinelli, Forest, et al. 2021] may be capable of speeding up pathway searches.
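The breadth-first expansion described in item 3 can be sketched as follows. This is a minimal illustration, not the authors' implementation: `predict_single_step` is a hypothetical stand-in for the trained single-step predictor, and each tree node is modeled as a set of species.

```python
from collections import deque

def bfs_pathway_search(start, predict_single_step, target, max_depth=3, branching=5):
    """Breadth-first expansion of the reaction pathway tree. Each node is a set
    of species; predict_single_step proposes ranked successor sets, of which
    only the top `branching` are kept. Returns True if the target is reached."""
    root = frozenset(start)
    queue = deque([(root, 0)])
    seen = {root}
    while queue:
        state, depth = queue.popleft()
        if target in state:
            return True
        if depth == max_depth:
            continue  # do not expand beyond the depth limit
        for nxt in predict_single_step(state)[:branching]:
            nxt = frozenset(nxt)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return False
```

As noted in the rebuttal, this exhaustive strategy guarantees exploration of the whole tree up to the depth limit, at the cost of speed compared to learned search heuristics.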
---
Rebuttal Comment 1.1:
Title: Thank for the additional details
Comment: The authors have clarified many of my major concerns and I have adjusted my score accordingly.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's constructive comments and their willingness to adjust their score. We welcome further input to enhance our paper's quality in the time ahead. | Summary: The authors present a reaction predictor system that provides an accurate and interpretable prediction of radical reactions. Due to the lack of training data, there is a dearth of reaction predictors for radical reactions. The authors present 3 deep-learning-based approaches. The first approach is a two-step process that identifies possible reactive sites and then ranks the reactive site pairs. The second approach uses a contrastive learning approach to identify the most reactive site pairs. Finally, the authors also show a transformer-based approach to perform sequence-to-sequence translation from products to reactants. In the two-step, OrbChain approach, the authors present a GNN-based approach to identify reactive sites and a Siamese network-based approach to rank the plausible reactive sites. Multiple reaction representations are used to perform plausibility ranking. The contrastive learning approach also uses a GNN, and both a custom atom pair representation and a hypergraph representation are evaluated. Finally, a pre-trained MolGPT on the USPTO dataset is used as well. The authors find that the graph-based methods outperform MolGPT and the contrastive learning methods yield the most accurate results.
Strengths: - The authors compare multiple models to show the efficacy of different types of models such as GNNs and text-based Transformers for reaction prediction
- The authors also use multiple representations and model architectures for a very thorough evaluation of the proposed reaction
Weaknesses: - It is not clear how or which of the three algorithms described is used in RMechRP.
- The presentation of the paper could be improved. There are 3 approaches described, with multiple models and representations for some approaches. A short summary of the findings and comparisons, or a visualization of the approaches, could significantly improve the presentation.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Line 100: How is the arrow-pushing mechanism A represented in OrbChain?
- In Table 2, what is the Atom Fingerprint method? Morgan fingerprint? ECFP?
- What is the loss function for the contrastive learning approach? (Might be in the appendix)
- Line 334: contrstive -> contrastive?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: - Limited comparison as the authors don’t include a related works section so it is difficult to contextualize the scope of their current work to the field.
- Is there a reason the MolGPT model could not be trained on the new RMechDB dataset for evaluation rather than only fine-tuning?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the comments from the reviewer. We have addressed most of their comments in the revised version of our paper.
**Comments on the weaknesses**
* We agree that it is not obvious which model was used for RMechRP. The model used is the best combination of the two-step prediction method (as shown in Table 4), which provides the best compromise between speed, accuracy, and chemical interpretability. We have clarified this point in the revised version in Section 5.3.
We agree with the reviewer that the number of models and representations might be confusing. To clarify this point, we have prepared a new figure that we have added to the revised manuscript (see also attached pdf).
**Answers to the questions**
* The arrow-pushing mechanism is represented using the numbers associated with the atoms and bonds involved in the arrow-pushing mechanism. It is best to illustrate this with an example.
Reaction SMIRKS: CC(C)[O:10][N+:20]([O-])=O.[Ar]>>CC(C)[O:10].[O-][N+:20]=O.[Ar]
Arrow codes: 10,20-10;10,20-20
The arrow codes above represent two arrows (separated by ';'). The first arrow starts at the bond between atom 10 and atom 20 and ends at atom 10. The second arrow starts at the bond between atom 10 and atom 20 and ends at atom 20. This information is already explained in the original RMechDB article (reference 25 in the manuscript), which we now also cite at line 100 to clarify this point.
* Within Table 2, Atom Fingerprints refers to the method that is described in line 138. To avoid any confusion with other kinds of fingerprints, we have changed the name of this method to Atom Descriptor at line 138 and we have revised Table 2 accordingly.
* The loss function of the contrastive learning approach is provided in line 48 of the appendix (Equation 1). For even greater clarity, we have also added this equation to Section 4.3.1 of the revised main manuscript.
* In addition to the typo in line 334, we have fixed all the remaining typos in the main manuscript and the appendix in the revised version.
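To make the arrow-code convention described above (e.g. `10,20-10;10,20-20`) concrete, here is a small, hypothetical parser. It is purely illustrative of the notation and is not the authors' OrbChain code:

```python
def parse_arrow_codes(codes: str):
    """Split arrow codes into (source, target-atom) pairs. A source with two
    atom-map numbers denotes a bond; a single number would denote a lone
    orbital. Illustrative sketch of the convention only."""
    arrows = []
    for arrow in codes.split(";"):
        source, target = arrow.split("-")
        atoms = tuple(int(a) for a in source.split(","))
        arrows.append((atoms, int(target)))
    return arrows

print(parse_arrow_codes("10,20-10;10,20-20"))
# [((10, 20), 10), ((10, 20), 20)]
```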
**Comments on the limitations**
* As the reviewers noticed, we merged the related work and introduction into one section where we reviewed the other predictors. As stated in the second paragraph of that section, most of the reaction predictors operate at the level of multi-step transformations, not at the level of single mechanistic steps. To the best of our knowledge, the only other reaction predictor focused on predicting mechanistic steps is the one by Bradshaw et al. 2018, which however is focused on non-radical reactions. So, our proposed reaction predictor is the only predictor with the scope of predicting radical reactions at the mechanistic level.
* Other text-based models can be trained on the RMechDB data. However, it is essential to realize that RMechDB consists of a relatively small set of 5500 mechanistic reactions. Large language models (e.g., MolGPT) require a much larger number of training samples. While we did try to train an LLM from scratch, we found that it did not work well and thus adopted the more practical strategy of pretraining the LLM on the larger US PTO dataset, followed by fine-tuning it on the RMechDB dataset.
---
Rebuttal Comment 1.1:
Comment: As the deadline is approaching, we are keen to ensure that our responses addressed all the concerns raised in your review.
Based on your valuable feedback, we have made substantial revisions to improve the quality and clarity of our submission. Therefore, could you kindly consider updating your review or score to reflect these improvements?
If there are still any unresolved concerns or areas that need further clarification, please let us know so that we can address them promptly. | Summary: - a new model is described for prediction of radical chemical reactions
- the model is trained on a dedicated database of radical reactions for atmospheric chemistry, an important application
- several, reasonable baselines are evaluated
Strengths: - reasonable, state of the art ML modelling (contrastive learning, attention GNNs, reasonable reaction representations inspired from molecular orbitals, building on previous work by Baldi's group)
- reasonable strong baselines (transformers)
- compelling results
- important application
Weaknesses: - other baselines, like MEGAN https://pubs.acs.org/doi/abs/10.1021/acs.jcim.1c00537 or https://www.nature.com/articles/s42256-022-00526-z could be considered
### Related work
Several references in the introduction are not correct:
The Cao & Kipf MolGAN paper should be removed, because it does not deal with chemical reactions.
similarly, the Rogers et al ECFP does not deal with reaction prediction, and should be removed in the intro.
On the other hand, the Segler et al paper should be cited as an ML paper.
The ELECTRO paper by Bradshaw et al should be added. https://arxiv.org/abs/1805.10970
contrastive learning to distinguish between plausible and implausible reactions has already been used in https://www.nature.com/articles/nature25978 (called in-scope filter there), which should be referenced as well
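For concreteness, a contrastive objective of the kind used to separate plausible from implausible reactions (as in the in-scope filter mentioned above) could look like the following sketch. This is a generic softmax-style loss for illustration only; the loss actually used in the paper under review may differ:

```python
import math

def contrastive_loss(pos_score, neg_scores, temperature=1.0):
    """Generic softmax (InfoNCE-style) contrastive loss: drives the score of
    the plausible reaction above the scores of implausible alternatives."""
    logits = [pos_score / temperature] + [s / temperature for s in neg_scores]
    m = max(logits)  # subtract the max for numerical stability
    denom = sum(math.exp(x - m) for x in logits)
    return m - logits[0] + math.log(denom)
```

With equal positive and negative scores the loss reduces to log(K) for K candidates, and it shrinks toward zero as the plausible reaction is scored ever higher.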
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: no questions, this is a solid, straightforward paper in my opinion
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the fair comments from the reviewer and the fact that they acknowledged the importance of this work. We agree with all the suggested changes. We have revised our manuscript by implementing all the comments from this reviewer. Specifically, within the revised draft, we have removed the Cao & Kipf MolGAN and Rogers et al (ECFP) papers from the introduction. In addition, we have added the two suggested references (Bradshaw et al. 2018 and Siegler et al. 2018) to the introduction. Furthermore, we also cite Segler et al. again in Section 4.3 which is on contrastive learning.
---
Rebuttal Comment 1.1:
Comment: Thank you! | Summary: Authors present two models that predicts radical chemistry reactions. The first model 'OrbChain' is comprised of two components 1) one GNN model for predicting pairs of reacting atoms/groups, 2) a model which ranks the plausibility of these pairs. The second model is a a fine-tuned Rxn-Hypergraph model, adapted for the task of predicting radical mechanism by using an atom classifier model.
Strengths: + Selects an interesting problem domain, specifically radical based chemistry.
+ Authors plan to open source the Radical Mechanistic Prediction model and release software for easier use.
+ Compares against relevant baselines, such as fingerprint representations, MolecularTransformer.
Weaknesses: The presentation of results could be more clear. In particular:
- It would be helpful to have a clear statement of the key contributions provided by this work. The list of desiderata provided at the end of section 2 are important, but I believe that these properties have already been provided by previous models, especially references [20, 21] for radical reactions.
- If I understand correctly, OrbChain is the name of the two-part model, but components of the model are still used for the second modeling approach using the fine-tuned Rxn-Hypergraph model.
- The table formatting makes the results somewhat difficult to parse; it would be helpful to have more spacing between the caption and the table.
- In Table 3, there is one column with 'AP \n Morgan2 \n TT', and it wasn't immediately obvious that these are different molecular descriptors.
- For Figure 3, I believe the reaction type should be 'Homolysis' rather than homolyze.
- For Figure 3, It would be helpful to have a sense of the number of reactions in each class to better compare the relative performance by the model between reaction classes.
- There are several typos in the manuscript, e.g. a missing close parenthesis in lines 48-49 of page 2, 'weather' instead of 'whether' on line 188 on page 5, some tense mismatches. Please review for grammar errors.
In Section 2, the authors state that 'None of the current reaction predictors can offer ... chemical interpretability, pathway interpretability, or balanced atom mapping'. There are actually several models that provide interpretability for reaction mechanisms/reaction types. In addition to the works on radical mechanism prediction cited by the authors as references 20 and 21:
- In https://arxiv.org/pdf/1805.10970.pdf, Bradshaw et al. predict electron-pair pushing mechanisms with a generative model.
- The MolecularTransformer model has also been shown to provide atom mapping by visualizing attention weights (https://arxiv.org/pdf/2012.06051.pdf, Figure 2).
For text based models such as Molecular Transformer, could you quantify the percentage of reaction predictions that suffer from a 'balance problem'? It isn't clear to me that this is a big issue with MolecularTransformer or other text based models.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Could you provide more information on the train test split used in the RMechDB? Are there any splits that are used to measure generalizability of predictions to out of domain reactions (e.g. by reaction type, atom types, structure similarity)
- Since interpretability is one of the benefits highlighted by this modeling approach, it would be interesting to see more examples of mechanism prediction by the proposed model in the main text.
- For the Pathway Search task, the results say that 60% of the reactants were found in the expanded reaction trees (of 10 step mechanisms). How well do current non-ML techniques perform on this task? How many of the proposed reactants are false positives; are false positive predictions of reactants detrimental to the problem prediction?
- What is the final model used in the RMechRP Software?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: I do not identify any negative societal implications.
By my understanding, the presented model is intended to be limited for only radical based reactions, as it is trained on this domain of reactions, and not for other types of chemical reactions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the comments from the reviewer. We have addressed most of their comments in the revised version of the manuscript.
**Comments on the weaknesses**
* We agree that references [20, 21] followed a similar approach toward reaction prediction. Nonetheless, our reaction predictor marks a substantial advancement across various facets, including training data, reaction modeling, and machine learning techniques. Notably, [20, 21] present radical predictors built upon basic machine learning models trained on a limited dataset comprising only 96 radical reactions, and these references cannot provide chemical and pathway interpretability.
* It is important to note that OrbChain is not a prediction method. As explained in Section 4.1, OrbChain is the name of the method we developed to model a radical mechanistic reaction based on idealized molecular orbitals. Using OrbChain as a tool for modeling mechanistic reactions, we are able to develop the first two prediction methods: two-step prediction and contrastive learning.
* Within the revised manuscript, we have fixed the spacing between the caption and the table to improve readability.
* Within the revised manuscript, we have added information on AP, Morgan2, and TT used for the reactionFP method in Section 4.2.2.
* About Figure 3, the reviewer is correct regarding the name of the reaction type. Within the revised manuscript, we have changed “Homolyze” to “Homolysis”.
* In Figure 3, the number of reactions in each class is shown by the blue bar which is labeled “All Reactions”.
* In addition to all the typos mentioned by the reviewer, we have fixed all the remaining typos in both the main manuscript and the Appendix.
* In Section 2, where we delve into the advantages of our proposed reaction prediction method, we introduced the concepts of "chemical interpretability" and "pathway interpretability." Based on our definitions, existing reaction prediction models fall short of delivering these two benefits. This deficiency arises because these models are either not trained with orbital level information (for chemical interpretability) or lack training on mechanistic reaction data (for pathway interpretability). Although the current reaction prediction models can offer interpretability within the context of machine learning (e.g., attention weights), it's important to distinguish this from the chemical and pathway interpretability as per our definitions. The third advantage of our reaction prediction model lies in its ability to maintain balance. The concern of balance preservation is seldom addressed in models trained on USPTO data, given the inherent imbalance of the data source. Therefore, the qualitative assessment of the reviewer that the balance effect “is not a big issue” lacks substantiating evidence. Conversely, when applying a reaction predictor to specific real-world scenarios like drug degradation and mass spectrometry, preserving the balance is an essential consideration. Since our reaction predictor is capable of predicting balanced reactions, it has the potential to be applied to a range of problems where the use of current reaction prediction models is not practical.
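On the balance point above, one illustrative way to flag an unbalanced prediction is to compare element counts on the two sides of a reaction. The sketch below is a hypothetical check over simple molecular formulas (no parentheses, charges, or isotopes); a real implementation would operate on atom-mapped SMILES with a cheminformatics toolkit:

```python
import re
from collections import Counter

def formula_counts(formula):
    """Element counts for a simple molecular formula such as 'C2H6O'.
    Handles only flat formulas -- illustrative sketch, not production code."""
    counts = Counter()
    for element, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(num or "1")
    return counts

def is_balanced(reactants, products):
    """True if every element appears equally often on both sides."""
    left = sum((formula_counts(f) for f in reactants), Counter())
    right = sum((formula_counts(f) for f in products), Counter())
    return left == right

print(is_balanced(["C2H6O"], ["C2H4", "H2O"]))  # True
print(is_balanced(["CH4", "O2"], ["CO2"]))      # False (hydrogens are lost)
```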
**Answers to the questions**
* The RMechDB article (reference 25) released an online platform of radical mechanistic reactions with splits into train and test sets. Quoting from the RMechDB paper, “the reaction data are carefully split into train and test data where the test set replicates the distribution of the reaction type and reactive orbitals of the training data”. Also, RMechDB data (both train and test sets) are extracted from the radical chemistry textbooks and research articles on atmospheric chemistry. Therefore, there is no test set to explicitly measure the generalization capability of the model for out-of-domain reactions.
* We agree that the description of the pathway search experiment is too succinct, especially as we believe that the ability to do pathway searches is one of the main contributions of this work. To address this point, we have now updated the manuscript by including a better explanation of the pathway searches. Within the revised Appendix, we also include the complete list of 100 pathways tested together with the information on whether the intended target was recovered or not (60% of the time). In addition, we are adding a one-page pdf that contains an example of a pathway search, with a complete visualization of the reaction tree leading to the target product.
* The final model used in the RMechRP software is the best combination of the two-step prediction method to provide the orbital and pathway interpretability. Within the revised draft, we have added this information to Section 5.3.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their explanations and efforts to make the work more accessible. The addition of Figure 1 in the attachment is very helpful for laying out the problem and the proposed solutions. I think the authors could do more to expand either Figure 1 or its caption to better highlight the contributions of this work.
Based on my understanding of your responses and rereading the paper, my understanding of the main contributions of this work are:
- Benchmarking several methods on the radical-specific dataset RMechDB (ref. 25). The transformer model and RxnHypergraph are previously published models, but the authors here introduce new features to make use of the RxnHypergraph model. Of the three proposed methods, only the two-step prediction method provides chemical interpretability (i.e. an assignment of molecular orbitals), but all methods contribute pathway interpretability.
- OrbChain is a new GNN model presented in this work that can 1) classify reaction types, and 2) predict reaction outcomes using the reaction types. The method here is somewhat similar to the reaction prediction algorithms published in ref. 20 (Learning to Predict Reactions). I do not follow the authors' point in the rebuttal about how the model in this reference does not provide chemical interpretability -- if I understand this work correctly, I believe specific one-step mechanisms are predicted by the model in the form of identifying electron 'sources' and electron 'sinks' and solving a matching problem. If I understand correctly, the main difference with OrbChain is that it is a GNN model, and covers a much wider scope of reactions because of the use of RMechDB as training data; is this correct?
With the changes made by the authors to help clarify the contributions of this work, and the changes proposed by the authors to improve readability/typos, I have edited scores given in my original review.
Question specifically about Figure 1 in the attachment:
- What is meant by OrbChain generating 'Labels'? Does this mean reaction classification labels?
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their constructive comments. We also appreciate their willingness to adjust their score. Using their second set of comments we have further revised our manuscript.
**Regarding Figure 1 in the attachment**
* We agree with the reviewer that the caption is perhaps too concise. As a result, in the revised version, we have expanded the caption to read:
“This is a schematic depiction of the prediction problem, the processing tool (OrbChain), and the three approaches. The three approaches are: Two Step Prediction, Contrastive Learning, and Text-Based. The first two approaches use OrbChain to find the reactive orbitals for training and to form the products during inference.”
**Regarding the first question on OrbChain**
* We wish to clarify that OrbChain is not really a GNN, but rather a reaction processing tool that is used by the GNN models (see Equation 1 describing OrbChain). This processing step assigns labels to molecular orbitals and their atoms before training the GNNs. This also answers the second question about OrbChain raised by the reviewer: OrbChain provides labels at the level of orbitals and atoms, not at the level of reactions.
The text-based models do not need OrbChain because they operate directly on the text representations, not the orbitals. This reviewer is correct in stating that the use of the RMechDB dataset significantly expands the scope of the reactions. | Rebuttal 1:
Rebuttal: This one-page pdf file includes three figures that are generated to improve the clarity of the paper and also to provide a better response to the reviewers' comments.
Pdf: /pdf/367681b88861369410d1d9ebce9685fa2dd26598.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Adversarially Robust Distributed Count Tracking via Partial Differential Privacy | Accept (poster) | Summary: This paper studies adversarial robustness for the problem of counting on distributed streams, where the input may be adaptive based on previous outputs by a protocol across $k$ sites and the objective is for a central server to output an additive $\alpha m$ approximation to the stream of length $m$. When the input is oblivious, there is a randomized protocol that uses $\tilde{O}(\sqrt{k}/\alpha)$ communication, which improves upon a deterministic protocol using $\tilde{O}(k/\alpha)$ communication. However, it is not clear how the protocol behaves when the input is adaptive. This paper gives a protocol that uses $\tilde{O}(\sqrt{k}/\alpha)$ communication, showing there is essentially no overhead for adversarial inputs.
The algorithm works by having each site split its input into blocks and notify the central server if its number of updates within a server block surpasses a certain randomized threshold. Whereas a previous work in adversarially robust streaming used sparse vector technique and differential privacy to protect the internal randomness of the algorithm, the challenge is that distributed streams are event based, and so sparse vector technique will not work. Instead, this paper proposes a new notion of partial differential privacy and shows that not only does it handle generalization with respect to the sample accuracy, but also the distributional accuracy. Thus the output is close to its expectation, which circumvents a complicated analysis of the estimator.
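For intuition, the site-side idea summarized above might be sketched as follows (a toy rendering based on this summary only, not the paper's actual protocol; the block size, threshold distribution, and reporting rule here are invented for illustration):

```python
import random

def track(updates_per_site, block=100, seed=0):
    # Toy sketch: each site accumulates local updates and notifies the server
    # once its pending count passes a randomized threshold drawn for the
    # current block (threshold distribution is made up for illustration).
    rng = random.Random(seed)
    k = len(updates_per_site)
    pending = [0] * k
    thresholds = [rng.randint(block // 2, block) for _ in range(k)]
    server_total, messages = 0, 0
    for site, m in enumerate(updates_per_site):
        for _ in range(m):
            pending[site] += 1
            if pending[site] >= thresholds[site]:
                server_total += pending[site]   # site reports its block
                messages += 1
                pending[site] = 0
                thresholds[site] = rng.randint(block // 2, block)
    return server_total, messages

est, msgs = track([1000] * 10)
# The server only misses the still-pending counts: at most k * block updates.
assert 0 <= 10 * 1000 - est <= 10 * 100
```

The point of the randomized thresholds in the actual protocol is precisely what the randomization-vs-adaptivity discussion is about: an adaptive adversary observing the outputs could otherwise learn and exploit fixed reporting boundaries.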
Strengths: There are multiple solid contributions by this paper.
- Firstly, it introduces the problem of adversarial robustness to distributed streams.
- Secondly, it gives an algorithm that shows there is no overhead for the counting problem, which is not clear at all a priori.
- Thirdly, the paper introduces the concept of partial differential privacy and proves interesting generalization properties of partial differential privacy. The paper notes that partial differential privacy may be of independent interest.
- There is a good summary of related work for adversarial robustness, the central techniques in those works, and why they do not apply in this setting.
- Adversarial robustness on streams is a relevant topic for both the machine learning and theory communities.
Weaknesses: - The communication cost is actually $O(k+\sqrt{k}(\log N)/\alpha)$ rather than $O(\sqrt{k}(\log N)/\alpha)$, so the regime of improvement is for $k>\frac{1}{\alpha^2}$, though it should be noted that this same weakness is also present in previous work [13] on oblivious distributed streams. Nevertheless this should be corrected in the formal guarantees.
- While there is a good summary of related work for adversarial robustness and why they do not apply in this setting, it is not clear why natural modifications to the techniques of [13] would not work.
Minor comments:
- It should be noted that $\tilde{O}$ is being used to suppress polylogarithmic factors in $N$, rather than $\tilde{O}(f)=O(f)\cdot\text{polylog}(f)$, since the notation $\tilde{O}(\sqrt{k}/\alpha)$ is used to mean $O(\sqrt{k}(\log N)/\alpha)$.
- [18] is "Hassidim" rather than "Hasidim" (note the correct spelling in [21])
- Line 294: "the the"
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Is it possible to think of partial differential privacy to protect a certain set of elements in the manner of carefully distributing the privacy budget as in [20]?
EDIT: I acknowledge receipt of the author response and am currently choosing to maintain my score.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
1. The communication cost is actually $O(k+ \sqrt{k}(\log N)/\alpha)$ rather than $O(\sqrt{k}(\log N)/\alpha)$, so the regime of improvement is for $k > 1/\alpha^2$, though it should be noted that this same weakness is also present in previous work [13] on oblivious distributed streams. Nevertheless this should be corrected in the formal guarantees.
It is true that the communication cost is $O(k\log N + \sqrt{k}\log N/\alpha)$; thanks for pointing this out, and we will make this clear in the revision. However, $O(k\log N + \sqrt{k}\log N/\alpha)$ is still always better than $O(k\log N/\alpha)$, which is the deterministic bound.
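To make the parameter regimes concrete, here is a toy comparison (our illustration only; the big-O bounds hide constants, and `log2` stands in for the polylogarithmic factor):

```python
import math

def randomized_cost(k, alpha, N):
    # O(k*log N + sqrt(k)*log N / alpha), constants dropped for illustration
    return k * math.log2(N) + math.sqrt(k) * math.log2(N) / alpha

def deterministic_cost(k, alpha, N):
    # O(k*log N / alpha), the deterministic bound
    return k * math.log2(N) / alpha

# The additive k*log N term dominates only when k > 1/alpha^2; for small
# alpha the randomized bound is still far cheaper than the deterministic one.
k, alpha, N = 100, 0.01, 10**6
assert randomized_cost(k, alpha, N) < deterministic_cost(k, alpha, N)
```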
2. While there is a good summary of related work for adversarial robustness and why they do not apply in this setting, it is not clear why natural modifications to the techniques of [13] would not work.
Thanks for the question. It is not impossible to obtain robust algorithms by applying natural modifications to oblivious algorithms. However, proving that such modifications actually achieve robustness can be very challenging, since we need to rule out all adaptive adversarial strategies, and the posterior distribution of the random bits after adaptive interactions can be extremely difficult to analyze. This is a common challenge in both distributed and centralized streaming settings and is the main motivation behind techniques such as the DP framework, which provides analytical tools for proving robustness.
3. Minor comments.
Thanks for your comments. We will correct them in the revision.
4. Is it possible to think of partial differential privacy to protect a certain set of elements in the manner of carefully distributing the privacy budget as in [20]?
There are similarities between the two techniques. However, they are quite different in several aspects. First of all, the overall framework in [20] still requires full DP to apply the "DP to generalization" theorem. On the contrary, our partial DP notion is a strict relaxation of that. The relaxation is essential, since it is difficult, if not impossible, to achieve full DP without sacrificing communication in our setting, due to the event-driven nature.
More technically, [20] applies the revised sparse vector technique of [KMS]: it removes data points participating in too many "meaningful queries" to protect their privacy. However, in our problem the privacy budget of each data point is not "smooth": a single query can make it completely revealed; and we cannot anticipate such events beforehand and remove the data before it is revealed. This motivates us to define partial DP which allows privacy leakage on a small subset. In some sense, the privacy leaked set plays the same role as the deleted set in their framework, except that the deleted set in their framework is still private while the privacy leaked set is not.
For applications of DP where only generalization is concerned, partial DP can be an alternative tool to the revised sparse vector technique of [KMS]. It is easier to use since it does not require explicit privacy budget control, and there is no need to track which data points are compromised. For example, we may simply apply the standard sparse vector technique without removing data points and prove that it satisfies partial DP. This can potentially simplify the design and analysis of robust algorithms.
[HKM+] Avinatan Hassidim, Haim Kaplan, Yishay Mansour, Yossi Matias, Uri Stemmer. Adversarially robust streaming algorithms via differential privacy.
[KMS] Haim Kaplan, Yishay Mansour, Uri Stemmer, The Sparse Vector Technique, Revisited. | Summary: The paper provides an algorithm or the distributed count tracking that both enjoys the communication advantage from randomization and is robust to adaptive adversaries. Another contribution of the paper is that it introduces a new *partial differential privacy* definition and completes related generalization theorem that can have potential broader implications.
Strengths: **Originality** The proposed method is novel by creatively designing a new privacy criteria and randomizing the communication to account for adaptive adversaries.
The new definition *partial DP* and the generalization theorem will be useful for other stream data setting.
**Quality** The theoretical analysis looks sound; the authors can verify the efficient tradeoffs among privacy, accuracy, and communication of the proposed method.
**Clarity** The problem is well-defined and clearly stated. The method and the analysis are connected smoothly.
**Significance** The problem being solved is important. The new DP definition can have broader impact for other private ML tasks.
Weaknesses: **No experiments** Could you please justify why you think experiments are not needed in your paper? Because you claim your proposed method is the first robust tracking algorithm in the domain, it seems important to offer some baseline experiments to help future work.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. I am curious what are potential scenarios that your newly proposed partial DP may be important? It would be helpful to suggest some.
2. Do you think it would be a great idea to briefly mention how the algorithm you propose can be easily adapted for other robust distributed tracking beyond the count tracking?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors do not discuss the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. No experiments.
Our primary focus is on the theoretical aspects of the problem. Since the robustness of an algorithm against all possible adversaries, which is a desirable property, can only be established by theoretical proofs, we think a thorough theoretical study is of higher priority. However, we acknowledge the value of incorporating experimental studies as a complementary perspective to our theoretical findings. We will make efforts to conduct experimental studies and make the code available to facilitate future work.
2. What are the potential scenarios that your newly proposed partial DP may be important?
* Firstly, partial DP should be an important tool for designing robust distributed tracking algorithms, as the event-driven nature poses challenges in achieving differential privacy with a moderate level of noise. Even in the centralized streaming setting, where DP is achievable, partial DP could potentially simplify the design and analysis of robust algorithms, since it is always easier to satisfy partial DP than DP.
* From a broader perspective, we believe that the generalization property of partial DP can find wider applications beyond robust algorithms design. In any application using the generalization property of differential privacy, one can consider replacing DP with the less stringent partial DP.
3. Do you think it would be a great idea to briefly mention how the algorithm you propose can be easily adapted for other robust tracking beyond the count tracking ?
Thank you for the suggestion. Count tracking is a key building block for tracking heavy hitters and quantiles [1], so we think that, by combining techniques from [1], it is not difficult to adapt our algorithm to such problems. On the other hand, many other tracking problems can differ significantly, so it remains a good research problem whether the algorithm in this work can be directly applied to them. However, the challenge of privacy protection posed by the event-driven nature is inherent in all tracking problems, and thus we believe that the generalization theorem of partial differential privacy developed in this work can be a helpful tool in general. We will provide a brief discussion of this in the revision.
[1] Zengfeng Huang, Ke Yi, and Qin Zhang. Randomized algorithms for tracking distributed count, frequencies, and ranks. | Summary: This work studies adversarially robust count tracking in a distributed computation setting. In particular the work considers a setting with an adaptive adversary that can choose future inputs based on the outputs produced by the server so far. This is in contrast to prior work on distributed count tracking that considers only oblivious adversaries that must choose the entire input stream before the session begins.
This is the first work to consider robustness against active adversaries in the distributed setting. However, prior work in the central model [18] proves that a differentially private algorithm can be used to extend obliviously robust algorithms to ones that are robust against active adversaries while retaining similar accuracy guarantees. Taking inspiration from [18], this work uses a notion related to differential privacy, and a distributed algorithm that is robust against oblivious adversaries, to construct a distributed algorithm that is robust against active adversaries. Instead of a general black box reduction like in [18] (one that can be applied to any robust algorithm), this work directly modifies some oblivious algorithms to obtain algorithms that are robust against active adversaries. The work additionally argues that the communication complexity of their algorithms is near optimal.
Strengths: Like the authors mention in their manuscript, and support with a clear discussion: it is very important to study the robustness of streaming algorithms in the presence of active adversaries since this is a more realistic model of the settings in which these algorithms operate.
This work provides a distributed count tracking algorithm that is robust against active adversaries while also achieving near optimal communication complexity and accuracy.
Weaknesses: This work states that the event driven nature of distributed tracking algorithms prevents them from protecting randomness in the DP sense. However, they do not provide a concrete claim or any formal proof of this fact. The only explanation of this claim is a brief paragraph on page 4 of the manuscript, making it difficult to understand what exactly the authors mean by the statement and also making it difficult to verify if their claim is true. Given that this claim is the central motivation for defining partial differential privacy, I would have hoped for a more thorough treatment of the claim. They also do not define differential privacy against adaptive attackers in the distributed setting.
This paper doesn’t provide a discussion of the privacy consequences of using their partial DP notion to provide privacy in distributed settings. Since this work defines a new notion that they call privacy, I believe it is essential to provide at least some discussion of:
- the limitations of this new notion
- whether the definition should be expected to provide real world privacy for computations
Minor comments:
- Despite switching between discussions of additive and multiplicative error throughout the manuscript, this work does not define accuracy anywhere and often uses the phrase accuracy without an ‘additive’ or ‘multiplicative’ quantifier.
- This work often states that they use ‘differential privacy’ to achieve robustness against active adversaries in the distributed setting, therefore conflating differential privacy with the weaker notion defined in this work.
- The way that the authors phrase the statement that their algorithm does not treat existing oblivious algorithms as a black-box is somewhat misleading. The statement seems to imply that a non black-box use of prior algorithms is generally desirable; However, while a non black-box use of algorithms is sometimes necessary to achieve optimal guarantees – a black-box use of prior algorithms often leads to a more generalizable result.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Regarding the claim: “the event driven nature of distributed tracking algorithms prevents them from protecting randomness in the DP sense.”
- [With respect to the discussion on pg 4] Why does the fact that the server can update the output only after it receives an input imply that you cannot provide privacy against active attackers in the distributed setting? Why should we expect this to be different than in the central setting?
- In the above claim, are you considering adaptive differential privacy as in https://arxiv.org/abs/2112.00828?
- Could you explain, formally, what exactly your claim is?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: This paper doesn’t provide a discussion of the privacy consequences of using their partial DP notion to provide privacy in distributed settings. Since this work defines a new notion that they call privacy, I believe it is essential to provide at least some discussion of:
- the limitations of this new notion
- whether the definition should be expected to provide real world privacy for computations
Finally, if this notion does not indeed provide intuitive individual or group privacy in the setting the authors consider, this should be made very clear – perhaps by calling the notion something other than ’privacy’.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. This paper doesn’t provide a discussion of the privacy consequences of using their partial DP notion to provide privacy in distributed settings. Since this work defines a new notion that they call privacy, I believe it is essential to provide at least some discussion of: the limitations of this new notion? whether the definition should be expected to provide real world privacy for computations?
* We emphasize that the robust tracking problem studied is not a privacy protection problem per se: we only want to track the answer accurately against an adaptive adversary, and there is *no* privacy concern. According to the framework of [1], what we really need is to prove the algorithm satisfies a generalization result; and informally speaking, DP implies generalization [1], which is the reason why DP is used. A key contribution of this paper is that we show DP is actually not necessary for this purpose, and a relaxed version, namely partial DP, is sufficient.
* So, whether partial DP is a good notion for providing privacy, and what its limitations are in privacy-sensitive applications, is not so relevant to the theme of the paper, since we only care about its generalization performance, for which we provide a rigorous proof. One could give the new notion another name that avoids the word privacy, but we feel partial DP is appropriate because of its intimate connection to DP.
2. About the event driven claim: Why does the fact that the server can update the output only after it receives an input imply that you cannot provide privacy against active attackers in the distributed setting? Why should we expect this to be different than in the central setting?
* We only claim that, due to the event-driven nature, running a centralized DP algorithm on the server does not necessarily provide a DP guarantee for all sites; the paragraph on page 4 provides a counterexample. So the framework in [1] is not immediately applicable. However, we do not claim that "you cannot provide privacy against active attackers in the distributed setting". In fact, as we have discussed on page 4, sites can protect privacy by adding noise locally, except that this results in a bigger error.
* Basically, we provide intuitive reasons why achieving full DP and the same error as in the oblivious setting simultaneously is difficult. In line 171 of page 4, the statement is "there is a fundamental challenge...", where we do not claim that this is theoretically impossible; it could be achievable but seems extremely challenging.
* On the other hand, we show that partial DP with optimal error is achievable, and more importantly, partial DP has the same robustness guarantee as DP, and thus the fundamental challenge mentioned above is circumvented. To sum up, it is difficult, if not impossible, to apply the old method [1] in our setting (but we have never made this a mathematical statement); The contribution of this paper is that we provide a new method which circumvents this difficulty and solves the problem optimally.
3. Are you considering adaptive differential privacy as in https://arxiv.org/abs/2112.00828?
No. The difference is twofold. First, in their setting, the sensitive dataset is received in a streaming fashion, while in our problem, the actual dataset we want to protect is the random bits used in the randomized tracking algorithm (not the data streams), which are all given in the beginning. Second, they only consider the centralized setting, which is quite different from our distributed setting.
4. Multiplicative or additive error?
In Section 1.1 (line 86) of our paper, we introduce the objective of our problem, which is to output an estimate within $(1\pm \alpha)$ multiplicative error. Our analysis considers each round separately. In each round the count $N$ increases by at most a factor of 2 (see line 273), and thus a $(1\pm \alpha)$ multiplicative error is equivalent to an $O(\alpha) N$ additive error in this round. So we focus on the additive error in each round, which is easier to analyze. We appreciate the feedback and will make this clearer in the revised version.
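As a toy sanity check of this additive/multiplicative equivalence (our illustration with arbitrary values of alpha and N0, not numbers from the paper): if the count stays in [N0, 2*N0] within a round, the two error notions translate into each other up to a factor of 2.

```python
N0, alpha = 1024, 0.25  # illustrative values, chosen arbitrarily

# An estimate with additive error alpha*N0 has multiplicative error at most alpha:
worst_mult = max(alpha * N0 / N for N in range(N0, 2 * N0 + 1))

# An estimate with (1 +/- alpha) multiplicative error has additive error
# at most alpha * 2*N0 = O(alpha) * N0:
worst_add = max(alpha * N for N in range(N0, 2 * N0 + 1))

assert worst_mult <= alpha
assert worst_add <= 2 * alpha * N0
```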
5. Conflating differential privacy with the weaker notion.
Thanks for pointing it out. We will try to rephrase these statements to avoid such misunderstandings.
6. Statements about non black-box use.
Sorry for the misunderstanding. We pointed out the non-black-box use only to emphasize that the techniques and analyses used in our work are very different from existing ones with black-box use. We agree that a black-box use of prior algorithms is more generalizable. We will rephrase these statements to mitigate any potential misunderstandings.
[1] Hassidim A, Kaplan H, Mansour Y, et al. Adversarially robust streaming algorithms via differential privacy[J]. Advances in Neural Information Processing Systems, 2020, 33: 147-158.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their responses! I have a better understanding of the work and have updated my review accordingly. | Summary: The paper considers a distributed count tracking problem introduced at PODS 2012: k parties receive (possibly adversarial) updates to a common counter, and need to communicate with a server to maintain an approximation of the total number of updates received. The goal is to minimize communication from the parties to the server. In PODS 2012 it was shown that communication proportional to $\sqrt{k}$ is sufficient, improving on a simple deterministic algorithm for which communication is proportional to $k$. The $\sqrt{k}$ bound was also shown optimal up to polylogarithmic factors. However, the upper bound crucially assumes an oblivious adversary, such that the inputs to the parties may not depend on the estimates reported so far. The present paper shows that this assumption can be removed. The idea is to use techniques from differential privacy, but existing techniques do not apply, so the paper proposes a new version of these techniques that is applicable in the distributed setting.
UPDATE: After reading the review of MMAV and the corresponding rebuttal I am increasing my score slightly.
Strengths: - Resolves a decade-old open problem regarding one of the most natural problems in distributed functional monitoring
- Introduces interesting new techniques for dealing with adaptive, adversarial inputs, which should arguably be the standard model for any dynamic algorithm claiming to be robust
Weaknesses: - The paper does not really try to argue for relevance to machine learning. Unless this is addressed, it would probably be of interest to only a small fraction of NeurIPS attendees.
- Though adversarial robustness and the connection to differential privacy has been studied in NeurIPS before, this has been in the context of streaming algorithms, which are probably closer to machine learning applications. The paper feels more like a database theory paper than a NeurIPS paper.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - Can you elaborate on why the results of the paper are of interest for a machine learning audience? Are there, for example, potential implications for federated learning?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: I would have liked the model to be described more clearly. If my understanding is correct, only communication from sites to the server is counted, while the server is able to broadcast estimates "for free" to the sites. It is unclear in what settings this kind of asymmetry in accounting for communication is a good model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. The paper does not really try to argue for relevance to machine learning. Can you elaborate on why the results of the paper are of interest for a machine learning audience?
* First, as we have repeatedly emphasized in our paper, a key contribution of this paper is introducing a new notion of differential privacy and proving a new generalization theorem, which are both highly relevant topics in machine learning. In particular, as we all know, the study of generalization is a central topic in machine learning research.
* Second, as another reviewer said, adversarial robustness on streams is a relevant topic for the machine learning community. Recently, a lot of research papers, e.g., [1, 2, 3], on this topic have been published in machine learning conferences like NeurIPS and ICML. We extend centralized streaming to the distributed streaming setting, which is no less relevant, since many real world machine learning scenarios are inherently distributed and the data are received as multiple streams. Moreover, the counting problem studied in this paper is even more basic and ubiquitous than many other problems studied in prior work.
[1] Hassidim A, Kaplan H, Mansour Y, et al. Adversarially robust streaming algorithms via differential privacy[J]. Advances in Neural Information Processing Systems, 2020, 33: 147-158.
[2] Cherapanamjeri Y, Nelson J. On adaptive distance estimation[J]. Advances in Neural Information Processing Systems, 2020, 33: 11178-11190.
[3] Cohen E, Lyu X, Nelson J, et al. On the robustness of countsketch to adaptive inputs[C]//International Conference on Machine Learning. PMLR, 2022: 4112-4140.
2. Only communication from sites to the server is counted, while the server is able to broadcast estimates 'for free' to the sites.
* We do count the communication from the server. The communication cost is defined in Section 1.1 (Problem Definitions and Previous Results), which is the **total communication cost between the server and all sites** (line 88). In fact, we only assume there is a point-to-point communication channel between each site and the server (line 73), so each broadcast is counted as $k$ point-to-point messages, which incur a cost of $O(k)$.
* Our algorithm has $O(\log N/\alpha \sqrt{k})$ rounds (see Section 4.2, Accuracy and Communication, line 334), and there is only one broadcast in each round (see Algorithm 2, line 10); thus the total communication cost for broadcasting is also $O(\sqrt{k}\log N/\alpha)$, which is bounded by the total communication cost we claimed. We will add explicit explanations in the future version to make this clearer. Thanks for pointing it out.
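To make the round accounting above concrete, here is a toy sketch (ours, not the authors' code): it simulates a protocol whose item count grows by roughly a factor of $(1+\alpha\sqrt{k})$ per round, as the authors describe in their follow-up reply, and compares the resulting round count with the claimed $O(\log N/\alpha\sqrt{k})$ bound.

```python
import math

def num_rounds(N, k, alpha):
    # Rounds needed if the item count grows by a factor of
    # (1 + alpha * sqrt(k)) per round, starting from 1 item.
    growth = 1.0 + alpha * math.sqrt(k)
    rounds, count = 0, 1.0
    while count < N:
        count *= growth
        rounds += 1
    return rounds

# Case 1: alpha * sqrt(k) < 0.5, rounds ~ log(N) / (alpha * sqrt(k))
N, k, alpha = 10**6, 100, 0.01          # alpha * sqrt(k) = 0.1
r = num_rounds(N, k, alpha)             # 145 rounds
bound = math.log(N) / (alpha * math.sqrt(k))  # about 138, same up to a constant

# Case 2: alpha * sqrt(k) >= 0.5, rounds ~ log(N)
r2 = num_rounds(10**6, 100, 0.1)        # growth factor 2, so 20 rounds
```

In both regimes the simulated count tracks the stated bound, matching the two-case analysis in the reply below.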
---
Rebuttal Comment 1.1:
Title: Follow-up questions to rebuttal
Comment: Thanks for clarifying the model, which has made it easier to understand the results. Part of my confusion has to do with parameter restrictions. When you say that the protocol has $O(\log N/\alpha\sqrt{k})$ rounds you implicitly assume that $k = O(\log^2(N) \alpha^2)$. Since $N$ is increasing, if $k$ is fixed this condition does not seem to hold initially, so it would seem that you need an additive term to cover the part of the protocol for which $N$ is not large enough to make the condition hold?
I acknowledge that related problems have appeared in ML venues, but I still feel that the problem studied here is a step further away from ML applications than these works.
---
Reply to Comment 1.1.1:
Title: The condition on k
Comment: Thanks for the reply. We didn't make it very clear in the submission. Here are some details on the number of rounds.
In each round the number of items increases by roughly a factor of $(1+\alpha \sqrt{k})$. Thus, to count the number of rounds we consider two cases: 1. $\alpha \sqrt{k}<0.5$ and 2. $\alpha \sqrt{k}\ge 0.5$. For case 1, we have $k< 0.25/\alpha^2$ and the number of rounds is $O(\log N/\alpha \sqrt{k})$. For case 2, the number of rounds is $O(\log N)$. In both cases, the number of rounds is well-defined, and thus there are no restrictions on the value of $k$. The communication cost per round is $O(Ck)$, and therefore the total cost is always bounded by $O((Ck+C\sqrt{k}/\alpha)\log N)$. We will clarify this in the revision. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Synthetic Experience Replay | Accept (poster) | Summary: The authors propose to use diffusion models to generate new data based on online interactions or an offline dataset. Generating new samples allows online and offline algorithms to perform better. The method can be used with any offline method, and with any online method that utilizes replay buffers.
Strengths: Useful idea which works. Both offline and online RL settings are covered as well as both visual and non-visual tasks. Good ablations and large-scale experiments.
Weaknesses: I would recommend validating the approach on the AntMaze D4RL datasets, as this domain is much more challenging than locomotion and maze2d.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Maybe I missed it, but how would performance change in the offline setting if we used the original data together with the synthetic data? Does it decrease performance? Otherwise, I don't understand the motivation to throw away the original data, except to demonstrate that diffusion produces good data, which could be an ablation.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: I don't know very much about how complicated it is to train a diffusion model for an arbitrary task, so making it work seems like a potential bottleneck. Probably, it is faster and easier to run an additional hyperparameter search for the RL algorithm than to make diffusion work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time to read our paper and provide useful comments. We now address their individual concerns.
**Antmaze experiments**
We thank the reviewer for this comment. We would like to point out that the larger-dataset offline experiments are more for validation (e.g., ensuring that diffusion models can faithfully model the distribution such that it does not harm performance). As we noted in our response to reviewer 2GdG, the D4RL datasets are by construction relatively large and generated under stochastic policies and starting points; we would therefore expect the benefit from SynthER to be limited in this setting, as the dataset is already relatively diverse.
Nevertheless, we show results for AntMaze using the TD3+BC algorithm in Table 3 of the supplementary PDF. We verify that diffusion can indeed also match the original data and even improve on the umaze dataset. We hope this provides additional confidence in our approach.
**Mixture experiments**
The reviewer is right, the motivation to throw away the offline data is simply to make the point that the diffusion generated samples are high enough quality to faithfully model the original distribution, in particular when that dataset is large and diverse. However, the reviewer raises an interesting point regarding the mixture of offline and diffused data. In Table 4 of our supplementary PDF, we show that similar to the online experiments, there is no problem with mixing real and synthetic offline data in a 50-50 mix. As we discussed in the paper, the performance with the real, synthetic and mixed data are close to each other.
**Ease of training diffusion**
Thank you for the question, we find that it is actually the exact opposite. The diffusion model uses one single set of hyperparameters for almost all experiments and requires no additional tuning. This is in stark contrast to typical RL hyperparameter sweeps. Furthermore, we unlock performance that is likely not possible even with simply sweeping different hyperparameters on the base RL algorithms.
We once again thank the reviewer for their constructive comments, which have improved the quality of our experiments and clarity of our paper, and kindly ask them to raise their score if they believe we have addressed their concerns. If issues still remain, then we would be more than happy to discuss these.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for answering my questions and running additional experiments. I'm increasing the confidence score.
---
Reply to Comment 1.1.1:
Comment: Thank you so much again for your feedback which greatly improved our paper! | Summary: The paper presents Synthetic Experience Replay, a reinforcement learning algorithm employing a generative model based on diffusion to enrich the training dataset of a learning agent. The method is adapted to offline and online RL, and compared to traditional augmentation strategies (e.g., the addition of random noise) in continuous control tasks, from proprioceptive and visual inputs, while applied to different policy optimization algorithms. The paper also presents detailed analyses on the approach, including results on the quality of the generated data and on using larger neural networks.
Strengths: **Originality**: the general direction of using synthetic data for training machine learning algorithms has been vastly explored in the past, with some applications to reinforcement learning. The approach presented in the paper is, thus, not outstandingly novel. However, one of the main points in the paper is that the main bottleneck in previous approaches leveraging synthetic data in the context of RL has been the quality of the generative model. To the best of my knowledge, this is the first use of diffusion models for synthetic data generation.
**Quality**: the quality of the work is reasonably good. The experiments are well-chosen and the experimental designs are generally good.
**Clarity**: the writing quality is generally very high. The paper is well-written and easy to follow in most parts, with a clear presentation of the experimental results.
**Significance**: I believe demonstrating the effectiveness of powerful generative models in generating synthetic data for reinforcement learning algorithms to use is important for the community, given the simplicity and generality of the approach.
Weaknesses: **Major Concerns**
- My main concern is the number of seeds employed for the experiments in the paper. I understand the authors might be subject to computational constraints, but I find only 4 repetitions to be generally not enough to fully trust an individual experiment. I would suggest the authors run more seeds per experiment and, if possible, use the methods proposed in "Deep Reinforcement Learning at the Edge of the Statistical Precipice" (Agarwal et al., 2021) for aggregation across multiple tasks.
- In Section 3.1, the paper says that Tabular VAE and Conditional Tabular GAN have been evaluated using the default hyperparameters proposed in the original paper that applies these generative models to tabular data. However, no dataset from the benchmark employed in the original paper contains robotics interactions similar to those from D4RL, and this thus raises the question of whether better hyperparameters for such different datasets might exist. I believe the paper would be improved by evaluating those approaches after a careful search for optimal hyperparameters.
**Minor Concerns**
- Some potentially important references are missing from the paper. In particular, I'm thinking of the recent paper on using synthetic experiences (meta-learned) "Should Models Be Accurate?" (Saleh et al., 2022), and of some recent papers applying, albeit with other goals, diffusion models to reinforcement learning such as "Planning with Diffusion for Flexible Behavior Synthesis" (Janner et al., 2022) and "Is Conditional Generative Modeling all you need for Decision-Making? (Ajay et al., 2022).
- The clarity of the paper would benefit from a more explicit description of how the diffusion model is employed for generating the data. One can guess it is naively applied to generating each dimension of each state, action or reward, but being explicit is better than being implicit in this case.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I ask the author to address the concerns highlighted above in my review.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The computational limitations of the approach are stated in the paper, saying that the approach is computationally better than REDQ. However, the comparison is executed at a replay ratio of 20; it would be desirable to understand what is the actual computational cost of the new data generation and training of the diffusion model even at lower replay ratios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time to read our paper and provide useful comments. We now address their individual concerns.
**More seeds**
Thank you for pointing out this oversight on our behalf. We have now increased the seeds used for all the experiments in our paper to at least 6 and show as many as possible in Figure 1 and Table 1 of the supplementary PDF. We have now also performed a meta-analysis over our empirical evaluation using the RLiable framework in Figure 2 of the supplementary PDF. As the reviewer can see, our existing findings still stand, and in fact with the additional seeds as suggested, we achieve statistically significant results over prior methods.
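For readers unfamiliar with the RLiable evaluation mentioned above: its headline statistic is the interquartile mean (IQM) of normalized scores with bootstrap confidence intervals. Below is a minimal NumPy sketch of that statistic, using hypothetical scores (the actual analysis uses the rliable library):

```python
import numpy as np

def iqm(scores):
    # Interquartile mean: mean of the middle 50% of sorted scores.
    s = np.sort(np.asarray(scores, dtype=float))
    cut = len(s) // 4                  # drop the bottom and top quarter
    return s[cut:len(s) - cut].mean()

def bootstrap_ci(scores, reps=2000, seed=0):
    # Percentile-bootstrap 95% confidence interval for the IQM.
    rng = np.random.default_rng(seed)
    stats = [iqm(rng.choice(scores, size=len(scores), replace=True))
             for _ in range(reps)]
    return np.percentile(stats, [2.5, 97.5])

runs = [0.42, 0.55, 0.61, 0.48, 0.52, 0.58]   # hypothetical normalized returns
point = iqm(runs)                              # mean of the middle four scores
lo, hi = bootstrap_ci(runs)
```

The IQM is more robust to outlier seeds than the mean and more statistically efficient than the median, which is why it is the recommended aggregate for small numbers of runs.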
**Tuning of other generative models**
Thank you for pointing out this oversight on our behalf. We have now tuned the TVAE and CTGAN models following the method of [1]. We used a similar extensive search space for the models as listed in Table 10 and Table 11 of the Appendix of [1] and used 30 trials each for the halfcheetah-mixed dataset. The results are presented in Table 2 of our supplementary PDF.
With extensive tuning, we are able to improve the performance of TVAE and CTGAN on the halfcheetah-mixed dataset. However, this still remains roughly half of what is possible with the diffusion model. We will update the final version of the paper with these results.
[1] TabDDPM: Modelling Tabular Data with Diffusion Models. Akim Kotelnikov, Dmitry Baranchuk, Ivan Rubachev, Artem Babenko.
**How was the data generated?**
The reviewer is absolutely correct in assuming we generate all states/actions/rewards/next states at once (e.g. joint diffusion). We discuss this in Appendix A but will include this point more explicitly in the camera ready version.
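The joint-diffusion setup described above amounts to flattening each transition into a single vector; a minimal sketch with hypothetical shapes (not the authors' code):

```python
import numpy as np

def flatten_transitions(states, actions, rewards, next_states):
    # Stack each (s, a, r, s') tuple into one flat row so that a single
    # generative model can be trained on the joint transition distribution.
    return np.concatenate(
        [states, actions, rewards[:, None], next_states], axis=1)

# hypothetical batch: 128 transitions, 17-dim states, 6-dim actions
s = np.zeros((128, 17)); a = np.zeros((128, 6))
r = np.zeros(128);       s2 = np.zeros((128, 17))
batch = flatten_transitions(s, a, r, s2)   # shape (128, 41)
```

Each row of `batch` is then one training example for the generative model, and sampled rows are split back into (s, a, r, s') before being added to the replay buffer.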
**Breakdown of online training time**
Thank you for the question. In Appendix D.1 we list the running time of the “SAC (SynthER)” runs as 21.1 hours. We can further break this down as follows:
- Diffusion training: 4.3 hours
- Diffusion sampling: 5 hours
- RL training: 11.8 hours
Therefore, the majority of training time is from reinforcement learning with an update-to-data ratio (UTD) of 20. We will include this breakdown in the camera ready.
**Citations**
We appreciate these pointers to references, in fact, we already cite Ajay et al. (2022) and Janner et al. (2022) in our paper (please see citation 3 and 35). However, we appreciate the pointer to Saleh et al. (2022) and will aim to discuss the paper in the camera ready.
We once again thank the reviewer for their constructive comments, which have improved the clarity and thoroughness of our paper, and kindly ask them to raise their score if they believe we have addressed their concerns. If issues still remain, then we would be more than happy to discuss these.
---
Rebuttal Comment 1.1:
Title: Thank you for your work on improving the paper
Comment: Dear authors,
Good job! Thank you for running these additional experiments and for your answers. My main concerns were addressed, and I believe the rigor of the experimentation has been much improved by the new results. Please make sure to put the RLiable plots in the final version of the paper.
I will raise my score to 7.
---
Reply to Comment 1.1.1:
Comment: Thank you so much again for your feedback which greatly improved our paper! | Summary: The paper proposes Synthetic Experience Replay (SYNTHER), a novel diffusion-based approach to upsample an agent's collected experience in deep reinforcement learning (RL). The authors demonstrate that SYNTHER is effective for training RL agents in both offline and online settings, and can improve performance for small offline datasets and larger networks. The paper presents results in both state-based and pixel-based environments and provides evidence that synthetic training data can enable more sample-efficient and scalable training for RL agents.
Strengths: 1. The paper introduces a novel approach, SYNTHER, which employs a generative diffusion model to upsample an agent's collected experience. This method can potentially resolve the challenge of data scarcity in RL. Generally, the method is easy to understand and effective, and may have a broad impact on sample-efficient RL algorithms.
2. The authors provide thorough experiments and evaluations in offline and online settings, demonstrating the effectiveness of SYNTHER in improving performance and sample efficiency. The authors also demonstrated the superiority of the diffusion-based model compared to other generative models such as GAN and VAE.
3. The paper is well-structured, with a clear introduction, background, and methodology. The authors also provide open-source code for reproducibility.
Weaknesses: 1. The performance improvement of SYNTHER seems to be very limited under the setting of small networks.
2. The paper did not demonstrate the advantages of using SYNTHER with existing RL algorithms in the combined online, pixel-based scenario, nor did it study the impact on model-based RL algorithms in online settings.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Can you provide the experimental results of SYNTHER with a model-based RL method without BC?
2. Can you provide the experimental results of SYNTHER in online and pixel-based setting?
3. SYNTHER can only generate 1-step transitions, so n-step bootstraps or GAE cannot be used on imagined data. Did the baselines use n-step bootstrap or GAE value estimation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: SYNTHER have limited results in pixel-based and online settings. This paper has no negative social impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time to read our paper and provide useful comments. We now address their individual concerns.
**The role of the offline experiments/small networks**
We thank the reviewer for this point; we note that the D4RL datasets are by construction relatively large and generated under stochastic policies and starting points, and therefore we would expect the benefit from SynthER to be limited in this setting, as the dataset is already relatively diverse. We also note that small networks can more or less solve these environments in the default dataset setting. Finally, we view the full offline experiments more as validation that SynthER can scale to larger datasets while still faithfully capturing their underlying distribution, so as not to harm behavior learning.
**Model-based**
We see model-based RL as relatively orthogonal to our work. While SynthER could of course be deployed alongside a model-based method, it is unclear how best to use the diffusion data, as model-based methods already produce their own “synthetic” data through imagined rollouts; do we run these two processes independently, or somehow combine them? One immediate direction could be to improve the quality of a world model using SynthER, but we note that the key challenge in RL lies more with issues of over-fitting in the TD-learning loss function, and regularization is required to carefully handle this [1,2]; in this work we regularize using diffusion-based data augmentation. We would expect to see limited benefits from augmenting the already ‘stable’ supervised signal in world-model learning, and moreover would be disinclined to add additional complexity and training time to the already demanding model-based RL framework.
[1] Stabilizing Off-Policy Deep Reinforcement Learning from Pixels, Cetin et al., ICML 2022.
[2] Efficient Deep Reinforcement Learning Requires Regulating Overfitting, Li et al., ICLR 2023.
**Online pixel-based**
This is an interesting research direction, which we believe is out of scope for the purposes of this paper. A key issue is addressing multi-modality when jointly generating high-dimensional images alongside low-dimensional actions and rewards. Whilst we present some results in the paper combining the proprioceptive architecture with a fixed latent representation, we believe this is not optimal in general for pixel-based applications, particularly as learned latent representations are likely to change over the course of training.
We tried some experiments using intermediate representations from Convolutional U-Nets to jointly diffuse actions/rewards alongside images, but initial results were not promising, particularly for modeling the low-dimensional action/reward distribution. This suggests a more complex approach is required. Given that multimodal diffusion is a significant on-going body of unresolved research, we therefore leave pixel-based experiments to future work.
We once again thank the reviewer for their constructive and interesting comments, which have improved the clarity of our paper, and kindly ask them to raise their score if they believe we have addressed their concerns. If issues still remain, then we would be more than happy to discuss these.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. However, the last question was not effectively answered. I will keep my score at 7.
---
Reply to Comment 1.1.1:
Title: Additional Response
Comment: Thank you so much for your response. We apologize that we missed the reviewer's final question, as it was not in the original review. We will now answer this here.
**SYNTHER can only generate 1-step transition, so n-step bootstrap or GAE cannot be used in imagined data. Did the baselines use n-step bootstrap or GAE value estimation?**
This statement is untrue, SynthER can be extended to $n$-step transitions by expanding the input to the diffusion model. We refer the reviewer to Line 305 of our manuscript which addresses this. Concretely, instead of modeling 1-step transitions by diffusion tuples of the form $(s, a, r, s')$, we could model expanded tuples of the form $(s, a, r, s', a', r', s'')$ and so on. This represents only a modest increase in the dimensionality of the input space and can easily be handled by the diffusion model. This would allow us to use $n$-step bootstrap or GAE methods.
The proprioceptive results used 1-step bootstraps. The latent offline visual experiments in fact already use a 3-step bootstrap with a latent that is derived from a 3-step transition.
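The n-step tuple expansion described in this reply can be made concrete as follows (a toy sketch of ours with hypothetical shapes, not the authors' implementation): slide a window over one trajectory and concatenate $(s_t, a_t, r_t, \ldots, s_{t+n})$ into one flat row for the diffusion model.

```python
import numpy as np

def nstep_tuples(states, actions, rewards, n):
    # Build flat rows (s_t, a_t, r_t, ..., a_{t+n-1}, r_{t+n-1}, s_{t+n})
    # from one trajectory; states has one more entry than actions/rewards.
    rows = []
    T = len(actions)
    for t in range(T - n + 1):
        parts = []
        for i in range(n):
            parts += [states[t + i], actions[t + i], [rewards[t + i]]]
        parts.append(states[t + n])
        rows.append(np.concatenate(parts))
    return np.stack(rows)

# hypothetical trajectory: 10 steps, 3-dim states, 2-dim actions
S, A, R = np.zeros((11, 3)), np.zeros((10, 2)), np.zeros(10)
two_step = nstep_tuples(S, A, R, n=2)   # rows of (s, a, r, s', a', r', s'')
```

For n=2 each row grows only from 9 to 15 dimensions here, illustrating the "modest increase in the dimensionality of the input space" the reply mentions.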
Please let us know if this has answered the reviewer's doubt or if we can elaborate further. | Summary: Building on the recent success of generative models, the authors propose Synthetic Experience Replay to improve how experience or data is used within reinforcement learning algorithms. While data is normally collected through interaction with an environment, the authors suggest that the experience replay can be augmented with additional data now artificially generated from a diffusion model. The paper considers both the online and offline settings of RL, compares with previous tabular data augmentation methods, and measures performance on pixel-based environments.
Strengths: + [Quality] The authors conduct an exhaustive set of experiments on standard online and offline RL benchmarks.
+ [Clarity] The paper is well-motivated, organized, and clearly written. Related works are carefully documented and contributions are clear.
+ [Significance] The results are quite compelling, especially in the case of upsampling from small datasets. Considering that no algorithmic changes are necessary, many works stand to benefit from this simple, yet effective strategy.
Weaknesses: + [Originality] Augmenting data in the replay buffer with synthetic transitions is not new. For example, previous works have attempted to learn a dynamics model during the course of standard online RL training to fill up the replay buffer with fake transitions. Most of the novelty in this work comes from 1) using a diffusion model to generate data and 2) demonstrating experimentally that this kind of synthetic data can be useful in a variety of settings.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: -
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time to read our paper and provide useful comments. We now address their individual concerns.
**Novelty**
Thank you for the question - we appreciate the reviewer’s concerns. While we agree that other methods have been proposed to generate synthetic data, the strength of our approach is in its simplicity. Indeed, several highly influential recent RL papers, published at similar venues, have shown that simple ideas can be both intuitive and highly effective [1,2,3].
On this point, we note that not only do we demonstrate the capacity of diffusion models to faithfully model the distribution of a replay buffer (unlike prior classes of generative models), but we also observe their utility in augmenting and expanding the effective buffer size, allowing us to address issues with overfitting that can arise in RL [4,5]. This results in significant improvement over existing approaches, despite surprisingly relying on minimal underlying algorithmic modification (i.e., we just train our diffusion model on the replay buffer). To speak to this point about superlative results, we observe a statistically significant improvement of SynthER over prior methods in the new experiments we ran through the RLiable framework (see Figure 2 in the supplementary PDF in the general response).
[1] A Minimalist Approach to Offline Reinforcement Learning, Fujimoto and Gu, NeurIPS 2021.
[2] Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels, Kostrikov et al., ICLR 2021.
[3] Mastering Visual Continuous Control: Improved Data-Augmented Reinforcement Learning, Yarats et al., ICLR 2022.
[4] Randomized Ensembled Double Q-Learning: Learning Fast Without a Model, Chen et al., ICLR 2021.
[5] Efficient Deep Reinforcement Learning Requires Regulating Overfitting, Li et al., ICLR 2023.
We once again thank the reviewer for their constructive and interesting comments, which have improved the clarity of our paper, and kindly ask them to raise their score if they believe we have addressed their concerns. If issues still remain, then we would be more than happy to discuss these.
---
Rebuttal Comment 1.1:
Comment: We would like to thank the reviewer again for their time in reviewing our manuscript. As the discussion period is coming to a close, we were wondering if the reviewer had any further questions or queries that we could address.
Thanks again! | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their helpful and valuable feedback. We were pleased that the reviewers found our paper well-written, believed that our idea was widely applicable, and agreed that our experiments were thorough and of a high standard.
We noticed that each reviewer had fairly distinct concerns, but observed a shared theme regarding the need for additional experimental validation. We are excited to share improvements to our experiments in our supplementary PDF and will discuss overall improvements in the general response. Any remaining individual concerns will be addressed in the specific reviewer responses.
**More seeds**
Thank you to reviewer MwZd for raising this, we have now increased the number of seeds used in all the experiments to a minimum of 6 each. We show as many as possible in Figure 1 and Table 1 of the supplementary PDF. We see no significant change in results, which should increase confidence in the robustness and reproducibility of our experiments.
**RLiable analysis**
As additional validation, we evaluated SynthER under the RLiable framework in Figure 2 of the supplementary PDF, where we observe statistical significance in SynthER’s improvements over the baselines.
Finally, we thank all the reviewers for their constructive and interesting comments, which have improved the clarity and experimental rigor of our paper. We take this opportunity to therefore kindly ask them to raise their score if they believe we have addressed their concerns; if issues still remain, then we would be more than happy to discuss these in the coming days.
Pdf: /pdf/8f3bae4d10162542855c1b17df62dc37c23a9aea.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
The Exact Sample Complexity Gain from Invariances for Kernel Regression | Accept (spotlight) | Summary: This paper analyzes the impact on sample complexity of encoding invariances into kernel functions in the context of kernel ridge regression. A kernel function is invariant to the actions of a group if its output does not change when its inputs are acted on by members of the group. The paper shows that for finite groups, the sample efficiency is effectively multiplied by the size of the group. For groups of positive dimension, it shows the sample efficiency is improved by (1) reducing the effective dimension of the input manifold, and (2) reducing the volume of the manifold.
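As background for the summary above: for a finite group, the standard way to encode the invariance is to average a base kernel over the group's action. A toy sketch (ours, not from the paper) for the cyclic rotation group $C_m$ acting on $\mathbb{R}^2$:

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def invariant_rbf(x, y, m, gamma=1.0):
    # Average the base RBF kernel over the cyclic rotation group C_m;
    # the result is invariant to rotating either input by 2*pi/m.
    return np.mean([rbf(x, rot(2 * np.pi * j / m) @ y, gamma)
                    for j in range(m)])

x, y = np.array([1.0, 0.2]), np.array([0.3, -0.7])
g = rot(2 * np.pi / 8)                # a generator of C_8
k_xy = invariant_rbf(x, y, m=8)
k_xgy = invariant_rbf(x, g @ y, m=8)  # unchanged when y is rotated
k_gxy = invariant_rbf(g @ x, y, m=8)  # unchanged when x is rotated
```

Group-averaging makes the kernel's output constant on each group orbit, which is the mechanism behind the "sample efficiency multiplied by the size of the group" result described above.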
Strengths: - Significance: This paper analyzes how invariance improves sample complexity for kernel ridge regression in a much more general setting than prior work. For example, it allows the input space to be any compact manifold, and the invariance to be represented by any smooth compact Lie group (prior work, for example [5], had assumed the input manifold was a sphere and that the invariance was to a finite group).
- Quality: Although I have not checked the proofs, the quality of the work appears high. The generalization bound in Theorem 3.1 is proved to be minimax optimal in Theorem 3.3, an interesting/important result.
- Clarity: The paper is generally well written and clear. See “Weaknesses” section for places for improvement.
- Originality: It seems the approach of using differential geometry to analyze these generalization bounds is original, and that the authors developed some tools of independent interest in this area.
Overall, I believe this paper makes an important theoretical contribution to a problem space that has been of significant interest to the machine learning community: namely, how to make models invariant to certain input transformation (either via data augmentation, or by changing model architecture), and understanding how encoding these invariances impacts the models’ generalization performance and sample complexity.
Weaknesses: Perhaps the greatest weakness of this paper is that it at times seems too “mathy” (https://arxiv.org/pdf/1807.03341.pdf), without sufficient grounding in concrete examples or empirical results. For example:
- It would have been useful to go over more examples of invariances whose groups have positive dimension, and understand more concretely how these invariances affect the generalization bound.
- It could be useful, for equations (5) and (7), to give examples of how this quantity changes for different finite vs. infinite (positive dimension) groups. One example I had in mind, for $x \in R^2$, was rotational invariance, where the rotations could be in a finite group (e.g., rotate by multiples of $\alpha$ degrees, for any $\alpha$ that evenly divides 360) or an infinite group (e.g., rotate by $\beta$ degrees for any real-value $\beta \in [0, 360)$). How would these situations compare, in terms of equations (5) and (7), as the value of $\alpha$ approaches 0?
- It would have been helpful to understand which of the examples given would not have been possible to analyze with prior results.
- It would have been useful to run experiments (even toy/synthetic experiments if necessary) to demonstrate that the bounds from the paper are actually predictive of model performance.
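To make the finite-vs-infinite rotation example above concrete, here is a small sketch (illustrative only, not from the paper; the Gaussian base kernel and the cyclic group $C_k$ are assumed choices): averaging a base kernel over the finite group of rotations by multiples of $\alpha = 360/k$ degrees yields a $C_k$-invariant kernel, and letting $k \to \infty$ (i.e., $\alpha \to 0$) approaches invariance to the full rotation group.

```python
import math

def rotate(x, theta):
    """Rotate a point in R^2 by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * x[0] - s * x[1], s * x[0] + c * x[1])

def gauss(x, y, sigma=1.0):
    """Gaussian base kernel on R^2 (an assumed choice for illustration)."""
    d2 = (x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2
    return math.exp(-d2 / (2 * sigma ** 2))

def invariant_kernel(x, y, k):
    """Group-average the base kernel over the cyclic rotation group C_k."""
    return sum(gauss(rotate(x, 2 * math.pi * j / k), y) for j in range(k)) / k

x, y = (1.0, 0.0), (0.0, 1.0)
g_x = rotate(x, 2 * math.pi / 8)  # a group element of C_8 applied to x
# Invariance: K(g.x, y) == K(x, y) for any g in C_8 (up to float error).
assert abs(invariant_kernel(g_x, y, 8) - invariant_kernel(x, y, 8)) < 1e-9
```

As $k$ grows, the averaged kernel converges to the $SO(2)$-invariant kernel, which is the infinite-group (positive-dimension) limit the question asks about.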
NIT: $\omega_d$ is defined in Theorem 3.4, but used in prior theorems.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weaknesses section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Perhaps a broader discussion of limitations would be useful.
- What deep learning scenarios would the results from this paper apply to or not apply to?
- What important open problems remain to be solved?
- How empirically predictive are these theoretical results, in different settings? In what cases would these results definitely not be empirically predictive?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank the reviewer for their helpful comments. Here we provide a response to the questions/weaknesses mentioned by the reviewer.
- Question: "It would ... generalization bound."
- Answer: We thank the reviewer for their comment. While we devoted a full section to examples and applications (Section 4 in the main body of the paper), we appreciate the reviewer's suggestion; since there are many examples of learning under invariances (e.g., in physics) with different groups, we will survey more of them in our next version to clarify the various kinds of gains available according to our results.
- Question: "It could ... approaches 0?"
- Answer: We will add the example provided by the reviewer to the paper, along with more explanation, since it allows easy computation of the gain and makes the paper more readable. Indeed, since we are approximating a one-dimensional group using finite groups, the gain is in the exponent, and thus the one-dimensional group is always better (for large sample sizes).
- Question: "It would ... prior results."
- Answer: Since none of the provided examples use spheres, they cannot be analyzed using prior work. We will clarify this in the next version of the paper.
- Question: "It would ... model performance."
- Answer: Thanks for making this suggestion. Although we believe that the results are theoretical in nature, we plan to add supporting experiments to the next version of the paper.
- Question: "NIT ... theorems"
- Answer: We will clarify it in the next version of the paper.
- Question: "What deep learning ... to?"
- Answer: The results apply to the kernel approximations of deep networks (e.g., lazy regimes), but for general deep learning it's still an open problem. We will clarify this in the paper.
- Question: "What important open problems remain to be solved?"
- Answer: Perhaps the main open problem is to generalize the results of the paper to the sample complexity of other statistical estimation problems (e.g., Wasserstein distance, density estimation, etc.) under invariances on manifolds. It is also interesting to explore the possibility of applying the results to the NTK regime for invariant models. We will add this to the paper.
- Question: "How ... predictive?"
- Answer: We believe that the gain is observable almost everywhere in practice, but the quality of the approximation largely depends on how well the assumptions of the paper are satisfied in practice. We will add more explanation about this limitation to the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for the thoughtful responses, I look forward to seeing the next draft! | Summary: The paper analyzes the generalization error of kernel ridge regression on manifolds with a kernel invariant to a group action. Compared with previous work, the results hold more generally for groups of positive dimension.
Strengths: The paper tackles an important issue, is well-written, with well-chosen and appropriate examples, and appears technically sound. It also offers a differential geometric perspective on learning theoretic issues which might have its own interest.
Weaknesses: Though already significant in its present form, the paper would gain in interest if it could provide more "agnostic" generalization error bounds in various ways, or at least comment on how (and if) this would be possible.
- Assumptions on f*: While interesting per se, the results are inapplicable to practical cases without knowledge of the target function f* (and the noise level sigma^2). In particular, Theorem 3.1 makes strong assumptions on the target function f* and holds only for the optimal value of the regularization parameter. It would be much stronger if it could be reformulated to hold more broadly for all regularization parameters with, e.g., the norm of \hat{f} instead of that of f*, and with f* only involved through the empirical risk (like more "traditional" error bounds based for instance on the Rademacher complexity).
In other words, if invariance to a (finite) group G intuitively implies that "each data point is worth |G| data points", does it also imply something like "the complexity of a model class taking into account the invariance is the standard complexity of the model class divided by (the square-root of) |G|" (where the complexity can refer for instance to the Rademacher complexity)?
- Assumptions on G: the results depend on knowledge of the group G. What if G is only approximately known? For instance, what if the kernel K is invariant to G' while f* is invariant to G with G' slightly different from G (or if G' contains only a subset of the transformations in G)?
- How much of the result relies on the uniform distribution of x on the manifold? Could you comment on what terms would be affected by a change of measure in Theorem 3.1 and if the gain as G grows remains of the same order in this case?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: No question
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No limitation identified.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank the reviewer for their helpful comments. Here we provide a response to the questions/weaknesses mentioned by the reviewer.
- Question: "Assumptions on f*"
- Answer: We thank the reviewer for raising this important point. Indeed, the KRR algorithm does not need any information about the function $f^\star$ other than its Hilbert space norm $|| f^\star ||_{\mathcal{H}}$ (only to determine the regularization parameter). Moreover, according to Equation (120) in the appendix, we only need an upper bound on the Hilbert space norm of $f^\star$ (a common assumption when studying kernels). This upper bound lets us compute the regularization parameter and run the algorithm without any further knowledge of the target function, and Theorem 3.1 remains valid if one uses an upper bound on the Hilbert space norm of the target function in place of its true value (the same holds for the noise variance). We note that having an upper bound on the Hilbert space norm is essential to proving an upper bound on the population risk, because the Hilbert space norm, intuitively, characterizes the "complexity" of the class of functions to be learned. We thank the reviewer for highlighting this important aspect of the result and will add a detailed explanation in the next version of the paper.
Also, regarding scaling the complexity of the model class under invariances, we believe that the complexity will be reduced, as the reviewer correctly suggests, but the exact scaling depends heavily on the notion of complexity used in the problem. We believe our dimension-counting formula could be used to compactly quantify this complexity and has potential applications in future work.
- Question: "Assumptions on G"
- Answer: Since the results in the paper hold for arbitrary Lie groups, if one only knows the invariances corresponding to a smaller group $G'$, then the upper bound is still valid, with $G'$ in place of $G$. We will add explanations about this to the next version of the paper.
- Question: "How much ..."
- Answer: Thanks for this comment. We cannot expect the gain to hold for "any" distribution. For example, consider a distribution supported only on a submanifold of the original manifold; then the problem is effectively learning on a smaller manifold, and all the volumes/dimensions must be corrected accordingly. However, if the sampling distribution has full support, with an upper or lower bound on its density (relative to the uniform distribution), then the same rate is achievable up to constant factors. Since the main goal of this paper was to quantify the gain of "invariance", we focused on uniform sampling for simplicity, and this choice is not restrictive. We will add these explanations to the paper in the next version. | Summary: In this work the authors theoretically study how encoding invariances into models improves sample complexity. They approach the problem from a differential geometric viewpoint, rather than the common strategy of using invariant polynomials. Since the problem is algorithm- and model-dependent, the authors consider kernel-based algorithms, as neural networks in certain regimes behave like kernel methods. The obtained results generalize and greatly expand the previous state of the art. Hence, the results provide a reduction in sample complexity that was not possible under previous assumptions. The paper also shows how these results transfer to popular invariant models, such as DeepSets, GNNs, PointNet, and SignNet.
Strengths: - The paper addresses one of the major challenges in machine learning, i.e., sample complexity.
- This work provides theoretical results on how incorporating invariance into a model can improve sample complexity. Their results are more general and achieve better bounds.
- Well written.
Weaknesses: - None
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - None
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: - Theoretically intense, so it might be hard for some readers to follow.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive review and appreciation of the theoretical results. For the next version, we will go over the paper and make it more readable to address the issue mentioned in the limitations section. Indeed, we are planning to add figures to the paper to make the problem and contributions more visible.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I want to thank the authors for the rebuttal. I still think the paper addresses an important problem and that the proposed solution is theoretically sound. I appreciate the authors' intention to add figures to the main paper to help explain the problem and contribution better. I would like to keep my rating as strong accept. | Summary: This article investigates the sample complexity gained from encoding invariances into learning models. The article focuses on the study of kernel ridge regression on compact manifolds for functions that are invariant to a group action on the manifold. The main result of this article (Theorem 3.1) gives an upper bound on the excess population risk for functions living in the intersection of Sobolev spaces and the set of G-invariant square-integrable functions on the manifold. The analysis shows the importance of two terms: the volume of the quotient space, and the effective dimension of the quotient space. This result quantifies the exact sample complexity gain from invariances for kernel regression on compact manifolds for an arbitrary Lie group action. Moreover, the authors prove that the KRR estimator is minimax optimal. The proofs of the results make use of new results in the field of differential geometry.
Strengths: - Although heavily technical, the article is well written and the proven results were extensively discussed.
- The newly proved results may be beneficial for the community of kernel methods on manifolds.
Weaknesses: To be honest, I didn't have enough time to check the proofs thoroughly. Given that the contribution is purely theoretical, I believe it would be better suited to a math-oriented journal, where rigorous scrutiny of the proofs can be ensured.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: -
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful comments. Here we provide a response to the questions/weaknesses mentioned by the reviewer.
- Question: "To be honest ... can be ensured."
- Answer: We thank the reviewer for this important comment. While we cannot include the proofs in the paper (due to the page limit), we have provided a brief proof sketch in the main body of the paper (Section 3.1), as well as a more detailed proof sketch in Appendix A. We hope that the outline of the proof we provided is useful to the reader.
Also, many theoretical papers appear in ML venues and we believe that theoretical papers have a place and are useful in the ML community (although this paper is applicable to kernel methods on manifolds, it is also applicable to kernel approximations of neural networks (e.g., in lazy regimes)). We hope that our theory can help better understand the effects of group invariances in the learning problem. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Statistical Analysis of Quantum State Learning Process in Quantum Neural Networks | Accept (poster) | Summary: This paper describes a formalization and experimental verification of the thesis that a quantum state $\vert\phi\rangle$ is a local minimum in the process of QNN training. The paper is well written and provides all the supporting material necessary for its understanding. This work is an extension of, and complementary to, the work on the observed plateaus in the learning landscape during the training of QNNs.
The most interesting finding is that the probability of a state $\vert\phi\rangle$ being a local minimum is inversely related to the number of qubits and grows with the depth of the QNN (the number of layers in the network). This seems to be in opposition to the original work "Barren plateaus in quantum neural network training landscapes", where learnability decreases with the number of qubits in the state.
The formal description seems to uphold the hypothesis.
Strengths: - Problem description and formalization
- The experimental verification of the observed phenomenon
Weaknesses: - The main weakness is the significance of the result. While the result is interesting and proven, it is a bit expected. In particular, the result shows that with increasing embedding, i.e., the number of parameters, the representational power increases, while it also vanishes with an increasing number of qubits. So the first conclusion is not surprising, and the second conclusion seems to follow from previous works. More importantly, because this paper concerns an unknown state, a conclusion should be drawn as to whether this effect can be avoided at all, because an unknown state can occur independently of the initial setting.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - What are the more formal conclusions beyond a simple better initialization or problem-aware initialization? Because your paper concerns a quite serious issue, a more detailed discussion should be provided.
- Is there a ratio between the depth $d$ and $n$ so that the occurrence of the local minima is minimized? What is the order of such ratio?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: - The consequences and unique conclusions of the paper should be discussed in more depth. While there is a considerable amount of explanation, I feel the authors failed to provide details on how to avoid the observed effect.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and recognition of our work as interesting and technically sound. A detailed response to the reviewer's comments and questions is provided below in a point-by-point manner.
>$\textbf{Comment 1:}$ ``The main weakness is the significance of the result. While the result is interesting and proven, it is a bit expected. In particular, the result shows us that with increasing embedding i.e. the number of parameters the representational power increases while it also vanishes with increasing number of qubits. So the first conclusion is not surprising and the second conclusion seems to follow the previous works. However, more importantly, because this paper concerns an unknown state, a conclusion should be drawn if this effect can be avoided at all. Because independently of this initial setting an unknown state can occur.''
$\textbf{Re 1:}$ Thanks for the comments. In general, our no-go theorem can be regarded as a no-free-lunch (NFL) theorem for quantum state learning tasks, which implies that the probability of avoiding local minima in learning an unknown state always vanishes exponentially with the qubit count, while it grows only polynomially with the circuit depth. Although this conclusion seems partially intuitive, we emphasize that the formal statement and rigorous proof are non-trivial, because we prove that the theorem holds for any initial state and we develop a complete mathematical toolkit to accomplish the proof (see ``subspace integration'' in Appendix A). Moreover, our results are distinct from previous works since they place crucial theoretical limits on adaptive training methods, which is beyond the scope of barren plateaus [1]. Finally, we have discussed that prior information can indeed help to reduce the local minimum phenomenon in Section 3.3 (lines 240-247). We will make this point clearer in the revised version of our manuscript.
[1] Jarrod R. McClean, Sergio Boixo, Vadim N. Smelyanskiy, Ryan Babbush, and Hartmut Neven. Barren plateaus in quantum neural network training landscapes. Nature Communications, 9(1):1–7, mar 2018.
>$\textbf{Comment 2:}$ ``What are the more formal conclusions beyond a simple better initialization or problem-aware initialization? Because your paper concerns a quite serious issue, a more detailed discussion should be provided.''
$\textbf{Re 2:}$ Thanks for this good question. Our results establish a rigorous limit for training methods without prior information, especially those beyond the reach of barren plateaus, such as simple good initial guesses and plain adaptive methods. Hence, our results suggest that a problem-inspired QNN architecture taking advantage of prior knowledge of the target state is vitally necessary, as discussed in Section 3.3. Specifically, an example of prior knowledge from quantum many-body physics is the tensor network states [1] satisfying the entanglement area law, which live in only a polynomially large space but generally cannot be solved in two and higher spatial dimensions by classical computers. Other examples include the UCCSD ansatz [2] in quantum chemistry, which utilizes the fact that the Hamiltonians usually contain few-body interactions, and the QAOA ansatz [3] in combinatorial optimization, which makes use of the quantum adiabatic evolution. The ways of leveraging prior information are diverse and problem-dependent, necessitating further research in the future. We will enhance the clarity of this aspect in the revision.
[1] Felser, Timo, Simone Notarnicola, and Simone Montangero. Efficient tensor network ansatz for high-dimensional quantum many-body problems. Physical Review Letters, 126(17):170603, 2021.
[2] Jonathan Romero, Ryan Babbush, Jarrod R McClean, Cornelius Hempel, Peter J Love, and Alán Aspuru-Guzik. Strategies for quantum computing molecular energies using the unitary coupled cluster ansatz. Quantum Science and Technology, 4(1):014008, 2018.
[3] Farhi, Edward, Jeffrey Goldstone, and Sam Gutmann. A quantum approximate optimization algorithm. arXiv preprint arXiv:1411.4028, 2014.
>$\textbf{Comment 3:}$ ``Is there a ratio between the depth $d$ and $n$ so that the occurrence of the local minima is minimized? What is the order of such ratio?''
$\textbf{Re 3:}$ Thanks for this constructive question. According to the scaling $\mathcal{O}(N^22^{-N}D^2/\epsilon^2)$ established in our manuscript, the desired critical value of the ratio $\lambda=N/D$ indeed exists: if we rewrite the scaling as $\mathcal{O}(\lambda^22^{-D\lambda}D^4/\epsilon^2)$ and fix the depth $D$, a straightforward differentiation shows that the critical value of the ratio, $\lambda_c$, is of order $\mathcal{O}(D^{-1})$. This can also be seen from the original scaling, which has a maximum at a qubit count $N_c$ of order $\mathcal{O}(1)$. This reflects the fact that the linear growth of training parameters in each layer with the qubit count can hardly overcome the exponential suppression arising from the lack of prior information about the target state, which lives in the exponentially large Hilbert space.
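The $\mathcal{O}(1)$ maximizer mentioned in the answer above can be checked numerically (our own sketch, not from the rebuttal): for $f(N) = N^2 2^{-N}$, setting the derivative of $2\ln N - N\ln 2$ to zero gives $N_c = 2/\ln 2 \approx 2.885$.

```python
import math

# N-dependence of the scaling O(N^2 * 2^(-N) * D^2 / eps^2) at fixed depth D.
def f(n: float) -> float:
    return n * n * 2.0 ** (-n)

# Analytic maximizer: d/dN [2 ln N - N ln 2] = 0  =>  N_c = 2 / ln 2 = O(1).
n_c = 2 / math.log(2)

# A simple grid search agrees with the analytic value.
grid = [i / 1000 for i in range(1, 10000)]
n_star = max(grid, key=f)
assert abs(n_star - n_c) < 1e-2
```

With $N = D\lambda$, the maximizer in $\lambda$ at fixed $D$ is $\lambda_c = N_c/D = \mathcal{O}(D^{-1})$, matching the order stated in the answer.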
---
Rebuttal Comment 1.1:
Comment: Thank you for the answers; I have no further questions. | Summary: The authors present a no-go theorem that reveals the limitations of learning unknown quantum states using QNNs, even with high-quality initial states. They prove that the probability of avoiding local minima decreases exponentially with the number of qubits but grows polynomially with circuit depth. The curvature of local minima is determined by the quantum Fisher information and a loss-dependent constant. These findings provide insights into the role of prior information and the scalability of QNNs, impacting their learnability and effectiveness.
Strengths: The work explores the trainability of quantum neural networks. The theoretical findings give limits on the learnability of QNNs in general cases, which provides some insight into the development of QNNs in future studies.
Weaknesses:
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. This work focuses on learning pure states through a QNN; is the statement also true for mixed states?
2. Is the no-go theorem also true for other loss functions, e.g., using another metric as the distance in the loss?
3. In the numerical experiments, how to calculate the probability $Pr_{\mathbb{T}}[LocalMin(\theta^*,\epsilon)]$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations:
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer's recognition of our work as a technically solid paper with good impact. Below is our point-by-point response to the reviewer's questions.
>$\textbf{Comment 1:}$ "In this work, it focus on learning pure states through QNN, and whether the statement is also true for the mixed states.''
$\textbf{Re 1:}$ Thanks a lot for this highly practical question. In the main text, we indeed focused on the case of pure states for the sake of simplicity. Understanding the learning of pure quantum states is central to quantum state learning.
However, we want to remark that the mathematical tools we developed in Appendix A can also be applied to the scenario of mixed or noisy output states, leading to similar conclusions. For example, suppose that the output state of the QNN is $\rho(\boldsymbol{\theta})$ and the target state is $|\phi\rangle$. The loss function can be defined as the fidelity distance $\mathcal{L}(\boldsymbol{\theta})=1-\langle\phi|\rho(\boldsymbol{\theta})|\phi\rangle$. Utilizing Lemmas S3 and S7, similar results can be obtained by calculating the subspace Haar integration. We shall add this generalization in the revised version of our manuscript. Nevertheless, when the output state and the target state are both mixed states, our theoretical tools cannot be directly applied, for the following reasons. First, the Bures fidelity of two mixed states $F(\rho,\sigma)=\operatorname{tr}\sqrt{\rho^{1/2}\sigma\rho^{1/2}}$ is hard to measure accurately on quantum devices. Second, for two mixed states, it is subtle and unclear how to define the orthogonal decomposition as in Eq. (4), i.e., how to decompose the target state into the learned component and the unknown component; this may be left for future research.
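A minimal numerical sketch of the loss described above (our own illustration; the states and dimensions are arbitrary) shows that $\mathcal{L}(\boldsymbol{\theta})=1-\langle\phi|\rho(\boldsymbol{\theta})|\phi\rangle$ reduces to the pure-state fidelity loss $1-|\langle\phi|\psi\rangle|^2$ when $\rho$ is pure:

```python
import numpy as np

def fidelity_loss(rho: np.ndarray, phi: np.ndarray) -> float:
    """L = 1 - <phi| rho |phi> for a density matrix rho and unit vector |phi>."""
    return float(np.real(1.0 - np.vdot(phi, rho @ phi)))

psi = np.array([1.0, 0.0], dtype=complex)
phi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())

# Pure case agrees with 1 - |<phi|psi>|^2 = 1 - 1/2.
assert abs(fidelity_loss(rho_pure, phi) - 0.5) < 1e-12

# Maximally mixed single-qubit state gives loss 1 - 1/2 for any |phi>.
rho_mixed = np.eye(2) / 2
assert abs(fidelity_loss(rho_mixed, phi) - 0.5) < 1e-12
```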
>$\textbf{Comment 2:}$ "Whether the no-go theorem is also true for other loss functions, such as using another metric as the distance in loss?''
$\textbf{Re 2:}$ This is a very good question. Our theorem indeed also holds for certain other loss functions, such as the energy loss function (the expectation value of a Hermitian operator, i.e., the local loss function) in variational quantum eigensolvers or combinatorial problems, as shown in Appendix C. However, extending the results to other loss functions, such as alternative distance metrics, is not as straightforward and requires careful case-by-case proof utilizing the mathematical tools we developed in Appendix A, because the bound analysis for the Hessian matrix depends on the specific choice of loss function. Nonetheless, we can provide a general statement of the insight behind the rigorous results. A loss value below the average level indicates that the current parameters are relatively good. At this point, moving in a random optimization direction is likely to be less beneficial than staying in place, suggesting the presence of a local minimum. The probability of avoiding such a local minimum should be proportional to the ratio of the number of selected directions (training parameters) over the total number of all possible directions (the Hilbert space dimension). Thus, the probability should in principle decay exponentially with the number of qubits, while growing polynomially with the number of training parameters.
>$\textbf{Comment 3:}$ "In the numerical experiments, how to calculate the probability $\operatorname{Pr}_{\mathbb{T}} [\operatorname{LocalMin}(\boldsymbol{\theta}^* , \epsilon)]$?''
$\textbf{Re 3:}$ Thanks for this careful question. In general, we numerically estimate the desired probability $\operatorname{Pr}_{\mathbb{T}} [\operatorname{LocalMin}(\boldsymbol{\theta}^* , \epsilon)]$ by sampling and counting the frequency of local minima. To be specific, we first choose a parameter point $\boldsymbol{\theta}^*$ randomly to obtain the QNN output state $|\psi^*\rangle=|\psi(\boldsymbol{\theta}^*)\rangle$ and specify the desired overlap $p$ and the error tolerance $\epsilon$. Then, we generate a Haar-random state $|\psi^\perp\rangle$ within the orthogonal complement of $|\psi^*\rangle$ as the unknown component and perform a superposition according to Eq. (4) to obtain a single target-state sample $|\phi\rangle$. For each sample, we calculate the gradient and Hessian matrix to check whether they satisfy the condition for a local minimum in Eq. (5). After generating $200$ samples, we count how many times the local-minimum condition is satisfied and regard the frequency as an estimate of the probability. The detailed source code used in the numerical experiments is available in the supplementary materials for your convenience. We will also describe this clearly and in more detail in the revision.
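The target-state sampling step described above can be sketched as follows (our own minimal NumPy version with a fixed seed, not the authors' supplementary code; the gradient/Hessian check is model-specific and omitted):

```python
import numpy as np

# Draw a Haar-random unit vector |psi_perp> in the orthogonal complement of the
# QNN output |psi*>, then form |phi> = sqrt(p)|psi*> + sqrt(1-p)|psi_perp>, so
# that |<psi*|phi>|^2 = p, mirroring the decomposition in Eq. (4).

rng = np.random.default_rng(0)

def sample_target(psi_star: np.ndarray, p: float) -> np.ndarray:
    d = psi_star.shape[0]
    # Complex Gaussian vector, projected out of span{|psi*>} and normalized,
    # is Haar-distributed on the unit sphere of the orthogonal complement.
    z = rng.normal(size=d) + 1j * rng.normal(size=d)
    z -= np.vdot(psi_star, z) * psi_star
    psi_perp = z / np.linalg.norm(z)
    return np.sqrt(p) * psi_star + np.sqrt(1 - p) * psi_perp

psi_star = np.zeros(8, dtype=complex); psi_star[0] = 1.0
phi = sample_target(psi_star, p=0.3)
assert abs(np.linalg.norm(phi) - 1.0) < 1e-12        # |phi> is normalized
assert abs(abs(np.vdot(psi_star, phi)) ** 2 - 0.3) < 1e-12  # overlap is p
```

Repeating this over many samples and counting how often the gradient/Hessian condition of Eq. (5) holds gives the Monte Carlo frequency used as the probability estimate.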
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, which helps me understand better; I have no further questions. | Summary: The paper studies parameterized quantum circuits (aka QNNs). These architectures face trainability issues (e.g. barren plateaus) as the number of qubits grows, and several approaches have been explored to mitigate these. The paper analyzes these strategies using the task of training a circuit to transform the |0> input state into a desired output state, and provides theoretical and numerical evidence of the difficulty of training the circuit.
Strengths: The paper adds a new, original result to an important, open problem of training QNNs. The results provide a scaling law for the probability of avoiding local minima irrespective of techniques such as a special initialization strategy. Importantly, the result explicitly involves the precision parameter, which is one of the crucial differentiators between classical networks and QNNs, which necessarily use quantum measurement.
The theoretical results are followed by extensive numerical simulations confirming the results in practice.
Weaknesses: The paper is focused on a specific loss and a specific task, which leads, e.g., to a specific type of dependence of local minima on the parameter count (e.g., the discussion in lines 206-209 on pg. 6). It is not fully clear how insights from this task translate to other losses/tasks.
The result showing that the number of local minima decreases with the expressibility of the circuit is in line with previous work, but those earlier results are not treated in detail in the discussion. E.g., Larocca et al., Theory of overparametrization in quantum neural networks, 2021 [Ref 57] is cited in the introduction but not in the discussion.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Discussion in 3.3 mentions complementarity of the abundance of local minima and barren plateaus on the expressibility axis. How do recent results on mitigating barren plateaus via architectural choices instead of initialization (e.g. Wang et al., ICLR'23) affect this understanding?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: There does not seem to be any negative social impact from this theoretical research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are very grateful for the reviewer's recognition of our work as a novel and technically solid paper with good impact. A detailed response to the reviewer's comments and questions is provided below in a point-by-point manner.
>$\textbf{Comment 1:}$ ``The paper is focused on a specific loss, specific task, which leads e.g. to a specific type of dependence of local minima on parameter count (e.g. discussion in lines 206-209 on pg. 6). It is not fully clear how insights from this task translate to other losses/tasks.''
$\textbf{Re 1:}$ Thanks a lot for the comment. In the main text, we focus on quantum state learning tasks with the fidelity loss function of pure states for the sake of simplicity and ease of understanding. However, the mathematical tools we developed in Appendix A can be directly applied to other tasks and scenarios. First, we show similar results for the energy loss function (the expectation value of a Hermitian operator, i.e., the local loss function) in variational quantum eigensolvers or combinatorial problems in Appendix C. Second, utilizing Lemmas S3 and S7 in Appendix A, similar results can be derived for the case of mixed states or noisy states by calculating the subspace Haar integration. The loss function can be written as $\mathcal{L}(\boldsymbol{\theta})=1-\langle\phi|\rho(\boldsymbol{\theta})|\phi\rangle$, where $\rho(\boldsymbol{\theta})$ is the noisy output state of the QNN and $|\phi\rangle$ is the target state. This generalization will be added in the revised version of our manuscript.
Extending the results to other loss functions, such as alternative distance measures between quantum states, is not as straightforward and requires more careful proof. However, we can provide a general statement of the insight behind the rigorous results above. A loss value below the average level indicates that the current parameters are relatively good. At this point, moving in a random optimization direction is likely to be less beneficial than staying in place, suggesting the presence of a local minimum. The probability of avoiding such a local minimum should be proportional to the ratio of the number of selected directions (training parameters) to the total number of all possible directions (Hilbert space dimension). Hence, the probability should decay exponentially with the number of qubits while growing polynomially with the number of training parameters.
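In symbols, this heuristic reads (our paraphrase of the rebuttal's scaling claim, not a statement from the paper; $M$ denotes the number of training parameters and $n$ the number of qubits, so the Hilbert space dimension is $d = 2^n$):

$$\Pr[\text{avoid local minimum}] \;\sim\; \frac{M}{d} \;=\; \frac{M}{2^{n}},$$

which decays exponentially in $n$ but grows only polynomially (here linearly) in $M$.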
>$\textbf{Comment 2:}$ ``The result showing that the number of local minima decreases with the expressibility of the circuit is in line with previous work, but those earlier results are not discussed in detail in the discussion. E.g., Larocca et al. Theory of overparametrization in quantum neural networks. 2021 [Ref 57] is cited in the introduction but not in discussion.''
$\textbf{Re 2:}$ Many thanks for drawing our attention to the literature that we had overlooked in the discussion. Indeed, we agree that the reduction of local minima with increasing expressibility here bears some resemblance to Ref. [1]. Both reflect the notion that high-dimensional spaces can aid optimization. However, due to differences in settings, in Ref. [1] the reduction of local minima occurs after a computational phase transition at $M_c$ upon reaching over-parameterization. In contrast, our work implies that this reduction occurs from the beginning of increasing expressibility in an average sense. We will include this discussion in the revision.
[1] Martin Larocca, Nathan Ju, Diego García-Martín, Patrick J. Coles, and M. Cerezo. Theory of overparametrization in quantum neural networks, 2021.
>$\textbf{Comment 3:}$ ``Discussion in 3.3 mentions complementarity of the abundance of local minima and barren plateaus on the expressibility axis. How do recent results on mitigating barren plateaus via architectural choices instead of initialization (e.g. Wang et al., ICLR'23) affect this understanding?''
$\textbf{Re 3:}$ Thanks for the very insightful question. The complementarity of barren plateaus and local minima discussed here is based on two assumptions, respectively: (a) the QNN is randomly initialized and deep enough to form a unitary $2$-design, so that the Haar integration can give rise to exponentially vanishing mean value and variance of gradients; (b) no prior information is known about the target state except for the loss value $\mathcal{L}^*<\mathcal{L}_c$, so that the subspace integration can give rise to a locally minimal landscape with probability exponentially close to $1$. A high-overlap initial guess may break assumption (a) but not (b), so it would still encounter local minima, as we discussed in our paper. On the other hand, a good choice of circuit architecture which takes advantage of specific prior information about the target state, such as the symmetry of the transverse field Ising model considered in Ref. [1], could simultaneously break both assumptions (a) and (b). Thus, problem-inspired architectural choices as in Ref. [1] are likely to not only mitigate barren plateaus but also reduce the local minima caused by the lack of prior information. We agree that the recent progress on the symmetric pruning scheme will shed new light on the study of QNNs. In the revised version, we will expand the discussion of recent related papers.
[1] Wang, X., Liu, J., Liu, T., Luo, Y., Du, Y. and Tao, D., 2022, September. Symmetric Pruning in Quantum Neural Networks. In The Eleventh International Conference on Learning Representations (ICLR 2023).
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, I have no further questions. | Summary: This paper investigates the learnability of the QNN in the task of quantum state learning from a statistical perspective. The paper develops a no-go theorem that proves that when the loss function value is lower than a critical threshold, the probability of avoiding local minima decreases exponentially with the number of qubits, while only increasing polynomially with the circuit depth. Moreover, the paper conducts some numerical experiments to validate the proposed theorem.
Strengths: 1. The paper studies a novel research problem, namely the limitation of quantum neural networks in quantum state learning tasks, and analyzes the influence of loss function value information on training difficulty from a statistical perspective, which has certain value for understanding and improving the principles and methods of quantum machine learning.
2. It provides a rigorous and quantitative theoretical analysis showing that when the loss function value is below a critical threshold, the probability of avoiding local minima decays exponentially with the number of qubits but only grows polynomially with the circuit depth, revealing the tradeoff between the learnability and scalability of QNNs.
Weaknesses: 1. The theoretical analysis is only applicable to pure state learning tasks, and in fact, mixed states or noisy states may be encountered in quantum machine learning, so the conclusions and methods may need further generalization and verification.
2. Numerical experiments only use one kind of QNN but do not consider other possible circuit structures and parameterization methods, the results may have certain biases and limitations.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why choose the ALT structure as the subject of study in the numerical experiments? ALT in fact provably does not suffer from the vanishing gradient problem; does this affect the experimental results as well as the theoretical proof?
2. The abstract mentions that "the results hold for any circuit structure"; is there any theoretical or experimental proof of this?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper discusses the limitations and addresses them.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's positive assessment on the correctness, novelty and value of our work. We also thank the reviewer for the helpful feedback. Below is our point-by-point response to the comments and questions.
> $\textbf{Comment 1:}$ ``The theoretical analysis is only applicable to pure state learning tasks, and in fact, mixed states or noisy states may be encountered in quantum machine learning, so the conclusions and methods may need further generalization and verification.''
$\textbf{Re 1:}$ Many thanks for raising this highly practical issue. In the main text, we focused on the case of pure states for the sake of simplicity and ease of understanding quantum state learning. However, the mathematical tools we developed in Appendix A can be directly applied to the scenario of mixed states or noisy states. For example, suppose that the output state of the QNN is $\rho(\pmb{\theta})$ and the target state is $|\phi\rangle$. The loss function can be defined as the fidelity distance $\mathcal{L}(\pmb{\theta})=1-\langle\phi|\rho(\pmb{\theta})|\phi\rangle$. Utilizing Lemmas S3 and S7, similar results can be derived by calculating the subspace Haar integration. We shall add this generalization in the revised version of our manuscript.
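As a concrete illustration of this loss (a generic numpy sketch, not code from the paper):

```python
import numpy as np

def fidelity_loss(rho, phi):
    # L(theta) = 1 - <phi| rho |phi> for a (possibly mixed) output state rho
    return 1.0 - float(np.real(np.vdot(phi, rho @ phi)))

# Example: a pure output state equal to the target gives zero loss,
# while the maximally mixed state on dimension d gives 1 - 1/d.
d = 4
phi = np.zeros(d, dtype=complex); phi[0] = 1.0
rho_pure = np.outer(phi, phi.conj())
rho_mixed = np.eye(d, dtype=complex) / d
```

For pure $\rho=|\psi\rangle\langle\psi|$ this reduces to the fidelity loss $1-|\langle\phi|\psi\rangle|^2$ used in the main text.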
> $\textbf{Comment 2:}$ ``Numerical experiments only use one kind of QNN but do not consider other possible circuit structures and parameterization methods, the results may have certain biases and limitations.''
$\textbf{Re 2:}$ Many thanks for this helpful comment! Theoretically, our theorem holds for any circuit structure since the proof does not involve specific QNN structures (see our response to the question below for a detailed explanation). However, to make the numerical experiments independently convincing, we agree with your suggestion and will conduct additional experiments using different, commonly used circuit structures and add them in the revised version of our manuscript. Thanks again for the suggestion; we believe considering other possible circuit structures will strengthen our manuscript.
> $\textbf{Comment 3:}$ ``Why choose the ALT structure as the research study in the numerical experiment? ALT in fact provably does not suffer from the vanishing gradient problem, and does this affect the experimental results as well as the theoretical proof?''
$\textbf{Re 3:}$ Thanks a lot for this good question. We chose the ALT structure because it is one of the most extensively used and studied ansätze in variational quantum algorithms [1]. To the best of our knowledge, the ALT ansatz does have better trainability than the hardware-efficient ansatz used in the original paper on barren plateaus. Shallow ALT circuits indeed do not suffer from the vanishing gradient problem. However, if one considers deep circuits to extend the expressibility of QNNs, random QNNs of the repeated-layer type will in general approximate unitary 2-designs given a sufficiently large depth [2], including the ALT ansatz. Thus, the deep ALT ansatz will also suffer from the vanishing gradient problem. Putting aside the points on ALT mentioned above, the specific circuit structure does not affect our theoretical proof (see our response to the next question for a detailed explanation), though it may slightly affect the experimental results due to differences in trainability.
[1] Kouhei Nakaji and Naoki Yamamoto. Expressibility of the alternating layered ansatz for quantum computation. Quantum, 5:434, 2021.
[2] Jonas Haferkamp. Random quantum circuits are approximate unitary $t$-designs in depth $O(nt^{5+o(1)})$. Quantum, 6:795, 2022.
> $\textbf{Comment 4:}$ `` The abstract mentioned that "the results hold for any circuit structure", and is there any theoretical or experimental proof of this?''
$\textbf{Re 4:}$ This is a very insightful question. To provide a clear explanation, we would like to start by reviewing how previous results on QNN trainability depend on the circuit structure. For example, in the original paper on barren plateaus [1], one of the hypotheses is that the randomly initialized QNN approximates a unitary 2-design so that the Haar integration in the calculation can be carried out safely. This means that their conclusion only applies to randomly initialized deep QNNs, while shallow ones are not subject to such limitations. On the other hand, there is also literature that does not make any assumption on the circuit structure or the initialization of parameters [2], where the ensemble comes from the randomness of unknown learning targets, as in the setup of the Hayden-Preskill thought experiment [2]. The setting in our paper resembles the latter. We suppose that no information about the target state is known except the measured loss value, so that we can reasonably take averages over the orthogonal complement space. Our theoretical proof does not involve any specific circuit structure, and the only relevant hyper-parameters are the qubit count, the circuit depth, and the value of the quantum Fisher information. Therefore, our results naturally hold for any circuit structure.
[1] Jarrod R. McClean, Sergio Boixo, Vadim N. Smelyanskiy, Ryan Babbush, and Hartmut Neven. Barren plateaus in quantum neural network training landscapes. Nature Communications, 9(1):1–7, mar 2018.
[2] Zoë Holmes, Andrew Arrasmith, Bin Yan, Patrick J. Coles, Andreas Albrecht, and Andrew T. Sornborger. Barren plateaus preclude learning scramblers. Physical Review Letters, 126(19), sep 2020.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed reply, which resolved my concerns. I have no more questions and have adjusted the score (5->6). | Rebuttal 1:
Rebuttal: Dear PC,
We want to express our sincere gratitude to the Program Committee for their hard work in shaping the conference's scientific program. We would also like to thank all the reviewers for recognizing our work as a novel and technically solid paper and recommending acceptance of our submission.
We appreciate the reviewers' time and efforts in reviewing our work and providing constructive feedback. We have addressed all the comments and questions raised by the reviewers in the rebuttal.
Thank you for considering our work!
Yours Sincerely,
Authors of Paper 11169. | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper introduces a new statistical analysis for training variational quantum circuits (e.g., in terms of quantum neural networks), where the alternating-layered ansatz (Nakaji et al.; ALT), also referred to as the entanglement circuit for quantum ML in Chen et al. 2019 [4], has been characterized as a general quantum state learning task.
In general, the presentation flow is quite good, including a proper introduction to the $l_p$ norm and Dirac notations. The authors have made considerable efforts to guide the readers from the existing learning definition in vector-to-vector mapping to standard encoding based parameterized quantum state learning.
Although some recent works on error analysis in quantum circuit learning [1] and classical encoding circuits [2,3] are unfortunately omitted, the paper actually provides a careful and detailed review of related work in the appendix.
In general, while the theorem is neat, moving from elaborating the fidelity loss to Fisher-information-based bound analysis, this paper also conducts a solid local minima analysis. Despite the fact that learnability is considered from a no-go perspective and some related work (e.g., [1]) is missing, I believe the theoretical findings and their supporting numerical results provide good takeaways for the community.
In general, I tend to accept this paper.
***
**References**
1. J. Qi et al., "Theoretical error performance analysis for variational quantum circuit based functional regression," npj Quantum Information 9.1 (2023).
2. K. Mitarai et al., "Quantum Circuit Learning," Physical Review A 98.3 (2018): 032309.
3. M. Schuld and N. Killoran, "Quantum machine learning in feature Hilbert spaces," Physical Review Letters 122, 040504 (2019).
4. S. Y.-C. Chen et al., "Variational Quantum Circuits for Deep Reinforcement Learning," 2019.
Strengths: - The paper provides a clear characterization of the curvature of local minima, which is important to understand the sensitivity of output state with respect to QNN (VQC learning) parameters.
- It provides quantitative limits on good initial guesses related to no free lunch (NFL) theories and adaptive methods for improving the learnability and scalability of QNNs.
- Good presentation quality.
- The paper suggests that no single QNN is universally the best-performing model for learning all target quantum states. This introduces additional complexity for practical applications as it may necessitate more structured QNN architectures and innovative optimization tools.
Weaknesses: - The ensemble setting in Appendix A.1 is not clear on the motivation for using a model ensemble.
- Despite the results, a level of uncertainty remains, as the exact scaling of the QNN depth needed to form a subspace 2-design is not very clear.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. The ensemble setting in Appendix A.1 is not clear on the motivation for using a model ensemble.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: - The no-go theorem, while important for understanding the limitations of QNNs, might be a potential barrier to the application of quantum neural networks in real-world scenarios.
- The paper implies that significant future progress will be needed, potentially borrowing insights from the field of deep learning, to overcome the limitations of current QNNs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer's recognition of our work as technically solid, well written, and valuable to the community. We are also grateful to the reviewer for drawing our attention to the literature that we had overlooked. We will ensure to include these references in the revised version of the manuscript. The following is our point-by-point response to your comments and questions.
> $\textbf{Comment 1:} $``The ensemble setting in Appendix A.1 is not clear on the motivation of using model ensemble.''
$\textbf{Re 1:}$ We thank the reviewer for this comment on the ensemble setting. The motivation for our ensemble setting originates from the empirical observation that the probability of encountering local minima when running a variational quantum algorithm depends on the loss value (i.e., the fidelity in quantum state learning tasks). Especially for adaptive methods, which are usually not covered by common trainability analyses, the training process often tends to stagnate when a certain loss value is reached. This phenomenon motivates us to study learnability at a certain loss value. In other words, we want to explore the average learnability among the quantum states $|\phi\rangle$ which have the same fidelity $p$ with the current learned state $|\psi\rangle$, i.e., $|\langle\phi|\psi\rangle|=p$, where our ensemble naturally arises. Graphically, this ensemble forms a circle on the Bloch sphere centered at $|\psi\rangle$, as shown in Fig. 1(b) in the manuscript. Mathematically abstracting and generalizing this set of ``quantum states with fixed fidelity'' leads to the ensemble setting introduced in Appendix A.1.
> $\textbf{Comment 2:} $``Despite the results, there is a level of uncertainty remaining as the exact scaling of QNN depth needed to form a subspace 2-design is not very clear.''
$\textbf{Re 2:}$ Thanks for this insightful comment. As mentioned in the outlook section, although it is well known that a local random quantum circuit of polynomial depth forms an approximate unitary 2-design [1], it remains an open question what scaling of the depth is needed to constitute a subspace 2-design. Nevertheless, we can provide some qualitative analysis. First, the scaling is at most exponential in the number of qubits, because the Haar measure over the whole Hilbert space can directly induce a subspace 2-design by constraining the space. Moreover, whether a random QNN can form a subspace 2-design strongly depends on the desired loss value. If the desired loss value is too close to the theoretical minimum, it will be extremely hard for a random QNN to cover the states with such a loss value, so the depth needed will be very large. On the other hand, if the loss value is near its average level, e.g., $1/d$ for the fidelity loss function, the required depth will be relatively small, since many states are concentrated in this range.
We emphasize that this specific depth scaling does not affect the conclusions in our manuscript, since our ensemble takes the average over random target states instead of random QNNs. We mention this issue in the outlook section only because, if this depth scaling were known, it would allow us to directly apply our mathematical tools to the setting of random QNNs.
[1] Jonas Haferkamp. Random quantum circuits are approximate unitary $t$-designs in depth $O(nt^{5+o(1)})$. Quantum, 6:795, 2022. | null | null | null | null | null | null |
When is Agnostic Reinforcement Learning Statistically Tractable? | Accept (poster) | Summary: The authors study the problem of sample complexity in finite horizon MDPs from an agnostic perspective. More specifically, the term agnostic refers to the following setting: given a (finite) policy class $\Pi$, they aim at understanding the number of samples that are required to output a policy that is $\epsilon$-optimal w.r.t. the best policy within $\Pi$ is.
To study the problem, they introduce the concept of *spanning capacity*, a novel complexity metric that depends solely on $\Pi$ and is independent of the MDP dynamics.
Then, the authors study two interaction settings (i.e., generative model and online setting) and derive the following results:
- Generative model setting. The authors show that the spanning capacity describes the learning complexity up to an $H \log(|\Pi|)$ factor. This is proven by deriving lower and upper bounds that are explicitly dependent on the spanning capacity.
- Online setting. In this case, the results can be summarized as follows:
- The spanning capacity fails to describe the complexity of online RL. Indeed, the authors present a lower bound showing that a superpolynomial-in-the-horizon number of samples is still needed for agnostic learning (i.e., the sample complexity cannot be bounded by any polynomial function of the spanning capacity).
- By restricting the set $\Pi$ to "sunflower" policies, provably efficient learning is possible. In this sense, the authors propose POPLER, an algorithm whose complexity scales with the spanning capacity and a characteristic that depends on the structure of $\Pi$.
Strengths: Provably efficient learning is a significant and open problem for RL agents, with a longstanding history of different methods and approaches. The authors thoroughly review existing works (Appendix A) and properly contextualize their results within the field.
In this sense, novel ideas, such as the ones proposed by the authors, are of interest to the NeurIPS community that is interested in RL theory.
More specifically, the notion of spanning capacity is, to the best of my knowledge, original and of potential interest.
Indeed:
1) As the authors show in Proposition 3, this metric recovers, in worst-case scenarios, existing upper and lower bounds available in the literature. Furthermore, in more favorable settings, it is significantly smaller.
2) Furthermore, as the authors show in Theorems 1 and 2, it describes the minimax sample complexity (up to $H \log|\Pi|$ factors) in the generative model setting. Although, due to this mismatch, the problem is still open, the progress remains significant.
In the online interaction setting, this metric turns out to be not descriptive enough (i.e., Theorem 5: a superpolynomial-in-the-horizon number of samples is still needed for agnostic learning). Although, in some sense, this is a drawback (see weaknesses below), this negative result can potentially provide insights to guide future research on metrics that properly describe provably efficient agnostic RL in online settings.
Finally, (minor) by carefully restricting the set of policies, the authors are able to show that the spanning capacity can be used to upper-bound the complexity in the online setting as well. This adds an additional (but minor, see below why) point in favor of the spanning capacity.
On the clarity. Overall, the paper is a nice read, and it is relatively easy to follow (except a minor suggestion, see below).
Weaknesses: **Online RL**
As commented above, the authors show that the spanning capacity turns out to be not descriptive enough for providing polynomial bounds on the sample complexity of agnostic RL in an online interaction setting. Nevertheless, the metric is in itself something that the authors propose. For this reason, one might object that this is a potential drawback of the metric that is *designed* by the authors.
A main question, in this sense, remains open: Is there any metric (e.g., a generalization of the spanning capacity) that characterizes both scenarios?
**On sunflower policies: online RL under restrictive assumptions on $\Pi$**
Currently, this point seems the major weakness of the presented work. Indeed, I find their definition quite involved, and it is unclear if it is the result of an artifact of the analysis rather than something necessary to reach polynomial sample complexity in the online setting. The fact that a lower bound is missing reinforces this doubt. I invite the authors to comment on this point.
**On the clarity**. I would suggest that the authors improve the clarity of lines 206-213, perhaps by introducing some figures. Indeed, I still miss this natural interpretation of the spanning capacity.
**Minors**:
- The term "surprising" to refer to the mismatch between online RL and the generative model setting might be toned down. Indeed, the generative model setting is intuitively easier.
- It is remarked multiple times that other works make stronger assumptions that are rarely satisfied in practice (e.g., realizability). Nevertheless, stronger theoretical results (\epsilon-optimality) are usually achieved under these assumptions. This should be stated more clearly in my opinion.
- line 176 typo: "loogmk"
- The definition of the set of MDPs seems improper. To the best of my understanding, state, and action spaces are fixed over this class. In this sense, e.g., M^sto is not the set of all MDPs with horizon H, but the set of all MDPs with horizon H and state-action spaces given by (S,A)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) I invite the authors to discuss the relationship between their work and problem-dependent learning complexity analysis (e.g., "Optimistic PAC RL: the instance-dependent view", Tirinzoni et al., ALT 2023).
2) See the point on sunflower policies in the weakness section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors discuss the limitations of their work as open questions in the conclusion section.
Potential negative societal impact: the paper deals with foundational research on the sample complexity of RL. I don't see a direct path to negative applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and time. Below we respond to your points and questions.
**A Metric that Characterizes Both Scenarios.** To the best of our knowledge, there is no metric that captures the complexity of agnostic PAC RL in both the generative model and online RL settings. We introduced and investigated the spanning capacity because:
- It is the right complexity measure in the generative model setting.
- Since online RL is at least as hard as RL with a generative model, the spanning capacity also lower bounds the sample complexity in online RL.
- It is closely related to the notion of coverability, which was introduced in prior work [1] for realizable value-function-based RL.
We view this as an important question for future research.
**Sunflower Property.** We address this concern, along with other questions/points on the sunflower property, in a joint response to all the reviewers.
**A Natural Interpretation of the Spanning Capacity.** In the final version, we will include a clearer exposition of the spanning capacity, as well as figures that illustrate it.
**Separation between Online RL and Generative Model.** In light of our results in Section 4, it was natural to conjecture that spanning capacity also characterized online RL. Indeed, this was true for many nontrivial policy classes we considered.
We agree with the reviewer that the generative model is intuitively easier, since one can simulate online RL with a generative model. However, separation results between the generative model and online RL are exceedingly rare, and in this sense we view this separation as ``surprising''. In fact, the only policy classes for which we can show a separation are the nonexplicit constructions from Theorem 3 based on the probabilistic method (specifically, see our Lemma 4).
**Definition of Set of MDPS.** You are correct, we will clarify this in our revision.
**Comparison to Problem-Dependent Learning.** We thank the reviewer for bringing [2] to our attention. Please find below a summary of the key differences:
- [2] studies instance-dependent guarantees for tabular RL. Their bounds depend on the suboptimality gaps as well as reachability probabilities. Furthermore, their bound in Theorem 2 has a linear dependence on $|\mathcal{S}|$, the state space size. In contrast, our work focuses on the large state space setting where we do not want any extraneous dependence on the state space size. We also do not investigate instance-dependent bounds. It would be interesting to see if POPLER can be adapted to achieve refined instance-dependent bounds.
- Algorithmically, the work [2] uses an optimism-based approach. In contrast, POPLER is based on reachable state identification. In this sense, POPLER is actually more similar to MOCA [3], which studies instance-dependent PAC RL for tabular MDPs (again, note that we work in the more general large state space setting, which necessitates technical innovations for efficient exploration).
We will add a citation to [2, 3], and a more detailed comparison, in the final version of the paper.
[1] Tengyang Xie, Dylan J. Foster, Yu Bai, Nan Jiang, Sham M. Kakade. "The Role of Coverage in Online Reinforcement Learning".
[2] Andrea Tirinzoni, Aymen Al-Marjani, Emilie Kaufmann. "Optimistic PAC Reinforcement Learning: the Instance-Dependent View."
[3] Andrew Wagenmaker, Max Simchowitz, Kevin Jamieson. "Beyond No Regret: Instance-Dependent PAC Reinforcement Learning."
---
Rebuttal Comment 1.1:
Title: Ack
Comment: I thank the authors for their in-depth rebuttal. I have no further questions for the authors.
I am raising my score to 7 as many concerns that I raised have been addressed. I think this submission is of interest to the community. Nevertheless, I am still a bit skeptical about the arguments on the sunflower property. | Summary: This paper studies the minimax sample complexity of learning the best policy within a given policy class $\Pi$, i.e., the best sample complexity of learning $\argmax_{\pi\in \Pi}V^\pi$ over worst-case MDPs. As a motivation, Proposition 1 shows that without further assumptions on the structure of $\Pi$, the best policy is not PAC-learnable in worst-case MDPs. Hence, this paper proposes a new complexity measure, the spanning capacity, that counts the maximum number of states reachable by policies in $\Pi$ over all deterministic MDPs. With generative models, this paper shows that the spanning capacity characterizes the minimax sample complexity of learning $\Pi$. For online RL, this paper requires an additional structural assumption called the sunflower property and proves corresponding upper bounds.
Strengths: This paper studies a novel aspect of PAC learning in MDPs (called agnostic reinforcement learning) --- learning the best policy within a policy class with no modeling assumptions for the dynamics or value function class. The agnostic reinforcement learning problem could be a new approach toward RL with function approximations since the realizability assumptions on the complicated dynamics/value function class are no longer needed, and it is indeed unclear whether those assumptions hold for practical environments.
This paper proves novel upper bounds for the agnostic reinforcement learning setting by introducing a new complexity measure called the spanning capacity. With generative models, this paper proves that the spanning capacity is necessary and sufficient for agnostic reinforcement learning. This is morally a strong result despite the simplicity of its proof. This paper also proves results for the classic online RL setting with additional structural assumptions on the underlying MDP.
This paper is well-written --- exposition of the results is clear with the help of concrete examples. The results and assumptions in this paper are well motivated by corresponding lower bounds (Proposition 1 and Theorem 2).
This paper mainly focuses on the minimax complexity of agnostic RL. While this perspective eliminates the dependence on realizability assumptions on the underlying MDP, it is possible that the resulting complexity is overly pessimistic. It would be interesting to have an instance-dependent complexity measure that depends on some property of the underlying MDP.
Weaknesses: The sunflower structure in Section 6 lacks the necessary motivation and seems to come out of nowhere. I understand that this assumption helps extend the IS technique to the online RL setting. However, it is unclear to me whether this assumption is necessary/natural for agnostic RL. Intuitively, why/when should we expect the sunflower structure to hold?
[Minor] The computational complexity of the algorithm is linear in the size of the policy class. Is it possible to have an efficient implementation with the help of some standard computational oracles?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Alg. 1 takes as input the sets $S_\pi$ and $\Pi_{core}$ defined by Def. 3. They depend on the underlying MDP (because the partial trajectories are generated by the MDP, if I understand correctly). Is the algorithm assumed to have knowledge of $S_\pi$ and $\Pi_{core}$? If not, how should the algorithm compute $S_\pi$ and $\Pi_{core}$?
- In Theorem 3, the spanning capacity is $H^{O(\ell)}$, while the sample complexity lower bound is $\epsilon^{-\ell}$ with $\epsilon=H^{O(1)}$, which means that the sample complexity lower bound is polynomial in the spanning capacity. Then why is the spanning capacity not sufficient to characterize the minimax sample complexity?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors adequately addressed the limitations and potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and time. We respond to your questions and points below.
**Sunflower Structure Lacks Motivation.** We address this concern, along with other questions/points on the sunflower property, in a joint response to all the reviewers.
**Computational Complexity.** Yes, the reviewer is correct that our algorithm POPLER will have computational complexity which depends polynomially on $|\Pi|$, which is prohibitively large for practical RL scenarios. Getting computationally or oracle-efficient algorithms for agnostic RL is a fascinating direction for future research.
**The sets $\mathcal{S}\_\pi$ and $\Pi\_{\mathrm{core}}$.** We apologize for any confusion. We would like to correct the reviewer here: the sets $\\{\mathcal{S}\_\pi \\}\_{\pi \in \Pi}$ and $\Pi\_{\mathrm{core}}$ do not depend on the underlying MDP dynamics, but only on the policy class $\Pi$ (as well as $\mathcal{S}, \mathcal{A}, H$). The partial trajectories in Definition 3 can be arbitrary as long as they are consistent with the policy.
We can assume that POPLER takes as input $\\{\mathcal{S}\_\pi \\}\_{\pi \in \Pi}$ and $\Pi\_{\mathrm{core}}$ because these sets can be computed beforehand by enumerating over all possible choices and picking the ones that optimize the bound in Theorem 4. This point will be clarified in the final version.
**The Sample Complexity Lower Bound.** We thank the reviewer for pointing out this issue. This can be fixed by expanding the range of acceptable $\epsilon$ and $\ell$. Here is an updated version of Theorem 3.
**Theorem 3.** *Fix any sufficiently large $H$. Let $\epsilon \in (1/2^{O(H)}, O(1/H))$ and $\ell \in \\{2, 3, \dots, H\\}$ such that $1/\epsilon^\ell \le 2^H$. There exists a policy class $\Pi$ of size $O(1/\epsilon^\ell)$ with $\mathfrak{C}(\Pi) \le O(H^{4\ell + 2})$ and a family of MDPs $\mathcal{M}$ with state space $\mathcal{S}$ of size $2^{O(H)}$, binary action space, and horizon $H$ such that: for any $(\epsilon, 1/8)$-PAC algorithm, there exists an $M \in \mathcal{M}$ for which the algorithm must collect at least $\Omega(\min \\{ 1/\epsilon^\ell, 2^{H/3}\\})$ online trajectories in expectation.*
To interpret this result, let us pick $\epsilon = 1/2^{\sqrt{H}}$ and $\ell = \sqrt{H}$. Then we get the following corollary, which demonstrates that the sample complexity for learning this class cannot be a polynomial function of $\mathfrak{C}(\Pi)$, $\epsilon$, and $\log |\Pi|$.
**Corollary.** *For any sufficiently large $H$, there exists a policy class $\Pi$ with $\mathfrak{C}(\Pi) = 2^{O(\sqrt{H} \log H)}$ such that for any $(1/2^{\sqrt{H}}, 1/8)$ PAC algorithm, there exists an MDP for which the algorithm must collect at least $2^{\Omega(H)}$ online trajectories in expectation.*
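For concreteness, a quick sanity check of this parameter choice against the conditions of the updated Theorem 3 (using only the quantities stated above):

```latex
% Parameter choice: \epsilon = 2^{-\sqrt{H}}, \ell = \sqrt{H}.
\begin{align*}
\frac{1}{\epsilon^{\ell}} &= \bigl(2^{\sqrt{H}}\bigr)^{\sqrt{H}} = 2^{H} \le 2^{H}
  && \text{(constraint } 1/\epsilon^{\ell} \le 2^{H} \text{ holds with equality)} \\
\mathfrak{C}(\Pi) &\le O\bigl(H^{4\ell+2}\bigr) = H^{O(\sqrt{H})} = 2^{O(\sqrt{H}\log H)}
  && \text{(the corollary's bound on } \mathfrak{C}(\Pi)\text{)} \\
\Omega\bigl(\min\{1/\epsilon^{\ell},\, 2^{H/3}\}\bigr) &= \Omega\bigl(\min\{2^{H},\, 2^{H/3}\}\bigr) = 2^{\Omega(H)}
  && \text{(the claimed lower bound on online trajectories)}
\end{align*}
```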
We will include this correction in the final version of the paper.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: Thank you for the response. While the evidence about the necessity of the sunflower property partly addressed my concerns, I still think this paper could benefit a lot from establishing somewhat rigorous/provable claims about it. Given that my score is already weak accept, I will keep my score as it is. | Summary: This paper studies conditions on which agnostic reinforcement learning is statistically tractable. The paper introduces a new concept of complexity measure called spanning capacity, which sorely depends on the policy class. The authors studies in what cases the sample complexity of agnostic RL can be polynomial to spanning capacity.
The contributions include: 1) for the generative model, the authors show the spanning capacity is a necessary and sufficient complexity measure for agnostic RL; 2) for the online model, the authors show the spanning capacity is insufficient: they prove a lower bound superpolynomial in the spanning capacity of the policy class; 3) the authors propose a strong assumption on the policy class called the sunflower structure, and propose an algorithm whose sample complexity is polynomial in the spanning capacity under this assumption.
Strengths: - Clarity: the paper is very well-written. I appreciate the intuitions and examples given in the main text. In addition, the proofs in the appendix are well organized and easy to follow.
- Novelty and Significance: While prior work has studied the sample complexity of RL problems, which assumptions are sufficient or necessary for statistical efficiency is still not clear. The concept of spanning capacity is new and reasonable to me.
- Technical correctness: I didn’t find any technical errors in the proofs included in the appendix, but I admit I was not able to go through all the details in the appendix.
Weaknesses: - The (K,D)-sunflower structure is an overly strong assumption. It seems to work only when there is a small subset of policies (so K is small) that covers most states (so D is small).
- Although Algorithm 1 is sample efficient and only needs to collect polynomially many trajectories, the algorithm itself is not computationally efficient: it needs to traverse all policies in $\Pi$ to find the policy that can reach states which cannot be reached by $\Pi_{\text{core}}$. The complexity of Algorithm 1 is at least $O(|\Pi|)$.
- For Theorem 4, it seems the sample complexity is about $O({1}/{\epsilon^4})$, which is not optimal in its dependence on $\epsilon$. This also implies the sample complexity will be larger than $\widetilde{\mathcal{O}} (\min\{A^H, |\Pi|, HSA\}/\epsilon^2)$ when $\epsilon$ is small enough.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - In Definition 2, how is the second equation established? It is not clear to me how the infimum over $\mu$ becomes the sum over $(s,a)$ pairs.
- It seems the coverability coefficient is always no larger than the spanning capacity. Is coverability a sufficient complexity measure for the generative model setting? Moreover, I note that the authors claim coverability is insufficient for online RL. However, considering that the spanning capacity is also insufficient for online RL, what is the key difference?
- In appendix Lemma 2(1), what does $N$ mean? The authors have not introduced this notation in Section E.
One minor error: in appendix Lines 865 and 867, it should be $j\in [2^{2H}]$ instead of $j\in [2^{H}]$.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Theoretical work. No limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and time. Below we respond to your questions and points.
**The Sunflower Property is Overly Strong.**
We address this concern, along with other questions/points on the sunflower property, in a joint response to all the reviewers.
**Regarding Definition 2.**
The equivalence in the second equation follows by setting $\mu\_h(s,a) \propto \max\_{\pi \in \Pi} d^\pi\_h(s,a)$. This result was established in the prior work [1] (their Lemma 3). We refer the reviewer to [1] for a detailed proof and more intuition.
**Is Coverability Sufficient for Generative Model?**
We do not know, but we conjecture that one can show that coverability itself is insufficient when $\mathfrak{C}(\Pi)$ is large, i.e., one cannot adapt to ``easy'' instances. It may be possible to provide such a lower bound using low-rank MDPs (for example, see the lower bound construction in [2]).
A related point: for online RL, we can actually replace $\mathfrak{C}(\Pi)$ by the smaller coverability coefficient in the theorem. However, in this setting, we need to make the additional sunflower property assumption.
Understanding when we can get guarantees in terms of coverability (or similar instance-dependent measures) is an exciting direction for future work.
**Lemma 2 (1)** This is a typo. Here, $N$ denotes the size of $\Pi^{(\ell)}$.
**Minor Error.** Yes, you are correct. This mistake is made in a couple of places, and we will correct it.
[1] Tengyang Xie, Dylan J. Foster, Yu Bai, Nan Jiang, Sham M. Kakade. "The Role of Coverage in Online Reinforcement Learning".
[2] Christoph Dann, Yishay Mansour, Mehryar Mohri, Ayush Sekhari, Karthik Sridharan. "Agnostic Reinforcement Learning with Low-Rank MDPs and Rich Observations".
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response to my questions. My major concerns have been addressed. Given that my score already indicates accept, I maintain this score. | Summary: This work studies agnostic learning in RL, and seeks to characterize when, given some policy class $\Pi$, it is possible to learn an $\epsilon$-optimal policy in $\Pi$ regardless of the MDP. They propose a novel complexity measure—the spanning capacity—which depends only on the policy class $\Pi$ (i.e. is independent of the underlying MDP) which they show is both a necessary and sufficient measure of learnability when the learner has generative access to the MDP. In the fully online setting, however, it is shown that the spanning capacity is no longer a sufficient measure of capacity. To characterize efficient learning in the online setting, they introduce the notion of a ``sunflower’’ policy class, and show that if a policy class has bounded spanning capacity and is a sunflower class, then efficient learning is possible in the online setting as well.
Strengths: 1. I believe this work makes an interesting and novel contribution to the RL literature. There has been much interest over the last several years in developing general complexity measures that characterize when efficient learning in RL is possible, but existing work has typically made stronger assumptions—for example, access to a model or value function class where realizability holds (i.e. the true model/value function is in the class)—than this work, which does not assume realizability, and simply considers the question of finding the best policy in some set of policies.
2. The results in this work show that there is a formal separation between the generative model setting and online access setting in policy learning framework considered here. To my knowledge, such a result has not been previously known, and is an interesting observation.
3. In the generative setting, the characterization given by the spanning capacity is very clean: for any policy class $\Pi$, efficient learning is possible if and only if the spanning capacity is small. While this is minimax over MDPs (see below), it is not minimax over policy classes—it applies to any policy class.
4. The spanning capacity is a very intuitive measure of complexity and admits a clean interpretation.
5. The paper is very well-written and reads very nicely.
Weaknesses: 1. While the spanning capacity does provide a clean characterization of the learnability of a given policy class, it is a very worst-case measure over problem instances. It could be the case that the spanning capacity of a given policy class is very large, implying that efficient learning is not possible in general with the given policy class, but where on many MDPs efficient learning with the given policy class is still possible. In particular, for the spanning capacity to be large, there only needs to exist a single deterministic MDP for which it is difficult to find the optimal in-class policy—there could be many MDPs where it is very easy to find the optimal in-class policy, but this is not captured by the spanning capacity. In contrast, in practice we are typically interested not just in the policy class but in the interaction between the policy class and the underlying dynamics of the MDP, as this characterizes how feasible it is to actually learn a good policy for a given problem. Due to this worst-case nature, for settings with large state or action spaces, the examples given of policy classes with bounded spanning capacity are very simple and not representative of the sorts of policy classes that would actually be used (in theory or practice).
2. While it is shown that the sunflower condition is a sufficient condition for learning in the online setting, it is not clear it is a necessary condition (and furthermore, the scaling on $(K,D)$ given in Theorem 4 is likely not tight). Some discussion of this would be helpful.
3. In principle, ignoring computation cost, I believe the optimal values of $K$ and $D$ for Theorem 4 could be computed a priori (before interacting with the MDP). This would be helpful to clarify around Theorem 4 (in particular on lines 296-298). Since POPLER takes as input $K$ and $D$, the bound stated in Theorem 4 cannot be optimized for $K$ and $D$ after the fact, but can be optimized if the optimal $K$ and $D$ are computed before running POPLER (I found this somewhat unclear from the discussion on 296-298).
4. I understand that the main contributions of this paper are statistical, but it would still be helpful to comment on the computational efficiency of the proposed algorithms.
5. Several additional papers that should be cited: [1] also considers agnostic RL (albeit with stronger assumptions on the realizability of the model class). [2,3] are additional works on RL with function approximation that are generally relevant to this work.
6. Minor typo: line 103 should read “access” not “saccess”.
[1] Wagenmaker, Andrew, and Kevin G. Jamieson. "Instance-dependent near-optimal policy identification in linear mdps via online experiment design." Advances in Neural Information Processing Systems 35 (2022): 5968-5981.
[2] Foster, Dylan J., Noah Golowich, and Yanjun Han. "Tight guarantees for interactive decision making with the decision-estimation coefficient." arXiv preprint arXiv:2301.08215 (2023).
[3] Zhong, Han, et al. "GEC: A Unified Framework for Interactive Decision Making in MDP, POMDP, and Beyond." CoRR (2022).
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. The DEC [4] is a recently proposed measure of complexity that applies in the realizable setting. While the setting considered here does not assume realizability, and so is not directly comparable to the DEC, some comparison with the DEC would be helpful. In particular, one could apply the DEC to the model class $\mathcal{M}$ the set of all MDPs with no more than $S$ states, $A$ actions, and horizon $H$ (and with the decision space set to the given policy class $\Pi$). One would get realizability for free in this setting, so the characterization given by the DEC could be compared to the spanning capacity. How would the DEC scale with this choice of $\mathcal{M}$ for the policy classes presented on lines 200-204?
2. Are there examples where $S$ and $H$ are both large, the policy set is reasonably expressive (e.g. it contains an $\epsilon$-optimal policy for a reasonable class of MDPs), and the spanning capacity is bounded? For example, say we have some featurization $\phi(s,a) \in \mathbb{R}^d$, and our class $\Pi$ is defined as $\Pi = \\{ \pi^w \ : \ w \in \mathcal{W} \\}$ for $\pi^w(s) = \arg\max_{a} \phi(s,a)^\top w$ and $\mathcal{W}$, for example, a cover of $\mathbb{R}^d$. Can the spanning capacity be shown to scale polynomially with $d$ in this case, or is the best that can be shown still the bounds given in Proposition 3? It is known that smoothed versions of such policy classes are sufficient for settings such as linear MDPs when $\mathcal{W}$ is an $\epsilon$-cover of $\mathbb{R}^d$ (that is, such a policy class can be shown to contain an $\epsilon$-optimal policy for any linear MDP), so showing that the spanning capacity is bounded by $\mathrm{poly}(d)$ in this case (even though $|\Pi|$ scales exponentially in $d$) would provide a compelling large state-space example where spanning capacity scales reasonably.
[4] Foster, Dylan J., et al. "The statistical complexity of interactive decision making." arXiv preprint arXiv:2112.13487 (2021).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and time. Below we respond to your questions and points.
**Spanning Capacity is Worst-Case.**
We agree with the reviewer that spanning capacity is a worst-case notion, much like classic learning theoretic quantities like VC dimension and Littlestone dimension.
In fact, understanding complexity measures that depend on the model class (and thus implicitly the policy class corresponding to optimal policies for those models) is already an active area of research in the RL theory community (see our Appendix A for several representative papers). On the other hand, our work considers the other extreme when the learner only has access to a policy class. This approach, to the best of our knowledge, is not well-studied in the RL theory community, and our work is the first to consider (worst-case) *structural* assumptions on the policy class.
Understanding the complexity of restricting to both specific model classes and policy classes is an important direction in RL theory, and we see our work as taking a step towards that direction.
**Unclear if Sunflower Property is Necessary.** We address this concern, along with other questions/points on the sunflower property, in a joint response to all the reviewers. Furthermore, we note that we did not focus on optimizing the polynomial dependence on $D, K$, and $1/\epsilon$ in our Theorem 4.
**On $(K,D)$ in Theorem 4.** We apologize for any confusion. The reviewer is correct that leaving computation aside, the optimal choices of $K$ and $D$ can be computed a priori (before running POPLER) by explicitly enumerating over all possible choices for $\Pi\_\mathrm{core}$ and $\\{ \mathcal{S}\_\pi \\}_{\pi \in \Pi}$ and finding the best values of $K$ and $D$ that optimize the bound in Theorem 4. We will clarify this in the final version.
**Computational Complexity of Our Algorithms.** Given $K$, $D$, $\Pi\_{\mathrm{core}}$, the sets $\\{\mathcal{S}\_{\pi}\\}\_{\pi \in \Pi}$, our algorithm POPLER has running time that scales polynomially with $|\Pi|$ as well as $S, A, H$, which we agree is prohibitive for large scale RL problems appearing in practice. However, as the reviewers also noted, our focus in this paper is to understand the statistical complexity of agnostic RL, which is the first step towards getting practical algorithms in agnostic RL and to the best of our knowledge has not been explored before. Getting computationally or oracle-efficient algorithms for agnostic RL is a fascinating direction for future research.
**Additional References.** We thank the reviewer for the additional references, which will be incorporated into the final version.
**Relationship to DEC.** Let $\mathrm{Dec}(\mathcal{M}; \Pi)$ denote the DEC of the model class $\mathcal{M}$ consisting of all stochastic MDPs (with state space S and action space A) and decision class $\Pi$. Let $\mathfrak{C}(\Pi)$ denote the spanning capacity of $\Pi$. Further, for the sake of comparison, assume that $\epsilon = O(1)$ and ignore constant and $H$ dependent factors. On the upper bound side, we have that $\mathrm{Dec}(\mathcal{M}; \Pi) \leq \mathfrak{C}(\Pi)$. This is because:
- As suggested by Xie et al. 2022, $\mathrm{Dec}(\mathcal{M}; \Pi)$ is upper-bounded by the worst-case coverage of any MDP in $\mathcal{M}$ with decisions limited to $\Pi$. (see Section 6.1 in their paper for details).
- As proved in our Lemma 1, $\mathfrak{C}(\Pi)$ is equal to the worst-case coverage of any $\mathcal{M}$ with decisions limited to $\Pi$.
- The two results together imply that the DEC is upper bounded by the spanning capacity.
On the lower bound side, it is not clear if $\mathfrak{C}(\Pi) \leq \mathrm{Dec}(\mathcal{M}; \Pi)$. A direct comparison of our results with those for Foster et al. 2021 with the model class $\mathcal{M}$ and decision space $\Pi$ is not fruitful. On the one hand, our Theorem 2 suggests that the lower bound for Agnostic RL scales as $\mathfrak{C}(\Pi)$. On the other hand, Foster et al. 2021 obtain an upper bound of $\mathrm{Dec}(\mathcal{M}; \Pi) \log(|\mathcal{M}|)$ samples. Comparing the two establishes $\mathfrak{C}(\Pi) \leq \mathrm{Dec}(\mathcal{M}; \Pi) \log(|\mathcal{M}|)$ which is a vacuous comparison as $\log(|\mathcal{M}|) \propto |\mathcal{S}|$ could be prohibitively large for the set of all stochastic MDPs.
**Examples of Expressive Policy Classes with Bounded Spanning Capacity.** Since spanning capacity is a worst-case measure, many natural examples do not admit policy classes with bounded spanning capacity. For the class of ``linear policies'' that the reviewer asked about, from the prior works, we know that:
- Linear MDP assumption + linear policy class: it is possible to find an $\epsilon$-optimal policy in $\mathrm{poly}(d)$ samples (see Theorem 3.1 of [1]).
- Linear policy class: if we don't assume some form of realizability (either of the dynamics class or value functions), there is an exponential in $H$ lower bound (see Theorem 4.3 of [2]).
In fact, we can show, more generally, that any policy class with large threshold dimension (a learning theoretic dimension which is qualitatively equivalent to Littlestone dimension) must also have a large spanning capacity. This recovers the exponential in $H$ lower bound in [2] for linear policies. We will add more details discussing this in the final version.
[1] Chi Jin, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan. "Provably Efficient Reinforcement Learning with Linear Function Approximation."
[2] Simon S. Du, Sham M. Kakade, Ruosong Wang, Lin F. Yang. "Is a Good Representation Sufficient for Sample Efficient Reinforcement Learning?"
---
Rebuttal Comment 1.1:
Title: Reply to rebuttal
Comment: I would like to thank the authors for their detailed response to my questions. I believe the majority of my concerns have been addressed, and would encourage the authors to include the answers they have given here in the final version. While I think there are still certain shortcomings to this work (lack of necessity of sunflower property, very worst-case nature of spanning capacity), I believe it does make an interesting and novel contribution to the RL literature, and will raise my score to a 7. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time and valuable feedback. Since concerns about the sunflower property were raised by all the reviewers, please find a shared response below. If you have any further concerns, please let us know, and we would love to discuss further.
**Necessity of the Sunflower Property.** While we show that the sunflower property is sufficient (in addition to bounded spanning capacity), we do not know if the sunflower property is also necessary for agnostic PAC RL in the online interaction model. However, we believe that it is necessary and have *strong evidence* to support this belief:
- All the explicit examples of policy classes that we considered in Section 3, and which are agnostically PAC learnable in online RL, satisfy it. To the best of our knowledge, we do not know of any policy class that is agnostically PAC learnable in online RL but does not satisfy the sunflower property.
- The *only* policy class we know of that is not agnostically PAC learnable in online RL despite having bounded spanning capacity (the policy class from Theorem 3) violates the sunflower property. Furthermore, even the construction in Theorem 3 is *nonexplicit* (based on a probabilistic argument) so we think finding an explicit construction would be nontrivial.
Exploring the necessity of the sunflower property, and/or establishing the necessary conditions on $\Pi$ for agnostic learnability in online RL is an exciting direction for future work.
**Further Intuition on the Sunflower Property.** At a high level, the sunflower property captures the intuition that there exists a small set of policies $\Pi\_{\mathrm{core}}$ that,
when executed, can cover most of the trajectories that would be explored by other policies in $\Pi$. In particular, each policy $\pi \in \Pi$ can only deviate from $\Pi\_{\mathrm{core}}$ on a small set of states $\mathcal{S}\_\pi$. Intuitively, the sunflower property allows the learner to extrapolate data collected by executing the small set of policies $\Pi\_{\mathrm{core}}$ to estimate the values of other policies $\pi \in \Pi$. Informally speaking, a shared structure similar to what is captured by the sunflower property seems to be crucial in order to avoid a linear dependence on $|\Pi|$. In prior works that make further assumptions on the model dynamics, e.g. Bellman-Eluder classes, such shared structure was enforced via *modeling assumptions on the MDP*; our work explicitly aims to avoid making any MDP modeling assumptions.
Finally, we also remark that, even with the sunflower structure, the problem is challenging, as the number of leaf states $\\{\mathcal{S}\_\pi\\}\_{\pi \in \Pi}$ could be very large (in fact, it can scale linearly with $|\Pi|$ for some of our examples). Our algorithm performs a novel, non-trivial exploration approach using policy-dependent Markov Reward Processes in order to identify the relevant leaf states and achieve sample complexity that only scales with $D$ and $K$. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Optimal and Fair Encouragement Policy Evaluation and Learning | Accept (poster) | Summary: There exist (many) healthcare programs where it is impossible to compel individuals to take treatment; rather, the problem of solving for an optimal policy takes the form of providing beneficial services and recommendations to patients. This paper formalizes a causal framework for these cases and develops statistically improved estimators and robustness checks for the setting of algorithmic recommendations with sufficiently randomized decisions. The contributions of this paper are both theoretical and empirical in nature.
Strengths: I believe that this paper is clearly written. While I struggled to understand the work in this paper, I believe that my challenges were due to not being an expert in this field. In fact, I have learned something about causality from reading this paper.
The problem that this paper is attempting to address is very interesting to me, and the empirical results on a variety of applications illustrated the use and significance of this paper’s contributions.
Weaknesses: I have not identified any core weaknesses in this paper.
There’s an unattributed quote block on line 33.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: N/A
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Comment: Thanks for the review! The quote block is our own emphasis. | Summary: The authors study the problem of providing treatment recommendations in systems where those receiving recommendations possess the ability to dissent or deviate from the suggested course of action. Within these types of problems, the authors develop a method for providing treatment recommendations which can both account for this ability in recommendees and adhere to notions of fairness in terms of successful treatment adoption. The authors provide several theoretical results characterizing the problem space and their approach, as well as experimentally demonstrating its efficacy. On top of this, the authors adapt the classic two-player cost-sensitive learning optimization technique found in prior works to their problem and introduce several interesting additions to account for the additional considerations of their problem.
Strengths:
- The problem studied in the paper is important and helps bridge the gap between works on treatment recommendation which ignore the autonomy of the recommendee and the real world, in which humans will inevitably express dissatisfaction or apathy towards recommendations.
- The paper is well written and the authors clearly outline their approach, relevant background, and possible limitations.
- The authors provide a mix of theoretical results (characterizing both the problem and properties of their approach) and experimental results.
- The assumptions made in the authors' model are reasonable. The least standard assumption appears to be Assumption 2, which may not always hold in practice (as the authors point out), but this assumption is a good jumping-off point for the type of problem the authors wish to investigate. Moreover, in most “reasonable” domains I would expect that the majority of the effect of the recommendation is its ability to alter treatment adoption rates.
- The modifications to the traditional reductions approach are interesting and provide some useful new ideas from a technical standpoint.
Weaknesses: - The experimental results are missing baseline comparisons. For example, what would happen if we assumed a 100% compliance rate in terms of recommendations; applying such an approach would provide information on how much we gain by considering agent autonomy.
- It would be informative to see some type of running time analysis or discussion of how easy it is to solve the objectives presented on line 219. For example, computing a best response amounts to cost-sensitive learning at each round, which I would expect is extremely costly unless only a few epochs of cost-sensitive learning are performed at each round. Although this approach is heavily inspired by (and similar to) the method in Agarwal et al., which is very fast in practice, it is hard to see how much extra compute is required as a result of the difference between the two approaches. The authors' approach feels more similar to adversarial training, which is known to be quite expensive and to scale poorly with larger models and feature spaces (but perhaps I am wrong about this). Any clarification would be appreciated!
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the summary and assessment of strengths!
We respond to questions/limitations/weaknesses.
1. Baselines:
Good question. Note that we actually plot the full curve of objective value and constraint reduction in Fig. 2. To your question about what happens if we wrongly assume a 100% compliance rate: optimizing over an unconstrained policy class, the policy decision is the same, assuming monotonicity (that recommending treatment only makes treatment more, not less, likely). Therefore, assuming 100% compliance is equivalent to an unconstrained policy (i.e. $\lambda = 0$). Although we increase the objective function, we incur great disparities in treatment across groups, because we wrongly assumed 100% compliance.
However, the optimal policy value could indeed be different when we optimize over a *constrained* policy class, even without any _fairness_ constraints. It depends on the magnitude of the “compliance weights” in addition to $\tau=\mu_1(x)-\mu_0(x)$ and the exact functional form. For example, consider a simple two-dimensional case with a linear decision rule. Suppose $\tau = -1$ for $x_1 > 0$ and $\tau = 0$ for $x_1 < 0$ (a good reduction in costs when $x_1>0$), but compliance differs: for $\{x_1>0, x_2 > 0\}$ we have that $p_{1\mid 1}-p_{1\mid 0} = -\epsilon$, while for the other subregion, $\{x_1>0, x_2 < 0\}$, we have that $p_{1\mid 1}-p_{1\mid 0} = 1$. Then the optimal encouragement-aware linear decision rule treats $\{x_1>0, x_2 < 0\}$, but wrongly assuming 100% compliance treats $x_1>0$, which does worse! We will add this simple example to illustrate what can go wrong.
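This example can be checked numerically. The following sketch (all names and the choice $\epsilon = 0.1$ are illustrative, not from the paper) compares the expected cost change of the naive 100%-compliance rule against the encouragement-aware rule on the two $x_1>0$ subregions:

```python
# Illustrative sketch of the two-dimensional example above (hypothetical
# numbers). Costs: tau = -1 on x1 > 0 (treatment reduces cost).
# Compliance effect of a recommendation: -eps on {x1>0, x2>0},
# +1 on {x1>0, x2<0}.
EPS = 0.1

# (region name, probability mass, tau, compliance effect) --
# equal mass on the two x1 > 0 quadrants for simplicity.
regions = [
    ("x1>0, x2>0", 0.5, -1.0, -EPS),
    ("x1>0, x2<0", 0.5, -1.0, 1.0),
]

def policy_cost(recommend):
    """Expected cost change from recommending in the given regions.

    Recommending in a region changes expected cost by
    mass * tau * compliance (the compliance-weighted CATE).
    """
    return sum(m * tau * c for name, m, tau, c in regions
               if name in recommend)

# Naive rule (wrongly assumes 100% compliance): recommend wherever x1 > 0.
naive = policy_cost({"x1>0, x2>0", "x1>0, x2<0"})
# Encouragement-aware rule: recommend only where tau * compliance < 0.
aware = policy_cost({"x1>0, x2<0"})

print(naive, aware)  # the aware rule achieves strictly lower expected cost
```

With these hypothetical numbers, the naive rule's recommendation in $\{x_1>0, x_2>0\}$ *raises* expected cost (negative compliance effect times negative $\tau$), so the encouragement-aware rule strictly dominates.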
2. Running-time analysis or discussion:
Good point; we will add this discussion in the appendix.
The approach we suggest has only twice the computational complexity of the method of Agarwal et al., because our two-stage variance-reduced approach essentially runs their method twice. That’s because policy learning can be reduced to cost-sensitive learning, which is the key computational oracle used in Agarwal et al. That also means that improvements in cost-sensitive learning, such as stochastic gradient techniques, can equivalently be used here to reduce computational complexity. Computationally, one iteration of the approach can be as difficult as solving a mixed-integer program (i.e., solvable in practice but without poly-time guarantees), or can be solved efficiently with stochastic gradient descent on a surrogate-loss relaxation; both are common approaches in practice.
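The reduction from policy learning to cost-sensitive learning mentioned above can be sketched in a few lines (hypothetical data and a deliberately trivial "classifier"; the paper's actual oracle is any off-the-shelf cost-sensitive learner): each example gets the cheaper action as its label and the cost gap as its weight, turning policy search into weighted classification.

```python
# Sketch of the policy-learning -> cost-sensitive-learning reduction
# (hypothetical data; not the paper's implementation). Each tuple is
# (features, cost if treated, cost if not treated).
examples = [
    ((1.0, 0.2), 0.1, 0.9),
    ((0.3, 0.8), 0.7, 0.2),
    ((0.9, 0.5), 0.2, 0.6),
]

weighted = []
for x, c1, c0 in examples:
    label = 1 if c1 < c0 else 0   # the cheaper action becomes the label
    weight = abs(c1 - c0)         # how much choosing wrong would cost
    weighted.append((x, label, weight))

# Any weighted binary classifier can now serve as the oracle. As a
# trivial stand-in, pick the single constant policy with the larger
# weighted vote (a real oracle would fit a classifier on `weighted`).
total = sum(w if y == 1 else -w for _, y, w in weighted)
policy_treat_all = total > 0
print(policy_treat_all)
```

A real implementation would hand `weighted` to a classifier supporting per-example weights; the point is only that the oracle interface is standard, which is why stochastic-gradient speedups carry over.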
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you for the clarifications. My concerns have been addressed. Mentioning the complexity and runtime with respect to methods such as that of Agarwal et al. would help contextualize the efficiency of the method. Based on my reading of the paper, I expected the method to be quite intractable at scale, but based on the rebuttal this is not the case.
After reading the other reviews and the authors' responses, I will maintain my original score. | Summary: The paper addresses a consequential decision-making situation where we have a decision-making policy that outputs decisions (referred to as recommendations) R, individuals who may adhere to the decisions and realize a treatment T, which then leads to an outcome Y. The problem arises when individuals do not adhere to the decisions (i.e., follow the recommendations) and thus do not realize a treatment T, thus not realizing any outcome Y. The decision-making policy does not have direct control over treatment, but can only provide decisions (recommendations for treatment). Fairness comes into play when, amongst those people that received a positive decision (recommendation), certain social groups are less likely to adhere to the decision and thus realize an outcome. The paper suggests a method to learn a decision-making policy that assigns recommendations to individuals so as to fulfill certain fairness criteria in the treatment realized, e.g., reducing the disparity in treatment. The authors present a case study on "failure to appear before court" data (PSA-DMF).
Strengths: The paper studies an interesting problem that can be relevant to the ML community working on consequential decision making. In the classical loan example, we generally assume that an individual who is granted a loan will utilize it, overlooking the possibility that some individuals might choose not to realize the loan. When individuals from certain demographics systematically underutilize positive decisions, this has fairness implications that are important to consider when designing decision-making policies.
Weaknesses: While I believe the problem tackled is interesting, I strongly believe the paper is not yet ready for publication. The paper lacks structure and clarity. I am enlisting some important points below.
On structure:
1. Section 1: The last paragraph of the introduction summarizes the contribution. It would help, if the authors could point to the specific sections, where these contributions are made and provide an overview of the structure of the paper.
2. Section 3: There is no background section. The paper would benefit from a background section to understand the introduced method better. For example, I do not find an introduction of the Neyman-Rubin potential outcomes framework that the work claims to use (ln. 95-96). There is also no introduction to policy value functions or doubly robust optimization/policies.
3. Section 3: The introduction of notation and concepts may benefit from introducing concepts as definitions, e.g., the cost function. There is also a lack of introduction of notation, e.g., what $p_{t|r}$ refers to.
4. Section 3: The mathematical definitions / equations do not carry any equation numbers. The work could benefit from adding equation numbers, such that certain formulas or definitions are easier accessible.
5. Section 4: This section could benefit from an introduction to what it is about. I do not understand why this is called "Method" when section 5 also introduces a method ("We now introduce general methodology to handle ...", ln 214)
6. Finally, there is no discussion/limitations/outlook section. The paper would benefit from a section that summarizes the work, discusses its contribution and limitations, and provides an outlook.
Clarity: I find it, in general, very difficult to follow the authors in their main idea. I am sharing a selection of difficulties I had with respect to clarity:
1. Section 1: I am missing clarity in the pipeline of decisions: why is a human in the loop who makes the final prescription a problem? I understood the problem to be that people, even if they receive a prescription, may not comply with it. So I understood the problem to be rather on the side of the subject that receives the treatment. Could you clarify?
2. Section 2: The related work section cites work relevant to the field. However, I am missing clarity in how the previous work relates to the current work. There is a list of works and descriptions of them, but often I am missing how they relate. For example, work [37] is cited twice, in line 62 and then again in line 65; I am not understanding that split. In line 72 there is also a mention of "supervised release", without explaining it. You also write in ln. 81 "a different line of work studies counterfactual risk assessment, which models a different concern"; how is that different?
3. Section 3: There is an introduction of social groups "regarding fairness, we will be concerned about disparities in utility and treatment benefits across different groups", but then I find "utility" and "treatment benefits" not explained.
4. Section 3: While assumptions 2, 4, 6 are addressed in lines 126 ff., I am missing an explanation of the rest of the assumptions.
5. Section 4 & 5: I am failing to follow the story and propositions. The paper would benefit from a clear explanation in words of i) what the proposition says / what the formulas express and ii) why it is important for the remainder/goal of the paper.
6. Section 6: I do not understand the motivation or story of the case study in the experimental evaluation. I am missing a description of the experimental setup.
7. Section 6: The figures are missing an explanation of what the different parameters are, e.g., $\tau$ etc.
8. A general point: I find the usage of the word "recommendation" difficult in the context of fair decision making (and here the authors use binary decisions, as far as I understand). This is because there is a different field of fairness literature that concerns recommendations in the context of ranking, which is different from decision making.
In addition I have the following comments/concerns about citations:
1. I am missing citations in the introduction. For example, in line 38ff. "a common strategy is to conduct an intention-to-treat analysis"; in line 47 "previous work in algorithmic accountability primarily focuses on auditing recommendations".
2. I looked up citations [20, 21] and did not find "the well-understood notion of non-compliance/non-adherence". Can you point me to this one? Also, I found [21] to be a presentation on causal inference, not a (published) paper or book; is this correct? I am not sure about the quality of that citation.
About formatting:
1. line 33 ff. is differently formatted than the text below; I do not understand why.
2. Margin violations, at proposition 2, 5, and 6
Typos:
* ln. 85 "don't" -> do not
* ln. 108 do you mean $\mathcal{T} = \{0,1\}$ instead of $\mathcal{T} \in = \{0,1\}$
* ln. 129 one blank too many after X
* ln. 291 "arbitrarily"
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: There might be parts of their methodology or approach I simply did not understand due to the lack of explanation.
1. Section 3: I find the equations in line 112 not introduced, what is $e_r$, what is $\mu_{r, t}$ and what do they mean, what will we need them for?
2. Section 4: line 176 "since algorithmic recommendations are deterministic functions of covariates" - what do you mean by this? Decision policies can be probabilistic [1, 2], as far as I am concerned. Or am I missing the point?
3. Why are you using doubly robust estimation? From the text, I am not understanding why this is necessary. Also it may be worth stating, why you do not simply use inverse propensity scoring.
4. In Sections 4 and 5 I am failing to understand the fairness understanding present in the work. Could you point me to where this is defined and detailed?
5. What are you assumptions in the experimental section as to how individuals realize treatments?
[1] Bechavod, Yahav, et al. "Equal opportunity in online classification with partial feedback." Advances in Neural Information Processing Systems 32 (2019).
[2] Kilbertus, Niki, et al. "Fair decisions despite imperfect predictions." International Conference on Artificial Intelligence and Statistics. PMLR, 2020.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 1 poor
Limitations: A discussion or outlook section is missing. The authors do address limitations of some of their assumptions in lines 126
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We respond point-by-point below. Note many of these points are *already* in the paper.
1. Adding section numbers to our narrative description is straightforward and we will do so.
2. No, our background is split between the related work and problem setup.
We introduce policy value functions in lines 113-118. Re: assumptions, see response to Reviewer QSPP, weakness 2. We will go into greater description in the appendix, and include the following sentences about the standard DR estimator:
> “For complete background, we describe estimation improvements in standard causal inference for estimating the average treatment effect. The celebrated doubly-robust estimator for the ATE is [Robins and Rotnitzky 1995]: $$V_{DR}(\pi) = \sum_{t \in \mathcal{T}} E[ \pi(t\mid X)( \mu_t(X) + \mathbb{I}[T=t] (Y-\mu_t(X)) / e_t(X)) ] $$ Doubly robust estimators use both the outcome model and propensity score model and enjoy 1) robustness to model misspecification: only one of the propensity score and outcome models needs to be well-specified, and 2) rate double-robustness: we only require the product of the MSE convergence rates to achieve $n^{-\frac 12}$ consistency. One last perspective on doubly-robust estimators is that they are control variate estimators; this perspective is closest to the developments in our paper. Looking at the form of the doubly-robust estimator, it adds a zero-mean term to the regression adjustment estimator.“
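The double-robustness property described in that quote can be seen in a minimal simulation sketch (all names and numbers hypothetical, stdlib only): with the true propensity score but a deliberately misspecified outcome model, the standard AIPW/doubly-robust estimator still recovers the true ATE.

```python
import random

random.seed(0)

# Simulated data: true propensity e(x) known, outcome model deliberately
# misspecified -- the DR estimator should still recover the true ATE (= 2).
n = 20000
data = []
for _ in range(n):
    x = random.random()
    e = 0.3 + 0.4 * x                    # true propensity P(T=1 | x)
    t = 1 if random.random() < e else 0
    y = 2 * t + x + random.gauss(0, 1)   # true ATE is 2
    data.append((x, t, y, e))

def mu_hat(t, x):
    """Misspecified outcome model (ignores x entirely)."""
    return 1.5 * t

# Doubly-robust ATE:
# E[ mu1 - mu0 + I[T=1](Y - mu1)/e - I[T=0](Y - mu0)/(1 - e) ]
dr = sum(
    mu_hat(1, x) - mu_hat(0, x)
    + (t / e) * (y - mu_hat(1, x))
    - ((1 - t) / (1 - e)) * (y - mu_hat(0, x))
    for x, t, y, e in data
) / n

print(dr)  # close to the true ATE of 2 despite the bad outcome model
```

Swapping in a misspecified propensity but a correct outcome model would give the mirror-image demonstration of the "two chances" property.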
3. Sec. 3 is a succinct formal problem setup. We will add,
> “The cost function puts recommendations, treatments, and causal outcomes in the same “currency”, for example, a common cost basis to compare utility of, for example, the final purchase value if someone uses a 10 dollar off coupon vs. the cost of someone using treatment, e.g. the 10 dollar discount.”
4. Adding equation numbers is straightforward. But we *already* have assumption/proposition labels, e.g. line 112, line 173, 177, 247, 255, 292.
5. No, we *already* have an introduction there in lines 146-153: we summarize that we study two regimes, causal identification and estimation. We will add a summary sentence.
6. We discuss limitations throughout. See response to 2WiP.
Clarity:
1. Algorithmic auditing looks at disparities in $P(R=1|A=1)-P(R=1|A=0)$ instead of $P(T=1|A=1)-P(T=1|A=0)$. This can be an issue whether it is the subject not complying with the treatment, or another decision-maker. We describe this in lines 46-48 (fairness constraints should be on realized treatments, not algorithmic recommendations) and lines 118-120 (the optimal decision rule can be different).
2. No, after most works mentioned, we exactly describe how our work is similar and different. See line 65, 66-68; where we introduce a related work, we contrast with “but (without/not), however, rather than, our focus is instead on”. Counterfactual risk assessment still assesses fairness of the high-risk/low-risk labels (recommendations), not the realized decisions.
3. We provided multiple examples of utility of outcomes vs. benefits from treatment access alone in the introduction in 25-31, 36-49.
4. The other assumptions are standard in causal inference, as we mentioned. See response to Structure 2).
5. We already did this. Every proposition has a sentence right before it describing the result in words: “we discuss causal identification (Prop. 1), we characterize a threshold solution.. (Prop. 2)”. E.g. lines 163-164 say that we use Prop. 2 to obtain the next proposition. We can make these transitions longer.
6. Lines 261-287 and lines 293-300 (interpretation of fairness/treatment costs) describe the story (summarized for space constraints). We will add a few sentences:
> Judges can choose to detain, (unconditionally) release, or release with supervision (supervised release) defendants when they are arrested before their trial. … lines 262 - lines 267 … Supervised release is an example of the second regime: recommendations are made with a human in the loop who makes the final decision, and we are concerned about disparities in outcomes and treatments.
7. See line 285. $\tau = \mu_1(x) - \mu_0(x)$. Will add.
8. We use the terms “algorithmic recommendation, recommendation for treatment, encouragement/recommendation” to avoid confusion.
Citations:
1. Many papers audit fair binary classification in a consequential domain such as social services (with caseworkers) [1], healthcare [2], criminal justice [3] where decisions are not automated.
[1] Chouldechova, Alexandra, et al. "A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions." FAT*, 2018.
[2] Pfohl, Stephen R., Agata Foryciarz, and Nigam H. Shah. "An empirical characterization of fair machine learning for clinical risk prediction." Journal of biomedical informatics 2021.
[3] Stevenson, Megan T., and Jennifer L. Doleac. "Algorithmic risk assessment in the hands of humans." (2022).
2. No, [21] is a book: Hernán MA, Robins JM (2020). Causal Inference: What If. Boca Raton: Chapman & Hall/CRC. See 22.1.
Pages 12-15 of [20] discuss encouragement designs related to Appendix D.1.
Questions:
1. No, we said what they are on line 112: recommendation ($e_r$), treatment propensity ($p_{t\mid r}$) and outcome ($u_{t}$) models for causal identification and estimation.
2. line 175 says these are from deterministic binary classifiers (as common in the fairness literature).
3. Doubly robust estimators are standard important estimators in causal inference that reduce variance in estimation. See our response to your structure 2.
4. Equation (1) has the main fairness constraint we use as an example in the main text. After line 218 we refer to more detail in appendix B.2.1 and B.2.2. We described what fairness constraints are relevant in lines 36-29.
5. We said this in lines 272-275 but can add numbers. The ones relevant to treatment realizations are Assumptions 3 (unconfoundedness), 4 (responsivity) and 6 (overlap).
---
Rebuttal Comment 1.1:
Comment: Thank you again for your concerns. We want to follow up on the rebuttal. We understand that you may have a busy schedule, but we would greatly appreciate it if you could take a moment to review our response and provide any additional feedback.
If you find our response useful, please consider updating your score. We hope we have thoroughly clarified that many of these points are already in the manuscript and are not fundamental flaws, though we will be sure to make these points absolutely clear.
Lastly, if there are any specific areas of concern that we can address or provide additional clarification on, please do not hesitate to let us know. | Summary: This paper focuses on fair optimal decision rules, enhancing statistical estimators, and robustness checks for algorithmic recommendations with randomized decisions. It introduces a two-stage procedure with a complexity bound for optimizing within a constrained policy class, ensuring less conservative out-of-sample fairness constraint satisfaction.
Strengths: 1. The paper addresses an interesting topic, considering the randomness caused by humans in the loop and providing fairness guarantees. The technical aspects of the paper are robust and well-founded.
2. The paper explores two settings: one where R is randomized and satisfies overlap, and the other where R is deterministic but does not satisfy overlap. Comprehensive results are presented for both settings.
Weaknesses: 1. The fairness constraint considers the expectation of treatment but not recommendations which are the outcome of the algorithm. The reason for this is not clearly explained.
2. The assumptions made in the paper are not adequately cited or explained. While the author states that most assumptions are standard, some of them appear to be quite strong, such as assumption 5, which requires a strong decomposable linear function, and it's unclear if it can be satisfied in general.
3. There are some minor points that need clarification:
a. In Assumption 6, the symbol '\leq' may need to be changed to '\geq.'
b. Line 110 defines a cost function, but it seems like it should be referred to as a utility function, as indicated in line 157.
c. The conditions of each proposition are not clearly stated within the propositions, which causes some confusion, considering the numerous assumptions and settings.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: In proposition 2, L looks complicated. Is it explainable?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: I may raise my score if the questions/weaknesses are properly explained.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review! Below we make some clarifications, which we hope will help clear up that these are not weaknesses of the paper. We will add these explanations to clarify.
Weaknesses
- 1: See lines 40-46 which discuss scenarios where the fairness constraint should be on the expectation of treatment but not recommendation. Line 288 also quotes a report citing this motivation in the case study.
Algorithmic auditing on the recommendation does not actually guarantee that there will not be disparities in realized treatment decisions, e.g. who actually finally redeems an offer or signs up for health insurance or receives preventive/punitive services. In a perfect world where treatment decisions by human decisions were optimal, there would be no difference in auditing recommendations vs. treatment decisions. But they are not, and this can be a central source of disparities; see e.g. discussion of administrative burden/cognitive burden affecting marginalized populations the most, making it more difficult to sign up for beneficial services, or due to discriminatory behavior of human decision makers. Imposing fairness constraints on treatment is a stronger guarantee of actual reductions in disparities.
- 2. We will add an extra citation to causal inference textbooks here [21] (Hernan and Robins’ Causal Inference) and Imbens and Rubin [1] for the standard assumptions 1,3,6.
[1] Imbens, Guido W., and Donald B. Rubin. Causal inference in statistics, social, and biomedical sciences. Cambridge University Press, 2015.
Overall we propose to add the following sentences to the main text to further explain standard causal inference assumptions (we also included this in the rebuttal to all authors):
> In the standard Neyman-Rubin potential outcomes framework, individuals have a random vector of potential outcomes $Y(t)$ (indexed by treatment levels), but in observational data, under Assumption 1 (consistency and stable unit treatment values assumption) we only observe outcomes masked by the actual treatment assignment, aka $Y_i=Y(T_i)$, and under Assumption 3 (unconfoundedness) treatment is as-if randomized conditional on covariates. Assumption 3 is satisfied by design in randomized trials and otherwise is an assumption about the data-generating process, i.e. human decision-makers weren’t basing treatment decisions on unmeasured confounders unavailable in our dataset.
Also please note that in lines 128-142 we spend a lot of time explaining the non-standard assumptions specific to our paper.
Finally, Re: Assumption 5: note that this doesn’t impose any functional form restrictions on the conditional cost function given covariates, i.e. $\mathbb{E}[c(Y)\mid T=1,X]$ is unrestricted. This just means that we can’t directly handle counterfactual cost functions, e.g. we rule out cases like $c(r,t,y) = T * Y(1-T)$. This is because of the fundamental problem of causal inference that the joint distribution of $(Y(1),Y(0))$ is unidentified and we don’t know what the counterfactual distribution of $P(Y(1) | T=0,X)$ is.
For example, standard settings where firms get utility on the causal outcomes (i.e. revenue gained from a customer redeeming a 10 dollar off coupon and spending it) and some cost on the use of treatment (i.e. paying the 10 dollar discount) satisfy Assumption 5.
So, Assumption 5 is relatively mild; it says that we have separate cost functions for treatment outcomes and causal $Y$ outcomes. Ultimately A5 is not very restrictive; we can handle nonstandard classification-type constraints/disparities by first applying the identification argument as in prop. 7 of the appendix and treating this as a covariate-conditional treatment cost.
Thanks for your question on this; we’ll add this to the paper.
3. Thanks; that’s a minor typo, will be updated to $\nu_r, \nu_t \geq 0$. We use cost/utility interchangeably (difference is in sign). We list the assumptions together but will add which assumptions each prop. depends on. Prop 1,2,3 depend on A1-5. Prop 4 depends on A1-6. Prop 5,6 depend on A1-5 (no overlap). We will also clarify that all the propositions depend on A1-5 and what changes in different regimes is whether we assume overlap (A6) or not.
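Returning to weakness 1 above (auditing recommendations vs. realized treatments), the gap between the two audits can be made concrete in a toy numeric sketch (all rates hypothetical): both groups receive the recommendation at the same rate, so a recommendation audit passes, yet realized treatment rates differ substantially because compliance differs.

```python
# Hypothetical per-group rates. Both groups are recommended treatment at
# the same rate, so an audit of P(R=1 | A) finds no disparity -- but
# differing compliance produces a large gap in realized treatment P(T=1 | A).
rec_rate = {"a": 0.5, "b": 0.5}     # P(R=1 | A)
compliance = {"a": 0.9, "b": 0.4}   # P(T=1 | R=1, A); assume P(T=1 | R=0, A) = 0

treat_rate = {g: rec_rate[g] * compliance[g] for g in ("a", "b")}

rec_gap = rec_rate["a"] - rec_rate["b"]        # recommendation audit: no gap
treat_gap = treat_rate["a"] - treat_rate["b"]  # realized treatment: large gap
print(rec_gap, treat_gap)
```

This is the sense in which constraining realized treatment is a strictly stronger guarantee than constraining recommendations.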
Questions:
- "L looks complicated... is it explainable?"
Yes, it's explainable. Regression adjustment identification for causal inference (Prop. 1) says that, absent constraints, the optimal policy is to treat if $(\mu_1(X)-\mu_0(X)) (p_{1 \mid 1}(X)-p_{1 \mid 0}(X)) < 0$ (when outcomes are costs). Think of this as a compliance-weighted CATE (conditional average treatment effect), i.e. we weight $\tau$ (CATE $=\mu_1(X)-\mu_0(X)$) by the compliance effect of recommendations $(p_{1 \mid 1}(X)-p_{1 \mid 0}(X))$. This is the unconstrained case. If we do have constraints, then Lagrange duality says that the optimal solution is given by the optimal unconstrained solution with an additional $\lambda$-multiplier on the constraint violations. The term $\left(p_{1 \mid 1}(X, A)-p_{1 \mid 0}(X, A)\right)\frac{\lambda}{p(A)}(\mathbb{I}[A=a]-\mathbb{I}[A=b])$ is just the integrand of the fairness constraint $\mathbb{E}[T(\pi) \mid A=a]-\mathbb{E}[T(\pi) \mid A=b]$ (by iterated expectations). So $\lambda$ penalizes constraint violations, and the term multiplying it is the contribution of a datapoint to estimating the constraint violation. This characterization of the optimization over $\lambda$ is via Lagrange duality.
We'll add this to the paper, thanks!
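The thresholded rule described in that answer can be sketched as a small function (names, group labels, and numbers are hypothetical illustrations, not the paper's code): treat when the compliance-weighted CATE plus the $\lambda$-weighted fairness term is negative (outcomes are costs).

```python
# Sketch of the constrained threshold rule above (hypothetical numbers).
# For a point with CATE tau and compliance effect d_compliance, treat iff
# d_compliance * (tau + lambda-term) < 0, where the lambda-term is
# lambda / p(A) * (+1 for group a, -1 for group b).

def treat(tau, d_compliance, lam, group, p_a=0.5, p_b=0.5):
    """Encouragement-aware rule with a Lagrangian fairness penalty.

    tau          : CATE, mu_1(x) - mu_0(x)   (costs: negative = beneficial)
    d_compliance : p_{1|1}(x, a) - p_{1|0}(x, a)
    lam          : multiplier on the constraint E[T | a] - E[T | b]
    """
    sign = 1 if group == "a" else -1
    fairness = lam / (p_a if group == "a" else p_b) * sign
    score = d_compliance * (tau + fairness)
    return score < 0

# Unconstrained (lam = 0): both groups with beneficial tau and positive
# compliance effect get the recommendation.
print(treat(-1.0, 0.5, lam=0.0, group="a"),
      treat(-1.0, 0.5, lam=0.0, group="b"))
# With a large multiplier, group "a" (the over-treated side of the
# constraint) is penalized out while group "b" is still treated.
print(treat(-1.0, 0.5, lam=3.0, group="a"),
      treat(-1.0, 0.5, lam=3.0, group="b"))
```

Sweeping `lam` upward traces out exactly the objective-vs-disparity curve plotted in Fig. 2 of the paper.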
---
Rebuttal Comment 1.1:
Comment: Thank you again for your questions. We want to follow up on the rebuttal. We understand that you may have a busy schedule, but we would greatly appreciate it if you could take a moment to review our response and provide any additional feedback.
If you find our response useful, please consider updating your score to reflect the improvements made to the manuscript.
Lastly, if there are any specific areas of concern that we can address or provide additional clarification on, please do not hesitate to let us know. | Rebuttal 1:
Rebuttal: Thanks to all the reviewers for feedback and suggestions. We are encouraged that the reviewers find that we provide comprehensive theoretical/empirical results in tackling an important problem!
We respond point-by-point below and remark on some common points here. We believe these few minor sentence-level additions will improve clarity on these points. Thanks to the reviewers for identifying opportunities to clarify further.
- 2wiP and Cr9c note we don’t have a separate limitations paragraph. We propose to add the following concluding paragraph (which reiterates main limitations we discussed in the text, but explicitly labels them as such):
> “*Conclusion and Limitations*: In summary, we provide theoretical characterization of fair encouragement designs with a human-in-the-loop, algorithms, and empirical demonstration. On the sociotechnical side, the limitations of our work include that interpreting any additional constraints implemented in our framework as improving fairness will depend on the context. On the technical side, our methodology is especially tailored to Assumption 6, about extrapolating responsivity to algorithms from the training data to the deployment environment. Although we can develop robustness checks to the violation of Assumption 6, this means we are really operating in one regime of “human-AI” collaboration and this method is not necessarily appropriate in all settings. Empirical work in different applications is required to verify the appropriateness of this assumption. Interesting directions for future work include algorithms that can handle intermediate settings, or use limited online learning to assess validity of assumptions.”
- Reviewers QSPP and Cr9c note that although we take care to carefully discuss non-standard assumptions specific to our paper, in the main text we don’t explain standard causal inference assumptions (consistency/SUTVA/unconfoundedness). This is a good point: assumptions are absolutely central to causal inference and they should be explained. As mentioned in specific reviewer response, we will add the following explanation of assumptions to the main text:
> "In the standard Neyman-Rubin potential outcomes framework, individuals have a random vector of potential outcomes $Y(t)$ (indexed by treatment levels). In observational data, under Assumption 1 (consistency and the stable unit treatment values assumption), we only observe outcomes masked by the actual treatment assignment, i.e. $Y_i=Y(T_i)$, and under Assumption 3 (unconfoundedness) treatment is as-if randomized conditional on covariates. Assumption 3 is satisfied by design in randomized trials and otherwise is an assumption about the data-generating process, i.e. human decision-makers weren’t basing treatment decisions on unmeasured confounders unavailable in our dataset."
This is somewhat boilerplate for causal inference in general, which is why it wasn’t in the main text before; so adding this extra explanation doesn’t change anything about the paper.
Finally, Cr9c notes that there are opportunities to further include background material for readers unfamiliar with causal inference. Upon reflection, we recognize the writing is dense to fit our comprehensive results and we omit some background for readers unfamiliar with causal inference. But also in our response to Cr9c we note many areas where we have in fact included the information in question. We can certainly add additional background material on standard settings in the appendix, because we want to be as clear as possible to all audiences. Thanks to Cr9c for noting opportunities to improve clarity for audiences new to causal inference. On the other hand, multiple reviewers (h3Xk and xiA7) explicitly acknowledge the paper is well-written, and xiA7 even acknowledges the writing was already useful for someone new to causality. While we think adding additional emphases can help, we don’t at all think that the line edits that we included in our comprehensive response to Cr9c point to “major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations”.
We thank the reviewers for their questions and for noting opportunities for minor clarifications, and we are confident that the adjustments we have laid out will further improve the clarity of the camera-ready paper. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors characterize optimal and resource fairness-constrained optimal decision rules, and develop a doubly-robust estimator for the optimal decision rules.
Strengths: I'm not very familiar with the areas of doubly-robust policy learning and am unable to assess the paper adequately.
Weaknesses: The authors have not discussed the limitations of their work.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See **Weaknesses** section.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See **Weaknesses** section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the review! See global rebuttal where we propose to reiterate the limitations we discuss throughout the paper (e.g. our discussion right after assumptions) in a new concluding paragraph at the end.
> “In summary, we provide theoretical characterization of fair encouragement designs with a human-in-the-loop, algorithms, and empirical demonstration. On the sociotechnical side, the limitations of our work include that interpreting any additional constraints implemented in our framework as improving fairness will depend on the context. On the technical side, our methodology is especially tailored to Assumption 6, about extrapolating responsivity to algorithms from the training data to the deployment environment. Although we can develop robustness checks to the violation of Assumption 6, this means we are really operating in one regime of “human-AI” collaboration and this method is not necessarily appropriate in all settings. Empirical work in different applications is required to verify the appropriateness of this assumption. Interesting directions for future work include algorithms that can handle intermediate settings, or use limited online learning to assess validity of assumptions.” | null | null | null | null | null | null |
DNDesign: Denoising is All You Need for Protein Inverse Folding | Reject | Summary: In this work, the authors present DNDesign, a denoising training module atop inverse folding networks (IFNN). The folding physics learning plug-in module (FPLM) is trained following score-matching with noise added to the protein backbone. It also contains five operations, including summation, cross-attention, and gated attention, that integrate the features from FPLM to IFNN. Experimental results show the PiFold with FPLM achieves superior performance on CATH 4.2 and 4.3, when compared to previous IFNNs. Besides, the work introduces a fixed backbone conservation analysis based on potential energy changes to evaluate the performance of IFNNs.
Strengths: 1. The protein inverse folding problem that the paper investigates is an emerging domain for applying deep learning techniques.
2. The fixed backbone conservation analysis based on potential energy changes leverages a physical prior to evaluate the performance of IFNNs.
3. The idea of applying denoising to IFNNs is also connected to a physical prior, which is expected to boost performance on inverse folding problems.
Weaknesses: 1. The improvement of DNDesign compared to the original PiFold may not be significant.
2. The paper writing needs to be improved. Some notations and technical details are not clearly explained.
3. DNDesign is claimed to be a plug-in for IFNNs. However, it is only tested with PiFold.
Please see "Questions" for more details.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Major questions:
1. The authors mention proving that denoising training is equivalent to learning the direction of energy minimization as a contribution. However, such a relationship has been proven in previous works regarding molecules [1,2] and crystals [3]. In this work, the authors apply a similar denoising training strategy to biomolecules. Though the authors cite some of these works, the connection between this paper and previous works should be highlighted.
2. In section 2.2, the authors mention that the side chains can be reconstructed by backbone and residue types, which can be inaccurate. The side chains have degrees of freedom that may not be fully reconstructed from the backbone and residue.
3. In experiments, the authors only apply the proposed FPLM to PiFold and show improvements. How can FPLM be integrated into other IFNNs and how will FPLM affect their performance?
4. Another concern is that the improvement from FPLM may not be significant. In terms of recovery rate, the improvements shown in Table 1 are less than 1%. Besides, in Appendix A.1, FPLM shows even worse performance in PP and SR on TS50, and it is worse in SR on TS500 though slightly better in PP. Also, in Appendix A.2 for multi-chain sequence design, FPLM shows better performance in only 2 out of 4 models, though achieving better average performance. Further validation of the gain from the proposed method would be helpful.
5. Also, why apply denoising as an individual plug-in module instead of directly pre-training IFNNs via denoising? Have the authors applied the latter strategy by any chance?
6. In section 5.3, the authors use Rosetta as an oracle in evaluating the potential energy. How accurate is Rosetta in evaluating the energy? The authors are encouraged to discuss the uncertainty.
7. From my perspective, the title may overstate the contribution of the work. The proposed method is a plug-in to existing IFNNs, not a general architecture that is "all you need".
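Regarding question 1 above, the claimed equivalence is an instance of the standard denoising score-matching identity; a generic sketch in our own notation (not taken from the paper):

```latex
% Perturb a structure x with Gaussian noise: \tilde{x} = x + \sigma\varepsilon,
% with \varepsilon \sim \mathcal{N}(0, I). Then
\nabla_{\tilde{x}} \log q_\sigma(\tilde{x} \mid x) = -\frac{\tilde{x} - x}{\sigma^2},
% so a network trained to predict the noise direction estimates the score
% \nabla_{\tilde{x}} \log q_\sigma(\tilde{x}); for an energy-based density
% q \propto e^{-E}, this score equals -\nabla E, the direction of steepest
% energy descent. That is the sense in which denoising training learns
% energy minimization.
```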
Minor questions:
1. In line 35, what is "DIFM"?
2. In line 84, $n_i$ is not defined in the following Eq 1.
3. In Figure 1, the authors may consider denoting the side chains as $R_A$, $R_B$, and $R_C$ as they can be different.
4. In line 198-199, the definition of local frames $g$ is different from line 84. And SO3 vector $\bf r$ is not specified.
5. In line 233, what is "DENN"?
Reference
[1] Zaidi, S., Schaarschmidt, M., Martens, J., Kim, H., Teh, Y.W., Sanchez-Gonzalez, A., Battaglia, P., Pascanu, R. and Godwin, J., 2022. Pre-training via denoising for molecular property prediction. arXiv preprint arXiv:2206.00133.
[2] Liu, S., Wang, H., Liu, W., Lasenby, J., Guo, H. and Tang, J., 2021. Pre-training molecular graph representation with 3d geometry. arXiv preprint arXiv:2110.07728.
[3] Xie, T., Fu, X., Ganea, O.E., Barzilay, R. and Jaakkola, T., 2021. Crystal diffusion variational autoencoder for periodic material generation. arXiv preprint arXiv:2110.06197.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have discussed potential limitations in Appendix B.1.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | null | Summary: This work proposes DNDesign, a denoising-enhanced protein fixed backbone design method that effectively captures the protein energy landscape. By integrating denoising training and a plug-in module, DNDesign demonstrates its ability to generate promising protein sequences based on pre-designed structures.
In the inverse folding experiments, the method outperforms all the baseline methods. The paper also compares diversity with the baselines, demonstrating that the method can generate diverse suggestions for a designed protein.
update: I keep my score.
Strengths: The authors propose DNDesign, which enables the inverse-folding model to capture a deep understanding of folding physics that previous models do not fully exploit.
They show how DNDesign learns folding physics directly from data, and the method improves the state-of-the-art model on various protein sequence design benchmarks.
The authors further construct a fixed backbone conservation task based on the potential energy change of newly generated sequences. The analysis shows that DNDesign generates energetically favorable sequences, so it is highly likely to work in real wet-lab experiments.
Weaknesses: Figure 2 is a bit confusing. I think the 'noisy protein' and 'protein' should be a protein backbone without side chains?
The energy function and energy-based distribution part is a little confusing to me. Is the distribution (force) purely learned from the data? Or is the distribution initialized with some force field, e.g. Rosetta? If the energy is purely learned from the training data, I think the submission should have some experiments to demonstrate that the energy is meaningful.
Some typos, e.g. line 158, DEDesign.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In Table 1, the PiFold paper reports a 51.66% recovery ratio, while this submission reports 49.49% accuracy for PiFold. What's the difference between the original PiFold settings and this paper's PiFold settings?
2. In the fixed backbone conservation study, the authors measure the Rosetta energy. I wonder whether the authors can also calculate the AlphaFold scores, which are measured in ProteinMPNN. It can further show that the generated protein sequences are of high quality.
3. The denoising step sounds time-consuming. Could the authors provide a comparison of the inference time of PiFold and the proposed method?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | null | Summary: This paper proposed to use denoising diffusion probabilistic model and score-based model to solve the protein inverse folding problem, e.g., generate amino acid sequence given a protein backbone structure. Specifically, this paper used denoising diffusion model to generate unstable protein structure with higher dynamic energy and then used score-based model to learn the physical dynamic forces which drive a protein structure from unstable state to stable one. The learned physical forces are incorporated into a graph-based attention network to predict amino acid sequence. Experiments are conducted on CATH 4.2 and 4.3 datasets and compared with various approaches, showing superior performance (though not by a large margin) over the compared methods.
Strengths: (+) Using score-based model to learn physical dynamics in protein folding is novel, informative and reasonable.
(+) Code has been submitted along with the manuscript submission.
Weaknesses: (-) The paper is not well organized and written, and the logic connecting the sections/subsections is not obvious and is hard to follow.
(-) The texts in Figure 2 are too small to see them clearly with ease.
(-) The experimental results in Table 1 are a little bit marginal compared with PiFold, especially for the NAR type.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Method part:
1. In Figure 3 (left), what is the relation among G_{H, 1}, G_{H, 2} and G_{H, 3}? Is G_{H, 2} transformed from G_{H, 1} driven by folding dynamic forces?
2. How to ensure the perturbed backbone structures using Eq.2 - Eq.4 are realistic? I.e., do such perturbed structures (or similar ones) exist in nature?
3. Subsections in Section 4.4 seem to be disconnected. What's the relation between "denoising training" (Line 207, DDPM) and "Learning folding physics through denoising learning" (Line 213, score-based model)? Actually, DDPM is a concrete/special case of a score-based model; I'm not sure how the authors train the two models (Eq.5 and Eq.8) at the same time in a single DNDesign model.
4. What's the exact meaning of (1) (2) (3) (4) in Line 229?
5. Could the authors provide a more detailed caption for Figure 2 to summarize the workflow of the proposed model?
Experimental part:
6. In Table 4 in Appendix A.4, how to compare the results with and without "Learning folding physics through denoising learning" (Line 213 in the main text)?
7. Is it possible to provide some qualitative results? I.e., given a visualization of a protein backbone structure, show the predicted sequences and the ground-truth sequences?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors did not mention any limitations of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | null | Summary: This paper combines the denoising pretraining technique with protein inverse folding models, achieving competitive results to baselines. The denoising pretraining has been proven to be effective in molecule and protein representation learning. Therefore, the rediscovery of the phenomenon in protein design is to some extent straightforward.
Strengths: - The paper is clearly written. The presentation is good.
- The methodology is reasonable and convincing.
- The experimental results are positive and support the authors' claims.
Weaknesses: - The innovation seems to be limited. As denoising pretraining has been proven to be effective in molecule [1] and protein [2,3] representation learning, its success in protein design is straightforward.
- The denoising techniques and model design are largely proposed by existing work, which further limits the innovation of this paper.
- Compared to the PiFold baseline, the improvement is marginal.
[1] Zaidi, Sheheryar, et al. "Pre-training via denoising for molecular property prediction." arXiv preprint arXiv:2206.00133 (2022).
[2] Huang, Yufei, et al. "Data-Efficient Protein 3D Geometric Pretraining via Refinement of Diffused Protein Structure Decoy." arXiv preprint arXiv:2302.10888 (2023).
[3] Zhang, Zuobai, et al. "Physics-Inspired Protein Encoder Pre-Training via Siamese Sequence-Structure Diffusion Trajectory Prediction." arXiv preprint arXiv:2301.12068 (2023).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Q1: How do you use the predicted structure data of AlphaFold2? Have you clustered and partitioned these data according to sequence or structure similarity?
Q2: How does the noise scale affect the model? How do the authors adjust the $\beta$ and $\alpha$ parameters?
Q3: What are the differences of your denoising strategy against previous works[2,3]?
[2] Huang, Yufei, et al. "Data-Efficient Protein 3D Geometric Pretraining via Refinement of Diffused Protein Structure Decoy." arXiv preprint arXiv:2302.10888 (2023).
[3] Zhang, Zuobai, et al. "Physics-Inspired Protein Encoder Pre-Training via Siamese Sequence-Structure Diffusion Trajectory Prediction." arXiv preprint arXiv:2301.12068 (2023).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | null | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Conformalized matrix completion | Accept (poster) | Summary: This paper addresses the problem of uncertainty quantification in matrix completion by developing a distribution-free method for predictive inference. The authors propose a novel approach based on the conformal prediction framework, aiming to overcome the limitations imposed by stringent model assumptions such as low-rankness of the underlying matrix and the light-tailed noise. Their method, referred to as Conformalized Matrix Completion (CMC; Algorithm 1), can be combined with any arbitrary matrix completion algorithm to provide confidence intervals for the imputed matrix entries. Here, the “confidence” is measured with respect to the randomness inherent in the measurements of the matrix entries, assuming a randomized probabilistic measurement model.
To support their proposed method, the authors also present an intuitive explanation of the method and a theoretical guarantee. They start by giving an intuitive exposition of how the (weighted) exchangeability plays a crucial role in constructing confidence intervals within the conformal prediction framework (Section 3.1). Thereafter, they present Theorem 3.2, which establishes a theoretical guarantee on the average coverage of the confidence intervals, which serves as a key theoretical result of this paper. Additionally, the authors illustrate their analysis by describing two examples of the missingness patterns (i.e., the random observation models) in Section 3.3. Through this illustration, they explicitly evaluate the rate of the expected “weight gap” as expressed in Eq. (6), ultimately demonstrating satisfactory coverage of the constructed confidence intervals.
In Section 4, the authors report the results of their numerical simulations. These simulations serve two purposes: (1) comparing the performance of the proposed conformalized method against a model-based baseline (Sections 4.1.1 & 4.2), and (2) investigating the stability of the proposed method when the oracle knowledge of the measurement probability for each entries is not available (Section 4.1.2).
Strengths: One of the primary strengths of this paper lies in its adaptation of the conformal prediction framework to address the challenge of uncertainty quantification in matrix completion. Unlike previous approaches in the literature, this method does not heavily rely on specific model assumptions. This combination of conformal prediction and matrix completion is a significant and original contribution, making the paper stand out in its field.
Another notable strength is the careful organization of the paper, which effectively presents the essential components of the study to support the authors' main claims. Section 2 provides a concise overview of the problem setup and evaluation metric. The proposed method is then described in detail, accompanied by an intuitive explanation based on the concept of exchangeability in Section 3.1. The main theoretical result is presented neatly in Section 3.2, and to further enhance the understanding, concrete examples of the missingness patterns are provided in Section 3.3. The paper is further strengthened by the inclusion of numerical study results in Section 4 and a comprehensive discussion in Section 5. As a result, the paper forms a clear and self-contained report of the study, ensuring that the readers can grasp the main ideas and findings with ease.
Overall, the adaptation of conformal prediction, the clear organization of the paper, and its ability to convey the study's main contributions effectively are key strengths of this research work.
Weaknesses: While the paper exhibits notable strengths, there are a few areas that could benefit from further improvement and clarification. I have identified three main concerns that warrant attention.
Firstly, the authors frequently emphasize that their proposed method is "distribution-free" and does not rely on “any” assumptions about the underlying matrix (e.g., in line 60). However, it is important to explicitly acknowledge that the proposed method and analysis do depend on the probabilistic random observation model, which is necessary for the conformal prediction framework. While this limitation does not appear to be critical, it would be beneficial to provide a careful clarification of the assumptions and limitations to avoid potential confusion.
Secondly, although the authors highlight the limitations of existing uncertainty quantification approaches (lines 24 - 30), there is a lack of comparisons between the proposed method and these previous approaches. Consequently, it remains unclear whether and how the proposed method surpasses these existing methods. Including such comparisons would enhance the clarity and strengthen the argument for the superiority of the proposed approach.
Additionally, the issue of heavy-tailed noise is briefly mentioned by the authors (lines 32 - 33), who suggest that their method is less sensitive to the noise tail (lines 32-33, lines 257-261). However, this point is not adequately addressed beyond the comparison of the vanilla ALS method to the CMC-ALS method in Figure 1-(c), where the latter seems to be slightly better than the former, but still remains overly conservative; the difference in performance between the ALS and the CMC-ALS does not appear to be substantial (approximately 9x worse vs 6x worse than the oracle). To improve the cohesiveness of the paper, it would be beneficial to either elaborate further on this point or consider removing it if it does not significantly contribute to the main findings.
In summary, while the paper possesses strengths, it would benefit from addressing these weaknesses. Clearer clarification of assumptions and limitations, comparative analyses with existing methods, and a more comprehensive discussion of the performance in the presence of heavy-tailed noise would enhance the overall quality and cohesiveness of the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I suggest the authors consider providing a clarification of the assumption on the measurement model, as mentioned in the first point of the "Weakness" section. This clarification would help readers better understand the specific assumptions and limitations of the proposed method.
2. It would be valuable if the authors could compare their method with the approaches mentioned in the second paragraph (lines 24-30) by conducting simulations or other appropriate means. Such comparisons would enable a clearer assessment of how the proposed method outperforms or differs from existing methods.
3. I am curious about the degree to which the proposed CMC method relies on the uncertainty estimate \hat{s}. Many matrix estimation algorithms only provide point estimates for the entries without accompanying uncertainty estimates. In Algorithm 1, when uncertainty estimates are unavailable, the authors set \hat{s}_{ij} = 1 by default (line 3). However, this default choice can be problematic. For instance, consider two scenarios: (1) estimating M and (2) estimating 10M. In scenario 2, the uncertainty should be ten times larger than in scenario 1, but the default choice cannot account for this. It would be beneficial if the authors could address this issue and discuss potential alternatives or adjustments to handle situations where uncertainty estimates are not provided.
4. In Section 3.3, it might be helpful for readers to have either the upper bound for the expected weight gap or the resulting lower bound for average coverage presented as explicit corollaries.
5. The caption of Figure 2 does not provide sufficient information to understand what is being compared. It would be helpful to revise the caption to clearly indicate the elements being compared or provide a brief description of the comparison being made.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: While the authors outline potential future research directions, they do not explicitly address the limitations of their proposed method and approach. It would be beneficial if the authors could include a brief discussion of the technical limitations, such as the assumptions made in their approach. Additionally, providing insights into potential negative impacts when applying the method in real-world applications would further enhance the paper's practical relevance. However, it should be noted that as this paper primarily focuses on theoretical aspects, addressing these limitations in detail may not be deemed critical.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful comments and feedback! Our replies below address the individual points raised in your review.
- In response to the 1st Weakness (“Firstly, the authors frequently emphasize that their proposed method is "distribution-free"…”),
Thanks for the suggestions. We will add clarifications about the assumptions and limitations in the revision.
- In response to the 2nd Weakness (“Secondly, although the authors highlight the limitations of existing uncertainty quantification approaches…”),
In fact, in our experiment, we did compare our conformal methods with the existing uncertainty quantification approaches (lines 24 - 30). But we forgot to add references. In the updated manuscript, we will add corresponding references to existing approaches.
- In response to the 3rd Weakness (“Additionally, the issue of heavy-tailed noise is briefly mentioned…”),
Thanks for the suggestion on this heavy-tailed setting. In the appended pdf [Figure 1(l)], we further show the ratio between the obtained length and the oracle length for both als and cmc-als. Although the heavy-tailed problem is indeed a hard one for statistical inference, we can see that cmc-als substantially corrects the conservativeness of als. Moreover, in this heavy-tailed setting, we can see from the error bars that als is much more variable than cmc-als.
- Question 1 is addressed in the response to the 1st Weakness, above.
- Question 2 is addressed in the response to the 2nd Weakness, above.
- In response to Question 3,
Thank you for this question - this is an important point and we will be sure to explain more clearly in our revision. The values $\hat{s}_{ij}$ capture the estimated relative, rather than absolute, noise levels across the different entries. Rescaling the entire matrix $\hat{s}$ by a constant (i.e., replacing $\hat{s}_{ij}$ with $\hat{s}^\prime_{ij} = c\cdot \hat{s}_{ij}$, for the same value $c$ for all $(i,j)$) does not change the outcome of the procedure; running with $\hat{s}^\prime_{ij}$'s instead of $\hat{s}_{ij}$'s would result in a quantile value $\hat{q}^\prime = \hat{q}/c$, and the resulting prediction intervals would therefore be identical. This means that, if we set $\hat{s}_{ij}=1$ for all $(i,j)$, this ensures that all the entries will be given prediction intervals of equal width, and can therefore be interpreted as assuming that the entries all have equal variance (but we are not assuming that the entries have variance equal to 1).
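This invariance is easy to check numerically. Below is a self-contained sketch with simulated calibration data and standard split-conformal standardized scores (our illustration, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
m_hat = rng.normal(size=n)                      # point estimates on calibration entries
m_true = m_hat + rng.normal(scale=0.5, size=n)  # observed entries
s_hat = rng.uniform(0.5, 2.0, size=n)           # relative uncertainty estimates

def half_widths(s, alpha=0.1):
    # standardized conformal scores and their (1 - alpha) empirical quantile
    scores = np.abs(m_true - m_hat) / s
    q_hat = np.quantile(scores, 1 - alpha)
    return q_hat * s                            # per-entry interval half-widths

w = half_widths(s_hat)
w_rescaled = half_widths(3.0 * s_hat)           # rescale every s_hat by the same c
assert np.allclose(w, w_rescaled)               # intervals are unchanged
```

Rescaling $\hat{s}$ by $c$ divides every score, and hence $\hat{q}$, by $c$, so the products $\hat{q}\cdot\hat{s}_{ij}$, and thus the intervals, are unchanged.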
- In response to Question 4,
Thank you for the suggestion! The detailed result for Example 3.3.1 is shown in Section A.3.1 in the appendix and the detailed result for Example 3.3.2 is shown in Section A.3.2. To improve clarity, we will write them as explicit theorems and add pointers to them in the main text.
- In response to Question 5,
Thanks for the suggestions. We will add more details to the figure captions for Figures 1, 2, and 3.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their efforts in providing a rebuttal and clarifications to address my concerns and questions. With confidence in the authors' commitment to further refining the manuscript in preparation of the camera-ready version, I reaffirm my initial moderately positive assessment. | Summary: This paper utilizes conformal inference techniques to address uncertainty quantification in the matrix completion problem. The authors present a novel method for constructing prediction sets for the missing entries estimated by any given matrix completion algorithm, employing a simple data hold-out strategy. The underlying assumption is that the data matrix $M$ is fixed, while randomness arises from the missingness of the matrix entries.
Specifically, each index is assumed to have a distinct probability of being observed, independent of the others. The authors demonstrate that when the missingness probabilities are uniform across all entries, the data become exchangeable, allowing for the direct application of standard conformal inference methods.
In cases with heterogeneous missing probabilities, the problem becomes more intricate and aligns with the conformal inference under the covariate-shift framework introduced by Tibshirani et al. (2019). To tackle this scenario, the authors propose applying the method developed by Tibshirani et al. (2019) and employ data-driven estimates of the missingness probabilities derived from parametric models.
The validity of the proposed method is established through both theoretical arguments leveraging existing results and empirical evaluation using synthetic and real data experiments. These experiments effectively demonstrate the efficacy and practical applicability of the proposed approach.
Strengths: 1) This paper is both interesting and original, as it contributes to connection between conformal inference and matrix completion. The authors successfully present a reasonable missingness model and introduce a principled framework to apply weighted split conformal methods.
2) The practical usefulness and potential impact of this paper are noteworthy. Until recently, uncertainty estimation in matrix completion was relatively unexplored, and prior methods were limited due to their heavy reliance on assumptions that often do not hold in practice. Therefore, the assumption-lean approach presented in this paper holds substantial promise and has the potential to significantly impact related fields.
3) The paper is well-written, and the proposed method is clearly and effectively explained.
Weaknesses: 1) Technical novelty and originality of theoretical contributions. While this paper seems to rely heavily on the results of Tibshirani et al. (2019) and Barber et al. (2022), the relationship between these works could be explained more clearly. In particular, it would be beneficial to clarify which aspects of the proofs are novel and which can be considered as special instances of prior work.
2) Thoroughness of the numerical experiments. While the proposed method appears promising in theory, the numerical experiments lack convincing evidence. For example, in the four settings depicted in Figure 1, it would be desirable to observe under-coverage in $\texttt{als}$ when the signal-to-noise ratio is low or when the incoherence condition is violated, even with an oracle rank. However, both $\texttt{cmc}$ and $\texttt{als}$ exhibit similar performance when the rank is correctly chosen, with the advantages of $\texttt{cmc}$ primarily stemming from tuning the hypothesized rank. Further investigation and more comprehensive experiments would be useful to establish the advantages of the proposed method.
3) Realism of real-data experiments. The real data application may not provide significantly more information than a synthetic experiment since the authors begin with the full matrix $M$ and manually sample missing entries using a logistic missingness model. However, it remains unclear whether this logistic model accurately represents practical scenarios. Designing experiments based on more realistic missingness patterns would enhance the informativeness of the results. Additionally, it is expected that the benchmark $\texttt{als}$ would exhibit under-coverage in some heterogeneous settings, but the authors did not compare it to $\texttt{als}$. By neglecting this comparison, the practical relevance of the work is not made as clear as it could be.
4) Limited empirical evaluation. The authors only consider the ``average coverage rate" as defined in Equation (3), which provides a relatively weak coverage guarantee. It would be helpful if the numerical experiments also included appropriate conditional coverage metrics to provide a more comprehensive evaluation.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) Usefulness of the one-shot shortcut. The paper introduces a one-shot weighted conformal approach to reduce computational costs. However, it is important to consider potential issues with this relaxation. Firstly, the presence of extremely small probabilities $p_{ij}$ for some test points may lead to excessively large odds ratios $h_{ij}$, resulting in overly conservative predictions that may not be useful for other test points. Secondly, it is not entirely clear why the proposed algorithm without the one-shot relaxation would be prohibitively computationally expensive. Since the weight $w_{ij}$ has a simple form, it should not significantly increase evaluation time in practice.
2) Improvement of technical notation. Some technical details require clarification. For example, in line 157 on page 5, the odds ratio is defined as $(1-p_{ij})/p_{ij}$, which poses a problem when $p_{ij}=0$ for some index $(i,j)$. Additionally, in the Supplementary Material, lines 464-467, the variables $\mathbf{Z}$ and $\mathbf{W}$ are not explicitly defined, making it more difficult to verify the proof.
3) Missing citation. In line 233-237, the benchmark prediction sets are mentioned, which are believed to follow from the asymptotic results in Chen et al. (2019). However, the citation for this reference is missing. Including the appropriate citation will provide proper attribution and give readers an opportunity to explore the referenced work.
4) Consideration of alternative methods. There are other matrix completion algorithms available that provide uncertainty quantification, such as matrix completion with Gaussian Copula (Zhao et al, 2020). It would be beneficial for the authors to implement additional benchmarks using these alternative methods to strengthen the validity and robustness of the data experiments.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The main limitations of this paper concern its technical novelty compared to prior work on conformal inference and the depth of its empirical evaluations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful comments and feedback! Our replies address the individual points raised in your review.
- [Weakness 1] Thank you for the question. The main novelty of our work is in reformulating the matrix completion problem as an instance of weighted exchangeability, for which weighted conformal prediction (WCP) can be applied (Lemma 3.1). Secondly, many existing theoretical results using WCP either assume that the true weights w are known, or require ||\hat{w}-w|| to be small with high probability; this can be quite loose, and we have been able to work with an error term that measures only the expected value of the gap. We will explain the novelty and contributions more clearly in our revision.
- [Weakness 2] We set the hypothesized rank r=8 in these two settings. [Figure 1(j)] We modify Setting 2 (line 243) with d=400 and p=0.1. The coverage rate of als drops to 0.74 while cmc-als retains the desired guarantee. [Figure 1(k)] We modify Setting 4 (line 246) by replacing t_{1.2} with Cauchy(0,1) for the entries. als does not have a coverage guarantee, while the coverage of cmc-als is exact.
- [Weakness 3] Real data: In the pdf [Figure 1(e)], we consider the sales dataset with the following missingness pattern: on weekdays, each entry is observed with probability 0.8. On weekends, as stores are likely to be operated by less experienced interns or to report sales data less frequently, each entry is observed with a lower probability, e.g. 0.8/3. Moreover, as there could be a subgroup of stores that report sales less frequently overall, 200 stores are randomly sampled, for which the observation probability is 0.8/3. We use the logistic model with k=5 to estimate p_{ij}.
In the existing literature with theoretical validity guarantee, the implementation of als is not designed for heterogeneous sampling. To compare with als, we estimate p_{ij} via the uniform sampling model.
In the pdf [Figure 1(h)], we use the same setting as Figure 2(f) with hypothesized r=8 and observe a coverage gap of 1% for als. When the noise is adversarial [Figure 1(i)], cmc-als has a coverage gap of 1% due to the estimation error of w, while the coverage of als drops to 0.86.
- [Weakness 4] In the pdf [Figure 1(f)], we evaluate the local performance of cmc-als when conditioning on (p_{ij} = p_0), i.e. a subpopulation determined by the value of the sampling probability. We use the same setting as Figure 2(b), with hypothesized r=12 and p_0 ranging from 0.1 to 0.9 with a step size of 0.1. The conditional coverage is approximated via kernel smoothing: with the indicators A_{ij} for coverage, we use the weight K_{ij} = \phi_{p_0,h}(p_{ij}) and calculate the conditional coverage as (\sum A_{ij} K_{ij})/(\sum K_{ij}). Here \phi_{\mu,\sigma} is the density function of \calN(\mu,\sigma^2) and h=0.05.
When p_0 increases from 0.2 to 0.8, the conditional coverage increases and stays around 0.9. At p_0=0.1 and 0.9, the coverage varies considerably, due to the small effective sample size at the edge values of p_{ij}. Moreover, for uniform sampling, since the rows & columns are generated i.i.d., there are no meaningful subgroups of the data to condition on.
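For concreteness, the kernel-smoothed estimate described above can be sketched as follows (a toy illustration with synthetic coverage indicators, not the authors' code; note the Gaussian density's normalizing constant cancels in the ratio):

```python
import numpy as np

def conditional_coverage(A, p, p0, h=0.05):
    """Kernel-smoothed coverage near sampling probability p0.

    A : 0/1 coverage indicators for the test entries
    p : sampling probabilities p_ij of those entries
    """
    K = np.exp(-0.5 * ((p - p0) / h) ** 2)  # unnormalized N(p0, h^2) density
    return np.sum(A * K) / np.sum(K)

# synthetic sanity check: indicators with ~0.9 marginal coverage,
# independent of p, so conditional coverage should also sit near 0.9
rng = np.random.default_rng(1)
p = rng.uniform(0.1, 0.9, size=10_000)
A = (rng.random(10_000) < 0.9).astype(float)
cov = conditional_coverage(A, p, p0=0.5)
```

At edge values such as p0=0.1, far fewer points receive non-negligible kernel weight, which reproduces the high variance the rebuttal reports there.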
- [Question 1] Thanks for this question – we realize now that our explanation in the paper was not clear. While there is a small computational benefit, indeed as the reviewer points out, it is not really significant. The real benefit is that the one-shot method produces a much more interpretable answer: the final output of the method is of the form on line 168 for a single value \hat{q} – that is, the same scaling or inflation factor is applied across the entire matrix.
We would like to pause to clarify why we would NOT want to have different rescaling factors \hat{q}_{ij} at different entries. In fact, it may appear initially that this output would be more meaningful – we might have different levels of uncertainty at different locations (i,j), so would it not be beneficial to choose \hat{q}_{ij} adapting to the inflation of an entry (i,j)?
The answer is that, without the one-shot simplification, the different values of \hat{q}_{ij} are actually unrelated to the issue of higher or lower uncertainty in our estimates of \hat{M} and \hat{s} at a given entry. They only reflect differences in the weight vectors - an entry (i,j) would have a higher \hat{q}_{ij} based solely on its (estimated) weight \hat{h}_{ij}, which reflects its probability of being sampled. Thus, the \hat{q}_{ij}'s do not reflect a meaningful notion of local noise/uncertainty, and in our opinion the method is more interpretable with the simplification.
With that said, of course we do not want to propose a method that is needlessly conservative. To that end, we have carried out a simulation to verify that the two versions give essentially the same performance in the pdf [Figure 1(g)]. The setting is the same as Figure 2(b) with hypothesized r=12.
- [Question 2] Thanks for pointing this out. We assume that the p_{ij}'s are nonzero in order for WCP to be well defined. We apologize for omitting to state this explicitly and will add this assumption to our revision.
Regarding the notations Z and W – we apologize for the confusing notation in the appendix. The matrix W consists of the entries W_{ij} defined on line 2 of Algorithm 1. The matrix Z consists of the indicators of non-missingness, i.e. Z_{ij} = 1{(i,j) is observed}.
- [Question 3] The reviewer is correct. For model-based inferential results, we use results from Chen et al. (2019). We will add this reference in place.
- [Question 4] Thanks for this reference. This paper gives theoretical guarantees under the Gaussian copula assumption for the data, and thus does not provide an alternative mechanism for distribution-free theory. We will add the paper to our discussion. Unfortunately, due to space constraints, we cannot add an additional experiment to compare, but we expect to see a loss of coverage when the model assumption is strongly violated.
---
Rebuttal Comment 1.1:
Comment: I appreciate your responses to my inquiries! The primary concerns that were causing confusion for me have been effectively resolved. While I continue to hold some reservations about the extent of technical innovation, which has somewhat constrained my confidence in assigning an exceedingly high rating, I acknowledge that the paper is accurate, intriguing, and holds practical value. As a result, I've decided to adjust my score from 5 to 6. Thank you! | Summary: This paper presents a distribution-free method for constructing prediction intervals in the matrix completion problem, where randomness only arises from the sampling of observed entries. The approach utilizes weighted conformal prediction and establishes a lower bound on the probability of each unobserved entry being included in the prediction interval. The lower bound relies on the estimation error of sampling probabilities, which can be negligible if accurately estimated.
Strengths: * Overall, the paper is clearly written.
* The utilization of weighted conformal prediction in the matrix completion problem is quite novel.
* The study shows that the resulting prediction interval performs well with well-estimated sampling probabilities.
Weaknesses: * Incorrect estimation of sampling probabilities may significantly degrade the quality of the prediction interval.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: * Are there any experimental results when the sampling probabilities are completely misspecified, e.g., when the heterogeneous missingness is misspecified as uniform sampling?
* I'm curious about the impact of the estimated values of the $\hat{s}_{ij}$'s. Does misspecification of these values have a substantial negative impact on the results? Are there any empirical findings regarding this?
* Is there a rationale for the estimation of theta at Line 237? If there is a reference concerning this, could you provide it?
* Since the constructed prediction interval is for M, rather than M*, wouldn't it be appropriate to employ an approach for approximately low-rank matrices in experiments? With an approximately low-rank matrix completion approach, how does the estimation of theta change?
* What does Line 277 mean?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful comments and feedback! Our replies below address the individual points raised in your review.
- In response to the Weakness and to the 1st bullet point under Questions (“Are there any experimental results when the sampling probabilities are completely misspecified…”),
In the attached pdf, we conducted simulations in the following 4 settings:
- Setting [a] [Figure 1(a)] The underlying missingness follows the rank-one model where $p_{ij} = a_i b_j$, both $a_i$ and $b_j$ are generated i.i.d. from Unif(0.2,1). The noise is adversarial (line 273). But we estimate p_{ij} via the one-bit matrix completion based on the logistic model (working model with hypothesized rank k=5).
- Setting [b] [Figure 1(b)] The underlying missingness follows the logistic model as in Example 3.3.2 with $k^*=5$ and we adopt the adversarial noise on line 273. But $p_{ij}$ is estimated under the assumption of uniform sampling (working model), i.e. $\hat{p}_{ij}$ = $\hat{p}$ (line 239).
- Setting [c] [Figure 1(c)] Same as Setting [a] except that we use the random noise on line 276.
- Setting [d] [Figure 1(d)] Same as Setting [b] except that we use the random noise on line 276.
From the results, we can see that when the noise is generated following line 276 (i.e., the random noise model), where the values of the entries are independent of the sampling probabilities, misspecification of the sampling model only slightly affects the coverage. Moreover, when the noise is generated in the adversarial way, where the values of the entries depend on the $p_{ij}$'s, the coverage with a misspecified sampling model falls below the target level but remains above 0.85 in practice; the size of the gap depends on the divergence between the true and working sampling models. We will add this experiment to the appendix.
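The structure of such a misspecification experiment can be sketched as follows (our own toy version of Setting [a]'s rank-one missingness model; the dimension and the uniform working model are illustrative stand-ins, not the paper's exact settings):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 200

# true sampling model (as in Setting [a]): rank-one, p_ij = a_i * b_j,
# with a_i, b_j drawn i.i.d. from Unif(0.2, 1)
a = rng.uniform(0.2, 1.0, size=d)
b = rng.uniform(0.2, 1.0, size=d)
p_true = np.outer(a, b)

# observation mask: entry (i, j) is observed independently w.p. p_ij
mask = rng.random((d, d)) < p_true

# misspecified working model: assume uniform sampling, so every
# entry receives the same estimated probability (the observed rate)
p_hat_uniform = np.full((d, d), mask.mean())
```

The working model matches the average observation rate but ignores the entry-wise heterogeneity, which is exactly the mismatch driving the coverage gap under adversarial noise.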
- In response to the 2nd bullet point under Questions
In fact, the $\hat{s}_{ij}$'s do not affect the validity of the prediction intervals. Like the estimate $\hat{M}$, they are initial estimates; regardless of their accuracy, our conformal method corrects them and offers provable predictive coverage.
- In response to the 3rd bullet point under Questions (“Is there a rationale for the estimation of theta at Line 237?”):
Eq [19] in the paper “Inference and uncertainty quantification for noisy matrix completion” published in PNAS provides the rationale for this $\hat{\theta}$. In words, it can be viewed as the asymptotic variance of the model-based estimate when the model is correctly specified.
- In response to the 4th bullet point under Questions (“Since the constructed prediction interval is for $M$, rather than $M^*$...”):
When we use the model-based ALS estimate, we use prediction intervals for M (i.e., the low-rank matrix plus noise), instead of confidence intervals for $M^*$. So our comparisons are fair in this sense.
- In response to the 5th bullet point under Questions (“What does Line 277 mean?”)
We apologize for any confusion. We will reword this in the revision. To clarify:
The matrix $(p_{ij})$ is drawn by setting $\log[ p_{ij}/(1-p_{ij}) ] = \langle a_i,b_j \rangle$ where the vectors $a_i$, $b_j$ are drawn as in lines 264–266. To draw the $\sigma_{ij}$'s, we can think of first drawing an independent copy of $p$ – say, $(p^\prime_{ij})$ – and then setting $\sigma_{ij} = 1/(2p^\prime_{ij})$ for each (i, j). Equivalently, we are drawing new $a_i$, $b_j$ vectors, and then defining $\sigma_{ij}$ by taking $\log[(1/(2\sigma_{ij})) / (1-1/(2\sigma_{ij})) ] = \langle a_i,b_j\rangle$. | Summary: This paper proposes to use conformal prediction for uncertainty quantification of matrix completion. The proposed conformalized matrix completion offers provable predictive coverage regardless of the accuracy of the low-rank model. Empirical results on simulated and real data demonstrate that cmc is robust to model misspecification while matching the performance of existing model-based methods when the model is correct.
Strengths: Strength 1. This paper studies conformal prediction for matrix completion, with interesting theoretical and algorithmic findings.
1.1) Through conformal prediction, this paper proposes distribution-free confidence intervals for the completed entries in matrix completion. Benefiting from the use of conformal prediction, the validity is free of any assumption on the underlying matrix and holds regardless of the choice of estimation algorithm. This is achieved by proving the (weighted) exchangeability of unobserved and observed units when they are (non-uniformly) sampled without replacement from a finite population.
1.2) A provable lower bound for the coverage rate is provided when the sampling mechanism is unknown.
1.3) A one-shot conformalized matrix completion approach is proposed for higher computational efficiency.
Strength 2. Experiments on both synthetic and real data suggest the effectiveness of the proposal.
Weaknesses: Weakness 1: One weakness of the proposed uncertainty quantification method for model calibration is that the probability bounds it provides may not be very precise for specific observation models and optimization algorithms. While the method is advantageous in that it is not limited to any particular model or algorithm, there is a possibility that the uncertainty estimates can be too broad for certain scenarios. This potential issue should be acknowledged and addressed in the research.
Weakness 2: Another weakness is that the proposed approach does not assume any specific structure on the underlying matrix. As a result, it may not effectively utilize any existing structure in the matrix. To overcome this limitation, the proposed method can be extended to vector completion problems, which could exploit the matrix structure more efficiently. The authors should consider discussing this point and its potential implications.
Weakness 3: The paper could benefit from improved clarity in its explanations. It would be more reader-friendly if the authors provided more intuitive explanations for the newly introduced quantities, such as the quantity labeled as (6) in the paper. Enhancing clarity in the presentation of the research would greatly improve its accessibility to readers.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please refer to the "Weaknesses".
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The limitations have not been adequately addressed in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful comments and feedback! Our replies below address the individual points raised in your review.
- In response to Weakness 1 and Weakness 2,
Thank you for these points. These questions are inherent to the conformal prediction framework, and are not specific to our method or to the setting of matrix completion. Our replies are therefore general, and apply to any implementation of the conformal framework for any prediction problem:
- To reply to Weakness 1, the conformal prediction method is designed to provide uncertainty quantification to ANY base estimation algorithm – it is indeed true that the resulting intervals may be very wide if the base algorithm is a poor estimate, but this is not a drawback of conformal. A useful analogy is using cross validation to estimate the error of a model. If we fit a model that actually has very high out-of-sample error, and then cross validation estimates a very high risk for this model, that is a success of CV (i.e., we have correctly identified that this model does not generalize well), not a failure. Analogously, if conformal prediction is applied to a poor estimation model and consequently gives a very wide prediction interval, that is a success of conformal (i.e., we have correctly identified that, when building a valid prediction interval around this particular \hat{M}, we need to make it very wide if we want to achieve coverage at a certain level), not a failure. This view of conformal can be captured by describing conformal as a “wrapper method” – its job is to provide guard rails around any base algorithm (i.e., any procedure that we use to produce \hat{M}), and it’s up to the analyst to choose a good base algorithm.
- To turn to Weakness 2, we can now see that the nature of conformal as a wrapper method actually addresses this concern from the referee. While conformal itself does not use any model or property of the matrix, the base algorithm that produces \hat{M} is free to use ANY prior knowledge about structure in the matrix (and we do not need to worry about whether our assumptions are exactly correct!). For example, in our implementation of cmc-als, the base algorithm to produce \hat{M} is the als algorithm, which (implicitly) assumes that M follows a signal-plus-noise structure, M* + (iid noise), with incoherent and low-rank M*. If this assumption is correct or approximately correct, our estimator \hat{M} will be excellent and conformal will provide a precise prediction interval – this is why cmc-als is competitive with als in terms of interval width in the setting where the model assumptions hold. The benefit of cmc (and conformal in general) is that it maintains validity even if the assumptions of the base algorithm are wrong (as we see in our simulations for misspecified settings). We did not discuss these ideas extensively in the paper because this interpretation of the conformal framework is broadly applicable to all the conformal prediction literature and is not specific to our setting. However, we will add a bit more about these ideas to our revised paper to give more context to readers who are less familiar with the conformal prediction framework.
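The "wrapper" interpretation can be made concrete with a generic split-conformal sketch (a standard regression setting of our own, not the paper's matrix-completion method): the base fitting routine is arbitrary, and a deliberately poor base model still receives valid, just correspondingly wide, intervals.

```python
import numpy as np

def split_conformal(fit, X_train, y_train, X_cal, y_cal, X_test, alpha=0.1):
    """Wrap ANY base fitting routine `fit(X, y) -> predict` with split conformal."""
    predict = fit(X_train, y_train)
    scores = np.abs(y_cal - predict(X_cal))            # calibration residuals
    n = len(scores)
    # finite-sample-valid empirical quantile: index ceil((n+1)(1-alpha)) - 1
    k = min(n - 1, int(np.ceil((n + 1) * (1 - alpha))) - 1)
    q = np.sort(scores)[k]
    mu = predict(X_test)
    return mu - q, mu + q                              # valid regardless of fit quality

# toy check: a base model that ignores the data entirely still gets
# (wide) intervals with coverage close to 1 - alpha
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 1))
y = X[:, 0] + rng.normal(size=3000)
bad_fit = lambda X_, y_: (lambda Xn: np.zeros(len(Xn)))
lo, hi = split_conformal(bad_fit, X[:1000], y[:1000],
                         X[1000:2000], y[1000:2000], X[2000:])
coverage = np.mean((y[2000:] >= lo) & (y[2000:] <= hi))
```

This mirrors the CV analogy above: a poor base model yields wide intervals, which is the wrapper correctly reporting high uncertainty rather than a failure of the method.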
- In response to Weakness 3,
Thanks for the suggestions on presentation. Lines 179 - 182 explain intuitions for the newly defined $\Delta$. In words, $\Delta$ quantifies the estimation error in $\hat{w}$ w.r.t.~the oracle weight $w^*$. The smaller $\Delta$, the better the estimation of the weights, and hence the smaller the coverage gap from the nominal level $1-\alpha$.
We will go through the paper and add more explanations for other quantities to improve clarity.
---
Rebuttal Comment 1.1:
Title: Response to the reply
Comment: Thanks for the clarification for my concerns.
- My first two questions are inherent to the conformal prediction framework, and are not specific to the proposed method or to the setting of matrix completion. I think the authors' response is reasonable.
- The clarification about $\Delta$ makes sense.
Rebuttal: We are grateful to all the reviewers for their helpful feedback, comments, and suggestions on our manuscript. In the comments below, we have replied to each reviewer’s points individually.
The attached pdf contains additional simulation results and details of each setting are stated in the rebuttal.
Pdf: /pdf/2a2732cd62d859e40af418fb0b7b947351b4994b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
An Efficient End-to-End Training Approach for Zero-Shot Human-AI Coordination | Accept (poster) | Summary: The authors propose an Efficient End-to-End training (E3T) approach for zero-shot human-AI coordination. E3T uses a mixture of an ego policy and a random policy as a simple way of training a diverse policy that is still capable of coordination. Unlike prior population based approaches, E3T does not require training populations of agents and thus is more efficient. The authors also propose a partnering module that models the agent’s partner’s actions, enabling improved zero-shot human collaboration. The experiments show a clear improvement over a range of prior methods, testing with both proxy and real human partners.
Strengths: - The idea is straightforward and easy to employ, and directly addresses a meaningful challenge (efficient zero-shot coordination with humans). The combination of an ego policy and a random policy brings together the strengths of self play for training a coordination policy, whilst also incorporating diversity through the random policy to prevent overfitting to a specific partner.
- The inclusion of a partnering module makes sense and shows a clear advantage empirically.
- The experiments are thorough and show comparisons against a range of prior approaches, including experiments with real people, as well as detailed ablations of the proposed method.
Weaknesses: - The method seems like it would be sensitive to $\varepsilon$. While Figure 5b) shows this evaluation for one task, it would be interesting to see it for all tasks (i.e., do we need to tune $\varepsilon$ for each task separately?)
- nit: missing epsilon in line 7 in the algorithm
- nit: line 232 has a typo — "Assume the *learned*..."
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Based on figure 6, the overall accuracy of action predictions is pretty low (less than 60%) — it’s surprising that the partnering module is still helpful even when accuracy is not very high. Why do the authors think this is?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations are clearly discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer DWEk for the time and effort invested in reviewing our paper, and we appreciate that you concur with the main advantages of our method: (1) the simplicity of the method; (2) the reasonableness and effectiveness of introducing the partner modeling module; (3) thorough experiments and superior performance over existing methods. We have provided detailed explanations and clarifications to resolve your concerns regarding the experiments, and we respectfully hope you will consider this response in the final decision.
> Q1: The method seems like it would be sensitive to $\varepsilon$. While Figure 5b) shows this evaluation for one task, it would be interesting to see it for all tasks (i.e., do we need to tune $\varepsilon$ for each task separately?)
Thank you for your question. Ablation studies of epsilon (over the range [0.1, 0.3, 0.5, 0.7, 0.9]) on the other 4 layouts are shown in Figure 2 of the rebuttal material. In general, epsilon can be set small when the layout space is not large; e.g., a smaller epsilon for the Forced Coordination layout achieves better performance because the activation range is narrow and the partner's coordination ability matters more than behavior diversity in this case. For layouts with a large space, we normally set epsilon larger. For example, our method achieves higher rewards with epsilon=0.5 when coordinating with human proxies in the other layouts, and with epsilon=0.3 or 0.5 when coordinating with the other AI baselines.
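The epsilon-mixture partner policy being tuned here can be sketched generically (a hypothetical interface of our own; `ego_policy` and the 6-action space are stand-ins for the Overcooked setup, not the paper's code):

```python
import numpy as np

def mixture_action(ego_policy, obs, epsilon, n_actions=6, rng=None):
    """Sample a partner action from the epsilon-mixture of ego and random policies."""
    rng = rng or np.random.default_rng()
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))  # uniform random action w.p. epsilon
    return int(ego_policy(obs))              # ego (self-play) action otherwise

# sanity check with a stub ego policy that always picks action 2:
# P(action = 2) = (1 - eps) + eps/6, e.g. about 0.583 for eps = 0.5
rng = np.random.default_rng(0)
ego = lambda obs: 2
acts = [mixture_action(ego, None, epsilon=0.5, rng=rng) for _ in range(2000)]
```

Larger epsilon injects more behavior diversity into the training partner, while smaller epsilon keeps the partner closer to a competent self-play agent, matching the per-layout trade-off described above.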
> Q2: nit: missing epsilon in line 7 in the algorithm
>
> nit: line 232 has a typo — "Assume the *learned*..."
Thanks for pointing out these typos, and we will correct them in the revision following your suggestions.
> Q3: Based on figure 6, the overall accuracy of action predictions is pretty low (less than 60%) — it’s surprising that the partnering module is still helpful even when accuracy is not very high. Why do the authors think this is?
Thank you for your question. Although the partner modeling module cannot exactly predict the partner's action (out of 6 possible actions), its prediction accuracy (just under 60%) is relatively high compared to a random prediction (probability 1/6, i.e., under 20%). Therefore, we think the partner modeling module can potentially reason about the behavior patterns of unseen partners. Figure 6 shows that coordination performance improves as prediction accuracy increases.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response and additional ablations. I think adding those experiments and the clarifications to my questions above to the draft will strengthen the paper. I retain my original score. | Summary: This paper proposes E3T, an method to train agent for zero-shot coordination with humans. The main contribution in E3T is that, in contrast to population-based approaches, it can be trained in a single stage, significantly reducing training time. E3T also includes a model to predict the next action of the partner agents, given that trajectory. By conditioning on that action, the coordination agent can adapt to different partner behaviors according to their observed trajectories. The method is tested on Overcooked against both zero-shot coordination agents and real humans, showing improved performance and lower training time than competitive zero-shot coordination methods.
Strengths: Originality
- The proposed method is a simple extension to MEP, with the addition of the partner action prediction module, which has been explored before. Despite relying heavily on these two components, the method is simple and provides gains in training time and performance.
Quality:
- The paper provides thorough experiments showing that the proposed method can coordinate with several zero-shot coordination agents in different tasks and layouts. The proposed method shows better training time than competitive approaches, as well as improved coordination results. More importantly, it shows significant improvements over baselines when coordinating with real humans.
- Clear and simple to implement method, with comparisons with main coordination approaches and analysis of the entropy and reward bounds of the proposed method.
Clarity:
- The paper provides a clear overview of the main methods for zero-shot coordination and their limitations. The proposed method is clearly explained, and Section 4.1 provides clear intuition for the design choices, building from the limitations of self-play approaches such as MEP to motivate the different design decisions.
- The explanation of the algorithm, figures, and pseudocode provides a clear understanding of the proposed method.
- Clear analysis of the results, and ablations to understand the importance of the two components.
Significance:
- Building agents that can coordinate with other agents in a zero-shot manner is an important problem, and, as the paper points out, the two-stage training of existing approaches makes learning highly inefficient. Proposing a zero-shot coordination agent that can be trained in a single stage is therefore an important contribution. Overcooked is a very simplified coordination setting, but one that showcases some of the challenges in zero-shot coordination, and thus showing improved performance and training-time results is a significant result.
Weaknesses: - Figure 5.b seems to indicate that the value of epsilon has a strong effect on the final performance and changes for different kinds of tasks, making it potentially hard to find a good parameter for novel tasks. How is that epsilon chosen in the other experiments?
- Figure 5.a shows that partner modeling has a significant effect in the coordination performance. Given the importance of that module, which is compatible with the other proposed baselines, it would be worth testing whether adding that into them would improve their performance results, even if training time would still be higher.
- The mixture policy looks a lot like a self-play policy where a temperature to add noise to the policy is added. What are the main differences to that, if the partner modeling module is eliminated?
- Propositions 1 and 2 are sound, and show that both the entropy and performance of the mixture policy are lower-bounded by their random and ego policy counterparts, save for some terms depending on the mixture parameter and action space size. Despite the analysis being sound, I am not sure about the value they add that was not known before. The entropy is equal to that of the random policy when $\epsilon$ is 1, and decreases as we decrease $\epsilon$ (increasing $C_1$). A similar analysis can be done for Proposition 2, meaning that the ego and random policies serve as upper bounds. How do Propositions 1 and 2 help?
- Having access to the partner's action and state pairs is a big assumption that may not hold in more complex environments.
- It seems strange that an ego policy that is trained with actions coming from the partner action prediction model can generalize when training the partner, where a random action distribution is used at the input. As epsilon becomes smaller, this gap should increase further. Could authors comment on that?
- Related work: Authors should consider citing, and commenting on the differences with https://arxiv.org/pdf/2104.07750.pdf, which also models the partner for multi-agent collaboration.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - I am surprised at the Fig 4(a) results, particularly the fact that E3T coordinates better with self-play than self-play itself. This seems really counterintuitive; could the authors comment on that?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer dKj6 for the time and effort invested in reviewing our paper, and we appreciate that you concur with the main advantages of our method: (1) simplicity and high training efficiency, (2) thorough experiments and superior performance over existing methods, and (3) clarity of motivation and presentation. We have provided detailed explanations and clarifications to resolve your concerns regarding the experiments, and we respectfully hope you will consider this response in your final decision.
> Q1: How is epsilon chosen in the other experiments?
Thank you for your question. Ablation studies of epsilon on the other 4 layouts are shown in Figure 2 of the rebuttal material. For more details and analysis of the epsilon selection, please see our response to Q1 of reviewer DWEk.
> Q2: About adding partner modeling into other baselines
Thank you for your suggestion. We have incorporated the partner modeling module into two baselines (Self-play and the state-of-the-art MEP), as illustrated in Figure 3(a) of the rebuttal material. These results demonstrate that the partner modeling module can further enhance the zero-shot coordination performance of the baselines when they collaborate with AI baselines.
> Q3: The mixture policy looks a lot like a self-play policy where a temperature to add noise to the policy is added. What are the main differences to that, ......
Thank you for your question. In the implementation, the mixture partner policy $\pi_p=\epsilon\pi_r +(1-\epsilon) \pi_e$ is constructed by adding noise $\pi_r$ (a uniform distribution) to the self-play policy $\pi_e$ with a temperature $\epsilon$. The partner action is then sampled from this mixed distribution. This design is inspired by the idea, drawn from population-based methods, that partner policies should be both skilled at coordination and diverse in behavior. Our method simplifies previous population-based methods into a single-stage, end-to-end training framework, so it improves training efficiency by 9x compared to the state-of-the-art MEP.
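The mixture construction can be sketched in a few lines. This is illustrative only: the 6-action space and the `ego_probs` interface are our assumptions for exposition, not the paper's actual code.

```python
import numpy as np

def sample_partner_action(ego_probs, epsilon, rng):
    """Sample a partner action from pi_p = eps * pi_r + (1 - eps) * pi_e,
    where pi_r is uniform over the action set and pi_e is the ego policy."""
    ego_probs = np.asarray(ego_probs, dtype=float)
    uniform = np.full_like(ego_probs, 1.0 / len(ego_probs))    # pi_r
    mixture = epsilon * uniform + (1.0 - epsilon) * ego_probs  # pi_p
    return int(rng.choice(len(mixture), p=mixture)), mixture

# Hypothetical ego policy over Overcooked's 6 actions, peaked on action 2
ego = [0.05, 0.05, 0.70, 0.10, 0.05, 0.05]
action, mix = sample_partner_action(ego, epsilon=0.5, rng=np.random.default_rng(0))
# With eps = 0.5, action 2's probability becomes 0.5/6 + 0.5*0.70 ≈ 0.433
```

Note that this is exactly a temperature-style interpolation toward the uniform policy, which is what makes the partner both diverse (via $\pi_r$) and competent (via $\pi_e$).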
> Q4: Propositions 1 and 2 are sound, ...... How do propositions 1 and 2 help?
Thank you for acknowledging the soundness of our propositions. Intuitively, the diversity reflected by entropy increases, and the coordination performance decreases, as the random coefficient epsilon increases. Propositions 1 and 2 quantitatively verify these intuitions. Regarding the selection of $\epsilon$, we rely on empirical analysis to balance coordination ability and behavioral diversity, as explained in Q1.
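The qualitative claim — the mixture's entropy grows with $\epsilon$, reaching the uniform-policy maximum at $\epsilon = 1$ — can be checked numerically. A sketch under an assumed 6-action space and an arbitrary hypothetical ego distribution (not the paper's actual policies):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in nats."""
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p + 1e-12)))

ego = np.array([0.05, 0.05, 0.70, 0.10, 0.05, 0.05])  # hypothetical ego policy
uniform = np.full(6, 1.0 / 6.0)                        # random policy pi_r

hs = [entropy(e * uniform + (1.0 - e) * ego) for e in (0.0, 0.25, 0.5, 0.75, 1.0)]
# Entropy increases monotonically with epsilon ...
assert all(a < b for a, b in zip(hs, hs[1:]))
# ... and at eps = 1 the mixture is uniform, attaining the maximum log(6)
assert abs(hs[-1] - np.log(6)) < 1e-6
```

Monotonicity follows because entropy is concave and maximized at the uniform distribution, so it is nondecreasing along the segment from any $\pi_e$ toward $\pi_r$.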
> Q5: Having access to the partners action and state pairs is a big assumption that may not hold in more complex environments.
In more complex environments, we do not have the ground truth of partner actions; only the historical observations of the ego policy are available, so it is worth developing a more advanced model that predicts the partner action from changes in the observed environment. In the Overcooked environment, agents can observe the global state (this is the environmental setting). We also empirically verified that the partner action can be predicted from the sequence of ego states on the Coord. Ring layout. As shown in the table below, the prediction accuracy from only the state sequence is comparable to that from the state-action sequence. Note that the accuracy of random prediction is around 0.17.
||Prediction accuracy|Coordination reward with human proxy|
|-|-|-|
|state-action| 0.54(0.04)|143.3(8.3)|
|state| 0.50(0.01) |139.3(6.8)|
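As a sanity check on the ~0.17 random baseline quoted above, here is a small sketch (hypothetical data, not the paper's evaluation code) that computes top-1 accuracy of predicted action distributions and confirms that uninformed guessing over 6 actions lands near 1/6:

```python
import numpy as np

def top1_accuracy(pred_dists, true_actions):
    """Fraction of steps where the argmax of the predicted action
    distribution matches the partner's ground-truth action."""
    return float(np.mean(np.argmax(pred_dists, axis=1) == true_actions))

rng = np.random.default_rng(0)
n_steps, n_actions = 60_000, 6
labels = rng.integers(0, n_actions, size=n_steps)  # hypothetical ground truth
random_preds = rng.random((n_steps, n_actions))    # uninformed predictor
acc = top1_accuracy(random_preds, labels)          # ≈ 1/6 ≈ 0.167
```

Against this baseline, the 0.50-0.54 accuracies in the table represent roughly a threefold improvement over chance.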
> Q6: It seems strange that an ego policy that is trained with actions coming from the partner action prediction model can generalize when training the partner, where a random action distribution is used at the input. ......
The motivation for the asymmetric input setting, with the predicted partner action distribution fed to the ego policy and a random action distribution fed to the partner policy during training, comes from two considerations. Firstly, the random action distribution introduces extra randomness into the partner policy, which in turn improves partner diversity. Secondly, this is consistent with the real-world demand that AI agents should adapt to human partners' behavior: different humans may exhibit various cooperative patterns and should not be required to adapt to the AI agent.
In addition, we have empirically verified that the asymmetric input is helpful for zero-shot coordination, as illustrated in Figure 3(b) of the rebuttal material, where our method with asymmetric input slightly outperforms the model that uses partner modeling for both the ego and partner policies on most layouts.
> Q7: About differences with the mentioned related work
Thank you for your recommendation. Our work and this paper both infer agents' intentions to enhance multi-agent coordination, but the purposes of the intention inference differ. That paper infers each agent's visual attention region and designs additional rewards to incentivize all agents to focus on the same elements of the environment, reducing the cost of multi-agent exploration. Our work infers partner actions from historical context, which helps the agent adapt well to unseen partners with different coordination patterns by conditioning on the predicted partner actions. We will include this work in the related work of the revision.
> Q8: About E3T coordinates better with self-play than self-play itself
Thanks. We think you are referring to the coordination results between baselines shown in Figure 4(b) of the main paper, where each baseline has 5 different random seeds. Self-play policies may fall into specific conventions during training and thus fail to cooperate with other self-play policies trained **with different random seeds** during testing. By contrast, E3T, with the mixture partner policy and the partner modeling module, can adapt well to unseen self-play policies, so it is reasonable that E3T achieves good performance when collaborating with self-play.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks for the rebuttal. It addressed all my questions. Moreover, the experiments where only the state is shown relax the assumptions in the original paper, and still offer high coordination rewards. I am thus changing my rating to accept. | Summary: This paper proposes a simple end-to-end training mechanism for zero-shot coordination. Existing works in the ZSC literature often make use of population-based training, where a large pool of policies is trained to play well against itself while maintaining a certain form of diversity to prepare for an unknown online partner. This work proposes that sufficient diversity can be achieved by simply adding epsilon-greedy noise into the self-play policy. Performance is further improved by an additional partner modeling module, which takes the past trajectory of the partner and predicts its next action. Experiments against human players and various baselines in the Overcooked environment show some improvements.
Strengths: - Simplicity of method, where only a single policy is learned. This is much more efficient than population-based training approaches.
- Experiments are performed against various agents, including human proxy, actual human players, and some baselines.
- Code is available in the submission.
Weaknesses: - The limited novelty. The method is a simple combination of existing techniques (epsilon-greedy, partner modeling, etc). Besides ablations, a more detailed analysis would provide more insight as to why this works.
- The Overcooked environments are out-of-date. The method should be further evaluated on more complex Overcooked environments [1], e.g., with more recipes and more layouts.
[1] Wu, Sarah A., et al. "Too Many Cooks: Bayesian Inference for Coordinating Multi-Agent Collaboration." Topics in Cognitive Science 13.2 (2021): 414-432.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Are there any details that differ from those in [2], e.g. the human proxy partner or the reward function?
- Can the proposed methods be used in more challenging scenarios, e.g., more players, mixed games, or partial observation?
[2] Strouse, D., McKee, K., Botvinick, M., Hughes, E., and Everett, R. Collaborating with humans without human data. Advances in Neural Information Processing Systems, 34: 14502–14515, 2021.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: This paper will be improved by evaluating on more complex environments and showing the scalability of the proposed methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer kGQD for the time and effort invested in reviewing our paper, and we appreciate that you concur with the main advantages of our method: (1) simplicity and high training efficiency, and (2) experiments performed against various agents. We have provided detailed explanations and clarifications to resolve your concerns regarding the insights and experiments, and we respectfully hope you will consider this response in your final decision.
> Q1: The limited novelty. The method is a simple combination of existing techniques (epsilon-greedy, partner modeling, etc). Besides ablations, a more detailed analysis would provide more insight as to why this works.
Thank you for acknowledging the simplicity of our method, which indeed contributes to its higher training efficiency compared to other approaches. However, we would like to respectfully clarify that our method is more than a straightforward combination of existing techniques. **In fact, we are motivated by the idea, drawn from population-based methods, that partner policies should be both skilled at coordination and diverse in behavior.** To achieve this, we are the first to decompose the population training objectives of MEP into two policies: one a simple copy of the ego policy, providing coordination skills, and the other a random policy, providing diverse behaviors.
Although this decomposition appears similar to epsilon-greedy, the underlying motivation and derivation are entirely distinct; we believe our decomposition is experimentally grounded, as the results show, and its simplicity may benefit or inspire subsequent works based on population-based methods. Besides ablations, we have also provided a theoretical analysis of partner modeling in Proposition 3, which indicates that a smaller partner action prediction error leads to better coordination performance.
> Q2:The Overcooked environments are out-of-date. It is required to further evaluate the method on more complex Overcooked environments[1], e.g., more recipes, and more layouts.
Thank you for sharing this work with us. Although our paper and [1] both conduct experiments on Overcooked, our contributions differ in focus. Our paper aims to achieve human-AI zero-shot coordination **without using human data**, whereas [1] proposes a hierarchical planning method that can quickly complete a recipe. Because the baselines of [1] are based on model-based dynamic programming, they do not support human-AI zero-shot coordination.
We would like to try running our method in the environment you suggest, but it is challenging to transfer our codebase and compare zero-shot coordination baselines on the new Overcooked environments [1] within the limited time. In our main paper, we use the same Overcooked environment as the human-AI zero-shot coordination baselines (including BCP [3], FCP [2], which you mentioned, and MEP), and we have tested our approach on 5 different layouts, which cover the open, partially passable, and challenging forced-coordination scenarios in [1]. These layouts are not as simple as they might appear; here are some statistics on their complexity.
| | *Cramped Room* | *Asymmetric Advantages* | *Coordination Ring* | *Forced Coordination* | *Counter Circuit* |
| -- | -- | -- | -- | --- | -- |
| State space | $5.3 \times 10^7$ | $1.2\times 10^{14}$ | $6.8\times 10^{10}$ | $2.0\times 10^9$ | $5.8\times 10^{16}$ |
With consistently superior performance to the baselines across these layouts in the Overcooked environment, we believe our evaluations are sufficient, and we would also like to add a discussion of the differences from [1] in the final version of our paper, as per your suggestion.
To further verify the effectiveness of our method and to explore new coordination challenges beyond Overcooked, we additionally evaluate our method on the Google Football environment, following your suggestion. The details and results are shown in R1 of our "global" response above.
[1] Wu, Sarah A., et al. "Too Many Cooks: Bayesian Inference for Coordinating Multi-Agent Collaboration." Topics in Cognitive Science 13.2 (2021): 414-432.
[2] Strouse, D., McKee, K., Botvinick, M., Hughes, E., and Everett, R. Collaborating with humans without human data. Advances in Neural Information Processing Systems, 34: 14502–14515, 2021.
[3] Carroll, M., Shah, R., Ho, M. K., Griffiths, T., Seshia, S., Abbeel, P., and Dragan, A. On the utility of learning about humans for human-ai coordination. *Advances in neural information processing systems*, 32, 2019.
> Q3: Are there any details that differ from those in [2], e.g. the human proxy partner or the reward function?
Thank you for your question. We follow the same Overcooked environment and human proxies as BCP [3]. The reward function of our work is the same as in FCP [2], which gives 20 points for each successful delivery. We also use the same layouts and recipes as FCP. The state representations and the training data for human proxies in FCP [2] differ from those of BCP and our work, so we re-implemented FCP [2] in our codebase for a fair comparison with our method. We would like to publish this implementation in the future.
> Q4:Can the proposed methods be used in more challenging scenarios, e.g., more players, mixed games, or partial observation?
Following your suggestion, we conduct additional experiments on a three-player cooperative game of the Google Football environment. The details and the results are shown in the R1 of our "global" response above.
In summary, we have provided additional experiments in a more complex environment, following your suggestions, and explained the motivation of our method and its differences from previous work. We believe our response sufficiently addresses your concerns regarding the experiments and novelty, and we respectfully hope you will reconsider your final decision. If you have further concerns, please feel free to respond, and we would be glad to discuss them with you.
---
Rebuttal 2:
Title: Providing additional experiment results on the Google Football game
Comment: Dear Reviewer kGQD,
We really thank you for your valuable comments on improving our work. Here, we would like to add more experiment results and a more detailed analysis of a 3-player Google Football game.
As per your suggestion, we have conducted additional experiments on the Google Football environment's "3 vs 1 with Keeper" layout, which is a three-player cooperative game with 19 actions in the discrete action space. The three players share rewards, and they cooperate to gain high rewards. We train policies via self-play and via our methods. To test the zero-shot coordination ability of these models, we follow the experimental setting of Other-Play [1]. In this setting, **the policies cooperate with unseen teammate policies that were trained with the same algorithms but different random seeds, without any prior interaction.** This ensures that the policies have not encountered their teammates during training. In the table below, we report the mean and standard error of the win rate of scoring a goal.
| | Training performance | Test performance(Zero-shot coordination across different random seeds) |
| ------------------------------------- | -------------------- | ------------------------------------------------------------ |
| Self-Play | 0.87(0.05) | 0.03(0.01) |
| E3T(Mixture Policy) | 0.79(0.04) | 0.70(0.01) |
| E3T(Partner Modeling) | 0.89(0.05) | 0.24(0.05) |
| E3T(Mixture Policy + Partner Modeling) | 0.87(0.02) | 0.84(0.02) |
As shown in the table above, policies trained via self-play fail to cooperate with policies from different training seeds, because self-play policies may fall into specific coordination conventions during training. Our proposed mixture policy increases the behavioral diversity of teammates, allowing the ego policy to encounter different coordination patterns during training. Consequently, our method with the mixture policy trains a policy that adapts well to policies independently trained from different seeds. Moreover, the partner modeling module further enhances zero-shot coordination performance by enabling the ego policy to respond to the predicted teammates' action distributions. We note that we set epsilon to 0.1 for this environment (a larger epsilon, i.e., more randomness, would result in policies failing to obtain rewards on this task), and the partner modeling module uses the historical observations of the ego policy to predict the action distributions of all 3 players.
In summary, our method, combining the mixture partner policy and the partner (teammate) modeling module, can also improve zero-shot coordination ability in the more complex football environment, with 3 cooperative players and a large action space of 19 discrete actions.
[1] Hu, H., Lerer, A., Peysakhovich, A., and Foerster, J. “other-play” for zero-shot coordination. In International Conference on Machine Learning, pp. 4399–4410. PMLR, 2020.
We hope that our responses have resolved your concerns, and we respectfully hope you will take them into account in your final decision. As the author-reviewer discussion period is coming to an end, we respectfully look forward to discussing with you. Please let us know if you have any further questions or concerns, and we will be very happy to address them.
---
Rebuttal Comment 2.1:
Title: Response to Authors
Comment: Thanks for your detailed response. After reading your response, most of my concerns are addressed. So I decide to raise my score. Particularly, the results on Google Football are impressive to me. But I have another question about the experiments, how to get the keeper policy? Are they different between testing and training?
---
Reply to Comment 2.1.1:
Title: About the keeper policy
Comment: Thank you for your question. The "3 vs 1 with Keeper" layout we used consists of three players and one keeper. The keeper policy is a built-in bot provided by the Google Football environment, and it is the same during the training and testing phases.
Strengths: 1. The paper is well-motivated and easy to follow.
2. The key idea behind the proposed method is straightforward and intuitive.
3. The experiments section of the paper is substantial, covering a range of important aspects such as zero-shot coordination with human proxy, comparisons against human and AI baselines, and a well-conducted ablation study.
4. This paper offers a sound theoretical analysis of the proposed method.
Weaknesses: Based on Figure 5, it appears that the proposed method is highly task-sensitive. Therefore, I find it less convincing to solely rely on experiments conducted on five different layouts of Overcooked. It would be beneficial for the authors to consider conducting experiments on various multi-agent reinforcement learning benchmarks, such as matrix games, MPE, Hanabi, SMAC, and others, to provide a more comprehensive evaluation of their method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The partner policy observes random actions, which implies it is unlikely to exhibit coordinated behavior. And the ego policy achieves the best performance when $\epsilon = 0.5$, which is a considerably large value. Does the ego policy adopt a more conservative approach and attempt to complete the task independently?
2. The significant performance improvement of this method compared to MEP is intriguing. It appears that this paper can be viewed as a one-staged MEP combined with partner modeling. Is the performance disparity primarily attributed to the partner modeling?
3. In the ablation study, when partner modeling is removed, what serves as the input for the ego policy?
4. Minor comments:
a) Figure 6 is difficult to comprehend. The importance of "environment steps" is not clear in the figure. I think it would be helpful to use the "accuracy of action prediction" as the x-axis, as it could better illustrate the correlation between "average reward per episode" and "action prediction".
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I appreciate that the author acknowledges the limitation of the current method in neglecting general multi-player human-AI coordination tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer dkSb for the time and effort invested in reviewing our paper, and we appreciate that you concur with the main advantages of our method: (1) it is well-motivated and easy to follow, (2) the substantial experiments, and (3) the sound theoretical analysis. We have provided detailed explanations and clarifications to resolve your concerns regarding the experiments, and we respectfully hope you will consider this response in your final decision.
> Q1: ....... It would be beneficial for the authors to consider conducting experiments on various multi-agent reinforcement learning benchmarks, such as matrix games, MPE, Hanabi, SMAC, and others, to provide a more comprehensive evaluation of their method.
Thanks for your suggestion. We note that we have already conducted experiments on a 100x100 matrix game, as shown in Figure 4(a) of the main paper. Our method achieves the best performance with superior training efficiency over existing baselines. Ablation studies of the random coefficient epsilon are shown in Figure 10(b) in the appendix, which demonstrate that E3T with $0<\epsilon<1$ can effectively explore the strategy space to find the optimal strategy and avoid getting trapped in suboptimal solutions.
We also conduct additional experiments on the Google Football environment's "3 vs 1 with Keeper" layout. More details and results are provided in R1 of our "global" response above.
> Q2:The partner policy observes random actions, which implies it is unlikely to exhibit coordinated behavior. And the ego policy achieves the best performance when , which is a considerably large value. Does the ego policy adopt a more conservative approach and attempt to complete the task independently?
Thanks for your question. Without partner modeling, the policy tends to complete the task independently. With partner modeling, the ego policy adapts well to diverse partner behaviors during training, which improves its generalization ability. Merging random actions into the partner policy makes the partner's behavior more diverse, while the ego policy learns to adjust to different partner behaviors. This is also consistent with the real-world scenario in which the ideal AI agent should adapt to human partners' behavior: different humans may exhibit various cooperative patterns and should not be required to adapt to the AI agent.
> Q3: The significant performance improvement of this method compared to MEP is intriguing. It appears that this paper can be viewed as a one-staged MEP combined with partner modeling. Is the performance disparity primarily attributed to the partner modeling?
The performance improvement of our method is attributable to both the partner modeling and the mixture partner policy. The partner modeling benefits zero-shot coordination because the ego policy can respond to the predicted partner action. The mixture partner policy exhibits behavioral diversity and coordination competence by directly combining the ego and random policies. In addition, our method is a single-stage, end-to-end training framework, so it improves training efficiency by 9x compared to the population-based MEP.
We plot the performance of E3T variants and MEP in coordination with AI baselines in Figure 1(b) of the rebuttal material. These results show that E3T with only the mixture partner policy achieves performance comparable to MEP, and that the partner modeling module further improves coordination ability.
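As an illustrative sketch (our hypothetical pseudocode, not the exact E3T implementation), the mixture partner policy described above can be viewed as taking a uniformly random action with probability epsilon and otherwise mirroring the ego policy:

```python
import random

def mixture_partner_action(ego_policy, state, n_actions, epsilon, rng):
    """Sample a partner action from the epsilon-mixture policy:
    uniformly random with probability epsilon, otherwise the ego
    policy's action. Illustrative sketch only."""
    if rng.random() < epsilon:
        return rng.randrange(n_actions)  # diverse, random behavior
    return ego_policy(state)             # coordinated, ego-like behavior

# Toy example: an ego policy that always picks action 0.
rng = random.Random(0)
actions = [mixture_partner_action(lambda s: 0, None, 4, 0.5, rng)
           for _ in range(1000)]
```

With $0<\epsilon<1$, the partner still behaves coherently most of the time while injecting enough randomness to diversify the coordination patterns the ego policy encounters during training.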
> Q4: In the ablation study, when partner modeling is removed, what serves as the input for the ego policy?
When the partner modeling is removed, the ego policy only depends on the state and has the form $\pi_e(a|s)$. When we use partner modeling, the ego policy also depends on the predicted partner action distribution $a_p$ and has the form $\pi_e(a|s,a_p)$.
> Q5: Minor comments: a) Figure 6 is difficult to comprehend. The importance of "environment steps" is not clear in the figure. I think it would be helpful to use the "accuracy of action prediction" as the x-axis, .......
The environment step is the number of environment interactions used to train each checkpoint. Plotting the "average reward per episode" and the "action prediction" metrics over time shows how the agents improve their coordination and partner modeling skills. We have also illustrated the correlation between these two metrics in Figure 1(a) of the rebuttal material, where the x-axis is the action prediction accuracy.
---
Rebuttal 2:
Title: Providing additional experiment results on the Google Football game
Comment: Dear Reviewer dkSb,
We really thank you for your valuable comments on improving our work. Here, we would like to add more experiment results and a more detailed analysis of a 3-player Google Football game.
As per your suggestion, we have conducted additional experiments on the Google Football environment's "3 vs 1 with Keeper" layout, which is a three-player cooperative game with 19 actions in the discrete action space. The three players have shared rewards and they cooperate to gain high rewards. We train policies via self-play and our methods. To test the zero-shot coordination ability of these models, we follow the experimental setting of Other-Play [1]. In this setting, **the policies cooperate with unseen teammate policies that were trained with the same algorithms but different random seeds, without any prior interaction.** This ensures that the policies have not encountered their teammates during training. In the table below, we report the mean and standard error of the winning rates of scoring a goal.
| | Training performance | Test performance (Zero-shot coordination across different random seeds) |
| ------------------------------------- | -------------------- | ------------------------------------------------------------ |
| Self-Play | 0.87(0.05) | 0.03(0.01) |
| E3T(Mixture Policy) | 0.79(0.04) | 0.70(0.01) |
| E3T(Partner Modeling) | 0.89(0.05) | 0.24(0.05) |
| E3T(Mixture Policy + Partner Modeling) | 0.87(0.02) | 0.84(0.02) |
As shown in the table above, policies trained by self-play fail to cooperate with policies from different training seeds, because self-play policies may fall into specific coordination conventions during training. Our proposed mixture policy increases the behavioral diversity of teammates, allowing the ego policy to encounter different coordination patterns during training. Consequently, our method with the mixture policy can train a policy that adapts well to policies independently trained from different seeds. Moreover, the partner modeling module can further enhance zero-shot coordination performance by enabling the ego policy to respond to the predicted teammates' action distributions. We note that we set epsilon to 0.1 for this environment (a large epsilon, i.e., more randomness, would result in policies failing to obtain rewards on this task) and that the partner modeling module uses the historical observations of the ego policy to predict the action distributions of all 3 players.
In summary, our method, combining the mixture partner policy and the partner (teammate) modeling module, also improves zero-shot coordination in the complex football environment, with 3 cooperative players and a large discrete action space of 19 actions.
[1] Hu, H., Lerer, A., Peysakhovich, A., and Foerster, J. “other-play” for zero-shot coordination. In International Conference on Machine Learning, pp. 4399–4410. PMLR, 2020.
We hope that our responses have addressed your concerns, and we respectfully hope you will consider them in your final decision. As the author-reviewer discussion period is coming to an end, we look forward to discussing with you. Please let us know if you have any further questions or concerns; we are very happy to address them.
---
Rebuttal Comment 2.1:
Title: Official Comment by Reviewer dkSb
Comment: I would like to thank the authors for their rebuttal and additional experiments. The rebuttal effectively tackled the majority of my concerns and inquiries. As a result, I would like to raise my score to acceptance.

---

Rebuttal 1:
Rebuttal: We thank the reviewers for the time and effort they have invested in reviewing our paper. We have provided detailed explanations and clarifications to resolve your concerns regarding the experiments and insights. If you have further concerns, please feel free to respond, and we would be happy to discuss them with you.
We have conducted further experiments and ablations to address your comments and concerns. The results are presented in the rebuttal PDF file.
**R1: Common response to Reviewers dkSb and kGQD. Additional experiments on Google Football**
To further verify the effectiveness of our method, we conduct additional experiments on the Google Football environment's "3 vs 1 with Keeper" layout, a three-player cooperative game with 19 discrete actions. Following Other-Play [1], we use the coordination performance among independent training runs ("random seeds") of the same training method to evaluate zero-shot coordination ability. We report the mean and standard error of the winning rate of scoring a goal in the table below.
| | Training performance | Test performance (Zero-shot coordination across different random seeds) |
| ------------------- | -------------------- | ------------------------------------------------------------ |
| Self-Play | 0.87(0.05) | 0.03(0.01) |
| E3T(mixture policy) | 0.79(0.04) | 0.70(0.01) |
These results show that the policy trained by self-play can coordinate well with itself but fails to cooperate with policies trained with different random seeds (which have different behavioral preferences). However, the policy trained by our method (with a mixture policy) can cooperate well both with itself and with policies trained under other random seeds. These results indicate that increasing partner diversity during training can significantly improve the generalization ability of the trained ego policy.
[1] Hu, H., Lerer, A., Peysakhovich, A., and Foerster, J. "Other-Play" for zero-shot coordination. In International Conference on Machine Learning, pp. 4399–4410. PMLR, 2020.
Pdf: /pdf/94c280fd5e87369aae2a661b7485d8d5001d19d9.pdf

Dataset source: NeurIPS_2023_submissions_huggingface, 2023
---

Balancing Risk and Reward: A Batched-Bandit Strategy for Automated Phased Release

Paper decision: Accept (poster)

Summary: This paper deals with the problem of gradually releasing a resource to a population, modeled as a risk-of-ruin-constrained experiment. Namely, at every stage $t$ we have an arriving population $\mathcal{N}_t$, and we need to split it into a control group $\mathcal{C}_t$ and a treatment group $\mathcal{T}_t$ (the treatment group is assumed to be the one receiving the resource). However, this resource allocation can potentially come at some cost, which depends on the population receiving the treatment. The goal is to keep the overall cost under a specified threshold (budget constraint) while minimizing the number of rounds/stages it takes to cover at least half of the underlying population.
As the authors nicely explain, this model has natural applications in the phased release of software products/updates.
For this problem, the authors give an adaptive Bayesian algorithm that satisfies the budget constraint with probability $1-\delta$. Namely, given any confidence $\delta$, the algorithm achieves the desired guarantees with probability $\geq 1-\delta$.
Strengths: 1) The problem seems well-motivated.
2) The paper presents both sound theoretical results and a validating experimental evaluation.
3) To my understanding (and take this with a grain of salt since I am not knowledgeable in this area) based on what the authors mention in their related work, devising a structured algorithmic way of dealing with the phase release problem in this way is novel.
Weaknesses: 1) I found the presentation a bit incomplete and difficult to follow at certain points:
a) Are the arriving subpopulations between different stages disjoint?
b) When you say that the "stopping condition" is covering half of the population, does that mean half of $\cup_{t \geq T}\mathcal{N}_t$?
c) Besides defining the RRC experiment, it would be useful to formally define the whole problem, including the optimization objective.
d) It would be nice to give some further motivation behind the experiment cost in Definition 2.1. Why is this cost capturing something meaningful?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Look at the previous section.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes

---

Rebuttal 1:
Rebuttal: We thank the reviewer for the time spent reading our paper and for the encouraging remarks. We appreciate the reviewer's feedback on the points that might be confusing to readers and will incorporate clarifying remarks on these points in future versions of this manuscript. We would also refer the reviewer to our global responses.
>"I found the presentation a bit incomplete and difficult to follow at certain points: a) Are the arriving subpopulations between different stages disjoint? b) When you say that the "stopping condition" is covering half of the population, does that mean half of $\cup_{t \geq T}\mathcal{N}_t$? c) Besides defining the RRC experiment, it would be useful to formally define the whole problem, including the optimization objective. d) It would be nice to give some further motivation behind the experiment cost in Definition 2.1. Why is this cost capturing something meaningful?"
To answer reviewer's question:
* **(a)** Yes, we assume a new batch of users arrive at each time $t$.
* **(b)** At stage $t$, $N_t$ users arrive. Ideally, the experimenter would assign $N_t/2$ of the users to the treatment group (they will experience the new feature) and $N_t/2$ to the control group (they will continue to use the pre-update version of the software) at each stage, to yield the most power for statistical testing. However, there is a concern that, if the new feature is sub-optimal, too many incoming users will be negatively affected. Our algorithm thus starts by assigning only a small portion of users to treatment in the first stage and gradually ramps up to an even split (i.e., half of the incoming users $N_t$) if the new feature is deemed safe. Meanwhile, if the feature is deemed unsafe, the experiment can be ramped down and terminated at $m_t=0$.
* **(c)** We thank the reviewer for comments and suggestions for revision---we agree that readers will benefit from a clearer definition of the problem that highlights the optimization aspect of the approach, such as the objective. We will now briefly explain how the algorithm is designed to maximize ramp-up speed under budget constraints, which we will include in the manuscript.
At each stage $t=1,\ldots,T$, $N_t$ users enter the experiment; the experimenter must then determine the number of users assigned to treatment, denoted by $m_t\in [0, N_t/2]$. The objective is to maximize $\sum_{t=1}^T m_t$ while ensuring that the experiment satisfies the RRC experiment conditions (within budget with high probability by the end of the experiment).
Our strategy is to decompose the overall constraint (i.e., that the experiment is RRC) into a sequence of stage-wise, adaptive constraints using Theorem 3.1. Then, we solve a sequence of sub-problems: maximize $m_t$ under the stage-wise constraint for the $t$-th stage. These stage-wise constraints simplify to (11) under the Gaussian model, and we can find the maximum $m_t$ satisfying (11) by solving a simple quadratic equation defined in line 193 with coefficients in (12). In this sense, our algorithm solves a relaxed version of the original optimization problem of finding an RRC experiment that maximizes $\sum_{t=1}^T m_t$. We, however, emphasize that the way the original problem is relaxed has crucial practical implications. In particular, the stage-wise constraints of Eq. (2) in Theorem 3.1 are *adaptive*: if we observe that a feature is not adversely affecting user experience in past stages, the constraints for future stages relax and more users can be assigned to the treatment group safely, and vice versa.
* **(d)** In practice, the experiment cost is quantified with respect to a specific business metric (represented by $Y$ with our notation). For example, user engagement metrics, such as the number of clicks on the organic posts, are often picked as guardrail metrics when optimizing the ads delivery system for the website. Suppose that the treatment simply increases the number of ads on the website. In this case, the organic engagement metric would drop in the experiment, capturing the negative impact of the treatment. Our definition of the experimentation cost is trying to capture the total cost of such impact aggregated over all treatment units across all iterations. Various other business metrics could also be considered, such as the number of website visits, sales of particular products or services, or a combination of multiple metrics. The choice of such business metrics depends on the applications and is beyond the scope of this work.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I have read the rest of the reviews and comments, and I will keep my score as it is.

---

Summary: This paper considers the problem of phased releases and formulates it as a batched bandit.
Strengths: This paper is very well-written and the problem well motivated. It is great to see a bandit algorithm solving a real-world application rather than staying in the theory world. The algorithm is fairly simple, and the paper is well executed with theory and experiments. I am glad to see the semi-real LinkedIn experiments.
Weaknesses: The algorithm is fairly simple and the theory is routine. Well for an application paper, we should not expect the algorithm to be quite complex.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: no
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: no
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes

---

Rebuttal 1:
Rebuttal: We would like to express our gratitude for taking the time to review our paper and for providing your encouraging remarks. We will respond to the following remark given that it is listed as weakness of the paper.
>"The algorithm is fairly simple and the theory is routine. Well for an application paper, we should not expect the algorithm to be quite complex."
The primary objective behind designing a simple algorithm was to ensure its practicality on a large scale and its user-friendly implementation for practitioners. By maintaining simplicity, our intention was to facilitate its smooth incorporation into current infrastructure, promoting widespread adoption. Tech companies often have access to parallel processing tools like Spark, which can efficiently compute moment information such as sample mean and sample variance. Leveraging these readily available computations, practitioners can effortlessly employ our algorithm without encountering significant technical hurdles.
Furthermore, the simplicity of our approach brings another important advantage: the ability to control the tails of budget spent without resorting to rare event simulation. To keep the cost of the experiment below a set budget with high probability, one typically needs to generate thousands of simulations of how the experiment might unfold, conditioned on the observations collected so far, and adjust the ramp schedule so that budget violation occurs only under a small portion (e.g. 1%) of the generated scenarios. Given the large volume of incoming users, conducting rare-event simulations would be extremely challenging and computationally expensive. Our method circumvents this issue by using only the moment information, and thus provides a practical solution without the need for complex simulations.
>"the theory is routine..."
We believe that our theoretical result, Theorem 3.1, could be of interest to Bayesian bandit designs in settings where there is a need to adaptively infer an unobserved or partially observed budget while not depleting that budget with high probability (see the example of clinical trials in our global response). As far as we know, this result represents a novel contribution that could be very useful to practitioners, although we acknowledge that the induction argument used to prove the theorem may not seem very challenging to theorists.
In contrast to a more routine union-bound based approach where the total budget is partitioned and allocated to each stage a priori (e.g. fix $T$ stages and allocate $B/T$ budget to each stage and apply union bound), our approach involves an adaptive breakdown of the overall risk constraint into stage-wise constraints, as demonstrated in Theorem 3.1. Notably, this adaptive breakdown
1. allows for carrying over unused budget from previous iterations to future ones,
2. enables the stage-wise constraints to adapt based on past data (e.g., the constraints will relax as the algorithm learns that the new feature is indeed safe),
3. does not require $T$ to be fixed a priori, and
4. allows the experimenter to adjust the stage-wise budget allocation ($b_t$) and stage-wise tolerance ($\Delta_t$) on the fly while keeping the global constraint unchanged (e.g., the experimenter may reserve a portion of the budget for later if they anticipate more adjustment of the released feature).

---

Summary: The paper presents an algorithm for conducting automated phased release strategies that balance risk and reward by controlling the risk of ruin while maximizing ramp-up speed. The authors propose a framework that models the problem as a constrained batched bandit problem and uses an adaptive Bayesian approach. The algorithm is designed to be efficient and parallelizable, and is claimed to be robust to model misspecifications.
Strengths: Originality:
The tasks and methods presented in the paper are relatively new and provide a novel approach to phased release strategies in the technology industry.
The work combines well-known techniques, such as constrained batched bandit problems and adaptive Bayesian approaches, in a unique and valuable way.
Quality:
The submission is technically sound, with the proposed algorithm built upon a solid theoretical foundation.
The claims are well supported by both theoretical analysis and experimental results, including simulations and a semi-real LinkedIn data experiment.
Weaknesses: Clarity in Section 3: The authors introduce the concept of the risk-of-ruin-constrained (RRC) experiment, but the problem definition and objectives of the proposed algorithm are not explicitly presented. In the abstract and introduction, the authors mention to balance risk control and maximize ramp-up speed. However, the paper lacks a clear explanation of how the algorithm's design specifically contributes to maximizing ramp-up speed. The algorithm's goal appears to be to output the treatment group size while satisfying the budget constraint. A clearer problem definition and a more direct explanation of the algorithm's goals would help readers better understand the problem being addressed and the significance of the proposed solution.
Maximizing ramp-up speed: In the experimental results, the authors mention that a larger budget (B) and higher risk tolerance can lead to faster ramp-ups. However, the connection between the algorithm's design and its ability to achieve faster ramp-ups is not explicitly discussed. Upon a closer examination of the paper, the authors propose a decomposition scheme in Theorem 3.1, which breaks the risk constraint into stage-wise constraints. This approach helps control the current-stage cumulative experiment cost, given past observations, and determines the treatment assignment based on the posterior inference of the remaining budget. However, the paper could further elaborate on how these stage-wise constraints or the algorithm's design contribute to optimizing ramp-up speed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In Figure 1 (b), the line plot does indeed reach close to zero and then increase. This behavior can be attributed to the algorithm adapting to the realized outcomes. When the outcome is negative, the algorithm may still use the remaining budget to explore further, rather than stopping the experiment immediately. It is possible that the algorithm is designed to learn from past observations and adjust treatment assignments accordingly, which could explain this behavior. However, it would be helpful if the authors provided more insights into the reasons behind this behavior and whether any modifications could be made to the algorithm to address this concern.
In line 172, the mention of "some prior estimate" is not specific about which parameter should be determined first. The authors could clarify this by providing examples of which parameters are typically estimated in practice, such as the treatment effect or the variance of outcomes.
In line 5 of Algorithm 1, it appears to be a typographical error. The correct values for w should be 0 and 1 (i.e., w = 0,1), as these values correspond to the treatment and control groups in the paper.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Revise Section 3 to provide a clearer problem definition and discuss the objectives of the proposed algorithm in more detail.
Explain the design aspects of the algorithm that contribute to maximizing ramp-up speed, and how these aspects are balanced with the need to control risk. The paper could benefit from additional experimental results that compare the proposed algorithm with baseline methods in terms of ramp-up speed under the same violation rates. This would help demonstrate the algorithm's effectiveness in balancing risk and quickly ramping up compared to other methods. The authors may consider conducting more experiments or simulations to showcase the superiority of their approach in this regard.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes

---

Rebuttal 1:
Rebuttal: We appreciate the reviewer's careful reading of our paper. We have addressed their feedback in the responses below. We kindly request that the reviewer reconsider their score, taking into account the suggested revisions and the attached one-page PDF simulations that address their concerns about baseline comparison.
>"Clarity in Section 3: The authors... proposed solution."
>"Revise Section 3...more detail."
To further clarify the problem definition and the objective of the algorithm, which is to maximize ramp-up speed under budget constraints, we plan to add the following to Section 3:
At each stage $t=1,\ldots,T$, $N_t$ users enter the experiment; the experimenter must then determine the number of users assigned to treatment, denoted by $m_t\in [0, N_t/2]$. The objective is to maximize $\sum_{t=1}^T m_t$ while ensuring that the experiment satisfies the RRC experiment conditions (within budget with high probability by the end of the experiment).
Our strategy is to decompose the overall constraint (i.e., that the experiment is RRC) into a sequence of stage-wise adaptive constraints using Theorem 3.1. Then we solve a sequence of sub-problems: maximize $m_t$ under the stage-wise constraint for the $t$-th stage. These stage-wise constraints simplify to (11) under the Gaussian model, and we can find the maximum $m_t$ satisfying (11) by solving a simple quadratic equation defined in line 193 with coefficients in (12). In this sense, our algorithm solves a relaxed version of the original optimization problem of finding an RRC experiment that maximizes $\sum_{t=1}^T m_t$. We, however, emphasize that the way the original problem is relaxed has crucial practical implications. See the next point, as well as our global response.
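To make the stage-wise maximization concrete, here is a minimal sketch of choosing the largest $m_t$ for which an upward-opening quadratic constraint $a m^2 + b m + c \le 0$ holds, capped at half the incoming batch. The coefficients here are hypothetical stand-ins for those in Eq. (12), which depend on the posterior statistics:

```python
import math

def max_treatment_size(a, b, c, n_half):
    """Largest integer m in [0, n_half] at or below the larger root of
    a*m**2 + b*m + c = 0, assuming a > 0. The coefficients stand in for
    Eq. (12) in the paper; the values used below are hypothetical."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return 0  # the constraint is violated for every m
    larger_root = (-b + math.sqrt(disc)) / (2 * a)
    return max(0, min(n_half, math.floor(larger_root)))

# Hypothetical stage: m**2 - 10*m + 9 <= 0 holds for 1 <= m <= 9,
# and the batch allows at most n_half = 20 treated users.
m_t = max_treatment_size(1.0, -10.0, 9.0, 20)  # -> 9
```

As the posterior treatment effect improves, the feasible region of the stage-wise constraint widens and the returned $m_t$ grows, which is the ramp-up behavior described above.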
>"Maximizing ramp-up speed:...ramp-up speed."
The algorithm initializes from a conservative prior, which ensures that $m_t$ starts small in the first few stages. The algorithm then observes experiment outcomes and updates the posterior at each stage. Note that the stage-wise constraints are set up to be adaptive (Eq. (2) conditions on all past information $\mathcal{F}_{t-1}$). So, if the feature turns out to be safe, the stage-wise constraints will relax in response: Eq. (11), which is Eq. (2) under the Gaussian model, will hold for larger $m_t$ if the posterior treatment effect takes a larger, positive value. Since we always choose the largest $m_t$ satisfying the stage-wise constraint, this leads to the ramp-up of $m_t$, that is, more users assigned to treatment. Note that the stage-wise constraints also account for other posterior statistics (e.g., posterior variance; cf. Eq. (2)) when determining the ramp size.
In short, our algorithm achieves fast ramp-up because it maximizes $m_t$ within stage-wise constraints that both uphold the global constraint and incorporate all information collected throughout the experiment.
>"In Figure 1 (b), the line plot...this concern."
The reviewer's assessment is accurate. In Figure 1(b), we observe our algorithm's response to an unfavorable released feature. Initially cautious due to the conservative prior, the algorithm gradually ramps up as it updates the posterior. However, the budget eventually depletes due to the persistent negative effect, leading to the experiment being ramped down and terminated.
We clarify that this behavior aligns with our desired objective: utilize the available budget to maximize precision in estimating the treatment effect, even when it is negative. In other words, the algorithm is designed to allocate as many users to treatment as the budget allows. Also note that, during the release, the product team may update the feature to incorporate the early negative feedback received so far. Therefore, a sub-par release might improve after the initial few stages.
>"In line 172, the mention ... outcomes."
Based on our experience, the algorithm is robust to these choices and performs well as long as these parameters are set conservatively. For instance, one can simply set $\mu_0(0)=\mu_0(1)=0$ and $\sigma_0(0)^2, \sigma_0(1)^2, \sigma(0), \sigma(1)$ to reasonably large values, as we did in the simulation. This ensures that the experiment begins with a small number of users assigned to treatment in the first iteration.
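For illustration, a standard normal-normal conjugate update with known observation variance shows why a conservative prior (mean 0, large $\sigma_0^2$) is quickly overridden by data. This is generic Bayesian machinery, not necessarily the paper's exact implementation:

```python
def normal_posterior(mu0, var0, obs_var, ys):
    """Posterior mean/variance for a Gaussian mean with prior
    N(mu0, var0) and i.i.d. observations ys ~ N(mu, obs_var).
    Standard conjugate update; a sketch, not the paper's exact code."""
    n = len(ys)
    if n == 0:
        return mu0, var0
    ybar = sum(ys) / n
    post_var = 1.0 / (1.0 / var0 + n / obs_var)
    post_mean = post_var * (mu0 / var0 + n * ybar / obs_var)
    return post_mean, post_var

# Conservative prior N(0, 100); ten observations near 1.0 dominate it.
mean, var = normal_posterior(0.0, 100.0, 1.0, [1.0] * 10)
```

Because only the sample mean and count enter the update, it can be computed from moment statistics alone, consistent with the algorithm's use of moment information rather than simulation.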
>"In line 5 of Algorithm 1, it appears to be a typographical... paper."
This is indeed a typo. We thank the reviewer for the careful reading.
>"Explain the design aspects of the algorithm... this regard."
The baseline algorithm is a Thompson sampling-based bandit (Appendix G). Unaware of any budget, it balances the risk-reward trade-off using a scalar parameter $c$: a small $c$ balances user allocation for faster ramp-up, while a large $c$ favors assigning users to the better-performing group, reducing risk.
If we understand correctly, the reviewer suggests that we tune $c$ so that the experiment has the same budget violation probability under both methods and then compare whether our algorithm indeed ramps up faster. We provide such an example in the one-page PDF, where we tune $c$ so that the Thompson sampling bandit's budget violation probability is 3.7% in the adverse NTE setting (line 245), matching the violation probability achieved by our algorithm in the same setting. Keeping the same tuning parameters, we show that our algorithm indeed ramps up faster in the PTE and NPTE settings (PTE: line 244; NPTE: line 246).
We stress, however, that the baseline algorithm's main weakness is accurately determining $c$, without oracle knowledge, to achieve a target budget violation probability. In our one-page PDF simulation, finding an optimal $c$ required trial and error, i.e., repeating the same experiments many times (and thus used oracle knowledge). Thus, our manuscript does not emphasize comparing ramp-up speeds with an optimally tuned baseline. Instead, we showed how the baseline performs for different values of $c$, and how it is often overly cautious or overly aggressive.
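For intuition, a generic Thompson-sampling-style assignment rule with a risk parameter is sketched below. The exact baseline is defined in the paper's Appendix G; the particular way $c$ tempers the posterior draws here is our hypothetical stand-in:

```python
import random

def ts_assign(post_treat, post_control, c, rng):
    """Assign one user: draw a mean for each group from a Gaussian
    posterior (mean, variance / c) and pick the group with the better
    draw. Larger c sharpens the draws, favoring the better-performing
    group (hypothetical form of the risk parameter; see Appendix G of
    the paper for the baseline actually used)."""
    mu_t, var_t = post_treat
    mu_c, var_c = post_control
    draw_t = rng.gauss(mu_t, (var_t / c) ** 0.5)
    draw_c = rng.gauss(mu_c, (var_c / c) ** 0.5)
    return 1 if draw_t > draw_c else 0  # 1 = treatment, 0 = control

rng = random.Random(0)
# With equal posteriors, assignment stays close to an even split.
even_ish = sum(ts_assign((0.0, 1.0), (0.0, 1.0), 0.01, rng) for _ in range(2000))
# With a higher treatment mean and large c, almost all users go to treatment.
sharp = sum(ts_assign((1.0, 1.0), (0.0, 1.0), 100.0, rng) for _ in range(2000))
```

This illustrates the tuning difficulty noted above: the mapping from $c$ to the realized risk of over-allocating users is not known a priori, so targeting a specific budget violation probability requires trial and error.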
---
Rebuttal Comment 1.1:
Comment: I appreciate the comprehensive explanation you provided in response to my questions. It has given me a clearer understanding of the problem and your proposed solution. I found your explanation helpful and enlightening.
As a result, I will be increasing my score for your paper.
---
Reply to Comment 1.1.1:
Comment: We're glad the reviewer found our responses adequate and is willing to adjust their acceptance score! However, it seems that on our side the score remains unchanged (5, Borderline accept). We just want to quickly check whether this still reflects the reviewer's stance or whether there might be a delay in updating. In the former case, we'd appreciate the opportunity to address any remaining reservations the reviewer may have.

---

Summary: The authors address the problem of finding a risk-sensitive strategy for phased releases. A model that involves a risk budget is proposed, and an algorithm based on Bayesian updates is presented to find a solution. The proposed algorithm is empirically tested on a range of problem setups and is shown to outperform existing bandit algorithms.
Strengths: The problem setup and the proposed model for phased release seem novel to me. Although this is arguably a rather narrow and specific problem, the authors do contribute some useful ideas in applying the Bayesian approach to a challenging problem, which could be valuable to the NeurIPS community. The paper is easy to read and overall presentation is clear.
Weaknesses: The proposed model is simple enough to avoid the need for tedious sampling-based solutions. However, one wonders whether some of the parameters (such as the risk budget) can be easily instantiated in real-world scenarios. The i.i.d. assumption simplifies the analysis but again one wonders whether it would lead to overly conservative or overly risky solutions in practice.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Page 9 L259 I suppose this is NPTE rather than PNTE. It does make me wonder why there are no PNTE scenarios in the experiments.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time spent reading our paper and encouraging remarks. We would like to take this opportunity to respond to the questions raised.
>"Although this is arguably a rather narrow and specific problem, the authors do contribute some useful ideas in applying the Bayesian approach to a challenging problem, which could be valuable to the NeurIPS community."
We agree with the reviewer that we are fundamentally solving the phased release problem. It's important to note that phased release stands as a significant challenge in its own right, affecting nearly every tech company due to the widespread use of A/B tests. The reviewer's perception that the problem is narrow and specific might stem from the fact that the bandit community has yet to explore this problem extensively, as most bandit algorithms have been tailored for more "mainstream" applications such as recommender systems. The phased release problem presents a distinctive array of challenges, which renders existing bandit algorithms unsuitable for direct application.
With that in mind, we would also like to point out that the ideas presented in our paper are valuable in other scenarios where there is a need to adaptively infer an unobserved or partially observed budget whilst not depleting the budget with a high probability. For instance, this often happens in clinical trials, where balancing the trade-off between treating patients and experimenting with different treatment options is critical; our approach can explicitly quantify and control the extent to which subjects' treatment efficacy can be sacrificed (i.e., the budget) in return for more thorough experimentation on underperforming treatment options.
>"However, one wonders whether some of the parameters (such as the risk budget) can be easily instantiated in real-world scenarios."
Setting the right budget and risk-of-ruin parameters is crucial in practical applications. In fact, our approach was motivated by the attempt to quantify the "latent budgets" that nearly all product teams have in mind when conducting phased release. For example, product teams may terminate the experiment if they observe a large drop in purchases of a certain service upon the feature update (i.e., a negative treatment effect). They typically have a rough threshold in mind for how large such a negative treatment effect can be for the different business metrics (e.g., revenue, sales, engagement) before they make a subjective decision to ramp down or terminate the experiment. The risk budget parameter effectively quantifies such thresholds and makes them transparent to management. Over time, this will be conducive to a firm-wide standard for releasing features of different categories.
We also note that businesses employing phased releases typically possess a history of experiments with the same set of metrics. The companies can thus leverage historical data to retrospectively test choices of the allocation of budget ($B$) and probabilities of risk ($\delta$). Such retrospective analysis will yield guidelines for how to select these parameters in a way that leads to logical and favorable business outcomes in future experiments.
>"The i.i.d. assumption simplifies the analysis but again one wonders whether it would lead to overly conservative or overly risky solutions in practice."
We acknowledge that the i.i.d. assumption can be restrictive in certain practical settings. For example, a main reason the i.i.d. assumption would be violated is when there is interference between experimental units, meaning that one unit's treatment assignment impacts another's outcome. We agree with the reviewer that this work does not account for this type of violation, potentially leading to sub-optimal designs when it occurs. However, our work was motivated by the setup in the technology industry, where i.i.d. assumptions are common. Properly dealing with interference usually requires much more sophisticated designs and goes beyond the scope of this work. That said, handling interference in our proposed framework would be an exciting area of investigation for future work.
>"Page 9 L259 I suppose this is NPTE rather than PNTE. It does make me wonder why there is no PNTE scenarios in the experiments."
We apologize for the typo. Lines 259 to 265 pertain to NPTE. The PNTE scenario the reviewer refers to is the case where the feature update becomes worse and worse throughout the experiment. We omitted the ramp schedule for the PNTE scenario due to limited space: PNTE resembles NTE in Figure 1 (b). In both cases, the algorithm ramps up initially to explore the true treatment effect, then ramps down as the budget depletes (the treatment effect is negative in NTE, or goes negative in PNTE). We opted to present the more interesting NPTE case, where the algorithm ramps up for true-effect exploration, down for the initial adverse effect, and up again as the effect turns positive.
In practice, NPTE is also much more prevalent than PNTE. The former captures the phenomenon that experimenters can learn from early stages of the experiment and improve their features based on this early feedback. This is indeed a key value in implementing phased release in a product development process. It is very rare for experimenters to actively modify the treatment and make it worse after digesting all the insights in previous iterations.
However, we did plot the budget distribution in an adversarial scenario where the feature is persistently negative and deteriorates as the experiment progresses (see Figure 1, (n)) and discussed how our algorithm may yield overly confident ramps in this scenario (Lines 287-292, 238-239). Basically, if the new feature suddenly becomes much worse than in the previous stages, the algorithm will need at least one stage to learn this before it can ramp down in response.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal, I'll keep my score. | Rebuttal 1:
Rebuttal: **We provide a one-page PDF of the simulation requested by reviewer m8cf in the attachment.**
We extend our gratitude to all the reviewers for their diligent review of our paper, and we highly value the insights they have provided through their feedback. We intend to consolidate our responses into a comprehensive global reply, aiming to offer a synthesized perspective that might be beneficial to the broader readership.
**Broader impact**: Although we are fundamentally solving the phased release problem (an important problem for the tech industry in its own right, but largely overlooked by the bandit community), the ideas presented in our paper are valuable in other settings where there is a need to adaptively infer an unobserved or partially observed budget whilst not depleting the budget with a high probability. For instance, this often happens in clinical trials, where balancing the trade-off between treating patients and experimenting with different treatment options is critical; our approach can explicitly quantify and control the extent to which subjects' treatment efficacy can be sacrificed (i.e., the budget) in return for more thorough experimentation on underperforming treatment options.
**Summary of the setting and the optimization objective**: At each stage $t=1,...,T$, $N_t$ users will enter the experiment; the experimenter must then determine the number of users to be assigned to the treatment, denoted by $m_t\in [0, N_t/2]$ ($N_t/2$ since statistical tests have maximum power under even-split experimentation). The optimization problem is to maximize $\sum_{t=1}^T m_t$ while ensuring that the experiment satisfies the RRC experiment conditions (within budget with high probability by the end of the experiment).
**Summary of our solution**: Our strategy is to decompose the overall constraint (i.e., that the experiment is RRC) into a sequence of stage-wise adaptive constraints using Theorem 3.1. Then we solve a sequence of sub-problems: maximize $m_t$ under the stage-wise constraint for the $t$-th stage. These stage-wise constraints simplify to (11) under the Gaussian model, and we can find a maximum $m_t$ satisfying (11) by solving a simple quadratic equation defined in line 193 with coefficients (12). In this sense, our algorithm solves a relaxed version of the original optimization problem. We, however, emphasize that the way the original problem is relaxed has crucial practical implications. See next two points.
**Why our approach can yield fast ramp-up**: The algorithm initializes from a conservative prior, which ensures that $m_t$ starts small in the first few stages. The algorithm then observes experiment outcomes and updates the prior at each stage. Note that the stage-wise constraints are set up to be adaptive (Eq. (2) conditions on all past information $\mathcal{F}_{t-1}$). So, if the feature turns out to be safe, the stage-wise constraints will relax in response: note that Eq. (11), i.e., Eq. (2) under the Gaussian model, will hold for larger $m_t$ if the posterior treatment effect takes a larger, positive value. Since we always choose the largest $m_t$ satisfying the stage-wise constraint, this leads to the ramp-up of $m_t$, that is, more users assigned to treatment. Note that the stage-wise constraints also account for other posterior statistics (e.g., posterior variances accounting for model uncertainty, cf. Eq. (2)) when determining the ramp size.
In short, our algorithm generates a fast ramp-up since it maximizes $m_t$ within stage-wise constraints that both uphold the global constraint and incorporate all information collected throughout the experiment.
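The loop described above can be sketched in a toy form. Everything here is illustrative (the quadratic coefficients are hypothetical, not the paper's Eq. (12)): each stage picks the largest $m_t$ satisfying a posterior-dependent quadratic constraint, then performs a conjugate Gaussian update, so a safe feature relaxes the constraint and $m_t$ ramps up.

```python
import math
import random

def max_mt(a, b, c, n_cap):
    """Largest m in [0, n_cap] with a*m^2 + b*m + c <= 0 (a > 0).
    Stands in for solving the paper's stage-wise quadratic; the
    coefficients used below are our own illustration."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return 0.0
    root = (-b + math.sqrt(disc)) / (2 * a)  # larger root
    return max(0.0, min(root, n_cap))

def run_experiment(true_effect, stages, n_per_stage,
                   prior_mean=-0.5, prior_var=1.0, noise_var=1.0):
    """Toy ramp-up loop: a conservative prior keeps m_t small at first;
    conjugate Gaussian updates relax the stage-wise constraint as
    evidence accumulates that the feature is safe."""
    rng = random.Random(0)
    mean, var = prior_mean, prior_var
    ms = []
    for _ in range(stages):
        # hypothetical constraint: risk grows with m and with posterior
        # pessimism (negated lower confidence bound on the effect)
        pessimism = -(mean - 2.0 * math.sqrt(var))
        m = max_mt(a=0.01, b=pessimism, c=-1.0, n_cap=n_per_stage / 2)
        ms.append(m)
        # observe a noisy effect estimate, then update the posterior
        n_obs = max(1, int(m))
        obs = true_effect + rng.gauss(0.0, math.sqrt(noise_var / n_obs))
        post_var = 1.0 / (1.0 / var + n_obs / noise_var)
        mean = post_var * (mean / var + n_obs * obs / noise_var)
        var = post_var
    return ms
```

With a positive true effect, the posterior mean rises, the pessimism term turns negative, and $m_t$ jumps to the per-stage cap, mirroring the ramp-up behavior described above.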
**Novelty of Theorem 3.1 as an alternative to the union bound approach**: We believe that Theorem 3.1 could be of interest to bandit designs in settings where there is a need to adaptively infer an unobserved or partially observed budget whilst not depleting the budget with a high probability.
In contrast to a more routine union-bound-based approach, where the total budget is partitioned and allocated to each stage a priori (e.g., fix $T$ stages, allocate $B/T$ budget to each stage, and apply a union bound), our approach involves an adaptive breakdown of the overall risk constraint into stage-wise constraints, as demonstrated in Theorem 3.1. Notably, this adaptive breakdown
1. allows for carrying over unused budget from previous iterations to future ones,
2. enables the stage-wise constraints to adapt based on past data (e.g. the constraints will relax as the algorithm learns that the new feature is indeed safe), and
3. does not require $T$ to be fixed a priori.
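A toy contrast between the two allocation styles (purely illustrative; the paper's actual stage-wise constraints are probabilistic per Theorem 3.1, not deterministic allowances):

```python
def union_bound_allowance(B, T):
    """A priori split: each of T stages may spend at most B/T,
    and unused slack in one stage is lost to later stages."""
    return [B / T] * T

def adaptive_allowance(B, spends):
    """Carryover style: at each stage the entire remaining budget is
    available, so cautious early stages do not shrink later ones, and
    the number of stages need not be fixed in advance."""
    allowances, remaining = [], B
    for s in spends:
        allowances.append(remaining)
        remaining -= s
    return allowances
```

For example, after three cautious stages spending 0.1 each out of a total budget of 1.0, the carryover style still permits up to 0.8 at the fourth stage, whereas a four-way a priori split caps every stage at 0.25.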
As far as we know, this result represents a novel contribution, although we acknowledge that the induction argument used to prove the theorem may not seem very challenging to theorists.
Pdf: /pdf/4563d096d2d8a5d1992f65c622815f1fe0051abc.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Offline Imitation Learning with Variational Counterfactual Reasoning | Accept (poster) | Summary: This paper proposes OILCA, which addresses the scarcity of expert data in offline imitation learning by generating counterfactual samples, to imagine what the expert would do in an unobserved state. OILCA takes the perspective of a Structural Causal Model (SCM). The algorithm consists of four steps:
1) A heuristic expert policy is pretrained with the known expert data.
2) A conditional VAE is trained on the union of expert and supplementary data, with the latent variable being the "exogenous variable" that integrates the information from the task label and transition, to rebuild the possible next state.
3) Sample transitions with task labels from the expert data. Counterfactual next states are generated by applying a do-intervention on the latent variable, and expert action is then generated by the pretrained policy; the counterfactual transition is added back to the expert data.
4) Arbitrary offline imitation learning method with supplementary data is applied on the augmented expert data (DWBC is empirically the best choice).
On many environments with both in-distribution and out-of-distribution datasets, OILCA is shown to be better than multiple baselines. The data augmentation technique of OILCA is orthogonal to other offline IL algorithms and can benefit many of them.
Strengths: **1. The proposed method is sound, and can be combined with multiple offline IL methods.** The idea of generating imaginary (counterfactual) samples to combat the scarcity of expert data is still a novel direction that has proved successful in several works [1, 2]. Furthermore, the algorithm is orthogonal to many offline IL algorithms and is empirically shown to be an amplifier for them, which significantly increases its impact.
**2. Generally, the idea of the paper is clearly conveyed through the authors' writing.** The motivation is clearly presented in the question between line 59 and 60, and the high-level process is clearly conveyed in the pseudocode. The questions asked at the beginning of the experiment section is well-answered and clearly indicates the important takeaways of the section.
**3. The experiment result is solid.** Not only does OILCA work significantly and unanimously better than multiple baselines across in-distribution and out-of-distribution datasets on many environments, but it also improves the performance of the baselines. Moreover, the visualization on the toy environment clearly shows the property and behavior pattern of the algorithm.
**4. The paper also provides theoretical analysis to prove its generalization ability, which addresses a very important problem in the IL/RL community.** Numerous methods, such as entropy regularizers [3] and pessimism [4], have been proposed to address the problem of generalization onto the vast state-action space without data coverage. A theoretical advance in this area is a valuable contribution.
**References:**
[1] D. Hwang et al. Sample Generation for Reinforcement Learning via Diffusion Models. In ICLR, 2023.
[2] C. Lu et al. Synthetic Experience Replay. In ArXiv, 2023.
[3] J. Ho and S. Ermon. Generative Adversarial Imitation Learning. In NIPS, 2016.
[4] G-H Kim et al. LobsDICE: Offline Learning from Observation via Stationary Distribution Correction Estimation. In NeurIPS, 2022.
Weaknesses:
**1. The related work section could be expanded to include more related work.**
a) The work is related to works that use generative models and try to do few-shot adaptation in offline IL, such as FIST [1], PARROT [2], and CEIP [3] (the latter two have RL finetuning, but already work decently well at the offline IL stage). While this work uses generative models in a very different way, those works should be briefly discussed due to the similarity of the problem formulation and the common use of generative models.
b) The key idea of this work is to generate imaginary data to enhance the imitation learning, and the idea of data generation has already been studied by several works (see [1, 2] in the strength section). It would be great to see a discussion about the relation between this work and the existing works in data generation.
The above two parts are currently missing from the related work, which is a weakness in situating the paper in its context.
**2. There are many moving parts in OILCA.** While the authors claim that OILCA is a 2-step solution (line 135-136), the networks that need to be trained include two actors (pretrain & final), one discriminator, and the encoder/decoder of the conditional VAE. Two pairs of them need to be trained jointly. This could be a potential source of instability in training and of extra computational cost (and no computational resources or training time are reported in the paper).
**3. About the pseudocode:**
a) "$D_E\leftarrow$" should be added in front of the current line. This is a value assignment rather than an expression;
b) The meaning of $c$ should be reiterated in the pseudocode for quick understanding.
c) overall, I think the update process and generate process in line 3 and 7 can be clarified more clearly (for example, what is sampled in the beginning, and what is fed into a neural network with what output?) such that the do-intervention process on SCM is more clearly presented. Currently the readers can only make analogy from the definition on line 96-106 without explicit confirmation on notations, which is difficult to comprehend (especially when the parameter notation used in Eq. 2 and the pseudocode is different).
**References:**
[1] K. Hakhamaneshi et al. Hierarchical few-shot imitation with skill transition models. In ICLR, 2022.
[2] A. Singh et al. Parrot: Data-driven behavioral priors for reinforcement learning. In ICLR, 2021.
[3] K. Yan et al. CEIP: Combining Explicit and Implicit Priors for Reinforcement Learning with Demonstrations. In NeurIPS, 2022.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Besides my concerns in the weakness section, I have two questions:
1. Judging from the pseudocode, it is quite likely that a generated counterfactual state is again sampled as expert data and used to generate other counterfactual states. Will this lead to wild compounding error when this process goes too far from a single original expert data source?
2. The pretrained expert policy is directly given in the pseudocode, but it also needs to be trained from original expert data, and thus will also suffer from out-of-distribution decision making. Does this create a chicken-and-egg dilemma, where you need to have a well-trained expert policy to generate more data, but data needs to be generated by a well-trained expert?
Below are my suggestions for the paper. I am happy to increase my score if the suggestions and concerns raised in the weakness section are addressed.
1. Expand related work to include the works discussed in the weakness section.
2. Give explanation for the questions proposed above, and the concern of moving parts.
3. Modify the pseudocode, and possibly the main paper, to more clearly present the do-intervention process and fix the minor issue of weakness 3. a) and 3. b).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed the limitations in the appendix (lines 582-587), and I think it is indeed the most important question that readers could ask: does counterfactually generated data necessarily improve the performance? Will it instead hinder the performance because of the poor quality of generated data? It is a pity that this paper does not answer the question, despite being already a good work.
As for the potential negative societal impact of the work, there is no discussion. I suggest the authors include such a discussion somewhere in the paper; though the work is purely on simulated control environments, work on automated control would inevitably cause some concerns such as misuse of technology and potential job loss.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Dear reviewer srX3:**
Thanks for your review of our paper. Here's our response to the weaknesses, questions, and limitations you have highlighted:
--------
**Weaknesses:**
**W1.The related work section could be expanded to include more related work.**
**A**: Thanks for your suggestions. Firstly, we will follow your recommendations and conduct further research on the usage of generative models for few-shot adaptation in offline imitation learning. We will incorporate these papers into our revised version of the related work section. Secondly, we agree that data generation for imitation learning is highly relevant to our study. Therefore, we will include some representative works in this area in our revised related work section.
**W2. Instability analysis.**
**A**: Thanks for your insightful questions. We admit that our OILCA introduces several additional moving parts (a pretrained actor and the encoder/decoder of the conditional VAE) compared with vanilla DWBC. However, this does not necessarily bring instability in training. First, the stability of the joint training of the final actor and discriminator has been previously proved in the original DWBC paper. Second, the pre-training of the actor is just simple supervised learning on the logged behavior data, whose stability has been widely validated in previous literature on offline IL research. As for the joint training of the encoder and decoder of the conditional VAE in our counterfactual data augmentation module, its stability is supported by the hyperparameter ablation study in our above response to Q2 of Reviewer agsz.
As for the computational resources, we apologize for not providing related details in our submitted version, considering that this is rarely addressed in the relevant literature. For the computation device, we use an NVIDIA RTX A6000 GPU. Here, we further report the overall training time of our OILCA and the baselines as follows. We will add the related analysis to the appendix in the revised version of the paper.
Training time consumption for the in-distribution experiments on the DeepMind Control Suite (corresponding to Table 1 in the paper). All methods are trained for 200 epochs.
**To save space, we put the result tables into the external anonymous link (Unit: sec.)**: https://hackmd.io/@littleshoes/ryaYsJfhn. Please check Table 10 in it.
Training time consumption for the out-of-distribution experiments on CausalWorld (corresponding to Table 2 in the paper). All methods are trained for 200 epochs. **To save space, we put the result tables into the external anonymous link (Unit: sec.)**: https://hackmd.io/@littleshoes/ryaYsJfhn. Please check Table 11 in it.
From the tables above, we can conclude that the additional time consumption brought by our counterfactual data augmentation is acceptable, considering that the offline IL component still occupies the main part of the overall time consumption.
**W3. About the pseudocode:**
**A**: Thanks for your suggestions. We have revised Figure 2(b) (see https://i.postimg.cc/KzWk86kG/Figure2-b.jpg), and we will add the related descriptions to the pseudocode.
--------
**Questions:**
**Q1.Counterfactual state compound error**
**A**: Once we have acquired a proficiently trained counterfactual sample generation model, we randomly select $(s\_t,a\_t,s\_{t+1},c)$ from the expert data collection and generate the counterfactual state $\tilde{s}\_{t+1}$. The generated samples are not utilized for subsequent expert data augmentation. A detailed explanation of this will be provided in the final version to prevent any misconceptions.
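For concreteness, the generation round described above can be sketched as follows. Plain linear maps stand in for the trained conditional VAE encoder/decoder, and all names here are our own illustration, not the paper's code: the exogenous variable $u$ is abducted from an observed transition, replaced via a do-intervention, and decoded into a counterfactual next state that the pretrained expert policy then labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(s, a, s_next, c, W_enc):
    """Abduction: infer the exogenous variable u from the observed
    transition (a linear map stands in for the CVAE encoder; the
    posterior variance is omitted for brevity)."""
    x = np.concatenate([s, a, s_next, c])
    return W_enc @ x

def decode(s, a, u, W_dec):
    """Prediction: reconstruct the next state from (s, a, u)."""
    x = np.concatenate([s, a, u])
    return W_dec @ x

def counterfactual_next_state(s, a, s_next, c, W_enc, W_dec):
    """Three counterfactual steps: abduct u, intervene do(u = u') by
    resampling u from the prior, then decode the counterfactual state."""
    _u_observed = encode(s, a, s_next, c, W_enc)   # abduction (replaced by do())
    u_dim = W_dec.shape[1] - len(s) - len(a)
    u_cf = rng.standard_normal(u_dim)              # intervention on u
    return decode(s, a, u_cf, W_dec)               # prediction

def augment_expert_data(expert_data, policy, W_enc, W_dec):
    """One augmentation round: counterfactual states are generated only
    from *original* expert transitions, so generated samples are never
    fed back in as sources, per the authors' reply above."""
    out = []
    for (s, a, s_next, c) in expert_data:
        s_cf = counterfactual_next_state(s, a, s_next, c, W_enc, W_dec)
        out.append((s_cf, policy(s_cf)))           # pretrained expert labels s_cf
    return out
```

The key design point mirrored here is that `augment_expert_data` iterates only over the original expert collection, which is what prevents the compounding-error loop the reviewer asked about.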
**Q2.Detailed training procedure of pre-trained policy.**
**A**: This question helps us to correct our presentation. We realize that we did not present the detailed training process of $\hat{\pi}\_{E}$. To answer this, we provide the details here.
One main concern is how we know that the interventions won't lead to divergence or to learning useless models.
In our work, we use a pretrained policy $\hat{\pi}\_E$, which can be regarded as the S-Learner [1] in causal inference. Considering the counterfactual decision risk, the objective of the trained policy is $\mathcal{L} = \frac{1}{n}\sum l(s\_t, a\_t) \cdot \frac{p(a\_{t+1}\mid s\_{t+1})}{p^{do(I)}(a\_{t+1}\mid \tilde{s}\_{t+1})}$, where $l$ is the policy training loss. Since the sample weight $\frac{p(a\_{t+1}\mid s\_{t+1})}{p^{do(I)}(a\_{t+1}\mid \tilde{s}\_{t+1})}$ may have a large variance, we clip it with a small value $m$. Moreover, to analyze the decision risk, we can derive a bound on the risk.
[1] Zhang, W., Li, J., & Liu, L. (2021). A unified survey of treatment effect heterogeneity modelling and uplift modelling. ACM Computing Surveys (CSUR), 54(8), 1-36.
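A small sketch of this clipped objective (the two-sided clip range $[m, 1/m]$ is our own assumption; the reply only states that the weight is clipped with a small value $m$):

```python
def clipped_weight(p_obs, p_do, m=0.2):
    """Importance weight p(a|s') / p^{do(I)}(a|s~') from the risk-aware
    objective above, clipped because the raw ratio can have large
    variance. Clipping to [m, 1/m] is one common convention."""
    w = p_obs / p_do
    return max(m, min(w, 1.0 / m))

def counterfactual_risk(losses, p_obs, p_do, m=0.2):
    """L = (1/n) * sum_i l_i * w_i with clipped importance weights."""
    n = len(losses)
    return sum(l * clipped_weight(po, pd, m)
               for l, po, pd in zip(losses, p_obs, p_do)) / n
```

Clipping trades a little bias for a large reduction in variance, which is what keeps the counterfactual risk estimate stable when the intervened density in the denominator is small.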
**Lemma 1** The density of the counterfactuals based on the observations, i.e.
$p^{do(I)}\_{s\_t, a\_t}(\tilde{s}\_{t+1})=\mathbb{E}\_{s\_{t+1} \sim p\_{s\_t,a\_t}(s\_{t+1})}\left[p\_{s\_t, a\_t}^{{do}(I) \mid s\_{t+1}}(\tilde{{s}}\_{t+1})\right]$
This result shows that the density of intervened variable $\tilde{s}\_{t+1}$ is the marginal of the observations.
**Lemma 2.** We have the following lower bound on the log-density of the counterfactuals:
$$\begin{aligned} \log \left(p^{d o(I)}(\tilde{s}\_{t+1}, a\_{t+1})\right) \geq& \mathbb{E}\_{s\_{t+1} \sim p\_{s\_t,a\_t}(s\_{t+1})}\left[\log \left( p\_{s\_t, a\_t}^{{do}(I) \mid s\_{t+1}}(\tilde{{s}}\_{t+1})\right)\right] \\\\ &+\mathbb{E}\_{{u} \sim p({u})}\left[\log \left(p\_{s\_t,a\_t}^{{do}(I)}(\tilde{s}\_{t+1} \mid {u})\right)\right]\end{aligned}$$
Due to the limited space, we only provide the brief proofs in the OpenReview. **For more detailed version, please refers to the Q2 of Reviewer srX3 in our external anonymous link (https://hackmd.io/@littleshoes/ryaYsJfhn).** And we will add them in the final revised Appendix.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thanks for your detailed response; I think it addresses my concerns well. I have no follow-up questions and have changed my score accordingly from 6 to 7.
---
Reply to Comment 1.1.1:
Title: Thanks to Your Response
Comment: Thank you so much for recognizing our work! | Summary: This paper focuses on the problem of offline Imitation Learning, where an agent aims to learn an optimal expert behavior policy without additional interactions with the online environment. This setting widely exists in the real world as it usually consumes a lot of human effort and cost to collect expert data. Using sub-optimal data could be a valuable way for learning policy. The proposed method uses counterfactual data augmentation to generate high-quality data by learning the underlying data generation mechanism with unlabeled data. Then the new data is combined with expert data to train a policy model. Experiment results demonstrate that the proposed method achieves a large margin over multiple baselines.
Strengths: 1. This paper is well-written and easy to follow. Figure 1 is illustrative and helps me understand the core idea of this work.
2. The motivation of using a small portion of expert data with a large portion of unlabeled data is important in real-world tasks. The proposed method that leverages the information in unlabeled data in an unsupervised way helps generate more expert data and therefore trains a better policy model.
3. The proposed method is evaluated in two domains with locomotion and manipulation tasks, which of both are important and have a high impact.
Weaknesses: 1. In the abstract, there is a gap between the statement of poor generalization caused by sub-optimal datasets and the elimination of spurious features. No clear connection between the sub-optimal dataset and spurious feature is mentioned.
2. In line 14 of the abstract, the authors say the in-distribution robustness is improved by their method but robustness is not mentioned before. Do the authors assume that robustness is almost the same as generalization when the testing scenario is “in distribution”?
3. Using counterfactual data augmentation to improve the performance of decision-making models (including RL, offline RL, IL) is already investigated by existing works. The novelty of the proposed method could be limited unless more differences between existing papers and this one are emphasized.
4. In Definition 2, the notations $\tilde{f}_i$ and $\tilde{PA}_i$ are not defined. These notations are important for understanding this definition.
5. I am a little confused about the three-step pipeline of counterfactual inference discussed after Definition 2. The second step is usually for modifying the SCM by removing the structural equations $f$ and replacing the value of $x$ with others. However, this part is not mentioned and the authors only say “denote the resulted SCM as $M_x$”. So, how do we get this $M_x$? Is the value of x changed? Figure 2 enhances my confusion since the do-intervention is only conducted to the exogenous variable $u_i$ rather than $s_t$ or $a_t$. I am not sure if this is still the standard definition of counterfactuals. It is also not consistent with the statement “What action might the expert take if a different state is observed?” since the state is not changed.
6. In section 3.1, the authors say "an additionally observed variable c is needed, where c could be, for example, the time index or previous data points in a time series, some kind of (possibly noisy) class label, or another concurrently observed variable". However, the authors do not mention which one is considered for $c$ in this paper. I can only find some information in the experiment part, which also raises new questions. Does the process of selecting the variable $c$ create multiple environments with different contexts? This is usually considered a multi-task learning setting, which may not hold for a general imitation learning setting, since the collected data usually cannot be clustered with additional context indicators. I hope the authors can provide more information about how to define and select the variable $c$ in the rebuttal.
7. After reading the entire paper, I find that robustness and spurious features are only mentioned in the introduction. The theoretical analysis and experiments do not provide any evidence of improved robustness or the elimination of spurious correlation. In particular, section 4.2 tries to investigate robustness, but I cannot find the setting that supports it. Robustness should be evaluated with small disturbances to the system, including the state, action, and dynamics. After checking Appendix 5.1, I cannot find any disturbance added to the dataset.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Points 5 and 6 of the weaknesses are the main questions I want to ask the authors.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: No potential negative societal impact is mentioned in either the main context or the appendix. One theoretical limitation is discussed in the appendix but some empirical limitations might be ignored. The most important one is how to select and obtain variable $c$, which is not discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer Txp8:
Thanks for your review of our paper. Here's our response to the weaknesses, questions, and limitations you have highlighted:
-------
**Weaknesses**
**W1.No clear connection between the sub-optimal dataset and spurious feature is mentioned.**
**A**: The motivation of our method is rooted in the limitation of expert data in standard offline imitation learning. Given this constraint, we propose a method to augment the expert data, which we believe will lead to better performance of the learned policy. The objective of our method is to obtain an augmented expert dataset for training an improved offline policy. However, a crucial challenge arises: determining the most suitable data augmentation method to use. Drawing inspiration from the advancements in causal inference in RL, we opt for a counterfactual approach to augment the expert dataset.
One of the primary reasons for the poor generalization of a learned policy is the presence of spurious features or spurious correlations. Specifically, a distributional limitation in the training data can cause the learned policy to fail to generalize to unseen data. For instance, in both RL and IL, states may be high-dimensional vectors in which some dimensions are crucial for action selection while others are not. These irrelevant dimensions can introduce spurious correlations into the learned policy. A do-intervention on the states can break this dependence and eliminate the spurious correlation.
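As a concrete illustration of this point (our own toy example, not an experiment from the paper), the sketch below shows how a proxy feature that is spuriously correlated with the causal state dimension can attract weight in a regression-based policy, and how augmenting with data in which that feature is independently redrawn (mimicking a do-intervention) removes the reliance:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Latent causal factor and expert action (the action ignores the proxy).
s = rng.normal(size=n)
a = 2.0 * s + 0.05 * rng.normal(size=n)

# Observed state: a noisy causal dimension and a cleaner spurious proxy.
x_causal = s + 0.5 * rng.normal(size=n)
x_proxy = s + 0.1 * rng.normal(size=n)
X = np.column_stack([x_causal, x_proxy])

# Behavior cloning by least squares leans heavily on the spurious proxy.
w_naive, *_ = np.linalg.lstsq(X, a, rcond=None)

# Counterfactual-style augmentation: redraw the proxy independently
# (breaking the spurious link) while keeping the expert action.
s2 = rng.normal(size=n)
a2 = 2.0 * s2 + 0.05 * rng.normal(size=n)
X2 = np.column_stack([s2 + 0.5 * rng.normal(size=n), rng.normal(size=n)])

w_aug, *_ = np.linalg.lstsq(np.vstack([X, X2]), np.concatenate([a, a2]), rcond=None)

print(w_naive)  # large weight on the proxy dimension
print(w_aug)    # weight shifts back to the causal dimension
```

The naive fit puts most of its weight on the proxy (roughly 1.9 versus 0.1), so it would break under any distribution shift of the proxy; the augmented fit largely restores the causal weight.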
**W2.In distribution robustness**
**A**: See our robustness explanation (W5) to Reviewer agsz.
**W3.The novelty of the proposed method.**
**A**: To the best of our knowledge, our method is the first to specifically address counterfactual data augmentation in offline IL. Additionally, we thoroughly reviewed relevant papers on RL and synthesized our findings in the related work section. To provide further clarity on this point, we offer additional details below:
[1] Lu, C., Huang, B., Wang, K., Hernández-Lobato, J. M., Zhang, K., & Schölkopf, B. (2020). Sample-efficient reinforcement learning via counterfactual-based data augmentation. arXiv preprint arXiv:2012.09092.
This paper proposes counterfactual RL algorithms to learn both population-level and individual-level policies; it focuses on the sequential decision-making setting and on obtaining an individual-level policy.
[2] Pitis, S., Creager, E., & Garg, A. (2020). Counterfactual data augmentation using locally factored dynamics. Advances in Neural Information Processing Systems, 33, 3976-3990.
This paper proposes a counterfactual data augmentation method using locally factored dynamics, which can generate causally valid samples under the global model.
[3] Pitis, Silviu, et al. "Mocoda: Model-based counterfactual data augmentation." Advances in Neural Information Processing Systems 35 (2022): 18143-18156.
This paper applies a learned locally factored dynamics model to an augmented distribution of states and actions to generate counterfactual transitions for RL.
Our work utilizes the variational counterfactual method to generate counterfactual samples in the standard offline IL, distinguishing it from the three most related works mentioned above. Additionally, we provide theoretical analysis for identifying both the causal effect and the exogenous variable. As far as we know, this is the first time that identifiable variational counterfactual reasoning is used in offline imitation learning.
**W4.Definition 2 Notation.**
**A**: See our definition answer (W3) to Reviewer agsz.
**W5.Confusion about the three-step pipeline.**
**A**: Thanks for your questions; we realize that Figure 2b was misleading, so we have revised the figure as follows:
https://i.postimg.cc/KzWk86kG/Figure2-b.jpg
The SCM is typically considered as a predictive model. In our paper, Figure 2(a) presents the SCM, while Figure 2(b) illustrates the do intervention. The changed value of x corresponds to the generated $\tilde{s}\_{t+1}$. In conjunction with Definition 2, Figure 2(b) demonstrates that we replace the function $f\_u$ with $\tilde{f}\_u$, and $u\_{t+1}$ is changed to $\tilde{u}\_{t+1}$, corresponding to the change in $\mathbf{PA}\_i$ to $\tilde{\mathbf{PA}}\_i$. Consequently, we generate the new $\tilde{s}\_{t+1}$, corresponding to the change in $\mathbf{X}$. The prediction of $\tilde{a}\_{t+1}$ provides an answer to the counterfactual question, aligning with the statement.
**W6.The obtainability of $c$.**
**A**: Thanks for the thoughtful questions. This variable is readily obtainable, particularly in real-world scenarios. In the context of autonomous driving, the variable $c$ represents the diverse road conditions encountered during a single driving mission, encompassing information about various exogenous variables. In systems exhibiting periodic cycles, such as the days of the week from Monday to Sunday, the $d$-th day can be represented by the variable $c$. While such additional variables may not be prevalent in many imitation learning studies, considering the exogenous variable always allows an appropriate label to be found. Even the original paper [1] proposing this auxiliary variable places no strict constraints on its interpretation. This characteristic makes $c$ readily observable, particularly in numerous real-world scenarios.
[1] Khemakhem, I., Kingma, D., Monti, R., & Hyvarinen, A. (2020, June). Variational autoencoders and nonlinear ica: A unifying framework. In International Conference on Artificial Intelligence and Statistics (pp. 2207-2217). PMLR.
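To make the role of $c$ concrete, here is a minimal sketch (our own illustration with hypothetical context labels and parameters, not the paper's learned model) of an iVAE-style conditional prior $p(u \mid c)$, where each value of the auxiliary label indexes its own Gaussian distribution over the exogenous variable:

```python
import numpy as np

# Hypothetical per-context prior parameters: each value of the auxiliary
# label c (e.g. road type, weekday, env seed) indexes its own Gaussian
# prior over the exogenous variable u, as in an iVAE conditional prior.
prior_table = {
    "highway":  (np.array([0.0, 0.0]),  np.array([1.0, 0.5])),
    "mountain": (np.array([0.5, -0.5]), np.array([0.3, 1.2])),
}

def sample_u(c, rng):
    """Draw the exogenous variable u from the context-conditioned prior p(u|c)."""
    mu, sigma = prior_table[c]
    return rng.normal(mu, sigma)

rng = np.random.default_rng(0)
u = sample_u("highway", rng)
print(u.shape)  # (2,)
```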
**W7.Robustness experiment**
**A**: See our robustness explanation (W5) to Reviewer agsz.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thanks for addressing my concerns. I have some follow-up questions.
W1: I appreciate the detailed response from the authors, but it seems that my question still remains. I understand that the main contribution of this paper is counterfactual data augmentation, but the relation between generalization and spurious correlation illustrated by the authors is not clear. Could the authors provide any reference or experimental support for the statement "One of the primary reasons for the poor generalization of the learned policy is the presence of spurious features or spurious correlations."? The other statement, "However, the irrelevant dimensions in the state can introduce spurious correlations into the learned policy.", also does not make sense to me. One situation I can think of is a special case where the task-irrelevant background is artificially designed to be spuriously correlated with the task-relevant features. However, I don't think this is what is designed in the experiment. If the authors just mean some random task-irrelevant background, I don't think there exists any spurious correlation. If the spurious correlation is not explicitly investigated in this paper, I suggest removing the relevant statement (it seems "spurious" only appears once).
W2: what do the authors mean by "in distribution online testing"? Maybe a mathematical definition of robustness is better for me to understand.
W5: according to the review policy, I am not allowed to open external links. Sorry, I misunderstood the statement “What action might the expert take if a different state is observed?” in my initial review. Now I understand that you are predicting the action based on a different state. But my question remains: why does the counterfactual change $u_{t+1}$ rather than directly changing $s_{t+1}$? According to the definition of an SCM, $u_{t+1}$ should be an exogenous variable whose causes are unknown or not included in the model. It is acceptable if the authors want to learn a model to approximate the generation process of $s_{t+1}$, but using $u_{t+1}$ in this process may not be usual.
W6: I still doubt that $c$ is readily obtainable in the real world. For autonomous driving, if $c$ is road condition, how would you label it? And the most important question I want to ask is what $c$ is used in tasks in Table 1 and Table 2.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer Txp8's Follow-up Questions
Comment: Dear Reviewer Txp8:
Thanks for your follow-up questions. To address your concerns and help you better understand this work, we make the corresponding explanations and clarifications as follows.
-----
**W1**
**A**: Thanks for your question and suggestion. As you note, the term "spurious" is mentioned only once in the paper, and we do not explicitly demonstrate its presence in the experiments. This lack of clarity may confuse both you and the readers.
Furthermore, it is important to note that spurious correlation is not a necessary element for us to propose the counterfactual reasoning method. The scarcity of expert data is the main issue in the context of standard offline IL. The unobserved policy interactions serve as the counterfactual data, leading to the counterfactual question raised in the introduction.
We apologize for the ambiguity in our description and appreciate your suggestion. We will remove it and highlight our core focus (scarcity of expert data) in our revised version.
-----
**W2**
**A:** The in-distribution testing follows standard offline imitation learning (IL) with online testing. The reported results are the cumulative reward that the learned policy obtains in the online environment (corresponding to the same task that the offline training dataset comes from):
$E\left(\sum\_{t=0}^{T} \gamma^t R\_t^\pi(s\_{t},a\_{t})\right)$.
In our experiments, **in-distribution** means that the online testing environment and the offline training dataset come from the same task without any additional modification. In our paper, the higher the cumulative reward the offline-trained policy obtains during online testing, the stronger the in-distribution robustness of the offline IL method. We apologize again for the confusion caused by the phrase 'in-distribution robustness' in the paper and promise to revise it in the final version.
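Concretely, the reported quantity is the discounted return averaged over online test episodes; a minimal sketch (our own, with hypothetical helper names):

```python
def discounted_return(rewards, gamma=0.99):
    """Discounted cumulative reward sum_t gamma^t * r_t for one episode."""
    g = 0.0
    for r in reversed(rewards):  # backward accumulation avoids computing gamma**t
        g = r + gamma * g
    return g

def mean_return(episodes, gamma=0.99):
    """Monte Carlo estimate of E[sum_t gamma^t R_t] over test episodes."""
    return sum(discounted_return(ep, gamma) for ep in episodes) / len(episodes)

print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # 1 + 0.5 + 0.25 = 1.75
```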
-----
**W5**
**A:** Thanks for the question. We apologize for the presentation error in Figure 2(b), which caused the confusion.
The standard three steps of Pearl's counterfactual framework [1] are abduction, action, and prediction. In this paper, we strictly follow this framework. In our method, once we have a posterior distribution of $u$, we sample a value $u\_{t+1}$ from it (abduction); this is not the do-intervention. The do-intervention is that we use another state-action pair $(s\_{t},a\_{t})$ to replace the original parents of the current $s\_{t+1}$ (action); this, rather than the sampling of $u$, is the do-intervention. Then, the new $\tilde{s}\_{t+1}$ is generated from the learned counterfactual model $f$ (prediction).
Thanks for your good question; we will revise this figure and the related description in the final version of our paper.
[1] Pearl, J. (2010). Causal inference. Causality: objectives and assessment, 39-58.
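The three steps can be made concrete with a toy additive SCM (our own illustration; in the paper $f$ and the posterior over $u$ are learned, whereas here we assume $f$ is known and the abduction is exact):

```python
# Toy additive SCM: s_next = f(s, a, u) = 0.9*s + 0.5*a + u.
def f(s, a, u):
    return 0.9 * s + 0.5 * a + u

# Factual transition observed in the expert data.
s, a, s_next = 1.0, 2.0, 2.3

# 1) Abduction: recover the exogenous noise consistent with the observation
#    (exact here; in the paper a value is sampled from the posterior of u).
u = s_next - (0.9 * s + 0.5 * a)

# 2) Action: do-intervention replacing the parents (s, a) of s_next with
#    another state-action pair.
s_tilde, a_tilde = 0.5, -1.0

# 3) Prediction: generate the counterfactual next state with the same u.
s_next_cf = f(s_tilde, a_tilde, u)
print(round(u, 6), round(s_next_cf, 6))  # ≈ 0.4, ≈ 0.35
```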
------
**W6**
**A**: In this specific example of autonomous driving, we define $c$ as the road condition, which does not represent traffic details. In fact, $c$ is just a coarse label, such as highway or mountain road, which is easily obtainable; 'road type' may be a more accurate term for $c$. Additionally, $c$ can also denote different time periods for autonomous driving testing, such as the morning or evening peak. These are all easily obtained from the metadata of real-world autonomous driving logged datasets.
In the tasks of Table 1, $c$ corresponds to the various environment initializations in the DeepMind Control Suite used for collecting offline data (refer to lines 307-309 in the paper). Here, environments initialized with different random seeds correspond to the values of the class label $c$. In the tasks of Table 2, features of the environment (stage\_color, stage\_friction, floor\_friction, etc.) are varied to collect diverse offline data (refer to lines 324-328 in the paper). During each data collection process, a feature value is modified to differ from its initial environment value, and we regard environments with different values of these features as different values of $c$.
------
Besides, thank you for your kind reminder, and we apologize for the improper use of the external link; we did not notice this point in the rebuttal policy. We would argue, however, that the provided external link is fully anonymous, so we did not violate the anonymity principle. We promise not to use any external links in the following discussion. Thanks again for your careful review, insightful questions, and kind suggestions. We hope our response addresses your concerns and look forward to further discussion if you have any follow-up questions. | Summary: The paper proposes a novel learning framework, OILCA, for offline IL, which generates counterfactual data to augment scarce expert data. The authors analyze the disentanglement identifiability of the constructed exogenous variable and the counterfactual identifiability of the augmented counterfactual expert data. The experiments, especially on the CausalWorld benchmark, demonstrate the effectiveness of the method.
Strengths: 1. It is timely to introduce causal inference to offline reinforcement learning and offline imitation learning; this topic is valuable to the RL community;
2. The paper is overall well-written and easy to follow for me;
3. The way of data augmentation through counterfactual inference makes sense;
4. The experiment is almost sufficient to demonstrate their method.
Weaknesses: This paper can be improved in several areas:
Writing and structure:
1. In this paper, it's not clear what the spurious relations under the MDP structure are. Could you explain them based on Figure 2? Notably, there appears to be a direct causal relationship between $u$ and $s$, which should not be categorized as a spurious relation.
2. The definition of do-intervention from line 96 to 119 is a little confusing. Specifically, the definition of do on lines 96~96 and in Figure 2(b) replaces $f$ with another function $\tilde f$ while keeping $u$, but on line 117 the interpretation of do changes to keeping the function $f$ unchanged and replacing $u$ with $\tilde u$. It seems that line 117 is more in line with the actual implementation. Could the authors clarify this point?
3. The explanation in Section 2.3 of why iVAE solves the identifiability problem is a bit vague. This is important context, especially regarding the introduction of the auxiliary variable (the $c$ defined in Section 3.1) and its role in identifiability. It might be beneficial to move some content about $c$ from Section 3.1 to 2.3 to better inform the reader about the preliminaries.
Related work:
The authors do not mention any related work in the main body of the paper. Consider moving some of the related work from the appendix into the main text. Moreover, there have been several recent studies applying causal inference to data augmentation for RL. The authors could compare their method with these works and discuss the similarities and differences. They might also consider using these methods as baselines for comparison:
- Sample-Efficient Reinforcement Learning via Counterfactual-Based Data Augmentation
- MOCODA: Model-based Counterfactual Data Augmentation
- Counterfactual data augmentation using locally factored dynamics
- Offline Reinforcement Learning with Causal Structured World Models
Method:
1. The authors rely on a pretrained expert policy in the data augmentation process. How was this expert policy obtained? If we assume that $\mathcal{D}_E$ is used to train the expert policy, then given that this paper claims $\mathcal{D}_E$ is very limited, it should be challenging to reconstruct the expert policy adequately from $\mathcal{D}_E$. Consequently, the constructed $\tilde s$, $\tilde a$ might differ from the actual data distribution in $\mathcal{D}_E$. How can we ensure that using the augmented dataset from the unreliable $\tilde{\pi}_e$ will have a positive effect on the downstream imitation task?
2. The authors state that they ultimately use Equation (2) for data augmentation, but Equation (2) uses $p(u|c)$ to generate $u$, not the posterior-based method described on line 101, which should use $q(u|s,a,s',c)$ instead. The authors need to clarify the inconsistency between these two points.
3. The authors claim that their contribution is to prove that their method can improve the generalizability of imitation learning, which is a bit of an overclaim. Their theoretical analysis only shows that increasing the amount of data can improve the accuracy of imitation learning, a relatively trivial conclusion that holds for any machine learning task. A theoretical analysis should explain why data generated by the counterfactual data augmentation method in particular can better improve the algorithm's generalizability. Otherwise, it raises the question: would data augmentation not reliant on counterfactual inference yield the same improvement?
Experiments:
1. To my knowledge, the original DeepMind Control Suite environments do not have exogenous variables. How did the authors construct the experimental environment and dataset to fit their proposed setting? Supplementary note: if this is not achievable, I don't think experiments on MuJoCo are necessary. The authors could consider designing more experiments to validate their algorithm's effectiveness on benchmarks designed for causal inference, such as:
- Alchemy: A benchmark and analysis toolkit for meta-reinforcement learning agents
- Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning
2. While the experimental results seem significant, the ablation study and visualizations are insufficient. The authors could examine the differences between their data and data generated by other methods, the clustering of the reconstructed exogenous variables, and whether the corresponding transitions or policy behaviors meet expectations.
3. The description of the baselines is insufficient. The authors should emphasize how the baselines utilize $\mathcal{D}_E$ and $\mathcal{D}_U$ respectively.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see weaknesses
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NAN
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer XaVg:
Thanks for your review of our paper. Here's our response to the weaknesses, questions, and limitations you have highlighted:
-------
**Writing and structure:**
**1.In this paper, it's not clear what the spurious relations under the MDP structure are.**
**A**: We follow a standard causal MDP setting [1][2][3], in which the relationship between the state and the exogenous variable is the same as ours.
[1] Kazemi, M., & Paoletti, N. (2022). Towards Causal Temporal Reasoning for Markov Decision Processes. arXiv preprint arXiv:2212.08712.
[2] Oberst, M., & Sontag, D. (2019, May). Counterfactual off-policy evaluation with gumbel-max structural causal models. In International Conference on Machine Learning (pp. 4881-4890). PMLR.
[3] Tsirtsis, S., De, A., & Rodriguez, M. (2021). Counterfactual explanations in sequential decision making under uncertainty. Advances in Neural Information Processing Systems, 34, 30127-30139.
**2.The definition of do-intervention from line 96 to 119 is a little confusing.**
**A**: Since this definition and the related figure were confusing, we have revised Figure 2(b),
https://i.postimg.cc/KzWk86kG/Figure2-b.jpg
and we will revise the related description in the final version.
**3.The explanation in Section 2.3 on why IVAE solves the identifiability problem is a bit of vague.**
**A**: Thanks for the question, we will move part of the Section 3.1 into the preliminary.
-------
**Related Work:**
**1.The authors do not mention any related work in the main body of the paper.**
**A**: The related work is presented in the appendix due to space limitations; there we will discuss the related work mentioned above [1][2][3]. Additionally, we have carefully examined [4], which proposes a method called FOCUS comprising two components: causal structure learning for a world model and offline model-based policy learning. In summary, the aforementioned related works are not suitable as baselines: [1] focuses on learning a personalized policy in a sequential decision process; [2,3] generate counterfactual samples using local causal models with factored MDPs; [4] leverages causal discovery to improve the world model. In the final version, we will move some of this related work into the main text.
-------
**Method:**
**1.The authors rely on a pretrained expert policy in the data augmentation process. How was this expert policy obtained?**
**A:** See the pre-trained policy answer (Q2) for Reviewer srX3.
**2.Clarify the inconsistency between $p(u|c)$ and $q(u|s,a,s',c)$.**
**A:** There is a mistake in the pseudocode for generating counterfactual states: they are generated using the posterior rather than the prior. We will correct this in the final version.
**3.The authors claim that their contribution is to prove that their method can improve the generalizability of imitation learning, which is a bit of overclaim**
**A:** For this question, we have presented the distribution of the counterfactual samples mentioned earlier. Drawing on the existing literature on counterfactual data augmentation [1][2], we aim to show that the samples generated by OILCA can be treated as part of $D_E$ with theoretical guarantees; we therefore analyze the outcomes from the standpoint of sample numbers.
[1] MOCODA: Model-based Counterfactual Data Augmentation
[2] Counterfactual data augmentation using locally factored dynamics
-------
**Experiment:**
**1.The original DEEPMIND CONTROL SUITE environment does not have exogenous variables.**
**A:** In the DeepMind Control Suite, different noise exists in the environment dynamics depending on the environment initialization; these variations can be considered different distributions of the exogenous variable. Additionally, we use the DeepMind Control Suite because it is commonly employed in the papers proposing these baselines, so comparing the results lets us assess the effectiveness of our training framework.
As for the causal benchmark, we evaluate the performance using the causal world dataset. This benchmark is widely used for robotics control and closely aligns with the standard offline imitation learning setting.
Furthermore, we explore the above-mentioned benchmarks for causal meta-reinforcement learning and model-based reinforcement learning. However, it may require some additional time to integrate them into the standard offline imitation learning setting, and the suitability of these datasets for offline IL needs to be further discussed.
**2.While the experimental results seem significant, the ablation study and visualizations are insufficient.**
**A:** To address the challenge of not having access to the true distribution of the exogenous variable, we visualize the learned exogenous variable alongside the true exogenous variable using a synthetic dataset. Furthermore, rather than conducting an ablation study, we prioritize compatibility as a training framework, since we believe compatibility plays a more crucial role. Considering the limited research on data augmentation in offline imitation learning, combining different data augmentation methods is a valuable contribution in this field; we intend to discuss this aspect further in future work.
**3.The description of the baselines is insufficient. The authors should emphasize how the baselines utilize $\mathcal{D}_E$ and $\mathcal{D}_U$ respectively.**
**A:** We described the baselines in the appendix. To answer this question, we will add more details about how these methods use the expert and unlabeled data.
---
Rebuttal Comment 1.1:
Title: Additional response for weaknesses (Part1)
Comment: **Method W3**
Thanks for your insightful question. We provide a new theoretical guarantee to help clarify the problem, analyzing it from a supervised learning perspective:
**Theoretical guarantee**
Let $\mathcal{X} \subseteq \mathbb{R}^d$ be the input space and $\mathcal{Y} \subseteq \mathbb{R}$ be the label space. We denote by $\mathcal{D}$ the population distribution over $\mathcal{Z}=\mathcal{X} \times \mathcal{Y}$. The $L\_p$ norm of a random variable $X$ is denoted as $\|X\|\_p=\left(\mathbb{E}|X|^p\right)^{\frac{1}{p}}$. Given a set $S= \{ \mathbf{z}\_1, \mathbf{z}\_2, \ldots, \mathbf{z}\_m\}$, we define $S^{\backslash i}$ as the set after removing the $i$-th data point from $S$, and $S^i$ as the set after replacing the $i$-th data point of $S$ with $\mathbf{z}\_i^{\prime}$. Let $[m]=\{1,2, \ldots, m\}$; then for every set $V \subseteq[m]$, we define $S\_V=\{\mathbf{z}\_i: i \in V\}$. In addition, for some function $f=f(S)$, we denote its conditional $L\_p$ norm with respect to $S\_V$ by $\|f\|\_p\left(S\_V\right)=\left(\mathbb{E}\left[\|f\|^p \mid S\_V\right]\right)^{\frac{1}{p}}$. Besides, we denote the total variation distance by $d\_{\mathrm{TV}}$ and the KL divergence by $d\_{\mathrm{KL}}$, respectively.
We let $(\mathcal{Y})^{\mathcal{X}}$ be the set of all measurable functions from $\mathcal{X}$ to $\mathcal{Y}$, $\mathcal{A}$ be a learning algorithm, and $\mathcal{A}(S) \in(\mathcal{Y})^{\mathcal{X}}$ be the hypothesis learned on the dataset $S$. Given a learned hypothesis $\mathcal{A}(S)$ and a loss function $\ell:(\mathcal{Y})^{\mathcal{X}} \times \mathcal{Z} \rightarrow \mathbb{R}\_{+}$, the true error $\mathcal{R}\_{\mathcal{D}}(\mathcal{A}(S))$ with respect to the data distribution $\mathcal{D}$ is defined as $\mathbb{E}\_{\mathbf{z} \sim \mathcal{D}}[\ell(\mathcal{A}(S), \mathbf{z})]$. In addition, the corresponding empirical error $\widehat{\mathcal{R}}\_S(\mathcal{A}(S))$ is defined as $\frac{1}{m} \sum\_{i=1}^m \ell\left(\mathcal{A}(S), \mathbf{z}\_i\right)$.
In this part, we describe the process of DA (data augmentation) mathematically. Given a training set $S$ with $m\_S$ i.i.d. examples from $\mathcal{D}$, we can train a data augmentation model $G$ and denote the model distribution by $\mathcal{D}\_G(S)$; the randomness from training the augmentation model is ignored. In addition, we define the expectation of the model distribution with regard to $S$ as $\mathcal{D}\_G=\mathbb{E}\_S\left[\mathcal{D}\_G(S)\right]$. Based on the trained augmentation model, we can then obtain a new dataset $S\_G$ with $m\_G$ i.i.d. samples from $\mathcal{D}\_G(S)$, where $m\_G$ is a hyperparameter; typically $m\_G=\Omega\left(m\_S\right)$ if DA is utilized. We denote the total number of data points in the augmented set $\widetilde{S}=S \cup S\_G$ by $m\_T$. Besides, we define the mixed distribution after augmentation as $\widetilde{\mathcal{D}}(S)=\frac{m\_S}{m\_T} \mathcal{D}+\frac{m\_G}{m\_T} \mathcal{D}\_G(S)$. As a result, a hypothesis $\mathcal{A}(\widetilde{S})$ can be learned on the augmented dataset $\widetilde{S}$.
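As a numerical companion to this setup (our own sketch; the "augmentation model" here is simply a Gaussian fitted to $S$, standing in for $G$):

```python
import numpy as np

rng = np.random.default_rng(0)

# S: m_S i.i.d. draws from the population distribution D = N(0, 1).
m_S = 500
S = rng.normal(0.0, 1.0, size=m_S)

# Augmentation model G: a Gaussian fitted to S, giving D_G(S).
mu_hat, sigma_hat = S.mean(), S.std()

# S_G: m_G i.i.d. draws from D_G(S); typically m_G = Omega(m_S).
m_G = 1000
S_G = rng.normal(mu_hat, sigma_hat, size=m_G)

# Augmented set S~ = S ∪ S_G with m_T points; the mixed distribution is
# D~(S) = (m_S/m_T) * D + (m_G/m_T) * D_G(S).
S_tilde = np.concatenate([S, S_G])
m_T = len(S_tilde)
print(m_T, m_S / m_T, m_G / m_T)  # 1500 with mixture weights 1/3 and 2/3
```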
To understand the effect of DA, we focus on the generalization error $\left|\mathcal{R}\_{\mathcal{D}}(\mathcal{A}(\widetilde{S}))-\widehat{\mathcal{R}}\_{\widetilde{S}}(\mathcal{A}(\widetilde{S}))\right|$ with regard to the learned hypothesis $\mathcal{A}(\widetilde{S})$. For convenience, we denote it by Gen-error in the remainder.
**Definition 1 (Uniform stability)**. Algorithm $\mathcal{A}$ is uniformly $\beta\_m$-stable with respect to the loss function $\ell$ if the following holds
$$
\forall S \in Z^m, \forall \mathbf{z} \in Z, \forall i \in[m], \sup \_{\mathbf{z}}\left|\ell(\mathcal{A}(S), \mathbf{z})-\ell\left(\mathcal{A}\left(S^i\right), \mathbf{z}\right)\right| \leq \beta\_m
$$
To understand DA, we need to bound the generalization error of the hypothesis $\mathcal{A}(\widetilde{S})$ learned on the dataset $\widetilde{S}$ after augmentation. Formally, we need to bound $\left|\mathcal{R}\_{\mathcal{D}}(\mathcal{A}(\widetilde{S}))-\widehat{\mathcal{R}}\_{\widetilde{S}}(\mathcal{A}(\widetilde{S}))\right|$, which has been defined as Gen-error. Recalling that $\widetilde{\mathcal{D}}(S)$ is the mixed distribution after augmentation, to derive such a bound we first decompose Gen-error as
$$
\mid \text { Gen-error } \mid \leq \underbrace{\left|\mathcal{R}\_{\mathcal{D}}(\mathcal{A}(\widetilde{S}))-\mathcal{R}\_{\widetilde{\mathcal{D}}(S)}(\mathcal{A}(\widetilde{S}))\right|}\_{\text {Distributions' divergence }}+\underbrace{\left|\mathcal{R}\_{\widetilde{\mathcal{D}}(S)}(\mathcal{A}(\widetilde{S}))-\widehat{\mathcal{R}}\_{\widetilde{S}}(\mathcal{A}(\widetilde{S}))\right|}\_{\text {Generaliztion error w.r.t. mixed distribution }} .
$$
---
Reply to Comment 1.1.1:
Title: Additional response for weaknesses (Part2)
Comment: The first term on the right-hand side can be bounded by the divergence (e.g., $d\_{\mathrm{TV}}$, $d\_{\mathrm{KL}}$) between the mixed distribution $\widetilde{\mathcal{D}}(S)$ and the true distribution $\mathcal{D}$; it depends heavily on the ability of the chosen augmentation model. For the second term, note that classical stability bounds cannot be used directly, because the points in $\widetilde{S}$ are not drawn i.i.d. We mainly use a core property of $\widetilde{S}$: $S$ satisfies the i.i.d. assumption, and $S\_G$ satisfies a conditional i.i.d. assumption when $S$ is fixed. Guided by this property, we further decompose this term and use sharp moment inequalities to obtain an upper bound. Finally, we conclude with the following result.
**Theorem 1 (Corollary 8, [1])**. Assume that $\mathcal{A}$ is a $\beta\_m$-stable learning algorithm and the loss function $\ell$ is bounded by $M$. Given a training set $S$ with $m$ i.i.d. examples sampled from the distribution $\mathcal{D}$, then for any $\delta \in(0,1)$, with probability at least $1-\delta$, it holds that
$$
\left|\mathcal{R}\_{\mathcal{D}}(\mathcal{A}(S))-\widehat{\mathcal{R}}\_S(A(S))\right| \lesssim \log (m) \beta\_m \log \left(\frac{1}{\delta}\right)+M \sqrt{\frac{1}{m} \log \left(\frac{1}{\delta}\right)} .
$$
We note that all the generalization bounds mentioned above require a primary condition: data points are drawn i.i.d. from the population distribution $\mathcal{D}$. However, this no longer holds in the DA setting. On the one hand, the distribution $\mathcal{D}\_G(S)$ learned by the augmentation model is generally not the same as the true distribution $\mathcal{D}$; on the other hand, the learned $\mathcal{D}\_G(S)$ depends heavily on the sampled dataset $S$. This property complicates the derivation of a generalization bound for DA.
**Theorem 2 (Generalization bound for DA)**. Assume that $\mathcal{A}$ is a $\beta\_m$-stable learning algorithm and the loss function $\ell$ is bounded by $M$. Given an augmented set $\widetilde{S}$, then for any $\delta \in(0,1)$, with probability at least $1-\delta$, it holds that
$$
\begin{aligned}
\mid \text { Gen-error } \mid & \lesssim \underbrace{\frac{m\_G}{m\_T} M d\_{\mathrm{TV}}\left(\mathcal{D}, \mathcal{D}\_G(S)\right)}\_{\text {Distributions' divergence }}+\frac{M\left(\sqrt{m\_S}+\sqrt{m\_G}\right)+m\_S \sqrt{m\_G} \beta\_{m\_T}}{m\_T} \sqrt{\log \left(\frac{1}{\delta}\right)} \\\\
& +\frac{\beta\_{m\_T}\left(m\_S \log m\_S+m\_G \log m\_G\right)+m\_S \log m\_S M \mathcal{T}\left(m\_S, m\_G\right)}{m\_T} \log \left(\frac{1}{\delta}\right),
\end{aligned}
$$
where $\mathcal{T}\left(m\_S, m\_G\right)=\sup \_i d\_{\mathrm{TV}}\left(\mathcal{D}\_G^{m\_G}(S), \mathcal{D}\_G^{m\_G}\left(S^i\right)\right)$.
[1] Olivier Bousquet, Yegor Klochkov, and Nikita Zhivotovskiy. Sharper bounds for uniformly stable algorithms. In COLT, volume 125, pages 610–626, 2020.
From the above analysis, we can see that our method will have a tighter generalization bound compared to the data augmentation policy.
Moreover, in the field of data augmentation it is common to use a model with weak expressive power to augment data and a model with strong expressive power to learn new algorithms. We will add this discussion in the final revised version.
----
**Experiment**
**W1**
In the DeepMind Control Suite, during the collection of offline data, we apply a random Gaussian perturbation to the action output by the policy. This perturbation is specified in the XML configuration file as an integral part of the environment. Additionally, the distribution of the perturbation differs across different environment initializations (auxiliary variable $c$) due to their initialization seeds. Specifically, different seeds correspond to different means and variances of the Gaussian perturbation via the random number generator. This approach is employed to introduce uncertainty into the environment, thereby aligning with our problem setting.
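As a minimal illustrative sketch of the mechanism described above (the class, parameter ranges, and noise bounds are our own assumptions, not the authors' actual XML-configured setup), seed-dependent Gaussian action perturbation could look like:

```python
import numpy as np

class NoisyActionWrapper:
    """Hypothetical sketch: perturb policy actions with Gaussian noise
    whose mean and standard deviation are derived from the environment
    initialization seed, so different seeds (different values of the
    auxiliary variable c) yield different perturbation distributions."""

    def __init__(self, seed):
        rng = np.random.default_rng(seed)
        # Seed-dependent mean and std via the random number generator.
        self.mu = rng.uniform(-0.05, 0.05)
        self.sigma = rng.uniform(0.01, 0.1)
        self.rng = rng

    def perturb(self, action):
        noise = self.rng.normal(self.mu, self.sigma, size=np.shape(action))
        # Keep actions in the usual [-1, 1] control range.
        return np.clip(action + noise, -1.0, 1.0)

# Two seeds give two different perturbation distributions.
w1, w2 = NoisyActionWrapper(0), NoisyActionWrapper(1)
assert (w1.mu, w1.sigma) != (w2.mu, w2.sigma)
```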
Finally, thank you for your careful review of our work; we will add the above contents to our revised version. | Summary: This paper introduces OILCA, a causality-regularized data augmentation method for offline imitation learning tasks. Overall, the idea is novel to me. The empirical results shown in the experiment section seem very promising. However, to support the claims made in the paper, more experiments are needed. Please see details in my suggestions & questions section.
Strengths: Overall, the idea is novel to me. The authors provide theoretical analysis to support their results. The empirical results shown in the experiment section seem very promising.
Weaknesses: Missing reference:
As your work is related to causality + imitation learning, I believe the work Causal confusion in imitation learning clearly deserves a citation and discussion. Also, there is quite a lot of related work that is not discussed, e.g., from a quick Google search on causality + reinforcement learning:
[1] Pim de Haan, Dinesh Jayaraman, and Sergey Levine. Causal confusion in imitation learning. In NeurIPS, pages 11698–11709, 2019.
[2] Gasse, Maxime, et al. "Causal reinforcement learning using observational and interventional data." arXiv preprint arXiv:2106.14421, 2021.
[3] Sun, H., & Wang, T.. Toward Causal-Aware RL: State-Wise Action-Refined Temporal Difference. arXiv preprint arXiv:2201.00354, 2022.
Suggestions:
The flow of the introduction is a bit awkward to me: the authors mention the difficulty of learning from a mixed dataset containing both expert and random-quality data, but they then answer the counterfactual question of how the expert could perform under different states; this question seems unrelated to the previous discussion. I hope the authors can update this part to better convey their idea and motivations.
The notations in Definition 2 are unclear to me. What does the \tilde mean when it is over PA_i? This is not explained in the paper.
Figure 5 is not mentioned in the text and is therefore confusing: how do you increase the percentage of expert data? Why is the proportion always smaller than 1.0?
It's sort of misleading to call a well-performing method a 'robust' method. (Q2, experiment section). To demonstrate the robustness, the authors should show far more empirical studies rather than a comparison under a single setting.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. **(Very Important) A critical confusion is that, if you can do accurate counterfactual reasoning over other states (not included in the expert trajectory), why not use such a model directly as the learned policy?**
Actually, if I am right, this is a **very strong assumption** for your theoretical proof --- if you assume the counterfactual reasoning is accurate (such that the augmentation is reliable), you have already guaranteed to achieve an improved result.
2. How sensitive is your algorithm to hyper-parameters? This is especially important for offline RL/IL algorithms --- the application scenarios do not permit excessive hyper-parameter tuning, hence a one-for-all hyper-parameter is of great importance. I would like to see an ablation study stress-testing not only OILCA but also other baseline methods.
The underlying question is: while causal augmentation is beneficial, does it add extra instability to the learning process?
3. On the performance gain: Can the authors disclose what settings have been changed over previous algorithms like ORIL and DWBC?
If the training settings are the same, the performance gain can be attributed to the augmented expert data, then what do other algorithms perform under this augmented dataset?
4. How much expert data is needed in augmentation? As it has been shown that Behavior Cloning on clean and high-quality data can perform as well as imitation learning [4], it would be great to show some results demonstrating, with minimal effort, that the data augmentation is effective. --- I'm keen to see under what specific setting BC can be as powerful as those offline IL methods, and how your method's performance surpasses others when you gradually decrease the proportion of expert data.
[4] Mandlekar, Ajay, et al. "What matters in learning from offline human demonstrations for robot manipulation." arXiv preprint arXiv:2108.03298 (2021).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: please see above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer asgz:
Thanks for your review of our paper. Here's our response to the weaknesses, questions, and limitations you have highlighted:
-------
**Weaknesses:**
**W1. Missing references.**
**A**: Thank you for the valuable suggestion. We will include and discuss the raised related works in our revision of the paper.
**W2. The flow of the introduction.**
**A**: Thank you for the suggestion. We will update the introduction part to further illustrate our idea and motivation.
To address the limited expert data, we adopt data augmentation; leveraging the advantages of counterfactual reasoning, we use counterfactual data augmentation, which answers the counterfactual question.
**W3. What does the \tilde mean when it is over PA_i?**
**A**: $\mathbf{PA}_i$ is the parent of $X_i$. $\tilde{\mathbf{PA}}_i$ is the intervened parent of $X_i$. We will clarify it in our revision.
**W4. Figure 5: How do you increase the percentage of expert data? Why is the proportion always smaller than 1.0?**
**A**: Figure 5 presents the experiment analyzing the extent to which the number of augmented expert data influences the performance of the learned policy. This experiment is described in detail in the paper (lines 297-302).
We increase the number of augmented expert samples and concatenate them into $D_E$, until the proportion $|D_E|/|D_U|=1$. So the proportion is always smaller than or equal to 1.0.
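The capped augmentation loop described above can be sketched as follows (the function and variable names are hypothetical, not from the paper's code):

```python
# Hypothetical sketch: grow the expert set D_E with augmented samples
# until the proportion |D_E| / |D_U| reaches max_ratio (1.0), so the
# proportion never exceeds 1.
def augment_until_ratio(d_e, d_u, generate_sample, max_ratio=1.0):
    d_e = list(d_e)
    while len(d_e) / len(d_u) < max_ratio:
        d_e.append(generate_sample())
    return d_e

# Toy usage: one expert pair, four unlabeled samples.
d_e = augment_until_ratio(d_e=[("s0", "a0")], d_u=[None] * 4,
                          generate_sample=lambda: ("s_aug", "a_aug"))
assert len(d_e) / 4 <= 1.0  # proportion capped at 1.0
```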
**W5. In distribution robustness?**
**A**: This robustness refers to the performance of in-distribution online testing, rather than robustness to additive noise.
We apologize for any potential confusion and we will address and revise this in the final version.
-------
**Questions:**
**Q1. (Very Important) Assumption: the counterfactual reasoning is accurate.**
**A**: We thank the reviewer for raising this concern.
Even when the counterfactual reasoning process is not perfect, our framework can provide a trade-off between the information brought by the augmented data and the causal model accuracy.
Experiments show that our framework can indeed bring additional performance gains.
**Q2. Algorithm's sensitivity to hyper-parameters.**
**A**: Due to the space limitation here, we put our detailed rebuttal for this question into an external anonymous link (https://hackmd.io/@littleshoes/ryaYsJfhn). Please check the Q2 of Reviewer asgz in it.
**Q3. What settings have been changed over previous algorithms like ORIL and DWBC?**
**A**: Due to the space limitation here, we put our detailed rebuttal for this question into an external anonymous link (https://hackmd.io/@littleshoes/ryaYsJfhn). Please check the Q3 of Reviewer asgz in it.
**Q4. How much expert data is needed in augmentation?**
**A**: Due to the space limitation here, we put our detailed rebuttal for this question into an external anonymous link (https://hackmd.io/@littleshoes/ryaYsJfhn). Please check the Q4 of Reviewer asgz in it.
---
Rebuttal Comment 1.1:
Title: Revisement to the Response Above
Comment: **1. A Revised version of our response to Q1**
**Q1. (Very Important) Assumption: the counterfactual reasoning is accurate.**
**A**: We thank the reviewer for raising this concern.
Even when the counterfactual reasoning process is not perfect, our framework can provide a trade-off between the information brought by the augmented data and the causal model accuracy.
If the benefits introduced by the augmented data are larger than the harm brought by the model errors, then our framework can overall bring effectiveness.
Experiments show that our framework can indeed bring additional performance gains.
**2. A little typo: the Reviewer asgz above should be agsz.**
Thanks again for your careful reviews. Looking forward to your feedback and further discussion.
---
Rebuttal 2:
Title: Follow-up question
Comment: I sincerely appreciate the authors' response and their diligent work in providing additional empirical evidence.
According to the current results, the counterfactual reasoning policy does not perform well, indicating the augmented data does not perfectly match expert quality.
To make the ablation study concrete, I believe another important experiment is to compare the performance of BC + augmented data. In this way, the performance gain of each element can be transparent. As has been shown in [1] (which is cited in your paper), BC is able to achieve on-par performance with IL with high-quality data.
If time permitted, I would also like to see another important comparison: the performance gap when increasing the expert data proportion using 1. the golden expert; and 2. your method.
I understand the discussion period is coming to an end; if the authors could kindly provide an initial experiment on one or two environments, it would be much appreciated. I believe those results will help a lot for further discussion among reviewers. Those experiments will provide general readers with a more comprehensive understanding of the performance (ability, pitfalls) of your work and further enhance its impact.
---
_**Reference**_
[1] Mandlekar, Ajay, et al. "What matters in learning from offline human demonstrations for robot manipulation." arXiv preprint arXiv:2108.03298 (2021).
---
Rebuttal Comment 2.1:
Title: Additional Experiments (Part 1)
Comment: Dear Reviewer agsz:
Thanks for your appreciation of and encouraging response to our rebuttal. Here, to make the ablation study more concrete and better demonstrate the effectiveness of our proposed method, we follow your suggestions and conduct the following supplementary experiments. Please forgive the limited scope of the additional experiments given the limited time; we have tried our best to accomplish them.
---------
**Additional Experiment 1:**
In fact, our experiment results for BC + augmented data were provided in our response to Q3 in our initial rebuttal; in detail, it is Table 6 in that external link. We understand you may have overlooked it due to the external link. Here, we provide it again as follows. (CA indicates the counterfactual data augmentation.)
| Task Name| BC-exp| BC-exp+CA| BC-all| BC-all+CA| ORIL| ORIL+CA| BCND| BCND+CA| LobsDICE| LobsDICE+CA| DWBC| OILCA(DWBC+CA)|
|------------------|------------------|------------------|------------------|------------------|------------------|------------------|------------------|------------------|------------------|-------------------|------------------|------------------|
|Cartpole Swingup| 195.44 $\pm$ 7.39 | 367.31 $\pm$ 13.28 | 269.03 $\pm$ 7.06 | 436.85 $\pm$ 17.02 | 221.21 $\pm$ 14.49 | 426.79 $\pm$ 12.09 | 243.52 $\pm$ 11.33 | 452.68 $\pm$ 12.86 | 292.96 $\pm$ 11.05 | 398.27 $\pm$ 21.34 | 382.55 $\pm$ 8.95 | 608.38 $\pm$ 35.54 |
| Cheetah Run | 66.59 $\pm$ 11.09 | 94.73 $\pm$ 8.22 | 90.0 $\pm$ 31.74 | 125.19 $\pm$ 18.20 | 45.08 $\pm$ 9.88 | 78.44 $\pm$ 6.95 | 96.06 $\pm$ 16.15 | 158.62 $\pm$ 8.85 | 74.53 $\pm$ 7.75 | 89.65 $\pm$ 12.04 | 66.87 $\pm$ 4.60 | 116.05 $\pm$ 14.65 |
| Finger Turn Hard | 129.20 $\pm$ 4.51 | 186.97 $\pm$ 17.46 | 104.56 $\pm$ 8.32 | 152.38 $\pm$ 11.67 | 185.57 $\pm$ 26.75 | 227.94 $\pm$ 15.47 | 204.67 $\pm$ 13.18 | 284.29 $\pm$ 12.03 | 190.93 $\pm$ 12.19 | 237.83 $\pm$ 24.91 | 243.47 $\pm$ 17.12 | 298.73 $\pm$ 5.11 |
| Fish Swim | 74.59 $\pm$ 11.73 | 164.35 $\pm$ 12.91 | 68.87 $\pm$ 11.93 | 137.98 $\pm$ 6.74 | 84.90 $\pm$ 1.96 | 156.92 $\pm$ 8.18 | 153.28 $\pm$ 19.29 | 268.56 $\pm$ 6.03 | 188.84 $\pm$ 11.28 | 229.24 $\pm$ 13.62 | 212.39 $\pm$ 7.62 | 290.29 $\pm$ 10.07 |
| Reaching | 281.18 $\pm$ 16.45 | 608.77 $\pm$ 15.42 | 176.54 $\pm$ 9.75 | 527.61 $\pm$ 10.58 | 339.40 $\pm$ 12.98 | 652.21 $\pm$ 7.05 | 228.33 $\pm$ 7.14 | 582.44 $\pm$ 9.07 | 243.29 $\pm$ 9.84 | 461.38 $\pm$ 14.05 | 479.92 $\pm$ 18.75 | 976.60 $\pm$ 20.13 |
| Pushing | 256.64 $\pm$ 12.70 | 343.80 $\pm$ 9.79 | 235.58 $\pm$ 10.23 | 356.17 $\pm$ 13.81 | 283.91 $\pm$ 19.72 | 367.46 $\pm$ 6.31 | 191.23 $\pm$ 12.64 | 320.94 $\pm$ 10.37 | 206.44 $\pm$ 15.35 | 263.74 $\pm$ 12.84 | 298.09 $\pm$ 14.94 | 405.08 $\pm$ 24.03 |
| Picking | 270.01 $\pm$ 13.13 | 391.55 $\pm$ 12.07 | 258.54 $\pm$ 16.53 | 415.39 $\pm$ 14.42 | 388.15 $\pm$ 19.21 | 458.03 $\pm$ 13.95 | 221.89 $\pm$ 7.68 | 486.32 $\pm$ 8.03 | 337.78 $\pm$ 12.09 | 439.16 $\pm$ 18.46 | 366.26 $\pm$ 8.77 | 491.09 $\pm$ 6.44 |
| Pick and Place | 294.0 $\pm$ 7.34 | 385.16 $\pm$ 9.34 | 225.42 $\pm$ 12.44 | 351.27 $\pm$ 11.21 | 270.75 $\pm$ 14.87 | 372.18 $\pm$ 10.74 | 259.12 $\pm$ 8.01 | 393.59 $\pm$ 7.81 | 266.09 $\pm$ 10.31 | 357.11 $\pm$ 16.28 | 349.66 $\pm$ 7.39 | 490.24 $\pm$ 11.69 |
From the table, we can observe that both BC-exp+CA and BC-all+CA perform worse than our OILCA in most tasks. We attribute this phenomenon to the fact that BC's performance is more sensitive to the quality of the training data, though it has the potential to achieve on-par performance with IL given sufficient high-quality data. As discussed above, the counterfactual reasoning policy does not perform well, which means the augmented data is actually of uneven quality, still leaving a distance from high-quality data. On the other hand, BC-exp+CA and BC-all+CA both perform much better than vanilla BC-exp and BC-all, respectively. This demonstrates that our counterfactual data augmentation module can indeed consistently improve imitation learning performance, regardless of the base method selected.
--------
---
Reply to Comment 2.1.1:
Title: Additional Experiments (Part 2)
Comment: **Additional Experiment 2:**
The performance gap when increasing expert data proportion using two kinds of augmented data: 1) sampling with golden expert policy in online environment, 2) our counterfactual data augmentation method. The results are as follows:
**Cartpole Swingup Task**:
|Proportion|Counterfactual Data Augmentation|Golden Expert|
|-------------------------------------|------------------|------------------|
|Original proportion (no augmentation, <10%)|382.55 $\pm$ 8.95|382.55 $\pm$ 8.95|
|10%|430.21 $\pm$ 13.20|441.36 $\pm$ 12.01|
|30%|463.78 $\pm$ 21.95|472.92 $\pm$ 7.62|
|50%|502.81 $\pm$ 20.76|520.15 $\pm$ 15.43|
|70%|557.90 $\pm$ 16.62|562.89 $\pm$ 20.47|
|90%|589.01 $\pm$ 38.29|593.37 $\pm$ 16.81|
|100%|608.38 $\pm$ 35.54|621.80 $\pm$ 9.26|
|200%|596.52 $\pm$ 28.37|634.12 $\pm$ 18.29|
|300%|612.30 $\pm$ 41.25|635.93 $\pm$ 25.15|
|500%|601.47 $\pm$ 27.82|627.47 $\pm$ 22.86|
|1000%|605.81 $\pm$ 31.63|629.94 $\pm$ 23.28|
**Cheetah Run Task**:
|Proportion|Counterfactual Data Augmentation|Golden Expert|
|-------------------------------------|------------------|------------------|
|Original proportion (no augmentation, <10%)|66.87 $\pm$ 4.60|66.87 $\pm$ 4.60|
|10%|71.85 $\pm$ 8.26| 74.56 $\pm$ 3.29|
|30%|86.44 $\pm$ 13.62| 82.06 $\pm$ 9.36 |
|50%|92.60 $\pm$ 16.51| 89.21 $\pm$ 12.98 |
|70%|105.57 $\pm$ 11.29| 111.27 $\pm$ 11.56 |
|90%|113.12 $\pm$ 9.25| 118.32 $\pm$ 15.27|
|100%|116.05 $\pm$ 14.65| 128.07 $\pm$ 8.31|
|200%|106.39 $\pm$ 10.08|132.64 $\pm$ 14.24|
|300%|118.51 $\pm$ 15.72|125.18 $\pm$ 8.73|
|500%|109.96 $\pm$ 9.84| 129.72 $\pm$ 12.34|
|1000%|117.08 $\pm$ 7.69| 124.80 $\pm$ 9.46|
**Finger Turn Hard Task**:
|Proportion|Counterfactual Data Augmentation|Golden Expert|
|-------------------------------------|------------------|------------------|
|Original proportion (no augmentation, <10%)|243.47 $\pm$ 17.12|243.47 $\pm$ 17.12|
|10%|261.77 $\pm$ 14.68|255.62 $\pm$ 18.29|
|30%|269.85 $\pm$ 13.39|272.18 $\pm$ 12.25|
|50%|276.12 $\pm$ 9.82|285.48 $\pm$ 8.36|
|70%|283.69 $\pm$ 12.71|295.83 $\pm$ 13.48|
|90%|288.27 $\pm$ 7.09|306.26 $\pm$ 10.81|
|100%|298.73 $\pm$ 5.11|303.51 $\pm$ 11.67|
|200%|303.64 $\pm$ 12.91|311.70 $\pm$ 9.74|
|300%|301.57 $\pm$ 8.30|305.42 $\pm$ 14.53|
|500%|289.15 $\pm$ 15.27|302.15 $\pm$ 12.16|
|1000%|295.48 $\pm$ 7.84|304.93 $\pm$ 11.19|
From the tables above, we can see that our counterfactual data augmentation still performs slightly worse than augmentation with the golden expert under most proportions, though it achieves an obvious improvement over the other IL baselines. This is reasonable, because the augmented data sampled with the golden expert can be considered sufficiently high-quality. Thus, the performance with such augmented data from the golden expert can be regarded as a kind of upper bound for data augmentation research in the offline IL area.
-----------
Thanks again for your appreciation, insightful review and valuable suggestions to our work. Honestly hope the above empirical evidences can help you better recognize our work and contribute to your discussion with other reviewers. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Globally injective and bijective neural operators | Accept (poster) | Summary: This paper extends several known results on ReLU networks from finite dimensional domains to the more challenging infinite dimensional domains. Namely:
(a) Conditions for injectivity of infinite dimensional ReLU networks are provided
(b) Universality of infinite dimensional ReLU networks was proven
(c) Extensions to networks with non-linear kernels are provided
Strengths: Strengths: From a mathematical standpoint the paper proves some natural and non-trivial theorems
Weaknesses: I think this paper is not appropriate for this venue: it is not clear to me why the ML world should care that much about infinite dimensional ReLU operators, and the authors do not make an effort to explain this. There is certainly room for purely theoretical papers in Neurips and there are many such papers. However, I believe that, even if the proofs are above the paygrade of most Neurips attendees, it should at least be clear to said attendees why the questions are interesting. I do not feel this is the case here
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: To convince me, mostly rebuttal should focus on importance of the problems discussed in the paper.
I note some minor comments and typos I saw while reading the paper:
Line 7 `the case the case'
Line 34 'on its face' not sure that is an expression in english
Line 43, 53 114 and throughout `the equivalent condition' should be `an equivalent condition'
Definition 1 should be more explicit IMHO: What are bias functions? What do the linear kernels do? Better to just write out the formula
Line 105 be *the* ReLU activation
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for your comments and careful eye.
_I think this paper is not appropriate for this venue: it is not clear to me why the ML world should care that much about infinite dimensional ReLU operators, and the authors do not make an effort to explain this. There is certainly room for purely theoretical papers in Neurips and there are many such papers. However, I believe that, even if the proofs are above the paygrade of most Neurips attendants, it should be at least clear to said attendants why the questions are interesting. I do not feel this is the case here._
We could have done a better job of valorizing the results, and emphasizing that our analysis goes far beyond the ReLU case. The ReLU activation function is of interest only in section 2.1. We think that our results, properly valorized, are well-suited for NeurIPS not because of the paygrade of the proofs, but because the results are practical and important. That said, it is clear from your comment and others that our paper still needs more motivation. Therefore, we would like to include some more applications of injectivity \& invertibility. For more details, please see the global comment to all reviewers. We hope that including those comments will better motivate the focus on injectivity/invertibility, and address your criticisms.
_I note some minor comments and typos I saw while reading the paper: Line 7 the case the case'_
Thank you for pointing this out. It has been addressed.
_Line 34 'on its face' not sure that is an expression in english Line 43_
We have replaced the phrase 'on its face' with the phrase 'on first inspection.'
_Line 43, 53 114 and throughout the equivalent condition' should be `an equivalent condition.'_
All have been fixed, thank you.
_Definition 1 should be more explicit IMHO: What are bias functions?_
We have made Definition 1 more explicit by adding the defining equation of $T_\ell$.
Bias functions are analogous to bias vectors. We believe that their meaning will become clear by defining $T_\ell$ more explicitly.
_What do the linear kernels do?_
A linear kernel is a kernel in the sense of a convolution; that is, it is the filter of a convolution: an operator of the form $v \mapsto \int_\Omega k(x - y) v(y) dy$, where $x$ is the independent variable. A nonlinear kernel gives an operator of the form $v \mapsto \int_\Omega k(x,y,v(x),v(y))v(y) dy$, where again $x$ is independent. Note that this map is no longer linear in $v$.
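The distinction can be checked numerically. Below is a minimal sketch (the specific kernels are made-up examples, not the paper's operators) discretizing both operator types on $\Omega = [0,1]$ with a Riemann sum; the convolution operator is linear in $v$, while the nonlinear-kernel operator generally is not:

```python
import numpy as np

n = 100
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

def linear_kernel_op(v, k_lin):
    # (Tv)(x) = \int k(x - y) v(y) dy, approximated by a Riemann sum.
    X, Y = np.meshgrid(x, x, indexing="ij")
    return (k_lin(X - Y) * v[None, :]).sum(axis=1) * dx

def nonlinear_kernel_op(v, k_nonlin):
    # (Tv)(x) = \int k(x, y, v(x), v(y)) v(y) dy; not linear in v.
    X, Y = np.meshgrid(x, x, indexing="ij")
    VX, VY = np.meshgrid(v, v, indexing="ij")
    return (k_nonlin(X, Y, VX, VY) * VY).sum(axis=1) * dx

v1, v2 = np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)
k_lin = lambda d: np.exp(-d**2)                              # example filter
k_nl = lambda X, Y, VX, VY: np.exp(-(X - Y)**2) * np.tanh(VX)  # example kernel

# Linearity holds for the convolution operator ...
lhs = linear_kernel_op(v1 + v2, k_lin)
rhs = linear_kernel_op(v1, k_lin) + linear_kernel_op(v2, k_lin)
assert np.allclose(lhs, rhs)

# ... but fails for the nonlinear-kernel operator.
lhs_nl = nonlinear_kernel_op(v1 + v2, k_nl)
rhs_nl = nonlinear_kernel_op(v1, k_nl) + nonlinear_kernel_op(v2, k_nl)
assert not np.allclose(lhs_nl, rhs_nl)
```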
_Better to just write out the formula. Line 105 be *the* ReLU activation._
You are right, that is more clear. This change has been made.
---
Rebuttal Comment 1.1:
Comment: Thanks for your answers. I still think Neurips is not the right venue for this paper. I would send it to e.g., Acta Numerica... I read the motivation you provided in the main remark and the concepts discussed there are still very vague and abstract to me.
I believe 99% of Neurips audience will feel the same way. | Summary: The paper considers the question of when neural operators, which have infinite-dimensional inputs and outputs, are injective and bijective. This question is answered in quite some generality in different settings and under different assumptions.
Strengths: --- A careful and at times, deep, analysis for the questions under consideration is provided.
Weaknesses: 1. Relevance and Scope: While appreciating the depth of the functional analysis that is presented, this reviewer is left a bit perplexed about the rationale behind all this heavy machinery. The authors do not really motivate why one needs bijective neural operators in the first place. There is some boilerplate on generative models in the discussion but this does not occupy center stage. If the fundamental question is itself not posed properly, the interest of the subsequent analysis becomes rather limited. The authors should clearly motivate why they consider these questions in the first place and explain it well to the reader.
2. Clarity: The paper has excessively abstract notation and presentation. It needs to be reorganized to bring out the main contributions well. Here are some specific questions about the clarity:
a) Why introduce Directed spanning sets in Definition 1 (which is almost impossible to follow as no intuition is provided into what it is) and Proposition 1, when immediately afterwards you have Proposition 2 which provides the answer for a bijective activation function. Who cares about ReLU when you already have a result on Leaky ReLU which is heavily used to begin with.
b) Similarly, what is the use of the local layer-wise analysis when you anyway consider the global end-to-end neural operator in Section 3?
c) In Lemma 1, what does condition 3.1 even mean in practice? How can one check it?
d) The authors have to concede that having a universal approximation theorem does not mean much, see Lanthaler et al and Kovachki et al 2021a where this issue of universal approximation is critiqued as the system size can grow (even double) exponentially. So having a universal approximation theorem does not imply any kind of efficient approximation but it is a necessary condition at best.
e) Does section 3.3 imply that FNO and WNO are injective and bijective neural operators? If not, why not? If yes, under what conditions?
f) Can you find a non-trivial example for section 4.1, apart from Example 1 which appears to be trivial? Has anyone ever implemented example 1? If so, what is its empirical performance?
g) What is the rationale for section 4.2? Why would I want to explicitly compute the inverse of the neural operator -- in what situations is that useful? Can you give an example?
All the questions clearly indicate that the paper is not well-written and needs a substantial rewrite.
3. Concreteness: The main limitation in my view is the lack of exemplification of the results, except for example 1 -- the authors should give many more examples of practical utility, with conditions on the weight matrices, kernels as well as activation functions, so that either available or de novo neural operators fall into their theoretical framework. This will enable the readers to better understand the significance of this work.
4. Novelty: In many places in the paper, the authors refer to Puthawala et al 2022 -- how does the current paper differ from this reference ? A thorough elaboration of these differences is essential for judging the novelty of this paper.
5. Finite-dimensional representations and Aliasing: The authors always assume that one has access to functions as both inputs and outputs. This is far from the case and in practice, one has to work with finite-dimensional representations of the functions, see for instance Fanaskov and Oseledets, Spectral neural operators, 2022 for a discussion on this issue. This use of finite-dimensional representations can lead to what are called aliasing errors. For instance, FNO has aliasing errors -- how do these errors affect your theoretical considerations? In particular, does aliasing destroy the bijection property of the neural operator? Addressing this issue is crucial for the relevance of the theory, particularly in the construction of inverse neural operators
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: Quite a few questions were already asked in the section on Weaknesses. The authors should address them.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: --- No solid rationale is provided for the whole premise of the paper.
--- A lot of abstract theory with few examples, if any. Practical utility of the concept is totally unclear from the current version.
--- Unclear if the framework survives contact with reality in the form of finite-dimensional representation of functions (for instance sampling on a uniform grid) that can lead to aliasing errors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for your comments, careful eye, and fair criticisms.
_Relevance and Scope_
We should have motivated the application of injectivity and invertibility more. We will, please see the global comment.
_Why introduce...begin with._
You make a good point. The for these two cases back-to-back is to use the ReLU activation to 'build a bridge' from the Euclidean case to infinite dimensions, and draw contrasts between them. We did not do a good enough job making this parallel clear, and so we will include a sentence at line 92 saying the following.
"Deriving a condition for layer-wise injectivity with bijective activation functions is trivial in a finite dimensional setting. With a ReLU activation function it requires the so-called Directed Spanning Set (DSS) condition. This condition is not automatic, but will hold with high probability for random weight matrices if they are expansive enough.
In this section we derive a generalization of the DSS condition and show that it is much more restrictive in the infinite dimensional setting. We then present a less restrictive condition that is met when the activation function is bijective, e.g. when a leaky-ReLU activation is used."
_Similarly what...Section 3?_
Although it would appear that the end-to-end condition supersedes the layer-wise one, it is not the case. The layer-wise case is less restrictive and has different applications & implications then global result does not. We didn't do enough to make this apparent in the manuscript. Therefore, we would like to add the following sentences just after Line 97.
"Although it may appear that the end-to-end result is strictly stronger than the layerwise result, this is not the case. The layerwise result is an exact characterization, whereas the end-to-end result is sufficient for injectivity, but not necessary. The layerwise analysis is also constructive, and so gives a rough guide for the construction of injective networks, whereas the global analysis is less so. Finally, the layerwise condition has different applications, such as networks of stochastic depth, see e.g. [Huang et al. 2016] or [Benitez et al. 2023]. Enforcing end-to-end injectivity via layerwise injectivity is straightforward, whereas deriving a sufficient condition for arbitrary depth is more daunting."
_In Lemma...check it ?_
To make Condition 3.1 clearer, and to provide a 'hands-on' example, we would like to include the following text, which shows that Condition 3.1 has a straightforward interpretation. We think that this example provides a reprieve from the abstraction, and gives a simpler takeaway.
"Lemma 1 and Eqn. 3.1 may be interpreted as saying that if _some_ orthonormal sequence $\\{\xi_k\\}\_{k \in \mathbb{N}}$ exists that doesn't overlap the range of $T$, then $T$ may be embedded in a small space without losing injectivity. As an example of such a $\\{\xi_k\\}\_{k \in \mathbb{N}}$, if the range $\mathrm{Ran}(T)$ of $T$ is included in the continuous function space (for instance, a finite rank neural operator with continuous basis like FNOs), then we may choose $\\{\xi_k\\}\_{k \in \mathbb{N}}$ to be a discontinuous basis and the condition is automatically met."
_The authors...at best._
Indeed, having a universal approximation theorem is only a starting point for showing efficient approximation but, we argue, even such a starting point wasn't achieved until recently in the finite-dimensional case. The proof of our universal approximation result is compatible with future efficient (quantitative) approximation results that may arise for neural operators. To make this point clear, we would like to include the following sentences just after Line 185.
"The proof of the universal approximation theorem is constructive. If, in the future, efficient approximation bounds for neural operators are given, such bounds can likely be used directly in our universality proof to generate corresponding efficient approximation bounds for injective neural operators."
_Does section...what conditions?_
In short, it does give such conditions. We have (abstract) conditions that apply perfectly well to FNO or WNO. It all depends on the choice of basis. It is clear that we did not do a good enough job of drawing attention to this in the main text. Therefore we would like to modify lines 218 - 221 to say the following.
"We show that Propositions 1, 2 (characterization of layerwise injectivity/bijectivity), and Lemma 1 (global injectivity) all have natural analogues for the finite-rank operator $K_{\ell,N}$ in Proposition 6 and Lemma 3 in Appendix C. These conditions apply out-of-the-box to both Fourier Neural Operators (FNO) and Wavelet Neural Operators (WNO). We also show universal approximation in the case of finite-rank approximation."
_Can you...empirical performance._
Motivated by your comment, we derived another four examples that illustrate the non-triviality of the results in Section 4.1. The entire derivation and proof are worked out, but are too long to include in this rebuttal. Please see the global comment for the setup of these examples.
_What is...an example?_
There are two main rationales for Section 4.2: one abstract, involving the algebra of groups, and the other involving the justification of operator encoder-type networks. We feel we didn't do a good enough job of bringing them to the fore. Therefore, we would like to add the following sentences just after Line 294.
"The proof that neural operators may be inverted with other neural operators provides a theoretical justification for Integral Auto-Encoder networks (IAE-nets, Ong et al. 2022), which use an encoder/decoder pair paralleling the roles of finite-dimensional VAEs (Kingma & Welling 2023). This section proves that the decoder half of an IAE-net can invert the encoder half. Our analysis also shows that injective differential operators (as arise in PDEs) and integral operator encoders form a formal algebra under operator composition."
---
Rebuttal Comment 1.1:
Title: Reply to the authors' rebuttal
Comment: I start by thanking the authors for their rebuttal and apologize for the delay in responding. The authors have attempted to address several of my comments yet many concerns still remain. I outline them below:
1. Now, the authors motivate the rationale in terms of pseudodifferential operators (PDOs) and claim that their construction is an analogue of this very important concept for neural operators. I respectfully disagree. PDOs are a very general theory for linear PDEs whereas your construction presumably is limited to (possibly) nonlinear neural operators. I don't see the connection about why PDOs are a motivation and why they were not introduced in the first place. In particular, are you claiming that your invertible NOs are a natural framework for solving linear PDEs -- if so, where is the evidence? Again, I don't see a direct relation with Bayesian inverse problems (BIP). Are you saying that your construction will directly solve BIPs? I presume that any solution of BIPs will involve some form of sampling (either MCMC or normalizing flows or diffusion models) and all that is needed is a surrogate of the forward operator -- this surrogate need not be invertible at all -- so I am still not convinced that there is any practical justification behind your construction.
2. The rewriting that you promise for a camera ready version might improve the overall presentation.
3. I am still not able to understand how your results apply to FNO and WNO *out of the box*? Does it mean that FNO is invertible under all conditions -- this is impossible to believe, as FNO works very well for problems such as the diffusion equation where there is no invertibility whatsoever. Please clarify my genuine concern here. Also, what do you mean by a finite-rank operator here -- do you claim that FNO is finite-rank, which it is patently not -- I am confused here.
4. Your 4 examples A-D leave me confused. Does A mean that this is the class for which you have injectivity? B is just a linear operator -- having surjectivity is of little utility for it. C and D are so succinct that it is impossible for me to evaluate them.
5. You did not answer my question about how exactly this paper differs from, and what its novelty is vis-à-vis, Puthawala et al. 2022?
6. You did not answer my question about what happens to your constructions of injective and bijective operators when the data itself is only available as point samples on a grid, as it is in practice?
7. What exactly do you justify about IAE-nets? If its decoder is exactly able to invert the encoder -- so what? The universal approximation theorem (as well as quantitative error bounds) should hold even if the decoder only approximately inverts the encoder -- see constructions in the paper of Lanthaler, Mishra and Karniadakis on DeepONets where such approximation suffices.
Summary: The authors have not convinced this reviewer of the concrete utility of their construction for any practical applications of operator learning. I am looking forward to your replies and apologize again for the delay in responding to your rebuttal.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking time to read and consider our rebuttal. It is clear that you care deeply about improving our work, and we appreciate this. We only wish we had more than 5000 characters to answer your questions.
Our paper is foundational and theoretical, and we feel it provides useful results for future, more applied work. We believe that NeurIPS is the right venue for these kinds of papers. Consider the additional literature suggested by reviewer kfSM. We hope that, like those suggested works, you can agree that our work is rigorous, provides deep insights on an important architecture, and so will likely be well-received at NeurIPS.
1. We are talking about non-linear operators and mappings, and our prime applications are in solving inverse problems. That said, the framework also applies to solving differential equations. To make the connection more concrete, we have an example worked out.
$$
-D_x^2u(x)+su(x)+p(u(x))=f(x)
$$
where $x\in [0,1]$, $u(0)=u(1)=0$, $p(u)$ is a non-linear term, $f$ is a source, and $s>0$. There is a linear integral operator $G:L^2(0,1)\to L^2(0,1)$,
$$
Gf(x)=\int_0^1 k(x,y)f(y)dy,
$$
for which $v=Gf$ solves the linear equation
$$
-D_x^2v(x)+sv(x)=f(x)
$$
with $v(0)=v(1)=0$. Here, $k$ is the Green's function. The equation for $u$ may be rewritten as
$$
u=N(u)
$$
where
$$
N(u)=N_{s,p,f}(u)=G(f-p(u)).
$$
When $s$ is large enough, this non-linear integral-operator equation can be solved using the fixed-point iteration $u_k=N(u_{k-1})$, $u_0=f$. By unrolling the iteration, the result of the $k$-th iteration, $N^k(f)$, can be approximated by a NO. Above, the integral operator $G$ is a pseudodifferential operator which solves the linear equation for $v$, whereas $N^k$ is a NO which gives an approximate solution of the non-linear equation for $u$.
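As a purely illustrative aside (our own numerical sketch, not part of the paper): discretizing the interval with finite differences and assuming, for concreteness, the nonlinearity $p(u)=u^3$ and source $f(x)=\sin(\pi x)$, the fixed-point scheme converges rapidly when $s$ is large:

```python
import numpy as np

# Illustrative sketch (ours, not from the paper): solve
# -u'' + s*u + p(u) = f on [0,1], u(0)=u(1)=0, by the fixed-point iteration
# u_k = G(f - p(u_{k-1})), with G the inverse of the linear part, using a
# standard second-order finite-difference discretization.
n = 100                           # number of interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
s = 10.0                          # large s makes the iteration contractive
p = lambda u: u**3                # assumed nonlinearity, for concreteness
f = np.sin(np.pi * x)             # assumed source term

# L = -D_x^2 + s*I as a tridiagonal finite-difference matrix
L = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2 + s * np.eye(n)

u = f.copy()                      # u_0 = f, as in the iteration above
for _ in range(50):               # u_k = N(u_{k-1})
    u = np.linalg.solve(L, f - p(u))   # each step applies G = L^{-1}

residual = np.linalg.norm(L @ u + p(u) - f)   # near machine precision at the fixed point
```

Unrolling this loop for a fixed number of steps is exactly the kind of map that a neural operator can approximate, per the discussion above.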
We are also interested in inverse problems for PDE of determining the coefficient functions when one is given observations of the (boundary values of) solutions $u_i$ corresponding to various sources $f_i$, $i\le m$. There, the operation
$$
p\to(N_{s,p,f_1})^k(f_1),...,(N_{s,p,f_m})^k(f_m)
$$
is a NO. If this NO is injective, the non-linear function $p$ can be determined when the solutions $u_i$ are observed.
2. We think so too.
3.
- "Does it...all conditions." If each layer of an FNO satisfies the conditions in Prop. 5, then yes.
- "this is...invertibility whatsoever." Theorems 1 and 2 are universal approximation theorems stating that *any* continuous operator (not only injective ones) can be approximated by an injective neural operator. This means that even diffusion problems, which lack invertibility, may be approximated arbitrarily well.
- "what do...patently not." An FNO is not a finite-rank operator. By a finite-rank neural operator, we mean a neural operator whose (non-local) integral operator in each layer is represented by a finitely truncated basis expansion. If the basis is a Fourier basis, it is an actual FNO.
4. We apologize, the rebuttal had tight space constraints. Our machinery shows that problems of the form of A are surjective. For B, note that it is not linear: the term $K$ is nonlinear in $u$. For C, the point is that coercivity is a broadly useful property. See, e.g., Li, Schwab, Antholzer and Haltmeier, Inverse Problems 36, 2020. In this setting, a regularization operator is learned that must be coercive; see condition 2.2.c. For D, the details are involved but, briefly, in quantum mechanics some phenomena may be modeled as nonlinear (or non-physical, in the case of negative energy) perturbations to physical systems. Our coercivity analysis 'does not care' about such perturbations.
5. The results of Puthawala et al. 2022 apply only in the finite-dimensional Euclidean setting. Further, surjectivity becomes nontrivial in the infinite-dimensional case. Notions like bijectivity/injectivity, or closedness under inverses, don't transfer from Euclidean to Sobolev spaces. The current work also treats the non-ReLU activation case much more extensively.
6. Please see our response to reviewer KyUH, Line 215. Bijectivity and/or injectivity holds when the function spaces are suitably approximated by finite-dimensional spaces using a finite set of basis vectors. One can apply techniques similar to those used in Finite Element Method (FEM) analysis, where the weak formulation of a PDE becomes a matrix equation. The infinite-dimensional theory advises on choosing a suitable basis and gives error estimates for such approximations.
7. Please allow us to draw a parallel with a similar line of work. There is great interest in applying DL to fluid dynamics, including divergence-free fluid flow, and there is a bevy of DL models that are universal approximators. Is it necessary to design a neural network that provably models divergence-free flow? No. But designing such a network is natural https://arxiv.org/abs/2210.01741. Encoding injectivity directly into a neural network is similarly natural in applications such as encoding a signal, or modeling invertible processes. | Summary: The authors present theoretical results in the field of operator learning, specifically dealing with operators that are injective and surjective. They build on existing finite-dimensional work and consider the infinite-dimensional case of learning mappings between infinite-rank Sobolev spaces. The paper contributes several theoretical results in this context, including characterizing the conditions for layer-wise injectivity and bijectivity given certain activation functions, universal approximation results for injective linear neural operators, and sufficient conditions for surjective/bijective nonlinear integral operators. The paper lays the groundwork for analysis of learning injective and bijective operators in the infinite-dimensional setting.
Strengths: The paper has several significant theoretical results and answers many natural questions one would have about the theory of injectivity and bijectivity in neural operators. Examples and practical implementation results in finite-rank cases are also described in the paper and appendix, which is helpful and grounds the theoretical results in practice. The writing is very clear and the proofs are clean. Overall, it is a very thorough paper.
Weaknesses: Just a note on writing, too many proofs and examples are black-boxed and put in the appendix. At least a description of various proof techniques in the main text would be helpful. The paper would also benefit from more motivation of injective/bijective models in downstream tasks/generative modeling, describing any particular successes or failures different activation functions have had, etc. More discussion on how this theory should guide practice would also improve the significance of the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Line 107: Is it possible to give a more intuitive description of Definition 2 and why it is called a directed spanning set?
Line 234: Not sure why it is a natural conclusion from using the sup norm that the approximation does not smooth out non-smooth operators. Can you clarify this comment?
Line 215: This is a very interesting remark, can you imagine other orthonormal bases and discuss whether they would be worth formulating as a neural operator architecture?
Line 249: What does it mean for an integral transform whose kernel is the attention mechanism in a transformer to be injective, practically/in a particular problem?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: This theoretical work naturally pertains to tasks where injectivity and bijectivity are desirable, however neural operators are of course used in much broader contexts. As the authors discuss, their contribution is largely theoretical and an important next step for this work would be explicit constructions of injective neural operators/inverses that are good approximators.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful review, suggestions, and strong endorsement. Please find answers to your questions and feedback below.
_Just a note on writing, too many proofs and examples are black-boxed and put in the appendix. At least a description of various proof techniques in the main text would be helpful._
We find it difficult to balance including sufficient 'teasers' for the proofs of the main results against other page-length considerations. We realized, in light of your comment, that the proof of Theorem 2 lacked intuition. We would therefore add a few sentences giving some intuition about the main proof ideas just after line 231.
_The paper would also benefit from more motivation of injective/bijective models in downstream tasks/generative modeling, describing any particular successes or failures different activation functions have had, etc. More discussion on how this theory should guide practice would also improve the significance of the paper._
This point is well taken. We could have done a better job of motivating the work and discussing applications. Please see our global comment at the top for some more discussion that we'd like to include.
_Line 107: Is it possible to give a more intuitive description of Definition 2 and why it is called a directed spanning set?_
Yes, it is possible. To make the definition more digestible, and to explain the name, we would like to add the following text just after line 110, between Def. 2 and Prop. 2. This addition gives some intuition for the $\ker(T|_{S(v,T+b)})$ term of Eqn. 2.1.
"The name directed spanning set arises from the kernel term of Eqn. (2.1). The indices of $S(v,T + b)$ are those that are directed (positive) in the direction of $v$. If the restrictions of $T$ to these indices together span $L^2(D)^n$, then the kernel term is $\{0\}$, and the condition is automatically satisfied. Hence, the DSS condition measures the extent to which the set of indices that are directed w.r.t. $v$ spans the input space of $\mathcal L$."
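As a finite-dimensional toy illustration of this intuition (our own sketch, not from the paper): for a ReLU layer $x \mapsto \mathrm{ReLU}(Wx+b)$, injectivity at $x$ hinges on whether the rows of $W$ that are "directed" (positive) at $x$ span the input space, and for an expansive random matrix this spanning condition holds for typical inputs:

```python
import numpy as np

# Finite-dimensional toy sketch (ours) of the DSS intuition: for
# x -> ReLU(Wx + b), check whether the rows of W active at x span R^n,
# which makes the kernel term {0}.
rng = np.random.default_rng(0)
n, m = 4, 40                      # input dimension, layer width (expansive)
W = rng.standard_normal((m, n))
b = rng.standard_normal(m)

def directed_rows_span(x):
    """Check whether the rows 'directed' (active) at x span R^n."""
    active = W @ x + b > 0
    return np.linalg.matrix_rank(W[active]) == n

# For many random inputs, the active rows span the input space, mirroring
# the claim that the DSS condition holds with high probability for
# expansive random weight matrices.
samples = rng.standard_normal((200, n))
all_span = all(directed_rows_span(x) for x in samples)
```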
_Line 234: Not sure why it is a natural conclusion from using the sup norm that the approximation does not smooth out non-smooth operators. Can you clarify this comment?_
The issue of smoothing out nonsmooth operators was a feature of some prior work and is not present in our work, but this point wasn't clearly made in Remark 1. We would like to modify the content of Remark 1 to make this point clearer. Please see our proposed new text for Remark 1 below.
"Observe that in our finite-rank approximation result, we only require that the target function $G^+$ is continuous and bounded, but not smooth. This differs from prior work that requires smoothness of the function to be approximated."
_Line 215: This is a very interesting remark, can you imagine other orthonormal bases and discuss whether they would be worth formulating as a neural operator architecture?_
We're glad that you found it interesting! Your follow up question is interesting as well. We can't think of any other NO architecture that can be formed by a particular choice of basis. We would, however, like to include the following remark just after line 217 to make this point more clear.
"Lemma 2 and Remark 3 in Appendix C.3 give a 'recipe' to construct the projection $B$ such that the composition $B \circ T$ (interpreted as augmenting the finite-rank neural operator $T$ with one layer $B$) is injective.
The projection $B$ is constructed by using an orthogonal sequence $\{\xi_k\}_k$ subject to condition (3.1), i.e., one that does not overlap the range of $T$.
This condition is automatically satisfied for any orthogonal basis $\{\varphi_k\}_k$.
That is, if we find an orthogonal sequence $\{\xi_k\}_k$ that is 'easy' to compute, then we can construct an 'easy' projection $B$.
This could have practical implications in guiding the choice of the orthogonal basis $\{\varphi_k\}_k$ in the neural operator's design."
_Line 249: What does it mean for integral transform with kernel the attention mechanism in a transformer to have be injective, practically/in a particular problem?_
Appealing to the analogy that the attention mechanism allows a layer to 'focus' on particular pieces of a signal, an injective attention mechanism could be considered one that 'allows focus, but not blindspots.' Here, a 'blindspot' means a (complete) loss of information.
Section 2 discusses the injectivity of a single layer of neural operators. For the ReLU activation, the injectivity of the layer is characterized by the directed spanning set. On the other hand, if the activation is bijective, the injectivity of the layer is equivalent to the injectivity of its constituent.
Section 3 of the paper examines how injectivity can be preserved when composing layers. The paper shows that integral neural operators with L^2-integral kernels possess a universal approximation property for continuous operators. A truncated series expansion-based approximation, implementable for this family, is demonstrated to maintain the universal approximation property.
Section 4 considers nonlinear integral operators and presents a sufficient condition for a layer in this class to be bijective. Additionally, this document proposes a method for constructing the inverse of a nonlinear integral neural operator layer. The inverse is expressed as a limit of integral neural operators.
Strengths: [originality]
- As far as I understand, this paper is the first to provide a rigorous framework for analyzing the injectivity and bijectivity of neural operators.
[quality]
- The paper is written with mathematical rigor, and there are no noticeable flaws in the logical development. However, I have not been able to check the proofs in detail.
[clarity]
- The sections have clear purposes, with concise explanations of the goals in each section.
[significance]
- This paper presents universality results and identifies conditions under which injectivity or bijectivity can be guaranteed. These theoretical foundations can improve the models and facilitate their application by increasing their reliability and enhancing understanding.
Weaknesses: [originality]
- None in particular
[quality]
- The paper contains a noticeable number of grammatical and typographical errors. It is recommended that an up-to-date automatic checker is used to revise the paper and correct these issues.
- The model classes, including layers, can be assigned specific symbols to make them stand out in the paper.
- Line 33 mentions "infinite-rank Sobolev spaces," but I could not find the definition of the notion of rank of a Sobolev space in the paper or elsewhere on the internet.
[clarity]
- Throughout the introduction, it remains unclear what is meant by "finite-rank case" and "infinite-rank case." This issue may be resolved if the definition of the rank of Sobolev spaces is provided or if it is made clear that it refers to the rank of the target operator.
[significance]
- The theoretical results presented in the paper do not appear to have any immediate tangible implications for the models or learning algorithms of neural operators.
- The motivation of the paper needs to be elaborated as it is currently unclear why it is important to understand the conditions under which the neural operators have injectivity or bijectivity.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Please kindly correct me if any of my understandings in the above comments are incorrect.
- Line 32: Should "infinite-dimension setting" be "infinite-rank setting"?
- Line 33: What does "infinite-rank Sobolev space" mean? I have not been able to find a definition of the rank of Sobolev spaces either in this paper or elsewhere.
[minor suggestions]
- Line 27 “a operators” → “an operator”
- Line 21 “Bayesian UQ” → “Bayesian uncertainty quantification” (if UQ stands for that)
- Line 23 “existence and uniqueness” → “existence and uniqueness of the solutions”
- Line 47 “and their implementation” → “and that their implementation”
- Line 48 “universality approximation” → “universal approximation”
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: None in particular.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful review, suggestions, and endorsement. We have addressed your feedback below.
_The paper contains a noticeable number of grammatical and typographical errors. It is recommended that an up-to-date automatic checker is used to revise the paper and correct these issues._
We apologize for the grammatical and typographical errors. We have reviewed all of the text and typography and corrected the errors we found.
_The model classes, including layers, can be assigned specific symbols to make them stand out in the paper._
We appreciate any suggestions that improve clarity, and want to make sure that we understand the suggestion exactly, and hoped to get more clarification on this point. Is the suggestion to use a specific symbol to represent an injective versus non-injective layer? For example, $\mathcal L^{\textrm{inj}}$ for the injective case and $\mathcal L^{\textrm{bijec}}$ for the bijective case?
_Line 33 mentions "infinite-rank Sobolev spaces," but I could not find the definition of the notion of rank of a Sobolev space in the paper or elsewhere on the internet._
The prefix infinite-rank was meant purely for emphasis, but we agree that it is confusing and the emphasis isn't needed. We have removed it.
_Throughout the introduction, it remains unclear what is meant by "finite-rank case" and "infinite-rank case." This issue may be resolved if the definition of the rank of Sobolev spaces is provided or if it is made clear that it refers to the rank of the target operator._
In order to make this distinction clearer, we have added an additional sentence at the start of the introduction that frames and defines the finite-rank and infinite-rank cases, and draws attention to their essential differences. We think this has made the distinction clearer, and makes the work more accessible.
_The theoretical results presented in the paper do not appear to have any immediate tangible implications for the models or learning algorithms of neural operators._
It is true that our work is principally theoretical and does not make concrete recommendations for e.g. learning algorithms, but we think it has useful applications and takeaways, mainly by giving an algebraic perspective on neural operators, and by laying the groundwork for rigorous application of Bayesian inversion via neural operators. We elaborate on these points more in the global comment.
Additionally, the proof of Theorem 1, which establishes universality of injective operators, proceeds by giving a 'recipe' for constructing an arbitrarily good injective approximator. This recipe can be followed in applications, and so may serve as a guide for injective approximation. We have included statements to this effect just after the statement of Theorem 1.
_The motivation of the paper needs to be elaborated as it is currently unclear why it is important to understand the conditions under which the neural operators have injectivity or bijectivity._
We realized that our motivation could be made even stronger. We have done just this in the global comment above.
_minor suggestions_
Thank you for the suggestions. All of the suggested changes have been made.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
> We appreciate any suggestions that improve clarity, and want to make sure that we understand the suggestion exactly, and hoped to get more clarification on this point. Is the suggestion to use a specific symbol to represent an injective versus non-injective layer? For example, for the injective case and for the bijective case?
Please consider this as just a very small suggestion. What I meant in the review was assigning dedicated symbols to some sets of functions/operators, e.g., denoting (say) $\mathrm{NO}^{\mathrm{injective}}$ to refer to the subset of $\mathrm{NO}_L(\sigma; D, d_{\mathrm{in}}, d_{\mathrm{out}})$ consisting only of injective elements. If a symbol is given to such a set, it may be visually faster to see that, e.g., Theorem 1 describes the approximation ability of such a subset.
---
Reply to Comment 1.1.1:
Title: Thank you for the clarification
Comment: Thank you for the clarification.
Now that we understand, we agree that making such a modification would increase readability, draw attention to the pertinent content of the theorem (injectivity, rather than dimension bookkeeping), and make the theorem & subsequent discussion more digestible overall.
We will certainly implement this notation. | Rebuttal 1:
Rebuttal: We appreciate all the reviewers' valuable comments, and close attention.
We have a few things that we would like to say globally, in response to points brought up by multiple reviewers. We have also replies to individual reviewers below.
Several reviewers remarked that the question of injectivity for neural operators could be better motivated. We agree. To make the motivation of the work stronger, we would like to add two remarks to the paper. The two remarks will appear in the introduction just after Line 39, and will then be briefly restated in the conclusion.
(1) The first remark draws a connection between the algebra of injective/invertible neural operators and pseudodifferential operators. Pseudodifferential operators revolutionized the theory of linear PDEs, and so we believe that making such a connection is valuable and lays the groundwork for an algebraic study of neural operators.
``Our work draws parallels between neural operators and pseudodifferential operators [Taylor, Princeton Mathematical Series 1981], a class that contains many inverses of linear partial differential operators and integral operators. The connection to pseudodifferential operators provided an algebraic perspective on linear PDEs [Kohn \& Nirenberg, Communications on Pure and Applied Mathematics 1965; Shubin, Springer 1987]. An important fact in the analysis of pseudodifferential operators is that the inverses of certain operators, e.g. elliptic pseudodifferential operators, are themselves pseudodifferential operators. By proving an analogous result in Section 4.2, namely that the inverses of invertible NOs are themselves given by NOs, we draw an important and profound connection between (non)linear partial differential equations and NOs.
''
(2) The second remark describes the application of injective neural operators to Bayesian inverse problems. In short, there are many ways to incorrectly discretize inverse problems that make Bayesian methods discretization-dependent. Injective and invertible neural operators are a natural way to do this correctly.
"There are significant benefits to applying Bayesian solution methods to inverse and imaging problems in infinite dimensions. This, for example, allows one to study functions of continuous space & time variables. These infinite-dimensional models can then be approximated by finite-dimensional models without losing discretization invariance, see [Stuart, Acta Numerica 2010]. Crucially, discretization must be done 'at the last possible moment,' or else performance degrades as the discretization becomes finer, see [Lassas-Siltanen, Inverse Problems 2004] and also [Lassas-Saksman-Siltanen, Inverse Problems and Imaging 2009]. By formulating machine learning problems in infinite-dimensional function spaces and then approximating these methods using finite-dimensional subspaces, we avoid bespoke ad-hoc methods and instead obtain methods that apply to any discretization.
One example is the following. Let $\mathcal M$ be the submanifold of $X=L^2([0,1]^2)$ or $X=L^2([0,1]^3)$ corresponding to natural images or 3D medical models. Let $K\subset \mathbb{R}^D$ be a manifold with the same topology as $\mathcal M$, let $\iota:\mathbb{R}^D\to X$ be an embedding, and define $K_1=\iota(K)\subset \mathcal M$. Given $\mu$, a measure supported on $\mathcal M$, the task is to find a neural operator $f_\theta\colon X\to X$ that maps (pushes forward) the uniform distribution on the model space $K_1$ to $\mu$ and thus maps $K_1$ to $\mathcal M$. If $f_\theta\colon X\to X$ is bijective, computing likelihood functions in statistical analysis is made easier via the change of variables formula. Further, we may interpret $f_\theta^{-1}$ as an encoder and $f_\theta$ as the corresponding decoder, which parameterizes elements of $\mathcal M$. As everything is formulated in the infinite-dimensional function space $X$, we obtain discretization-invariant methods.''
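For reference, the change of variables formula mentioned in the quoted passage can be written as follows in a finite-dimensional discretization $X_n \cong \mathbb{R}^n$ (this rendering is our addition for illustration, not part of the rebuttal's quoted text):

```latex
% Likelihood of x under the pushforward measure \mu = (f_\theta)_\# \pi,
% where \pi has density p_\pi and f_\theta is bijective with differentiable inverse:
p_\mu(x) \;=\; p_\pi\!\left(f_\theta^{-1}(x)\right)\,
               \left|\det D f_\theta^{-1}(x)\right|
```

This is why bijectivity of $f_\theta$ matters for likelihood computation: both $f_\theta^{-1}$ and its Jacobian determinant must be available.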
Motivated by the remarks of Reviewer ZQ9v, we would also like to include the following examples of other functions to which the machinery developed in Section 4.1 applies. We think that these additions flesh out the applications of that section.
The four points below describe these additional results:
(A) We can obtain surjectivity (by establishing coercivity) of neural operators of the form $F(u) = \sigma_1(Wu + \sigma_2(K(u)))$, where $\sigma_1$ is bijective and $\sigma_2$ is bounded.
(B) We can obtain surjectivity (by establishing coercivity) of neural operators of the form $F(u) = \alpha u + K(u)$, where $K(u) = \int_D a(x,y,u(x),u(y))\,dy$, $a(x,y,s_1,s_2)$ is continuous, and there exist $R > 0$ and $c_1 < \alpha$ such that for all $|(s_1,s_2)| > R$, $\mathrm{sign}(s_1)\mathrm{sign}(s_2)\,a(x,y,s_1,s_2) \geq -c_1$.
(C) In imaging applications, layer-wise coercivity of neural networks has been a useful property; see for example [Li et al., Inverse Problems, 2022].
(D) We may also show that coercivity is preserved by perturbations in a bounded domain. This makes it possible to study non-linear and non-positive perturbations of physical models. For example, in quantum mechanics, a non-negative energy potential $|\phi|^4$ may be replaced by a Mexican hat potential $-C|\phi|^2 + |\phi|^4$, as occurs in the study of magnetization, superconductors, and the Higgs field. | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This study investigates the injectivity and bijectivity of neural operators (NOs) in the infinite-rank setting, which is less investigated than their finite-rank counterparts, such as invertible flow networks. This study is based on the finite-rank analysis by Puthawala et al. (2022a). In previous work, Alberti et al. (2022) showed global injectivity of an infinite-rank NN based on wavelet expansion. The NO in consideration is formulated as a composite of hidden layers of the form $(\mathcal{L}v)(x) := \sigma( T(v)(x) + b(x) )$. In Section 2, the iff conditions for injectivity of linear NOs are stated. In particular, Prop 2 (DSS condition for ReLU) is shown by extending the finite-rank results of Puthawala et al. (2022a), and Prop 3 (for bijective activations) is shown based on Fredholm theory. In Section 3, the $cc$-universalities of injective linear NOs (Theorem 1) and injective finite-rank NOs (Theorem 2) are shown. It is remarkable that, unlike the finite-rank results, Theorem 1 does not require any assumption relating the input and output dimensions. In Section 4, a nonlinear NO with each layer of the form $\sigma \circ F_1(u) = \sigma(Wu + K(u))$, which covers attention mechanisms, is considered; sufficient conditions for surjectivity and bijectivity are stated using the Leray-Schauder fixed point theorem (Props 3 and 4), and the inverses of bijective nonlinear NOs are constructed (Theorem 3).
Strengths: - Modern deep learning tasks tend to be formulated in the *infinite-dimensional* setup, and the authors established a mathematical foundation for a *general class of NOs*.
Weaknesses: (Minor comments)
- I believe that the NOs in consideration cover a wide range of practical examples, but it would broaden their potential readership if the authors could explicitly showcase such examples (e.g., not just the ones mentioned in l.252 and Example 1, but also subnetworks, operator transformers, integral autoencoders, etc...)
- Definition 1 would be clearer if the description in ll.77-80 by sentence “and $T_\ell$ : … are sums of …” is associated with an expression such as $T_\ell = …$.
- Some citations in ll.339-341 are duplicated.
- Additional literature (not mandatory, as these are in finite-dimensional settings):
- https://proceedings.neurips.cc/paper/2020/hash/2290a7385ed77cc5592dc2153229f082-Abstract.html
- https://arxiv.org/abs/2204.07415
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: please refer to comments in the Weaknesses section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your valuable comments, constructive feedback, and endorsement. We have addressed your feedback below.
_I believe that the NOs in consideration cover a wide range of practical examples, but it would broaden their potential readers if the authors could explicitly showcase such examples (eg., not just ones mentioned in l.252 and Example 1, but also subnetworks, operator transformers, and integral autoencoders, etc...)._
This is a great suggestion. Just after l.39, we have included more examples of model applications including graph neural operators, IAE-nets, Operator transformers, and Factorized Fourier Neural operators. We have cited applications where neural operators are used as well, especially in the context of inverse problems where injectivity is a critical property.
_Definition 1 would be clearer if the description in ll.77-80 by sentence “and_ $T_\ell$ : … are sums of …” _is associated with an expression such as_ $T_\ell =$.
We agree that this change will make the description clearer. We have included such a change in the new version.
_Some citations in ll.339-341 are duplicated._
The duplicated references have been removed.
_Additional literature (not mandatory as these are in finite-dimensional settings)._
We intended to cite these papers to help fill out the story in finite dimensions but, in an oversight, did not. We have included references to them in the related work section, and referenced them again in the section on universality. | null | null | null | null | null | null |
Strategic Classification under Unknown Personalized Manipulation | Accept (poster) | Summary: - This paper studies strategic classification with unknown and personalized manipulation in an online/distributional setting, i.e. different agents can manipulate their features to a different degree, and the learner does not know the extent to which agents can manipulate their features.
- The paper assumes realizability, i.e. the existence of a correct classifier in the hypothesis class that remains correct under strategic manipulation. Both PAC sample complexity bounds and mistake bounds are provided for a variety of assumptions about the possible manipulations (metric ball vs. non-ball) and the feedback provided to the learner (original features observed before deploying the classifier or not; original/manipulated/no features observed along with the true label after classification).
**While I remain unsure about the realistic applicability of the most interesting results, the rebuttal and others' reviews have not uncovered any new crucial weaknesses, so I am keeping my score.**
Strengths: - The paper is well written, and good intuition is provided for some of the technical theorems.
- The setting of varying and unknown manipulations strength in strategic classification is relevant.
- Both lower bounds and upper bounds based on concrete algorithms are provided
Weaknesses: - The motivation for the easier cases is not entirely clear. For example, in which setting is it realistic for the learner to see the *original* features of the next agent before choosing which hypothesis to deploy?
- This is particularly relevant, as the more interesting results that seem clearly distinct from the previous literature are on these easier cases.
- Minor comments:
- The formatting of Table 1 could be improved. In particular, some tildes intersect the table lines. Additionally, the third row, with multiple results for slightly different settings, could be further split up to avoid confusion.
- If I understand correctly, it is possible for the strategic setting to be realizable for a class of hypotheses while the non-strategic setting is not. Highlighting this might make some of the proofs (like that of Theorem 9) easier to parse.
- I guess the tie-breaking procedure is slightly underspecified when there are multiple points at the decision boundary with the same distance from x?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Do you expect (versions of) the proposed algorithms to still work if different players use different metrics rather than just differently sized balls for the same metric?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - The main limitations that are not discussed in the weaknesses were highlighted and discussed by the authors in the discussion/open problems section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer 6Wy9 for their valuable comments. In response to their questions, we provide the following clarifications and explanations.
> The motivation for the easier cases.
For the easiest model (x is observed first):
- Consider a teacher giving students a writing assignment or take-home exam. The teacher might have a good knowledge of the students' abilities based on their performance in class, but the grade has to be based on how well they do the assignment. The students might manipulate by using the help of ChatGPT / Google / WolframAlpha / their parents, etc. The teacher wants to create an assignment that will work well even in the presence of these manipulation tools.
- If we think of each example as representing a subpopulation (e.g., an organization is thinking of offering loans to a certain group) then there might be known statistics about that population, even though the individual classification (loan) decisions have to be made based on responses to the classifier.
For the second-easiest model (x is observed after):
- Here, there are a lot more scenarios that might fit. For example, if a high-school student takes the SAT test multiple times, most colleges promise to only consider the highest one (or even to "superscore" the test by considering the highest score separately in each section) but they do require the student to submit all of them.
> Do you expect (versions of) the proposed algorithms to still work if different players use different metrics rather than just differently sized balls for the same metric?
Yes, we expect the algorithms to work if the metrics are known to the learner. As our setting is online, the algorithm utilizes the metric at each round to order all hypotheses and reduce the version space, which should still work under different metrics.
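To make the ordering-and-bisection idea behind Strategic Halving concrete, here is a minimal sketch (ours, not the authors' code) for a one-dimensional threshold class $h_c(z) = \mathbb{1}[z \geq c]$: hypotheses in the version space are ordered by the distance from the agent's original feature to their positive region, the "median" hypothesis is played, and a mistake eliminates at least half of the version space. The function names and the `respond` simulation interface are illustrative assumptions.

```python
def positive_distance(x, threshold):
    """Distance from feature x to the positive region of h_c(z) = 1[z >= c]."""
    return max(0.0, threshold - x)

def strategic_halving_step(version_space, x, respond):
    """One round of the halving idea: play the median-distance hypothesis,
    observe the outcome, and shrink the version space on a mistake.

    `respond(threshold)` simulates the strategic interaction: it returns the
    pair (prediction, true_label) after the agent best-responds.
    """
    ordered = sorted(version_space, key=lambda c: positive_distance(x, c))
    median = ordered[len(ordered) // 2]
    prediction, true_label = respond(median)
    if prediction == true_label:
        return version_space  # no mistake, nothing is eliminated
    if prediction == 1:
        # False positive: every hypothesis at most as far as the median would
        # also have accepted the manipulated feature, so eliminate them all.
        return [c for c in version_space
                if positive_distance(x, c) > positive_distance(x, median)]
    # False negative: the agent could not reach the median's positive region,
    # yet her label is positive, so only strictly closer hypotheses survive.
    return [c for c in version_space
            if positive_distance(x, c) < positive_distance(x, median)]
```

In either mistake branch at least half of the ordered version space is removed, which is what yields the logarithmic mistake bound in the realizable ball-manipulation setting.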
Responses to minor comments:
> The Formatting of table 1 could be improved.
We will improve the formatting of the table.
> Is it possible for the strategic setting to be realizable for a class of hypotheses, while the non-strategic setting is not?
Yes, the reviewer is correct on this. We will highlight it in the final version.
> The tie-breaking procedure is slightly underspecified when there are multiple points at the decision boundary with the same distance from x?
We allow ties to be broken arbitrarily when more than one point has the same distance from x. We will clarify this in the final version.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response.
If I understand the teacher example correctly, it would entail using a different exam for every student, correct? Actually, it looks like in both examples it might be more accurate to model things in a batch fashion, where a batch of data points is observed and the same classifier has to be deployed for all of them. Do your results extend to settings similar to that batch setting?
---
Reply to Comment 1.1.1:
Title: Replying to batch setting
Comment: Thanks for your comment. In the teacher example, we could think of a computer-taught course where the system can indeed change the test in between students, or we could think of a teacher teaching the same class over multiple years, changing the test each year based on results from previous years.
In the loan approval example, the loan applicants come sequentially.
The batch setting is an interesting question. We believe the similar algorithmic idea of Strategic Halving still applies, i.e., finding a predictor $f_t$ such that a mistake will eliminate as many hypotheses in the version space as possible. One simple possible way of solving the batch setting could be: at each round $t$, sample a representative agent $x_t$ uniformly at random from the batch, select a predictor $f_t$ by running Strategic Halving given $x_t$ as the input, and apply $f_t$ to the whole batch. | Summary: The paper studies strategic classification in the setting where agents generally have different powers of manipulation, which are unknown to the classifier. The authors consider both the online model and the PAC model, and investigate 4 increasingly difficult settings in which the classifier observes different information about each agent. They show nontrivial regret / sample complexity upper and lower bounds for all these settings.
Strengths: The paper explores a meaningful and challenging new direction in strategic classification. The models and settings are natural and rich, and the results are fairly comprehensive (and in some cases surprising). Technically, there seem to be several interesting ideas behind the bounds.
Weaknesses: I'd be more excited to see results for infinite hypothesis classes and agonostic settings (both of which the authors discuss as future directions).
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: (also including detailed comments here)
Footnote 1, "agents break ties randomly": do you really mean "arbitrarily"?
Line 112, remark on randomized classifiers: the alternative model where agents don't observe the internal randomness of the classifier makes sense too. Any reason for choosing this particular model?
Overview of results: while these results are certainly nice and important, I wonder if anything can be said in the case where $|\mathcal{H}| = \infty$. Can you comment on this (e.g., conjectures, difficulties)? (Later I see you mention this as a future direction, which sounds fair since the paper already has quite a number of results.)
Line 240: why is "GALLANT" all capitalized?
General technical question: how crucial is the metric space assumption? In other words, what if the triangle inequality doesn't hold? I guess at least some of your positive results still hold?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer Hsdn for their valuable comments. In response to their questions, we provide the following clarifications and explanations.
> Extension to general hypothesis class and the agnostic case.
We agree with the reviewer that the general hypothesis class and the agnostic case are interesting open questions to explore! As mentioned by the reviewer, we had already included these directions in the "Discussion and Open Problems" section of our submission and we have already obtained numerous results. The primary focus of this work has been on formulating the problem and presenting preliminary findings in this direction, which is a challenging task in itself.
> "Agents break ties randomly": do you really mean "arbitrarily"?
We actually allow breaking ties in any fixed way. For simplicity, we fix this tie breaking way to be random. We will clarify this in the final version.
> Remark on randomized classifiers: the alternative model where agents don't observe the internal randomness of the classifier makes sense too. Any reason for choosing this particular model?
The paper by Ahmadi et al. (2023) actually gives a good discussion of both randomization models in their setting (where manipulation abilities are known up-front via a manipulation graph). They show that the model you mention (which they call the "fractional model") behaves similarly to the deterministic model. It also brings up additional complications, such as: do different manipulations have different costs, and if so, do agents still try to maximize their probability of success or do they instead look at expected utility = expected gain - cost? For these reasons, we decided to focus on the randomized model where agents react to the realization of the randomness. We can add additional discussion in the final version.
> Comment in the case where $|H|=\infty$.
Thanks for noticing the discussion on this question. As mentioned in the future directions, for infinite classes, it is unclear what combinatorial measure can capture the essence of strategic classification.
> Why is "GALLANT" all capitalized?
We will change it to “Gallant”.
> How crucial is the metric space assumption? In other words, what if the triangle inequality doesn't hold? I guess at least some of your positive results still hold?
The metric space assumption is mainly used to provide an ordering of hypotheses. Take the Strategic Halving algorithm for example: if we replace the distance $d$ with some function $d'$ that still allows us to do binary search as we did in the metric space, then our positive results should still hold.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! I have no further questions. | Summary: The paper studies strategic classification where a sequence of agents, given information about decision rule, may manipulate their features strategically to receive favorable decisions. The goal of the learner is to find a hypothesis that minimizes the number of mistakes through sequential interaction with agents. In addition to conventional strategic classification where agents manipulate their features within a bounded radius ball, this paper also considers personalized manipulation where agents can only manipulate the features belonging to a manipulation set that is unknown to the learner. For both scenarios, it provides the mistake bounds and PAC sample complexity under several settings: 1) when original features are revealed to the learner before choosing the hypothesis and manipulated features after, 2) when both original and manipulated features are revealed after choosing the hypothesis, 3) when only manipulated features are revealed after choosing hypothesis, 4) neither original nor manipulated features are revealed.
Strengths: 1. Establishing mistake bounds and sample complexity is an important topic in strategic classification, this can be challenging especially when agent features before/after manipulation are unknown to the learner.
2. The paper considered many settings comprehensively, from a simpler setting when both original and manipulated features are revealed to the learner, to a more complex setting when neither is known to the learner.
3. Section 3 (Overview of the results) is very helpful
Weaknesses: 1. The presentation and some statements can be misleading and confusing.
The authors emphasized in the abstract/introduction that one prime difference between the present paper and the prior works is that this paper considers personalized manipulation with non-ball manipulations. However, the majority of the paper still focuses on conventional settings with ball manipulations. It can be misleading as I expect more results on non-ball manipulations after reading the abstract.
2. The paper needs more literature review and discuss the differences with prior works to show the novelty.
The paper introduced the existing works on regret bounds in strategic learning very briefly, e.g., (Ahmadi et al., 2021). It seems they also considered mistake bounds under uniform, unknown manipulations. Moreover, there are many other works studying PAC learning for strategic classification and conducting sample complexity analysis, e.g., [1,2,3]. I think a large body of related work is missing and the authors should elaborate more on these works and discuss differences with the present paper.
[1] Sundaram, R., Vullikanti, A., Xu, H., & Yao, F. (2021, July). Pac-learning for strategic classification. In International Conference on Machine Learning (pp. 9978-9988). PMLR.
[2] Zhang, H., & Conitzer, V. (2021, May). Incentive-aware PAC learning. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 6, pp. 5797-5804).
[3] Lechner, T., & Urner, R. (2022, June). Learning losses for strategic classification. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 36, No. 7, pp. 7337-7344).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: It is not clear to me how the present model captures personalized manipulations (i.e., different manipulation abilities across agents). Based on my understanding, the authors assume there exists a pre-defined manipulation set $u$ that constrains an agent's ability to manipulate. Agents choose to manipulate only if such a manipulation set overlaps the positive region of the predictor, and they break ties randomly. By personalization, do you mean the manipulation set differs across agents? It seems that the manipulation set $u$ is the same and fixed over time. Should this manipulation set not depend on the predictor? Why break ties randomly? It is very hard for me to interpret this model. It would be helpful if the authors could link the model to a real example.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors discussed the limitations, but the potential negative societal impact is not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer V2P4 for their valuable comments. In response to their questions, we provide the following clarifications and explanations.
> One prime difference between the present paper and the prior works is that this paper considers personalized manipulation with non-ball manipulations.
No, we did not say that the prime difference lies in non-ball manipulations. In the abstract, we stated that the main difference lies in the manipulations being personalized and unknown (lines 8-10). In the introduction, we began by mentioning that we model manipulation through a set of feature vectors and then introduced ball manipulations (lines 41-47).
We will add a sentence in the abstract to address that the main focus of this paper is on ball manipulations.
> The paper needs more literature review and discuss the differences with prior works to show the novelty.
Due to space limitations, we briefly discussed differences with the prior work and deferred some related work to the appendix. Check our appendix A for more related work.
Although we only provided a brief discussion of prior work, we have indeed addressed the essential differences. For instance, we pointed out that the manipulations in Ahmadi et al., (2021) are uniform, while the manipulations in [1,2,3] are known. Throughout the paper, we consistently emphasize that the core of our work revolves around non-uniform unknown manipulations.
We will add a more detailed discussion in the final version.
> How does the present model capture the personalized manipulations?
* Does the manipulation set differ across agents?
Yes, personalized manipulation means that the manipulation set differs across agents. Note that the manipulation set varies over time. Recall that each agent is a triple of the original feature vector $x$, the manipulation set $u$ and the label $y$. At each time step $t$, a new agent $(x_t,u_t,y_t)$ comes. In the adversarial online setting, this new agent, including her manipulation set $u_t$, is picked by the environment adversarially. In the distributional setting, the new agent is sampled from a distribution.
* Should not this manipulation set depend on the predictor?
No, the manipulation set does not depend on the predictor but the manipulated feature vector does. Note that the manipulation set $u_t$ is different from the manipulated feature vector $\Delta_t = \Delta(x_t,f_t,u_t)$. The manipulation set $u_t$ is a set of feature vectors that the agent can modify their original feature vector $x_t$ to. That is to say, the agent can only modify their original feature within $u_t$. For any implemented predictor $f_t$, if $u_t$ overlaps the positive region of the predictor, then the agent may manipulate as such a manipulation can help the agent receive a positive prediction. If $u_t$ does not overlap the positive region of the predictor, then the agent will not manipulate as such manipulation cannot help the agent receive a positive prediction. Hence, the manipulation set does not depend on the predictor. The manipulated feature vector $\Delta_t = \Delta(x_t,f_t,u_t)$, which is the feature vector the agent modifies to after observing the predictor $f_t$, depends on the predictor.
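The best-response rule described in this answer can be sketched in a few lines (the names are ours, purely illustrative, not the paper's code): the agent moves to some point of her manipulation set $u$ that the predictor classifies positive, if one exists and manipulation helps; otherwise she keeps her original feature vector $x$.

```python
def best_response(x, u, f):
    """Manipulated feature of an agent with original feature x, manipulation
    set u (an iterable of reachable feature vectors), and predictor f.
    """
    positive_options = [z for z in u if f(z) == 1]
    if f(x) == 1 or not positive_options:
        return x  # already accepted, or manipulation cannot help
    return positive_options[0]  # ties broken in some fixed way (here: first)
```

This makes the rebuttal's point visible in code: the set `u` is fixed independently of `f`, while the returned manipulated feature depends on both `u` and `f`.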
> Why break ties randomly?
We actually allow breaking ties in any fixed way. For simplicity, we fix this tie breaking way to be random. We will clarify this in the final version.
> Link the model to a real example.
Consider the example of college admission (line 27-28). Imagine a simplified situation where a college admits students with SAT scores higher than a certain threshold, denoted as C. For a student, obtaining a score of 1500 may be the best possible outcome if they are only allowed to take the SAT test once. However, if they have the opportunity to retake the test, they can achieve higher scores. The manipulation set for the student comprises all achievable scores through retakes. Notably, this manipulation ability varies between students of different financial backgrounds; wealthier students can afford to retake the test more frequently than their less affluent counterparts, resulting in different increases in scores that are independent of the predictor, i.e., the threshold C.
However, once the threshold C is disclosed, students will adapt their manipulative behavior accordingly. If they reach the score C, they will no longer retake the test, effectively capping their score at that level. Conversely, if a student realizes that obtaining a score of C is unattainable, they will refrain from manipulating their scores. Therefore, the final SAT score becomes the manipulated feature, influenced by the predictor, which is the threshold C.
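The SAT example above can also be sketched in a few lines (purely illustrative; the function name and score values are our assumptions): a student retakes only while it helps reach the threshold C, so the observed score depends on both her manipulation power (best achievable score) and the disclosed predictor.

```python
def manipulated_score(original, best_achievable, threshold):
    """Final SAT score of a student whose retakes can raise her score up to
    `best_achievable`, facing an admission threshold C = `threshold`.
    """
    if best_achievable >= threshold:
        # The threshold is reachable: retake just until reaching C,
        # but never report less than the original score.
        return max(original, threshold)
    # C is unattainable: retaking cannot change the decision, so no manipulation.
    return original
```

Here `best_achievable` plays the role of the (personalized, predictor-independent) manipulation set, while the returned score is the manipulated feature, which does depend on the threshold.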
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply! I am not an expert in learning theory and I admit there are places that I may not fully understand. After looking at other reviewer's comments, I would like to change my score to 5. | Summary: The paper explores a setting of strategic classification - where agents respond to deployed predictive policies by potentially manipulating their feature vector so as to achieve positive predictions. The authors explore this setting while allowing agent manipulations to be personalized (no assumptions regarding same manipulation budget across agents, etc.), and also do not assume that the learner knows the manipulation power of the agent ahead of time. The authors assume realizability - that the incoming agents and their labels are consistent with some "true" underlying concept h* in the concept class. They consider different variants of an online classification problem - based on whether the original/manipulated feature vector of the agent is revealed, and when are they revealed - before/after the learner selects its predictor. In these four settings, the authors give upper and lower bounds for mistakes (in a potentially adversarial setting), as well as sample complexity for PAC learning (In a stochastic data generation setting).
Strengths: The paper offers a "complete story" in the context of an interesting sub-problem in strategic manipulation (where the true label cannot change) in realizable, online settings. I liked the framework the authors present to capture the feasible manipulation region for each agent, which abstracts away the cost function/budget in an elegant manner. I also like that the authors operate under the fact that the learner does not know how agents would manipulate. The division to revealing the original/manipulated features before/after the classification also makes a lot of sense and is a nice way of thinking about manipulation settings (where the true label is unaffected by such changes). In terms of results/techniques - the idea of leveraging the ball structure of feasible manipulation area to order concepts by distances and allow for bisecting the version space is pretty neat. The results for the setting where both the original and manipulated features are revealed after the predictor is picked are even more interesting (especially the algorithm and sample complexity for improper learning).
The paper is overall very well-written, and seems sound (although I have just looked, and not carefully read the appendix).
Weaknesses: I find the realizability assumption rather strong, in the following sense - as it is written in line 139, realizability in this paper is defined with respect to the manipulated features, i.e. there exists a concept h* which is the one giving the "true" labels for all of the manipulated features. But this, in turn, depends on the manipulations, which are personalized. This degree of circularity makes it a bit hard to think of how we can pick H ahead of time such that h* is present, without H potentially having to be of very large complexity, or otherwise restricting the personalized nature of the manipulations to some degree. But maybe this is inevitable for such a general model of manipulation.
I think it is implicitly assumed that given agent x, the personalized manipulation is fixed (there can't be another agent with the same features, and different manipulation profile). If so, maybe it is worth noting in the main text.
Across the main text, the authors claim that prior work in strategic classification has not entailed personalized manipulation power for agents (e.g. line 375). This is inaccurate - see e.g. Bechavod et al. 2021.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: I found it a bit difficult to follow example 1 in the main text. For example, if H is singletons over [n], how can f_t be all-negative? Also, could you clarify the definition of the metric - is it that distances from 0 to other nodes are 1, and all other distances are 2? Regardless, could you provide a more detailed explanation of the dynamics presented in this example?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors properly discuss limitations in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer XhgR for their valuable comments. We are particularly pleased that the reviewer finds merit in our modeling of personalized manipulation sets, the setting of unknown manipulation, and the algorithmic ideas involving bisecting the version space. In light of their questions and concerns, we offer the following clarifications and explanations.
> Concerns on the realizability assumption.
The essence of our realizability assumption is that there exists a hypothesis h* satisfying the following conditions:
1. All negative points and their manipulation sets lie on the negative side of h*.
2. As for positive points, they either reside on the positive side of h*, or on the negative side with their corresponding manipulation sets intersecting the positive side of h*.
From the adversarial robustness perspective, note that our realizability assumption is weaker than the realizability assumption in adversarial robustness (see Montasser et al. (2019), for example), which assumes that
1. All negative points and their manipulation sets (referred to as perturbation sets in the context of adversarial robustness) lie on the negative side of h*.
2. All positive points and their manipulation sets lie on the positive side of h*.
From the game theory perspective, the strategic classification problem is a Stackelberg game, where the learner (the leader) selects a predictor and then the agent (the follower) manipulates their feature vector to best respond to the predictor. Our realizability assumption assumes that the value of the Stackelberg equilibrium (the negative strategic loss) is 0.
In the final version, we will include a thorough discussion on the realizability assumption.
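To make the two conditions above concrete, here is a toy one-dimensional check (our illustrative sketch only - the threshold classifier, the agents, and the coarse ball probe are made up, not from the paper), where each agent is a triple (original feature, manipulation radius, true label):

```python
# Toy sketch (our illustration) of the realizability condition described above,
# for 1-D features with ball (interval) manipulation sets.

def realizable(h, agents):
    """h: x -> {-1, +1}; agents: list of (x, radius, y) triples.
    An agent may manipulate within [x - radius, x + radius] (probed coarsely
    at the endpoints and center) to obtain a positive prediction."""
    for x, r, y in agents:
        ball = [x - r, x, x + r]                 # coarse probe of the ball
        reach_pos = any(h(z) == 1 for z in ball)
        if y == -1 and reach_pos:
            return False  # condition 1: a negative's ball must stay negative
        if y == +1 and not reach_pos:
            return False  # condition 2: a positive must reach the positive side
    return True

def h_star(x):
    """A hypothetical threshold target: positive iff x >= 5."""
    return 1 if x >= 5 else -1

agents = [(3, 0, -1),   # negative, cannot reach the positive side
          (6, 0, +1),   # already on the positive side
          (4, 1, +1)]   # negative side, but its ball crosses the threshold
assert realizable(h_star, agents)
assert not realizable(h_star, [(4, 1, -1)])  # a negative that could cross
```

The coarse endpoint probe suffices here only because the toy hypothesis is a monotone threshold; it is meant to illustrate the two conditions, not a general algorithm.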
> Is it implicitly assumed that given agent x, the personalized manipulation is fixed?
No, our approach allows different manipulations for the same x. As demonstrated in Example 1 of the paper, for x being 0, the radius can take values of either 0 or 1. We will address this in our Model section in the final version.
> Previous work on strategic classification with personalized manipulation power, e.g., Bechavod et al. (2021).
Bechavod et al. (2021) actually focused on the regression problem, which inherently differs from the classification problem. We will incorporate the literature of strategic regression in the related work.
> Explanation on Example 1.
* If $H$ is singletons over $[n]$, how can $f_t$ be all-negative?
This is because we are considering a more general case where $f_t$ does not necessarily belong to $H$. As described in line 2 of Protocol 1, $f_t\in Y^X$, i.e., $f_t$ can be any hypothesis.
* Is it that distances from 0 to other nodes are 1, and all other distances are 2?
You are correct. Consider a star graph, where 0 is the central node and the others are leaf nodes. Let the distance between every two nodes be the length of the shortest path on this star graph. Then the distance from 0 to other leaf nodes is 1 and the distance between any two leaf nodes is 2.
* A more detailed explanation of the dynamics presented in this example.
In the realizable setting, suppose the target function $h^*$ is the singleton $2\cdot 1_{\{i^*\}} - 1$ for some $i^* \in [n]$. Then when $f_t\neq h^*$, there are three different cases.
1. If the learner picks $f_t$ = all-negative, then the environment will pick the agent $(0, 1, +1)$. Note that no matter what $i^*$ is, the target function $2\cdot 1_{\{i^*\}} - 1$ will predict the agent $(0,1)$ as positive, since the distance between $0$ and $i^*$ is 1 and the agent can manipulate the original feature 0 to $i^*$. The all-negative predictor will misclassify this agent as negative. The learner will observe the original feature 0, the manipulated feature (conditional on the all-negative predictor being implemented) 0, the true label +1, and the prediction -1, which contain no information about $i^*$. Hence, the learner makes a mistake but cannot learn anything about $i^*$.
2. If the learner picks an $f_t$ which predicts $0$ as positive, the environment will pick the agent $(0,0,-1)$. Note that no matter what $i^*$ is, the singleton $2\cdot 1_{\{i^*\}} - 1$ will predict the agent $(0,0)$ as negative: the agent's manipulation radius is 0, so it cannot move the original feature $0$ to $i^*$, which is at distance 1. The implemented classifier $f_t$ will misclassify the agent as positive. The learner will observe the original feature $0$, the manipulated feature $0$, the true label $-1$, and the prediction $+1$, which contain no information about $i^*$. Hence, the learner makes a mistake but cannot learn anything about $i^*$.
3. If the learner picks an $f_t$ which predicts some $i\in [n]$ with $i \neq i^*$ as positive, the environment will pick the agent $(i,0,-1)$. The target function $2\cdot 1_{\{i^*\}} - 1$ will predict the agent $(i,0)$ as negative, since the distance between $i$ and $i^*$ is 2 and the agent (with radius 0) cannot manipulate the original feature $i$ to $i^*$. The implemented classifier $f_t$ will misclassify this agent as positive. The learner will observe the original feature $i$, the manipulated feature $i$, the true label $-1$, and the prediction $+1$. The learner learns that $2\cdot 1_{\{i\}} - 1$ cannot be the target function, but does not know which element of $[n]\setminus \{i\}$ is the true $i^*$. Hence, the learner makes a mistake but is only able to eliminate one hypothesis.
Since $i^*$ can be chosen adversarially, the learner cannot pick $f_t = 2\cdot 1_{\{i^*\}} - 1$ (equivalently, identify $i^*$) before making $\Omega(n)$ mistakes. The formal proof for a more general case can be found in the proof of Theorem 3. Example 1 is used to offer intuitive insight into this linear dependency.
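The case analysis can also be run as a toy simulation (our own illustrative code, not from the paper): each round, the environment answers the learner's predictor so that a mistake occurs while at most one candidate for $i^*$ is eliminated, forcing $n-1$ mistakes for a learner that probes leaves one by one.

```python
# Toy simulation (our illustration) of Example 1's lower-bound argument.
# Feature space: a star graph with center 0 and leaves 1..n, where
# d(0, i) = 1 and d(i, j) = 2 for distinct leaves i, j.
# The target 2*1_{i*} - 1 is positive only on leaf i*, chosen adversarially.

def adversary_round(f_t_positives, candidates):
    """One round: the environment answers predictor f_t (given as the set of
    points it labels positive) so the learner errs. Returns the surviving
    candidate set for i*."""
    if not f_t_positives:          # case 1: all-negative predictor
        # agent (x=0, radius=1, y=+1) could manipulate 0 -> i*; f_t says -1
        return candidates          # mistake, nothing learned about i*
    if 0 in f_t_positives:         # case 2: center predicted positive
        # agent (x=0, radius=0, y=-1); f_t says +1, truth is -1
        return candidates          # mistake, nothing learned about i*
    # case 3: some leaf i != i* predicted positive (the adversary keeps
    # i* != i feasible while more than one candidate survives)
    i = next(iter(f_t_positives))
    # agent (x=i, radius=0, y=-1); f_t says +1, truth is -1
    return candidates - {i}        # mistake, only hypothesis i is eliminated

n = 50
candidates = set(range(1, n + 1))  # possible values of i*
mistakes = 0
for leaf in range(1, n + 1):       # a learner probing leaves one by one
    if len(candidates) == 1:
        break                      # i* finally identified
    candidates = adversary_round({leaf}, candidates)
    mistakes += 1                  # every answered round is a mistake

assert mistakes == n - 1           # Omega(n) mistakes before identifying i*
```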
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: I wish to thank the authors for their response.
After reading the response, I tend to leave my original score unchanged. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies an online learning problem where the learner interacts with a strategic agent in the following way: the learner chooses a hypothesis from a hypothesis class; the agent, after observing the chosen hypothesis, can manipulate its feature vector to be within some ball of unknown and personalized radius to obtain a positive prediction. The paper considers various settings (adversarial online and distributional online) and various feedback models (what feedback does the learner observe before and after its decision). The paper provides mistake bounds (upper and lower) (for the adversarial online setting) and sample complexity bounds (for the distributional online setting) for all feedback models.
Strengths: * The paper is well-written - it is clear and easy to follow. It also includes proof sketches and intuitions behind the results and proofs.
* The paper studies an interesting online learning model and does a comprehensive job of modelling various types of feedback.
* Algorithms 1 and 2 build on classic ideas but are cleverly modified for the problem being studied.
* Theorem 7 about conservative algorithms is very interesting.
Weaknesses: * This paper studies the setting of a finite hypothesis class and in the realizable setting. Extensions to a general hypothesis class (with an appropriate combinatorial measure) and the agnostic case would really enhance the value of the paper. (Note - the authors point out these avenues for future work in the conclusion.)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Is the definition of "conservative algorithms" (in section 4.3) a well-known concept? It certainly makes a lot of sense. I am wondering whether that formalism already exists or not. If not, do you have thoughts on how this could be used elsewhere in online learning for lower bounds?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations: adequately addressed.
Negative societal impact: this is a theoretical paper and I don't see any direct negative impacts arising from this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer Ro1X for their valuable comments. In response to their questions, we provide the following clarifications and explanations.
> Extension to general hypothesis class and the agnostic case.
We agree with the reviewer that the general hypothesis class and the agnostic case are interesting open questions to explore! As mentioned by the reviewer, we had already included these directions in the "Discussion and Open Problems" section of our submission. However, it is essential to emphasize that our primary focus has been on formulating the problem and presenting preliminary findings in this direction, which is a challenging task in itself.
> Is the definition of "conservative algorithms" a well-known concept?
Yes, the concept of 'conservative algorithms' is widely recognized in online learning, referring to algorithms that make updates only in mistake rounds.
The 'conservative' philosophy is widely adopted when designing algorithms, including the well-known Perceptron algorithm. Besides, the book 'Prediction, Learning, and Games' by Nicolo Cesa-Bianchi and Gábor Lugosi introduces conservative algorithms for linear classification and explains that "The philosophy behind this conservative policy is that there is no reason to change the weight vector if it has performed well in the last time instance."
In standard online learning, it is well-known that if a hypothesis class is learnable with a mistake bound of M, then it is learnable by a conservative algorithm with the same mistake bound of M. Consequently, establishing a lower bound for conservative algorithms effectively establishes a lower bound for all algorithms.
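For intuition, the classic Halving algorithm is conservative in exactly this sense; here is a toy sketch (our own illustration over a made-up class of singletons, not from the paper or the cited book):

```python
# Toy sketch (our illustration) of a conservative learner: the version space
# is updated ONLY on mistake rounds, as in the classic Halving algorithm.

def halving_conservative(hypotheses, stream):
    """hypotheses: list of callables x -> {-1, +1}; stream: (x, y) pairs.
    Returns the number of mistakes made."""
    version_space = list(hypotheses)
    mistakes = 0
    for x, y in stream:
        votes = sum(h(x) for h in version_space)
        pred = 1 if votes >= 0 else -1          # majority vote
        if pred != y:
            mistakes += 1
            # conservative update: triggered only by a mistake; since the
            # majority voted wrong, it removes at least half the version space
            version_space = [h for h in version_space if h(x) == y]
    return mistakes

# Singletons over [n]: h_i(x) = +1 iff x == i; the target is h_3.
n = 8
H = [(lambda i: (lambda x: 1 if x == i else -1))(i) for i in range(n)]
stream = [(x, H[3](x)) for x in [0, 1, 2, 3, 4, 3, 2]]
m = halving_conservative(H, stream)
assert m <= 3   # at most log2(|H|) = 3 mistakes
```

On this stream the single mistake at x = 3 shrinks the version space to the target alone, after which no update (and no mistake) occurs.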
---
Rebuttal Comment 1.1:
Comment: Thank you for your response!
EDIT: I want to mention just for clarity that my rating and review remain unchanged. | null | null | null | null | null | null |
HAP: Structure-Aware Masked Image Modeling for Human-Centric Perception | Accept (poster) | Summary: This paper proposes a Human structure-Aware Pre-training (HAP) method that incorporates human structure priors into the masked image modeling (MIM) training strategy [26] for tasks related to human-centric perception. The authors have demonstrated the advantages of the proposed method on 5 human-centric perception tasks across 12 benchmark datasets. Overall, the method is simple and intuitive, and extensive experiments have provided solid evidence of its superiority in human-centric perception tasks.
Strengths: 1. The author, through analyzing the deficiencies of the current MIM training strategy [26] in human-centric perception tasks, further proposes the introduction of human structure priors to expand the MIM training approach. This method is intuitive and appropriate.
2. Although the author uses the existing InfoNCE loss to implement structure-invariant alignment loss, the application of this structure-invariant alignment concept to different masked views in order to align them, thereby enhancing the feature representation ability of human structure information, is both straightforward and promising.
3. The author not only demonstrated the superiority of the proposed method across twelve benchmark datasets, but also validated the benefits of each proposed training strategy or loss function through ablation experiments.
4. The organization of the article and the presentation of the methods are both clear and well-structured.
Weaknesses: While this method effectively integrates prior knowledge and existing loss functions to enhance the performance of human-centric perception tasks, its technical originality and novelty still leave something to be desired.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The accuracy of pose estimation methods seems to significantly impact the performance of the HAP proposed by the author. However, it appears that the author hasn't conducted sufficient analysis, such as how OpenPose or AlphaPose specifically affect the performance of HAP.
2. In 3D human pose and shape estimation task, the performance of 3D human mesh estimation often decreases due to occlusions. The HAP method proposed by the author seems promising in solving this issue. However, it's unfortunate that the author did not conduct relevant experiments to further highlight the advantages of the proposed HAP. It would be beneficial for the author to incorporate such experiments to enhance the strengths of their HAP.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The author mentions that their proposed method may have potential negative societal impacts when applied to image generation tasks. Personally, I consider this to be a problem that we collectively need to address in the process of advancing artificial intelligence. Therefore, it does not diminish my positive evaluation of the research work. However, I still hope that the author can provide possible solutions to mitigate these potential negative impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The accuracy of pose estimation methods seems to significantly impact the performance of the HAP proposed by the author. However, it appears that the author hasn't conducted sufficient analysis, such as how OpenPose or AlphaPose specifically affect the performance of HAP.**
R1: Thanks for pointing this out. We further experimented with HAP using 2D keypoints extracted by OpenPose.
The results are 72.0\%/91.2\% on MSMT17/MPII with 100-epoch pre-training, which are similar to the results with the keypoints extracted by ViTPose (our original HAP), i.e., 72.2\%/91.3\% on MSMT17/MPII.
Both achieve superior performance to the baseline (69.4\% on MSMT17 and 90.4\% on MPII).
This reflects that the accuracy of pose estimation has only a slight effect on the pre-training quality.
We will further study more pose estimation methods to validate our statement.
**Q2: In 3D human pose and shape estimation task, the performance of 3D human mesh estimation often decreases due to occlusions. The HAP method proposed by the author seems promising in solving this issue. However, it's unfortunate that the author did not conduct relevant experiments to further highlight the advantages of the proposed HAP. It would be beneficial for the author to incorporate such experiments to enhance the strengths of their HAP.**
R2: Thanks for your constructive suggestion.
We will add occlusion-based experiments in 3D human pose and shape estimation task by carefully studying experimental settings in the revised version to further enhance the strengths of our HAP.
**Q3: The author mentions that their proposed method may have potential negative societal impacts when applied to image generation tasks. Personally, I consider this to be a problem that we collectively need to address in the process of advancing artificial intelligence. Therefore, it does not diminish my positive evaluation of the research work. However, I still hope that the author can provide possible solutions to mitigate these potential negative impacts.**
R3: Thanks a lot for your positive evaluation of our research work.
And very thanks for pointing out the suggestion that we can "provide possible solutions to mitigate these potential negative impacts''.
We will carefully update the "Broader impacts'' section as follows based on your suggestion in the revised version.
"Broader impacts and possible solutions: The proposed method is pre-trained and evaluated on person datasets, thus could reflect biases in these datasets, including those with negative impacts. When using the proposed method and the generated images, one should pay careful attention to dataset selection and pre-processing to ensure a diverse and representative sample. To avoid unfair or discriminatory outcomes, please analyze the model's behavior with respect to different demographics and consider fairness-aware loss functions or post-hoc bias mitigation techniques. We advocate for responsible use, and encourage the development of tools to detect and mitigate the potential misuse of our HAP and other masked image modeling techniques for unethical purposes."
---
Rebuttal Comment 1.1:
Comment: The author has responded thoroughly to my questions. However, it's regrettable that I have yet to see results from experiments based on occlusion. Nonetheless, I still hold a positive view of this paper and look forward to the author supplementing with occlusion experiments and making the model public to drive community development. I will vote in favor of accepting this submission.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate your thoughtful assessment on our response and your positive view of our work.
We also apologize for not having included the occlusion experiments so far.
We are actively working on them; unfortunately, we need more time to comprehensively understand the experimental setting to ensure that the experiments are correct and the comparison is fair.
We agree with you that the occlusion experiments can further enhance the strengths of our HAP, and we will keep going on this exploration.
We will make the model and code public following your suggestion and expectation, and we also hope that our publicly available model and code can drive the community development.
Thanks again for your positive comments which are valuable to improve the quality and impact of our work! | Summary: The authors introduce masked image modeling as a pre-training method specifically designed for human-centric perception tasks. To this end, the authors incorporate human structure priors (high scaling ratio, mediate masking ratio, block-wise masking), human part prior (2D keypoints as guidance for mask sampling), and structure-invariant alignment loss (contrastive loss on the [CLS] tokens across views). Extensive experiments demonstrate that the proposed method achieves competitive performance on various human-centric perception tasks.
Strengths: - The proposed method is well motivated. Directly applying MIM in human-centric perception tasks will indeed induce some problems. The authors exploited some priors from the person dataset to make pre-training tailored for specific downstream tasks.
- The paper is generally well-written and easy to follow.
- The experiments are extensive and the results are promising.
Weaknesses: - The technical novelty is limited. The proposed method is essentially a combination of many previous techniques used in masked image modeling. For example, the block-wise masking has been proposed in BEiT [2]. Semantic-guided masking has been proposed in prior works [5, 25, 33, 36, 41, 60, 74]. The only difference is that these works use attention maps while the authors use keypoints. Adding alignment loss is actually combining masked image modeling with contrastive learning (like what has been done in iBOT [77]).
- The authors use 2D keypoints to guide pre-training, which are extracted by existing pose estimation methods. I am afraid the obtained keypoints are not fully unsupervised.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I have an additional question. The authors use contrastive loss with negative samples. What about removing negatives? Prior works (e.g., BYOL, SimSiam, DINO) have proven that negative samples are not necessary for better representation learning. Overall, I am leaning towards borderline accept considering that the authors provide a new formulation of previous techniques for a new human-centric perception scenario.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have discussed the limitations and the broader impacts in Sec. 5, which looks good to me.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The technical novelty is limited. The proposed method is essentially a combination of many previous techniques used in masked image modeling. For example, the block-wise masking has been proposed in BEiT [2]. Semantic-guided masking has been proposed in prior works [5, 25, 33, 36, 41, 60, 74]. The only difference is that these works use attention maps while the authors use keypoints. Adding alignment loss is actually combining masked image modeling with contrastive learning (like what has been done in iBOT [77]).**
A1: Thanks, we agree that many techniques have been used in masked image modeling (MIM); however,
our HAP further **revisits and reformulates** these techniques in a novel way to introduce **human structure priors** (our key contribution) into MIM pre-training for human-centric perception tasks. In detail:
i) Block-wise masking.
Using block-wise masking (proposed in BEiT [2]) is just a **finding** that it performs slightly better than random masking in human-centric perception tasks (refer to the preliminary study in Sec. 3.1); thus it is not the main point of our HAP.
ii) Human part prior guided masking is different from semantic-guided masking.
The proposed human part prior guided masking is the key point of HAP that is **specialized** for human-centric tasks by introducing keypoints.
It can be regarded as **external, human-friendly** guidance for masking, instead of the **self**-guidance from attention maps in semantic-guided masking [5, 25, 33, 36, 41, 60, 74].
We also experiment with HAP by directly using attention maps to guide masking.
This setting achieves 71.7\%/90.4\% on MSMT17/MPII (refer to Table 7 in the Appendix), largely inferior to our HAP with human part guidance (76.4\%/91.8\%) and with keypoints (75.9\%/91.6\%),
which clearly shows the superiority of the proposed masking strategy guided by the human prior.
iii) Adding the alignment loss combines MIM with contrastive learning.
We agree. Despite a similar formulation to the contrastive loss, the proposed structure-aware alignment concept further applies the contrastive loss to masked views generated from the human part prior to enhance the features' discriminative ability.
iBOT [77] differs from ours in that: a) its masked views are randomly generated without prior guidance; b) its loss is formulated as a cross-entropy loss, not a contrastive loss as in ours.
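As a rough sketch of the masking idea (our own illustrative code with made-up keypoint names, grid size, and ratios - not the authors' implementation): patches covering a few randomly chosen body parts are masked first, and random patches then top up the overall masking ratio.

```python
import numpy as np

# Rough sketch (not the authors' implementation) of human-part-prior-guided
# mask sampling: patches covering randomly chosen body parts are masked
# first, then random patches top up the overall masking ratio.

def part_guided_mask(keypoints, parts, grid=14, mask_ratio=0.75, n_parts=3,
                     rng=np.random.default_rng(0)):
    """keypoints: dict name -> (row, col) in the patch grid.
    parts: list of lists of keypoint names (one list per body part).
    Returns a boolean mask over the grid*grid patches."""
    n_patches = grid * grid
    n_mask = int(mask_ratio * n_patches)
    masked = set()
    # 1) mask the patches under the joints of a few randomly selected parts
    for part in rng.choice(len(parts), size=min(n_parts, len(parts)),
                           replace=False):
        for name in parts[part]:
            r, c = keypoints[name]
            masked.add(r * grid + c)
    # 2) top up with random patches until the target ratio is reached
    remaining = [i for i in range(n_patches) if i not in masked]
    extra = rng.choice(remaining, size=n_mask - len(masked), replace=False)
    masked.update(int(i) for i in extra)
    mask = np.zeros(n_patches, dtype=bool)
    mask[list(masked)] = True
    return mask

# Hypothetical keypoints on a 14x14 patch grid, one joint per toy "part".
kps = {"head": (1, 7), "l_wrist": (6, 3), "r_wrist": (6, 10),
       "l_ankle": (12, 5), "r_ankle": (12, 9)}
parts = [["head"], ["l_wrist"], ["r_wrist"], ["l_ankle"], ["r_ankle"]]
mask = part_guided_mask(kps, parts)
assert mask.sum() == int(0.75 * 14 * 14)
```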
**Q2: The authors use 2D keypoints to guide pre-training, which are extracted by existing pose estimation methods. I am afraid the obtained keypoints are not fully unsupervised.**
R2: Thanks for pointing out this concern.
The 2D keypoints only provide prior guidance for mask sampling in HAP, rather than a supervision signal for pre-training.
HAP only uses image pixels as the pre-training targets, and it is a self-supervised learning method.
We will carefully revise the manuscript to make this clear.
**Q3: The authors use contrastive loss with negative samples. What about removing negatives? Prior works (e.g., BYOL, SimSiam, DINO) have proven that negative samples are not necessary for better representation learning.**
Thanks for your constructive suggestion.
Following your advice, we pre-train our HAP with the SimSiam loss in place of our original alignment loss.
The results are 76.1\% (SimSiam loss) vs. 76.4\% (our alignment loss) on MSMT17 for person ReID, and 91.6\% (SimSiam loss) vs. 91.8\% (our alignment loss) on MPII for 2D pose estimation.
This shows that our HAP still works well when the negatives are removed from the contrastive loss, while our alignment loss with negatives performs slightly better.
We will include this interesting finding in the revised version.
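For reference, an InfoNCE-style alignment between [CLS] features of two differently-masked views can be sketched minimally as follows (our illustration only; the temperature, shapes, and numpy formulation are placeholders, not the paper's code):

```python
import numpy as np

# Minimal illustrative sketch (not the paper's code) of an InfoNCE-style
# alignment loss between [CLS] features of two differently-masked views:
# matching views within the batch are positives, all other pairs negatives.

def info_nce(z1, z2, tau=0.2):
    """z1, z2: (batch, dim) feature matrices; tau: temperature (placeholder)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                        # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))              # positives on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 64))
# Aligned views give a low loss; anti-aligned views give a high one.
assert info_nce(z, z) < info_nce(z, -z)
```

Removing the negatives, as in SimSiam, amounts to keeping only the diagonal (positive-pair) similarity term together with a stop-gradient, rather than normalizing against the off-diagonal pairs.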
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: Thanks for the authors' rebuttal. My concerns are largely addressed. Although the technical novelty is somewhat limited, I am in favor of accepting this submission considering the proposed method provides a new formulation for the human-centric perception scenario, which may have a positive impact on the human-centric vision community. For Q2, I understand that HAP only uses image pixels as the pre-training target. However, the introduction of 2D keypoints still needs supervision unless the corresponding pose estimation methods are unsupervised. The authors should be careful when claiming their method as "self-supervised".
---
Reply to Comment 1.1.1:
Comment: Thanks for your response and suggestion.
We greatly appreciate your positive feedback regarding our work and your approval about the positive impact of our work on the human-centric vision community.
We understand and agree your concern that "the introduction of 2D keypoints still needs supervision unless the corresponding pose estimation methods are unsupervised".
We thus will carefully revise our manuscript about the "self-supervised" claim following your suggestion.
Thanks again! | Summary: The work studies masked image modeling (MIM) in human-centric perception. It first revisits the vanilla MIM and finds that human structure prior (2D pose) helps the downstream human-related tasks. This encourages the authors to incorporate this prior into the classical MAE. Based on this human-centric masking strategy, a structure-invariant alignment loss is developed as a regularization for pre-training. To evaluate the effectiveness of the proposed method, the authors conduct extensive experiments on 11 human-centric benchmarks. The competitive performance is observed when compared to the recent human-centric pre-training counterparts.
Strengths: 1) The overall writing and organization are satisfactory. The paper well describes the necessity of human structure prior to human-centric pre-training. The statistics are convincing to convey the motivation to the audience.
2) The pipeline incorporates MAE with pose-related masking is simple and effective. It can be easily applied to any MIM method.
3) The experiments are sufficient to verify the effectiveness of the proposed method, with informative visualization and ablation studies.
4) Codes are attached, and implementation details are well described for re-implementation.
Weaknesses: The performance on multiple benchmarks is overclaimed. For example, the results on attribute recognition are marginal when comparing HAP with other human-centric pre-training methods. With bells and whistles, HAP (multi-dataset training and larger image size) outperforms other methods significantly. However, the vanilla HAP is inferior to LiftedCL, PATH, and UniHCP, though they use different pre-training datasets. So, I reckon the superior performance this paper claims should be re-examined, or at least restated.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: As the technical novelty of this paper is the human-related masking, the authors are encouraged to ablate the part selection. For example, the P parts are randomly selected, as denoted in Section 3.2. I am wondering how the performance will be if only masking one or two parts, such as the head or upper body. Then that will be six ablation studies (each one masks a part only) to investigate the impact of each part.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: In the limitation section, the authors state that the accuracy of pose estimation could affect the pre-training quality. However, the quantitative or qualitative results are not presented.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The performance on multiple benchmarks is overclaimed. For example, the results of Attribute recognition are marginal when comparing HAP with other human-centric pre-training methods. With bells and whistles, HAP (multi-dataset training and larger image size) outperforms other methods significantly. However, the vanilla HAP is inferior to the LiftedCL, PATH, and UniHCP, though they use different pre-training datasets. So, I reckon the superior performance this paper claims should be re-examined, or restated at least.**
R1: Thanks for your valuable advice. We will re-examine and restate the claim of superior performance in the revised version following your advice.
For clarification, our HAP is a self-supervised method, while PATH and UniHCP are supervised learning methods; thus the comparison is not totally fair.
When comparing with other self-supervised methods (like SOLIDER), our HAP achieves superior performance.
**Q2: I am wondering how the performance will be if only making one or two parts, such as the head or upper body. Then that will be six ablation studies (each one masks a part only) to investigate the impact of each part.**
Thanks for this intriguing point. Following your suggestion, we implement six ablation studies, each of which masks only one part, i.e., head, left arm, right arm, left leg, right leg, or upper body, to investigate the impact of each part.
Other settings keep the same as in overall HAP.
The results in the below table show that our overall HAP achieves slightly better performance than only masking one part.
Among six parts, the upper body part can provide relatively more information than others.
| dataset | unif{0, 6} | head | left arm | right arm | left leg | right leg | upper body |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MSMT17 | 72.2 | 71.8 | 72.0 | 71.9 | 71.9 | 72.0 | 72.1 |
| MPII | 91.3 | 90.9 | 91.2 | 91.1 | 91.1 | 91.1 | 91.2 |
**Q3: In the limitation section, the authors state that the accuracy of pose estimation could affect the pre-training quality. However, the quantitative or qualitative results are not presented.**
R3: Thank you for pointing this out.
We add an experiment that uses 2D keypoints extracted by OpenPose, another pose estimation method, for the pre-training of HAP.
It achieves 72.0\%/91.2\% on MSMT17/MPII with 100-epoch pre-training, which is similar to our original HAP with ViTPose as the pose detector (72.2\%/91.3\% on MSMT17/MPII).
This reflects that the accuracy of pose estimation has only a slight effect on the pre-training quality.
We will further study more pose estimation methods to validate our statement.
---
Rebuttal Comment 1.1:
Comment: My concerns have been well addressed. For example, the ablation of different body parts is provided with a convincing analysis. I will vote for acceptance for this submission.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer BM1x (2)
Comment: We really appreciate your positive feedback on our work and response.
We will carefully revise our paper following your suggestions and comments, including adding the ablation and the analysis of different body parts. Thanks! | Summary: This paper proposes a pretraining strategy for human-centric vision tasks. The authors extend the masked image modeling approach by incorporating a prior on human body parts to guide the mask sampling strategy. In short, they mask the parts of the image that contain body parts. The authors also propose an alignment loss to ensure that the same image with two different masks yields the same feature representations. They show fine-tuning performance on several standard benchmarks.
Strengths: 1) The paper is well-written and easy to read. The figures are clear and they help convey the main idea of the paper. Since the authors do not claim to bring a very novel idea/technical contribution, they explain that they start from an existing work (MAE) and extend this method to human-centric tasks. They perform experiments to identify issues with this method and propose patches to fix them. The intuition behind the proposed fixes is based on human priors and is explained with references to prior works (i.e., body parts). Even if one could argue that the novelty is somewhat limited, the paper is well-constructed and may have a positive impact on the vision community, especially for researchers working on human-centric vision.
2) The introduction presents the research problem well and summarises the main idea and claims of the paper. I appreciate that the authors do not claim their method as SSL (even if they do in the Appendix, l.13), because they use pseudo-GT (keypoints) for sampling masks in the input images.
3) Section 3.1 is very much appreciated. I guess that the project started with these experiments, so it makes sense.
4) The authors present good ablation studies (Tables 2-3); as a reader we understand the impact of each component during the pretraining (some important ones are still missing; see weaknesses).
5) The supplementary materials contain all the details for each downstream task, which makes the results reproducible. Furthermore, the authors also include the code for pretraining using their approach. The authors mention that code will be released soon.
6) Good performance on several downstream benchmarks where they compare against recent works. They reach SoTA on multiple datasets.
Weaknesses: 1) One major weakness of this submission is the lack of a simple baseline, which is: pretraining on LUPerson by training to regress the 2D keypoints extracted by ViTPose. I understand that they are noisy because they are pseudo-GT, but since there are more than 2M images in LUPerson, it should already give a better understanding of why we need to pretrain using a supervision signal from the pixels instead of a supervision signal from the 2D keypoints. For example, for the downstream task of 3D human mesh recovery, most of the training data used by the research community are pseudo-GT extracted on in-the-wild images, and this works better than using indoor images with perfect ground truth.
2) The authors mention that 2D keypoints are obtained by ViTPose; it would be great to study the impact of the keypoint quality by using another off-the-shelf detector such as OpenPose, or pretraining on a big dataset with ground-truth 2D pose such as MSCOCO. I think that such an ablation is very important for making sure that the key component of the proposed method is: a) leveraging a large-scale dataset for pretraining, or b) using a very recent off-the-shelf 2D pose detector. At the moment we do not have any answer to this question in the proposed manuscript. I would suggest pretraining the method on either LUPerson or MSCOCO with the same number of pretraining examples, using either the GT 2D pose or the 2D pose extracted by ViTPose.
3) I did not find the information in the paper or Supp. Mat. about the initialisation of the weights for the pretraining stage. Do you train from scratch or from MAE-ImageNet?
4) It is a bit surprising that fine-tuning on the downstream task of 3D pose does not work as well as [14], since you are just changing the backbone. This is a bit counter-intuitive given the results you get on the other tasks. Moreover, the backbone in [14] is a CNN pretrained on 2D pose estimation, so the conclusions regarding this task are a bit odd in comparison to the other tasks.
5) The Structure Alignment Loss is appealing since it brings contrastive learning into the paper, but given the results in Table 2 it seems that it does not bring a significant gain. The improvements are quite marginal and similar to HAP-2 given a certain variance during the fine-tuning stage (only on MSMT17 are the results better than HAP-2).
6) Other human-centric pretraining methods such as PATH propose zero-shot results or results on downstream tasks with a frozen encoder. Do you have some results following this fine-tuning strategy (i.e., training only the head)? It would be great to see how important it is to fine-tune the encoder for each task. Or maybe use adapters.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) l.44 “For a given input image, we generate two views.” I am not sure that we can really say that there are two views. Using the word ‘view’ makes reference to 3D, and here the masked images come from the same image, not from two different camera viewpoints. So I would suggest finding a different explanation.
2) Most of my questions are in the weaknesses. Weaknesses 1) and 2) are the most important points to me.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Lack of a simple baseline: pre-training on LUPerson by training to regress the 2D keypoints extracted by ViTPose.**
A1: Thanks for your constructive suggestion. We experimented with this simple baseline and achieved 50.3\% on MSMT17 and 89.8\% on MPII with 100-epoch pre-training. This baseline is significantly inferior to our original HAP with RGB pixels as the supervision signal (72.2\% on MSMT17 and 91.3\% on MPII). The underlying reason is that regressing 2D keypoints may lose informative details provided by RGB pixels, which has a negative impact on human-centric tasks, especially on ReID tasks (refer to the MSMT17 results). In contrast, our HAP is able to incorporate prior knowledge of human body structure into the pre-training, which helps maintain both details and body-structure information. We will add the above experiment and analysis in the revised version.
**Q2: It would be great to study the impact of the keypoint quality by using another off-the-shelf detector such as OpenPose, or pre-training on a big dataset with ground-truth 2D pose such as MSCOCO.**
A2: Thanks for your advice.
i) Using 2D keypoints extracted by another off-the-shelf detector, OpenPose, in HAP achieves 72.0\% on MSMT17 and 91.2\% on MPII with 100-epoch pre-training. The results are similar to our original HAP with ViTPose as the pose detector (72.2\%/91.3\% on MSMT17/MPII), reflecting that our method is robust to the choice of off-the-shelf detector.
ii) MSCOCO has about **260k** images for the 2D pose task. Using MSCOCO with its 2D ground-truth pose in HAP achieves 60.3\%/89.5\% on MSMT17/MPII with 100-epoch pre-training. Moreover, using the same number of pre-training samples from LUPerson (about 260k) achieves 63.0\%/89.7\%. These results are inferior to our original HAP with **2.1M** LUPerson images (72.2\%/91.3\% on MSMT17/MPII), showing that a large-scale dataset is important for pre-training.
**Q3: I did not find the information in the paper of Supp. Mat, about the initialization of the weights for the pre-training stage. Do you train from scratch or from MAE-ImageNet?**
A3: Our HAP is trained with the weight initialized from MAE-ImageNet (Line 18-19 in Appendix).
**Q4: The conclusions regarding the 3D pose task are a bit weird in comparison to the other tasks.**
A4: Thanks for pointing it out. The underlying reasons are:
i) Backbone. ViTs and CNNs work differently.
ViTs process image patches sequentially and use the attention mechanism to capture global information, which can disrupt spatial information that is important for 3D pose.
In contrast, CNNs can leverage local receptive fields through convolutions to capture spatial dependencies, which is beneficial for 3D pose.
Given that most existing methods [14,a,b,c] use a CNN to extract 2D pose and then use a transformer for 3D estimation, plain ViT-based 3D pose methods are lacking.
Thus we conjecture that plain ViTs have not worked well on the 3D pose task so far.
ii) Training. CNNs have been well studied on the 3D pose task with various training recipes and tricks, while such experience is still lacking for ViTs.
Our use of [14] with a ViT backbone is only a simple, preliminary attempt to validate the applicability of our HAP to the 3D pose task, and it is not optimized for good performance.
We will continue to study ViTs on the 3D pose task in the future.
[a] Exploiting Temporal Contexts with Strided Transformer for 3D Human Pose Estimation, TMM2022.
[b] Keypoint Transformer: Solving Joint Identification in Challenging Hands and Object Interactions for Accurate 3D Pose Estimation, CVPR2022.
[c] P-STMO: Pre-Trained Spatial Temporal Many-to-One Model for 3D Human Pose Estimation, ECCV2022.
**Q5: The Structure Alignment Loss is appealing since it brings contrastive learning in the paper, but given results in Table 2, it seems that it does not bring a significant gain. The improvements are quite marginal and similar to HAP-2 given a certain variance during the fine-tuning stage (only on MSMT17 results are better than HAP-2).**
A5: There might be a slight misunderstanding here. In fact, HAP-2 is the one that uses the structure alignment loss. Comparing HAP-2 vs. HAP-0 and HAP vs. HAP-1, our structure alignment loss yields +1.5\%/+1.3\% improvements on MSMT17 and +0.7\%/+0.7\% improvements on MPII, respectively.
We also appreciate your positive comments on the appealing structure alignment loss.
**Q6: Other human-centric pre-training method such as PATH are proposing zero-shot or results on downstream task with a frozen encoder. Do you have some results following this fine-tuning strategy (i.e. training only the head). It would be great to see how important it is to fine-tune the encoder for each task. Or maybe using adaptor.**
R6: Thanks for your insightful advice.
i) Unfortunately, our HAP fails in the evaluations with training only the head, achieving largely inferior performance to PATH.
The reason is that HAP is a self-supervised method that pre-trains on datasets **different** from those in the downstream tasks, while PATH is a supervised learning method that uses most of the downstream datasets.
ii) Fine-tuning the encoder for each task is important.
Under the fine-tuning setting, our HAP performs better than PATH on most of the benchmarks.
Moreover, PATH also points out that fine-tuning is necessary to obtain high performance on in-dataset, out-of-dataset and unseen-task evaluations.
**Q7: Find a different explanation of "views''.**
R7: Thank you for pointing out this. We would like to accept your suggestion and use "masked images'' to replace "views'' in the revised version.
---
Rebuttal Comment 1.1:
Title: Rebuttal
Comment: After reading the other reviews and the rebuttal from the authors, I am leaning towards acceptance of the paper. I will move to 'weak accept' since I see a clear consensus between the different reviews. The authors answered all questions that I raised during the first stage of the reviewing procedure.
The new results, such as the 2D-keypoints pretraining baseline and the pretraining on MSCOCO, only show that the proposed method works well as soon as a large-scale training set is deployed for pre-training.
Regarding Q4 on 3D pose, there is a recent paper, Humans-4D, accepted to ICCV'23, showing that ViT can be used for 3D pose estimation. This is just a remark for the authors.
I am still not really convinced by the Structure Alignment Loss; I would suggest the authors downgrade their claim regarding this 'novelty'.
I appreciate the answer to Q6; the authors clearly explain that fine-tuning the encoder is needed to achieve good results. That using the frozen encoder fails is a bit surprising; a comment could be added in the main paper.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer QQMw (2)
Comment: We greatly appreciate your positive and encouraging feedback regarding our work and response.
i) We concur with your viewpoint that our method ``is working well as soon as a large-scale training set is deployed for pre-training''.
ii) We really thank you for pointing out the recent paper Humans-4D [a], which is a ViT-based work for 3D pose estimation.
We will cite and study this work, and further integrate its experience into our HAP.
iii) Thanks for your constructive suggestion on the Structure Alignment Loss.
We will carefully revise our paper to downgrade the claim regarding the novelty of the Structure Alignment Loss.
iv) Thanks for your advice. We will add a comment on the failed results when using the frozen encoder in the main paper following your suggestion.
We appreciate your insights and suggestions that are valuable for us to improve the quality of our paper.
We will carefully revise and refine our manuscript following your suggestions.
Thanks again!
[a] Goel, Shubham, Georgios Pavlakos, Jathushan Rajasegaran, Angjoo Kanazawa, and Jitendra Malik. Humans in 4D: Reconstructing and Tracking Humans with Transformers. ICCV2023. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their insightful and constructive comments. We are encouraged that reviewers generally recognize the strengths of our paper in:
- Method: positive impact [QQMw], simple and effective [BM1x], make sense [QQMw], well motivated [ueQY], intuitive and appropriate [rHqg], straightforward and promising [rHqg].
- Experiment: sufficient and extensive [BM1x, ueQY], good and informative ablation studies [QQMw, BM1x], good performance [QQMw, ueQY].
- Presentation: clear [QQMw, rHqg], well-written [QQMw, BM1x, ueQY], well-constructed [QQMw, rHqg], well-described details [BM1x].
The point-wise responses have been provided below. We hope our responses can clarify all reviewers' confusion and alleviate all concerns.
Note that due to the limitation of rebuttal time, we only conducted pre-training for 100 epochs for the additional experiments suggested by the reviewers. In the future, we will expand to 400 epochs to further demonstrate the reliability of the conclusion. | NeurIPS_2023_submissions_huggingface | 2023 |
DELTA: Diverse Client Sampling for Fasting Federated Learning | Accept (poster) | Summary: The paper proposed an unbiased client sampling method in Federated Learning. The proposed method is motivated by considering both variance and gradient diversity when doing client sampling. The authors theoretically proved that the proposed sampling method can achieve better convergence rate for non-convex objective functions. Besides, the authors also demonstrated its practical advantages by numerical experiments.
Strengths: Considering gradient diversity is a novel idea in FL client sampling research. The paper demonstrates its proposed method with both theory and experiments. The motivation is clear, and the paper is in general well written. Given that client sampling is an important problem in federated learning and the claimed advantages of the proposed method, the paper can have a significant impact on the area.
Weaknesses: Assumption 4 is a very strong assumption. It can be violated in many problems. For example, the simple least square regression problem can violate this assumption.
Besides, the paper seems to have too much content for a nine-page paper. As a result, the key theoretical result lacks intuition.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. May the authors discuss what will happen if Assumption 4 does not hold?
2. How large is U used in the theory of practical IS and practical DELTA?
3. May the authors discuss the key intuition behind the theoretical advantage of the new IS analysis? In my understanding, importance sampling can only improve the constants rather than the convergence order. Thus, I find the theoretical result quite surprising. Unfortunately, I do not have enough time to check the correctness of the theory.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors mentioned that their proposed method does not address the backdoor attack concern.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer 9Bi1, thank you for providing constructive feedback. We have fully revised our manuscript and have addressed all of the comments, as well as added new experiments to further strengthen our work. Please find our responses to your raised questions below:
> Assumption 4 is a very strong assumption. It can be violated in many problems. For example, the simple least square regression problem can violate this assumption.
>
Thank you for your suggestion. We would like to explain the soundness of Assumption 4:
- This assumption is essential for IS methods [1,2,3]. Furthermore, it finds common usage in the FL community for convergence analysis [4,5,6,7].
In particular, according to the definition of IS, $E_{q(x)}[h(x)]=E_{p(x)}\left[\frac{q(x)}{p(x)} h(x)\right]$, where $q(x)$ is the given sampling distribution, $p(x)$ is our proposed sampling distribution and $h(x)$ is the value function. With a little work (similar to our Corollary F.1), one can prove the variance is minimized when $p^*(x) \propto q(x)\|h(x)\|$. If $\|h(x)\|$ is not consistently bounded, $p(x)$ is meaningless.
- We would like to clarify that in our practical algorithm of DELTA, the used assumption is gradient heterogeneity bound: $E|| \nabla F_i(x_t)-\nabla f(x_t)||^2 \leq G^2$ instead of Assumption 4, as used in eq (89) of the convergence analysis of the practical algorithm. This is a looser assumption than Assumption 4.
- Figure 1 in the one-page response PDF demonstrates the gradient norm of FedIS on MNIST and FashionMNIST datasets, suggesting gradient can be bounded.
For the least square regression problem, the gradient becomes unbounded only if the model parameters are at infinity in the parameter space. In other words, when the model parameters are normally distributed in the parameter space, the gradient is always finite and therefore bounded.
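The optimal-distribution claim above can be checked numerically. Below is a minimal sketch (the gradient norms are hypothetical stand-ins, not values from the paper) showing that the IS estimator of $E_{q}[h]$ remains unbiased for any valid $p$, and that its variance is minimized (here, driven to zero, since $h \geq 0$ is scalar) at $p^*(x) \propto q(x)\|h(x)\|$:

```python
import numpy as np

# Hypothetical stand-ins for the norms ||h(x)|| over five outcomes.
norms = np.array([0.5, 1.0, 4.0, 2.5, 0.2])
n = len(norms)
q = np.full(n, 1.0 / n)  # original (uniform) sampling distribution q(x)

def is_mean_and_var(p):
    """Mean and variance of the IS estimator (q/p)*h when sampling from p."""
    w = (q / p) * norms
    mean = np.sum(p * w)              # equals E_q[h] for every valid p
    var = np.sum(p * w**2) - mean**2  # estimator variance under p
    return mean, var

p_opt = q * norms / np.sum(q * norms)  # p*(x) proportional to q(x)*||h(x)||
mean_u, var_u = is_mean_and_var(q)     # keep sampling from q itself
mean_o, var_o = is_mean_and_var(p_opt) # sample from the optimal p*
# mean_u == mean_o (unbiasedness); var_o vanishes while var_u does not
```

This also illustrates why boundedness matters: if some $\|h(x)\|$ were unbounded, the normalization defining `p_opt` would be undefined.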
> May the authors discuss what will happen if Assumption 4 does not hold?
>
We would like to clarify the consequence/meaning of Assumption 4 not holding and explain it :
- If Assumption 4 doesn’t hold:
1. All IS-type methods become inapplicable in FL, given that this assumption forms an inherent prerequisite for utilizing IS to attribute meaning to the expression $p^*(x)\propto ||\nabla F_i(x)||$.
2. FL algorithms may falter in training since a lack of gradient bounding implies that gradients can assume "nan" values, thereby disrupting the deep learning training process.
3. For the practical FedIS algorithm, employing the client's last-round participation gradient to approximate the current gradient would introduce significant, unbounded noise.
- Figure 1 in the one-page response PDF showcases the gradient norm of FedIS on MNIST and FashionMNIST datasets, indicating that the gradient can be bounded, as its norm remains below 8.
> How large is U used in the theory of practical IS and practical DELTA?
>
We would like to clarify that $U$ in practical IS and $\tilde{U}$ in practical DELTA have limited magnitudes. We also provide experimental results to illustrate this.
- Intuitively, $U=||\nabla F_i(x_{t,k},\xi_{t,k}) / \nabla F_i(x_{s,k},\xi_{s,k})||$ (similarly for $\tilde{U}$) tends to be smaller than 1, since the gradient tends to zero as training proceeds, where index $t$ denotes the current communication round and $s$ denotes the last participated round of client $i$. The discussion of $U$ is provided in Remark G.1, Appendix G.
- Figures 1 and 2 in the one-page response PDF depict the norms of gradients and gradient diversity across all clients in each round. Notably, these figures demonstrate that in the case of practical IS and practical DELTA, the change ratio of both gradient and gradient diversity remains limited, with the maximum norm being under 8 and the minimum norm exceeding 0.5.
> May the authors discuss what is the key intuition of theoretical advantage of the new IS analysis?
>
Thank you for your comment. We are willing to share with you the intuition of our theoretical advantage of IS analysis.
First, we clarify that the rate improvement comes from our improved analysis of unbiased sampling in federated learning, including FedAvg. IS then reduces variance.
***Intuition:***
- The intuition of unbiased-sampling FL: Though the existing analysis for partial client participation in FL (including FedAvg) can be extended to unbiased sampling scenarios, *the failure to leverage the unbiasedness property leads to imprecise convergence upper bounds for unbiased-sampling FL [1,2,3].*
- The intuition of FedIS: Following the unbiased-sampling FL analysis, we optimize the convergence variance with respect to the sampling probability.
For a detailed explanation and evidence, please see the "Reply to All Reviewers, Elaborate on the novelty and contribution of our analysis" below.
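The unbiasedness property invoked above can be illustrated with a toy Monte Carlo sketch (all numbers are hypothetical and this is not the paper's implementation): for any sampling distribution $p$ with $p_i > 0$, reweighting each sampled client's update by $1/(n p_i)$ gives an unbiased estimate of the full-participation average, which is what lets the partial-participation bound be related to the full-participation one.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
updates = rng.normal(size=n)   # scalar stand-ins for client updates
full_avg = updates.mean()      # full-participation aggregate

# An arbitrary valid sampling distribution (bounded away from zero).
p = np.abs(updates) + 0.5
p /= p.sum()

# Exact unbiasedness: E_{i~p}[ u_i / (n * p_i) ] = (1/n) * sum_i u_i
analytic_mean = np.sum(p * updates / (n * p))

# Monte Carlo check, sampling m clients with replacement per round.
m, rounds = 10, 20000
idx = rng.choice(n, size=(rounds, m), p=p)
ests = (updates[idx] / (n * p[idx])).mean(axis=1)
# ests.mean() concentrates around full_avg regardless of the choice of p
```

The choice of $p$ then only affects the variance of `ests`, which is exactly the quantity FedIS/DELTA optimize.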
[1]Stochastic optimization with importance sampling for regularized loss minimization, ICML 2015
[2]Not all samples are created equal: Deep learning with importance sampling. ICML 2018
[3]Low-Cost Lipschitz-Independent Adaptive Importance Sampling of Stochastic Gradients. ICPR, 2021
[4]Diverse client selection for federated learning: Submodularity and convergence analysis. ICML 2021
[5]On the effectiveness of partial variance reduction in federated learning with heterogeneous data. CVPR. 2023
[6]Sharper convergence guarantees for asynchronous sgd for distributed and federated learning. NIPS, 2022
[7]Optimal client sampling for federated learning. TMLR, 2022.
---
Rebuttal 2:
Title: To Reviewer 9Bi1
Comment: Dear Reviewer 9Bi1,
I hope all is well.
We are sincerely grateful for your valuable time and expertise devoted to assessing our submission. Your insights and comments have greatly aided in refining our work.
Regarding your concerns, we hope that our rebuttal was able to shed light on them. In summary, we prove that the practical DELTA requires a looser assumption ($E\|\nabla F_i(x_t)-\nabla f(x_t)\|^2 \leq G^2$) instead of Assumption 4, while the practical FedIS relies on Assumption 4 since it constitutes a necessary requirement for IS in deep learning. Furthermore, we offer experimental results demonstrating bounded gradient norms and diversity. Moreover, we explain the key intuition behind our improved FedIS analysis: rate improvement via a refined analysis of unbiased sampling in FL with partial client participation, and variance reduction via IS, as you understood.
Your valuable suggestions have greatly improved our paper. We deeply value your expertise, and we have integrated your feedback to refine our work. Thank you once again for your invaluable time and consideration.
Best Regards,
authors
---
Rebuttal Comment 2.1:
Title: Reply to Rebuttal
Comment: Thank you for your reply. Most of my concerns are addressed. However, I still have doubts about how IS can improve the order of the convergence rate rather than the constant. I do not find the intuition given for the theory very convincing on this point. I would like to keep my score due to lack of confidence.
---
Reply to Comment 2.1.1:
Title: To Reviewer 9Bi1
Comment: Thank you very much for your feedback and the time you've dedicated to reviewing our rebuttal. There might be some misunderstanding regarding your questions on the IS analysis. We would like to offer further elaborations as follows:
- **`[IS improves the constant]`** First of all, we would like to reiterate that the role of IS is to reduce convergence variance, i.e., improve the constant of term **$\mathcal{O}(\frac{1}{\sqrt{T}})$**, as per your understanding.
- **`[Enhanced analysis of FL under unbiased sampling improves the order]`** As for the order improvement of term $\mathcal{O}(\frac{1}{T^{\frac{2}{3}}})$, it results from our improved analysis of FL under unbiased sampling. The intuition behind why the improved analysis of FL under unbiased sampling surpasses the existing analysis of FL with partial client participation (vanilla FedAvg with random sampling)[1,2,3] is:
- `[Unbiased sampling bridges the convergence rate gap between partial and full user participation]` **Unbiased sampling allows for the equitable transformation of the aggregated model updating bounds with partial user participation into model updating bounds with full user participation**, such as $E\|\Delta_t\|^2$ in Eq. (33), thereby improving the convergence rate order from vanilla FedAvg with random sampling $\mathcal{O}(\frac{1}{T}+\frac{1}{\sqrt{T}} + \frac{1}{T^{\frac{2}{3}}})$ to full client participation $\mathcal{O}(\frac{1}{T}+\frac{1}{\sqrt{T}})$. Comprehensive information on the utilization of unbiased sampling in the derivation process can be found in our "Reply to All Reviewers.”
- `[Evidence of same rate as full participation]` When incorporating full client participation into our analysis, **the upper bound of our convergence recovers the convergence rate of FedAvg with full client participation**, i.e., $\mathcal{O}(\frac{1}{T}+\frac{1}{\sqrt{nKT}})$ [2].
- `[An improved FedAvg analysis]` Our improved analysis of FL under unbiased sampling can also be viewed as an improved convergence analysis for vanilla FedAvg with random sampling.
- **`[Comparison between ours and existing IS]`** Since IS is one kind of unbiased sampling, **we establish the convergence analysis of IS based on our improved analysis of FL under unbiased sampling.** Thus, our FedIS analysis can achieve an order of $\mathcal{O}(\frac{1}{T}+\frac{1}{\sqrt{T}})$ , in which IS helps reduce the constant of term $\mathcal{O}(\frac{1}{\sqrt{T}})$. In contrast, **existing IS analysis is established on the analysis of vanilla FedAvg with partial client participation** with order $\mathcal{O}(\frac{1}{T}+\frac{1}{\sqrt{T}} + \frac{1}{T^{\frac{2}{3}}})$, thus our FedIS analysis yields superior outcomes compared to prior research efforts.
- **`[Same upper bound in recent studies]`** Some very recent studies [4,5,6] have similarly noted that the convergence rate of vanilla FedAvg is not tight. These studies have employed various techniques to attain a convergence rate of $\mathcal{O}(1/\epsilon+1/\epsilon^2)$, as our rate order, rather than the rate associated with vanilla FedAvg. This achievement gets a tighter convergence result as the mini-batch SGD convergence is lower bounded by $\mathcal{O}(1/\epsilon+1/\epsilon^2)$ [7]. Though the used techniques are different and orthogonal to ours (unbiased sampling), these variance reduction methods all achieve the same order.
In summary, we present a novel theoretical analysis of nonconvex FedIS. The improved analysis of FL under unbiased sampling establishes a convergence rate of order $\mathcal{O}(\frac{1}{T}+\frac{1}{\sqrt{T}})$, and the role of IS contributes to reducing the constant of term $\mathcal{O}(\frac{1}{\sqrt{T}})$.
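For an at-a-glance comparison, the rate orders referenced in the discussion above (with $T$ the number of rounds, $n$ the number of clients, and $K$ the local steps) can be collected as:

```latex
\begin{align*}
\text{vanilla FedAvg, random sampling:}\quad
  &\mathcal{O}\!\Big(\tfrac{1}{T}+\tfrac{1}{\sqrt{T}}+\tfrac{1}{T^{2/3}}\Big)\\
\text{improved analysis, unbiased sampling (FedIS/DELTA):}\quad
  &\mathcal{O}\!\Big(\tfrac{1}{T}+\tfrac{1}{\sqrt{T}}\Big)
  \quad\text{(IS/DELTA reduce the constant of the }\tfrac{1}{\sqrt{T}}\text{ term)}\\
\text{full client participation [2]:}\quad
  &\mathcal{O}\!\Big(\tfrac{1}{T}+\tfrac{1}{\sqrt{nKT}}\Big)
\end{align*}
```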
[1]Scaffold: Stochastic controlled averaging for federated learning. ICML, 2020.\
[2]Achieving linear speedup with partial worker participation in non-iid federated learning. ICLR 2021.\
[3]Reddi S J, Charles Z, Zaheer M, et al. Adaptive Federated Optimization[C]//International Conference on Learning Representations. 2020.\
[4] Li B, Schmidt M N, Alstrøm T S, et al. Partial Variance Reduction improves Non-Convex Federated learning on heterogeneous data[J]. arXiv preprint arXiv:2212.02191, 2022.\
[5] Koloskova A, Stich S U, Jaggi M. Sharper convergence guarantees for asynchronous sgd for distributed and federated learning[J]. arXiv preprint arXiv:2206.08307, 2022.\
[6] Wang S, Ji M. A Unified Analysis of Federated Learning with Arbitrary Client Participation[C]//Advances in Neural Information Processing Systems.2022\
[7]Arjevani Y, Carmon Y, Duchi J C, et al. Lower bounds for non-convex stochastic optimization[J]. Mathematical Programming, 2023, 199(1-2): 165-214.
---
Rebuttal 3:
Title: To reviewer 9Bi1
Comment: Dear reviewer 9Bi1,
We greatly appreciate your insightful review, which plays a crucial role in enhancing the quality of our manuscript. Furthermore, we've incorporated comments from other reviewers to further enhance our paper. We summarize the revisions in the "Author Rebuttal by Authors" global comment.
As the discussion period is ending soon, we want to confirm that our responses, including the second-round discussion on the intuition of FedIS analysis, have adequately addressed your concerns.
We are happy to take any follow-up questions and look forward to discussing with you.
We sincerely appreciate your expertise and dedicated time in reviewing our manuscript.
Warmest regards,
Authors | Summary: The authors introduce DELTA (Diverse Client Sampling) which is an unbiased method for client selection in Federated Learning (FL). DELTA is heavily inspired by Importance Sampling (IS) and cluster-based IS, resulting in sampling diverse clients with significant gradients however without the clustering. They provide convergence rate analysis of FL under IS scheme (FedIS) and then of FL under their DELTA scheme (FedDELTA) and closed form expressions for the sampling probabilities are given and discussed. Practical versions of those algorithms FedPracIS and FedPracDELTA are also analyzed. In the experiments loss profiles, accuracies and timings of proposed algorithms against baselines (including FedAvg, Power-of-Choice and cluster-based IS) and over a collection of real and synthetic datasets are presented and discussed, exhibiting the empirical advantages of the sampling scheme.
Strengths: - Comprehensive and nicely structured, easy to follow presentation, with a good balance of theoretical, practical and intuitive/explanatory information.
- Rich empirical results also communicating the benefits of proposed, analyzed sampling approaches.
- Simplicity of resulting FL algorithm: Basically the well-known, seminal FedAvg idea/template with a novel client sampling scheme (DELTA, PracticalDELTA) on top (Algorithm 1).
Weaknesses: - Minor: "developing an efficient and effective practical algorithm for gradient-based sampling methods" in future work section (Section 6) could confuse the reader (since DELTA is exactly such an algorithm). Perhaps a different wording, emphasizing room of improving over this new/existing development?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - In the experiments, there seems to be a variety of thresholds (70%, 54%, 85%, 80%) for different experiments and metrics in Tables 2 and 3. I would expect a more uniform way of presentation, some common threshold.
- On a related topic: Since for Table 2, 500 communication rounds were completed for all (dataset, algorithm) pairs why mean of maximum five accuracies? Are all these maximum values observed within the last x% of rounds? How do these accuracy profiles evolve? Have they saturated after 500 rounds or there is still room for improvements? I mean some short comment summarizing the full profile plots (similar to Figures 12a and 12b in the appendix) would be more than sufficient.
- How do you implement a sampling strategy like that of Equation (16) for FedPracDELTA in the experiments. There are parameters like alpha's, sigma's and zeta's in those: it would be great to include or refer to brief hints on how you choose values for those for conducting the empirical evaluation.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer yPdX, thank you very much for your appreciation of our work. Please find our responses to your raised questions below:
> Minor: "developing an efficient and effective practical algorithm for gradient-based sampling methods" in future work section (Section 6) could confuse the reader (since DELTA is exactly such an algorithm). Perhaps a different wording, emphasizing room of improving over this new/existing development?
>
Thanks for your suggestion, we would like to use the statement “further improve the efficiency of the existing gradient-based sampling methods” to enhance clarity.
> In the experiments, there seems to be a variety of thresholds (70%, 54%, 85%, 80%) for different experiments and metrics in Tables 2 and 3. I would expect a more uniform way of presentation, some common threshold.
>
Thanks for your suggestion. In the table below we provide results under common thresholds, namely 50% for CIFAR-10 and 80% for CelebA, in place of the 54% and 85% thresholds.
In addition, we would like to clarify that we used thresholds such as 54% in the paper because they are the best integer accuracies that FedAvg achieves in our experiments.
| | CIFAR-10 | | CelebA | |
| --- | --- | --- | --- | --- |
| Algorithm | Accuracy (%) | Communication Rounds for 50% | Accuracy (%) | Communication Rounds for 80% |
| FedAvg | 54.28±0.29 | 181 (1.0$\times$) | 85.92±0.89 | 339 (1.0$\times$) |
| Cluster-based IS | 54.83±0.02 | 187 (0.91$\times$) | 86.77±0.11 | 303 (1.11$\times$) |
| FedIS | 55.05±0.27 | 168 (1.07$\times$) | 88.12±0.71 | 261 (1.29$\times$) |
| DELTA | 55.20±0.26 | 151 (1.20$\times$) | 89.67±0.56 | 257 (1.32$\times$) |
> On a related topic: Since for Table 2, 500 communication rounds were completed for all (dataset, algorithm) pairs why mean of maximum five accuracies? Are all these maximum values observed within the last x% of rounds? How do these accuracy profiles evolve? Have they saturated after 500 rounds or there is still room for improvements? I mean some short comment summarizing the full profile plots (similar to Figures 12a and 12b in the appendix) would be more than sufficient.
>
We appreciate your suggestion. We would like to include comments about Table 2 in the experiment section as follows:
The maximum values reported in Table 2 are observed during the last 4% of rounds, where these algorithms have already reached convergence. The term 'maximum five accuracies' refers to the mean of the five highest accuracy values obtained within the plateau region of the accuracy curve. By computing the average of these values, we aim to minimize error and variance.
> How do you implement a sampling strategy like that of Equation (16) for FedPracDELTA in the experiments. There are parameters like alpha's, sigma's and zeta's in those: it would be great to include or refer to brief hints on how you choose values for those for conducting the empirical evaluation.
>
Thank you for your constructive suggestion. We would like to clarify the implementation of sampling algorithms:
We introduce the parameter settings in Appendix H.2 of the submission. Specifically, the default value of $\alpha$ is 0.5, whereas $\sigma$ and $\zeta$ are not hyperparameters but are computed from the locally obtained gradients, as elaborated in L262, L263, and Eq. (16).
We are willing to include brief hints on how to set these hyperparameters in the main paper of our revised version.
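To make the above concrete, here is a minimal sketch (a hypothetical helper we wrote for illustration, not the authors' exact Eq. (16) implementation) of how such sampling probabilities could be computed from per-client gradient-diversity and local-variance estimates, with $\alpha=0.5$ as the stated default:

```python
import numpy as np

def delta_probs(grad_diversity, local_var, alpha=0.5):
    """Hypothetical DELTA-style sampling probabilities.

    grad_diversity: per-client estimates of ||grad F_i - grad f||
    local_var:      per-client local-variance estimates (sigma_{L,i})
    alpha:          weight between the two terms (0.5 by default, per App. H.2)
    """
    g = np.asarray(grad_diversity, dtype=float)
    s = np.asarray(local_var, dtype=float)
    # Mirrors the rebuttal's R_i = sqrt(a1 * diversity + a2 * sigma) form
    r = np.sqrt(alpha * g + (1 - alpha) * s)
    return r / r.sum()  # normalize to a valid distribution

p = delta_probs([0.9, 0.1, 0.5, 0.3], [0.2, 0.2, 0.1, 0.4])
# Sample m = 2 of 4 clients according to p (marginals are only approximate
# when sampling without replacement)
clients = np.random.default_rng(0).choice(4, size=2, replace=False, p=p)
```

Clients with larger gradient diversity or local variance receive proportionally larger sampling probability, which is the intended behavior of the scheme.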
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the clarifications and the clearly stated plans for incorporating suggested changes. I will keep my score.
---
Reply to Comment 1.1.1:
Title: To reviewer yPdX
Comment: Dear reviewer yPdX,
Thanks a lot for valuing our efforts. We sincerely appreciate your time and dedication in reviewing our paper. | Summary: This paper proposes a client sampling scheme in federated learning for faster convergence. The authors argue that the previous client sampling method based on importance sampling ignores gradient diversity. Convergence analysis is provided, as well as experiments on four image datasets.
Strengths: The problem of client selection in federated learning is interesting and worth investigating.
Weaknesses: [1] The paper overall is not well-written. Both organization and writing could be improved. There are multiple grammar errors even in the Introduction.
[2] Both theoretical and main method seems to rely on gradient information. However, gradient information is usually unavailable in a federated learning setting. It is unclear to me what's the contribution of this paper either theoretically or practically.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: See weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: The authors didn't really discuss limitations, even though the paper has a limitations section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer stNZ, thanks for your time in reviewing our paper, please find our responses to your raised questions:
> Both theoretical and main method seems to rely on gradient information. However, gradient information is usually unavailable in a federated learning setting. It is unclear to me what's the contribution of this paper either theoretically or practically.
>
We have carefully checked our writing according to your suggestions.
**Regarding gradient information in FL:**
- The gradient information used in this paper is identical to the model difference up to the learning-rate factor $\eta_L$: $\Delta_t^i=x_{t,K}^i-x_{t,0}^i=-\eta_L\sum_{k=0}^{K-1} g_{t,k}^i$. Our implementation aligns with the majority of works in the FL community, which require user-provided gradients (or model differences), such as [1,2,3,4,5,6,7,8].
- Gradient privacy protection stands as a distinct research direction from the sampling algorithm studied in this paper. Furthermore, privacy protection techniques like secret sharing and encryption protocols [9] are orthogonal to our sampling algorithm and can be directly employed for information preservation.
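As a sanity check of this equivalence, the following self-contained sketch (illustrative values only) runs $K$ local SGD steps and verifies that the model difference equals $-\eta_L$ times the summed gradients, so the server can recover $\hat{g}_i^t$ without ever receiving raw per-step gradients:

```python
import numpy as np

def local_training(x0, grads, lr):
    """Run K local SGD steps; grads[k] plays the role of g_{t,k}^i."""
    x = x0.copy()
    for g in grads:
        x = x - lr * g
    return x

rng = np.random.default_rng(0)
x0 = rng.normal(size=5)
grads = [rng.normal(size=5) for _ in range(4)]  # K = 4 local steps
lr = 0.1  # plays the role of eta_L

xK = local_training(x0, grads, lr)
delta = xK - x0            # model difference sent to the server
g_hat = (x0 - xK) / lr     # cumulative gradient recovered by the server

# delta = -lr * sum(grads), so g_hat = sum(grads)
assert np.allclose(delta, -lr * sum(grads))
assert np.allclose(g_hat, sum(grads))
```

This identity is exact for plain SGD with a constant local learning rate, which is the setting the rebuttal describes.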
**Clarify our contribution:**
Please see “Reply to All reviewers”, in which we clarify our contribution item by item.
[1] Communication-efficient learning of deep networks from decentralized data. AISTATS 2017.
[2] Tackling the objective inconsistency problem in heterogeneous federated optimization. NeurIPS 2020.
[3] Federated learning under importance sampling. IEEE Transactions on Signal Processing, 2022.
[4] Can 5th Generation Local Training Methods Support Client Sampling? Yes! AISTATS 2023.
[5] Optimal client sampling for federated learning. TMLR 2022.
[6] Fast heterogeneous federated learning with hybrid client selection. UAI 2023.
[7] Federated optimization in heterogeneous networks. MLSys 2020.
[8] SCAFFOLD: Stochastic controlled averaging for federated learning. ICML 2020.
[9] A privacy preserving federated learning scheme using homomorphic encryption and secret sharing. Telecommunication Systems, 2023.
---
Rebuttal Comment 1.1:
Title: Rebuttal Reply
Comment: Thanks for the response.
I was confused by Algorithm 1. Lines 7 and 8 give me the impression that each client only performs one gradient descent step. On the contrary, methods like FedAvg would let each client run for a couple of epochs. Could the authors further clarify this, such as how many epochs the authors used in the evaluation?
I would be happy to increase my score once I make sure I understand the work correctly.
---
Reply to Comment 1.1.1:
Title: To reviewer stNZ
Comment: Thank you for your response. We would like to clarify that each client performs multiple local epochs of gradient descent, just as in FedAvg.
- In Algorithm 1, Lines 7 and 8 denote the local model update during local epoch $k$, with a total of $K$ ($k \in [0,K-1]$) local epochs. Therefore, Algorithm 1 lets each client run for $K$ epochs.
As demonstrated in Line 263, the gradient used by DELTA is $\hat{g}^t_i= \sum_{k=0}^{K-1} \nabla F_i(x_{t,k}^i,\xi_{t,k}^i)=\frac{1}{\eta_L}(x_{t,0}^i-x_{t,K}^i)$, which represents the cumulative gradient descent over multiple local updates.
- Similar algorithmic procedures, akin to those presented in Lines 7 and 8, are used in various FL works [1,2,3,4], serving to represent the multiple local updates of FL.
- In our experiments, each algorithm executes 5 local epochs (5 gradient descent steps), as detailed in the experiment setup of Appendix H.2.
- In addition, FedIS (Algorithm 2) also uses gradient $\hat{g}_i^t$, the cumulative gradient descent from multiple local updates, as demonstrated in Line 147.
We would like to know if our responses have addressed your concerns and look forward to discussing with you. We would greatly appreciate it if you could reconsider the review score.
[1]Adaptive federated optimization. ICLR 2021.
[2]Achieving linear speedup with partial worker participation in non-iid federated learning. ICLR 2021.
[3]Scaffold: Stochastic controlled averaging for federated learning. ICML, 2020.
[4]Optimal client sampling for federated learning. TMLR, 2022.
---
Rebuttal 2:
Title: Thank you for raising our score.
Comment: Dear Reviewer stNZ,
We sincerely thank you for raising our score from 3 to 5.
Thank you again for the time and effort you have invested in reviewing our paper. | Summary: The paper proposes a novel unbiased client sampling scheme called DELTA for Federated Learning (FL). The authors address the issue of unrepresentative client subsets in FL, which can lead to significant variance in model updates and slow convergence. They show that existing unbiased sampling methods have limitations in terms of convergence speed and redundancy. In contrast, DELTA selects diverse clients with significant gradients without clustering operations. The authors also propose practical algorithms for DELTA and Importance Sampling (IS) that rely on accessible client information. The results are validated through experiments on both synthetic and real-world datasets.
Strengths: 1. The paper presents a novel unbiased sampling scheme for FL that addresses the limitations of existing methods. The approach of selecting diverse clients based on gradient diversity without clustering operations appears to be new and promising.
2. The paper provides theoretical analyses and experimental results to support the proposed sampling schemes. The convergence rates of the practical algorithms are shown to attain the same order as the theoretical optimal sampling probabilities.
3. The paper is well-organized and clearly presents the motivation, methodology, and results of the study. The figures effectively illustrate the concepts and comparisons between different sampling methods.
Weaknesses: 1. In Corollary 4.2, the convergence rate of the practical algorithm depends on a term $\tilde{U}$, which can be very large if $\left\|\nabla F_i\left(x_s\right)-\nabla f\left(x_s\right)\right\|$ is small.
2. Can the proposed client selection framework be applied to federated learning algorithms other than vanilla SGD. I believe a unified theoretical analysis would show the effectiveness of the proposed method.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See the weakness part above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper does not explicitly address the limitations and potential negative societal impacts of the proposed work, but I do not think that this is a major issue.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 1Yft, we sincerely appreciate your thorough review of our paper. We have diligently incorporated your feedback to enhance the quality of our work. Kindly find our comprehensive responses to your questions outlined below:
> In Corollary 4.2, the convergence rate of the practical algorithm depends on a term $\tilde{U}$, which can be very large if $||∇F_i(x_s)−∇f(x_s)||$ is small.
>
We would like to clarify that $\tilde{U}$ will not be very large. $\tilde{U}$ is determined by $U_1=\frac{||\nabla F_i(x_t)-\nabla f(x_t)||}{||\nabla F_i(x_s)-\nabla f(x_s)||}$ and $U_2=\frac{||\sigma_{L,i,t}||}{||\sigma_{L,i,s}||}$, where $t$ denotes the current training round and $s$ denotes the last round in which client $i$ participated. Specifically,
- $U_1=E\left(\frac{||\nabla F_i(x_t)-\nabla f(x_t)||}{||\nabla F_i(x_s)-\nabla f(x_s)||}\right)$ represents the between-round change ratio of gradient diversity. As training progresses, gradient diversity typically diminishes, making this ratio smaller than 1.
- Additionally, Figure 2 in the one-page response PDF illustrates the minimum and maximum norms of gradient diversity per round across all clients. Even in the most adverse scenario where $||\nabla F_i(x_t) - \nabla f(x_t)||$ is chosen as its maximum and $||\nabla F_i(x_s) - \nabla f(x_s)||$ is chosen as its minimum, $U_1$ is constrained. The minimum value across all rounds exceeds 0.5, while the maximum value remains below 8.
- In addition to $U_1$, the local variance-related term $U_2$ also contributes to the overall value of $\tilde{U}$. The precise expression for $\tilde{U}$ is $E\left|\left|\frac{R_i^t}{R_i^s}\right|\right|$, where $R_i^s = \sqrt{\alpha_1||\nabla F_i(x_s) - \nabla f(x_s)|| + \alpha_2\sigma_{L,s}}$. Therefore, due to the presence of $\sigma_{L,s}$, the denominator remains sufficiently nontrivial, preventing an excessive increase in the value of $\tilde{U}$.
The above discussion regarding $\tilde{U}$ is provided in Remark G.1 and eq (89) of Appendix G, and we refer to the discussion in the paper L283.
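A small numeric illustration of this point (hypothetical values, not taken from the paper's experiments): including the $\sigma$ term in the denominator of $R_i^s$ keeps the ratio moderate even when the round-$s$ gradient diversity is tiny, whereas without it the ratio can blow up:

```python
import numpy as np

def ratio(div_t, div_s, sig_t, sig_s, a1=0.5, a2=0.5):
    """Numeric sketch of R_i^t / R_i^s, with R = sqrt(a1*diversity + a2*sigma)."""
    return np.sqrt(a1 * div_t + a2 * sig_t) / np.sqrt(a1 * div_s + a2 * sig_s)

# With the sigma term, a near-zero round-s diversity cannot shrink the
# denominator to zero, so the ratio stays small:
u_with_sigma = ratio(div_t=1.0, div_s=1e-6, sig_t=0.2, sig_s=0.2)

# Without it, the same diversity values make the ratio explode:
u_without_sigma = ratio(div_t=1.0, div_s=1e-6, sig_t=0.0, sig_s=0.0)
```

Here `u_with_sigma` is below 3 while `u_without_sigma` exceeds 100, illustrating why the $\sigma_{L,s}$ term prevents an excessive $\tilde{U}$.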
> Can the proposed client selection framework be applied to federated learning algorithms other than vanilla SGD. I believe a unified theoretical analysis would show the effectiveness of the proposed method.
>
We appreciate your suggestion. In addition to the results already presented in Table 2, which include momentum, prox, and VARP, we are willing to provide additional experimental results.
Specifically, we conduct experiments on LEAF (FEMNIST and CelebA) with other non-vanilla SGD algorithms, namely Adagrad and Adam. The results are shown in the below Table:
| Algorithm on FEMNIST | Adagrad | | Adam | |
| --- | --- | --- | --- | --- |
| | Accuracy (%) | Rounds for 80% | Accuracy (%) | Rounds for 80% |
| FedAvg | 80.93 ± 0.08 | 893 (1.0$\times$) | 80.04 ± 0.15 | 882 (1.0$\times$) |
| Cluster-based IS | 80.69±0.12 | 760 (1.17$\times$) | 79.11± 0.18 | - |
| FedIS | 80.96 ± 0.03 | 723 (1.24$\times$) | 80.10 ±0.25 | 787 (1.12$\times$) |
| DELTA | 81.79 ± 0.09 | 612 (1.46$\times$) | 80.92 ±0.07 | 600(1.47$\times$) |
| Algorithm on CelebA | Adagrad | | Adam | |
| --- | --- | --- | --- | --- |
| | Accuracy (%) | Rounds for 80% | Accuracy (%) | Rounds for 80% |
| FedAvg | 88.92 ± 0.08 | 329 (1.0$\times$) | 89.04 ± 0.22 | 244 (1.0$\times$) |
| Cluster-based IS | 89.71±0.10 | 329 (1.0$\times$) | 89.26± 0.19 | 164 (1.49$\times$) |
| FedIS | 90.14 ± 0.01 | 243 (1.35$\times$) | 89.92 ±0.05 | 140 (1.74$\times$) |
| DELTA | 90.38±0.02 | 214 (1.54$\times$) | 90.58±0.07 | 109 (2.24$\times$) |
Analogous to [1], our analysis seamlessly extends to FedAdagrad and FedAdam. The divergence between these algorithms and SGD originates from the learning rate adaptation, which doesn't alter the convergence analysis steps.
Specifically, the difference in analysis arises with $x_{t+1}=x_t+\eta \frac{\Delta_t}{\sqrt{v_t}+\tau}$ replacing $x_{t+1}=x_t+\eta \Delta_t$, where $v_t=\beta v_{t-1}+\left(1-\beta\right) \Delta_t^2$. As demonstrated in Corollary 1 and Corollary 2 of [1], following the very similar steps of vanilla SGD, Adam's convergence rate order is the same as that of SGD.
[1] Adaptive federated optimization. ICLR 2021.
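For concreteness, the simplified server update quoted above can be sketched as follows (a minimal illustration with assumed constants, not the authors' or [1]'s implementation):

```python
import numpy as np

def adaptive_server_step(x, delta, v, eta=0.1, beta=0.99, tau=1e-3):
    """One FedAdam-style server update, per the rebuttal's simplified form.

    FedAvg would simply use x + eta * delta; here the step is rescaled by a
    running second moment v of the aggregated model difference delta.
    """
    v = beta * v + (1 - beta) * delta ** 2
    x = x + eta * delta / (np.sqrt(v) + tau)
    return x, v

# Toy usage: one round with an all-ones aggregated update
x = np.zeros(3)
delta = np.ones(3)              # aggregated client model difference Delta_t
x1, v1 = adaptive_server_step(x, delta, np.zeros(3))
```

Since only this server-side step differs from vanilla SGD, the client-side sampling scheme plugs in unchanged, which is the point the rebuttal makes.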
---
Rebuttal 2:
Title: To reviewer 1Yft
Comment: Dear reviewer 1Yft,
Thank you for your invaluable review that significantly improved our manuscript's quality. In addition, we have taken into account the feedback provided by other reviewers to further refine our paper. For a summary of these revisions, please refer to the "Author Rebuttal by Authors" global comment.
As the discussion period concludes, we would like to ensure that we have adequately addressed all your concerns. Please inform us of any further clarifications or experimental evaluations that could enhance our work.
We deeply appreciate your expertise and dedicated time reviewing our manuscript.
Best regards,
Authors | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time and efforts in reviewing our paper.
We are encouraged they found our motivation and idea to be interesting (stNZ, yPdX), novel (1Yft, yPdX, 9Bi1), and promising (1Yft, yPdX). We have carefully considered their suggestions and incorporated them into our revised manuscript.
For convenience, we have prepared a one-page response PDF, containing two Figures and three Tables of ***new experiments***.
1. Figure 1 and 2 show the norm of gradient and gradient diversity on MNIST and FashionMNIST.
2. Table 1 and 2 are the results of sampling algorithms integrated with the non-vanilla SGD algorithms (Adagrad and Adam) on FEMNIST and CelebA.
3. Table 3 shows the results of sampling algorithms under common thresholds, i.e., 50% accuracy for CIFAR-10 and 80% accuracy for CelebA.
Our ***contributions*** are concluded as follows:
1. We are the first to propose DELTA, a diverse, gradient-based unbiased sampling scheme for FL. Our analysis shows that DELTA improves on the state-of-the-art FedAvg convergence rate by eliminating the $\mathcal{O}\left(1 / T^{2 / 3}\right)$ term and a $\sigma_G^2$-related term of order $\mathcal{O}\left(1 / T^{1 / 2}\right)$.
2. Our nonconvex FL convergence analysis with IS outperforms existing FedIS analysis, employing a more lenient assumption. Notably, we eliminate the $\mathcal{O}\left(1 / T^{2 / 3}\right)$ term from the convergence rate compared to existing unbiased sampling analysis, including FedIS and FedAvg.
3. We present a practical DELTA algorithm to mitigate the reliance on full gradients, along with a theoretical convergence guarantee.
4. Extensive experiments across datasets confirm DELTA's superiority over existing FL unbiased sampling methods and its compatibility with other optimization algorithms.
### ***Elaborate on the novelty and contribution of our analysis below:***
1. Regarding FedIS analysis,
1. **Existing challenges:** Despite existing convergence analyses of partial-participation FL [1,2,3], including the FedIS analyses that build on them [4, 5], none take full advantage of the nature of unbiased sampling, and thus they yield an imprecise upper bound on convergence.
2. **Our solution:** To tighten the FedIS upper bound, we first derive a tighter convergence upper bound for unbiased sampling FL. By adopting uniform sampling for unbiased probability, we achieve a tighter FedAvg convergence rate. Leveraging this derived bound, we optimize convergence variance using IS.
*Compared with existing unbiased-sampling FL works, including FedAvg and other FedIS analyses, our analysis of FedIS entails:*
- **A tighter Local Update Bound Lemma:** We establish Lemma C.3 using Assumption 3, diverging from the stronger assumption $||\nabla F_i(x_t)-\nabla f(x_t)||^2 \leq \sigma_G^2$ (used in [1,2]), and the derived Lemma C.3 achieves a tighter upper bound than other works (Lemma 4 in [1], Lemma 2 in [2]).
- **A tighter upper bound on aggregated model updates $E||\Delta_t||^2$:** By fully utilizing the nature of unbiased sampling, we convert the bound analysis of $A_2=E||\Delta_t||^2$ equivalently into a bound analysis of the participation variance $V\left(\frac{1}{m p_i^t} \hat{g}_i^t\right)$ and the aggregated model update under full participation (Eq. (33) and Eq. (34)). In contrast, instead of exploiting the property of unbiased sampling, [1] repeatedly applies Lemma 4 and [2] uses Lemma 2 to bound $A_2$. This inequality transformation imposes a loose upper bound on $A_2$, resulting in a convergence variance term governed by $\eta_L^3$, which leads to a rate of order $\mathcal{O}(T^{-\frac{2}{3}})$.
- **Relying on a more lenient assumption:** Beyond the aforementioned analytical improvement, our IS analysis obviates the necessity for unusual assumptions in other FedIS analysis such as Mix Participation [3] and $\rho$-Assumption [4].
2. Regarding DELTA analysis,
1. **Existing challenge:** IS focuses on minimizing $V\left(\frac{1}{m p_i^t} \hat{g}_i^t\right)$ in convergence variance $\Phi$ (Eq. (4)), while leaving other terms like $\sigma_L$ and $\sigma_G$ unreduced.
2. **Our solution:** Unlike IS, whose role is to reduce the update gap [5], we propose analyzing the surrogate objective for additional variance reduction.
*Compared with FedIS, our analysis of DELTA entails:*
- **Focusing on surrogate objective, introducing a novel Lemma and bound:**
1. We decompose the global objective's convergence into the surrogate objective and the update gap (Eq. (6)). For the surrogate objective analysis, we introduce the novel Lemma E.8 to bound local updates.
2. Leveraging the unique surrogate objective expression and Lemma E.8, we link the sampling probability with local variance and gradient diversity, deriving novel upper bounds for $A_1$ and $A_2$ (Eq. (57), Eq. (59)).
3. By connecting the update gap's convergence behavior to the surrogate objective through Definition E.1, Lemma C.2, and Eq. (6), we establish $\tilde{\Phi}$ (Eq. (10), (11)) as the new convergence variance of the global objective.
- **Optimizing convergence variance through novel $\tilde{\Phi}$:**
FedIS aims to reduce the update variance term $V(\frac{1}{m p_i^t}\hat{g}_i^t)$ in $\Phi$, while FedDELTA aims to minimize the entire convergence variance $\tilde{\Phi}$, which is composed of both gradient diversity and local variance. By minimizing $\tilde{\Phi}$, we obtain the sampling method DELTA, which further reduces the variance terms of $\Phi$ that cannot be minimized through IS.
[1]Adaptive federated optimization. ICLR 2021.
[2]Achieving linear speedup with partial worker participation in non-iid federated learning. ICLR 2021.
[3]Scaffold: Stochastic controlled averaging for federated learning. ICML, 2020.
[4]Tackling system and statistical heterogeneity for federated learning with adaptive client sampling. IEEE INFOCOM 2022.
[5]Optimal client sampling for federated learning. TMLR, 2022.
Pdf: /pdf/80981c61aa74d2d89a7635280f8b510526dc395e.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies sampling schemes in federated learning. In particular, the authors first develop a new analysis method for the importance sampling (IS) strategy, which achieves a better convergence rate, and then propose a new sampling approach called DELTA, which can outperform the IS scheme. In addition, the authors propose a practically implementable version of DELTA and evaluate its performance experimentally.
Strengths: 1. The authors develop a new convergence analysis for IS for nonconvex objective functions.
2. The authors propose a new sampling approach called DELTA and propose a practically implementable version of DELTA.
3. The proposed algorithm is evaluated using experiments.
Weaknesses: 1. The novelty of the analysis is unclear. It seems to be a standard "by round" analysis. Since the authors claim the analysis is one of the major contributions of this paper, I suggest the authors elaborate on this point in detail.
2. The Assumption 4 seems strong and can significantly simplify the analysis in general. In addition, it can be violated easily in practice.
3. The writing of the paper is not careful enough. Many parameters are presented without explanation, although it is not very hard to figure them out. For example, in Theorem 3.1, parameters such as K and T are not explained. In addition, instead of just mentioning some lemmas and assumptions in the appendix or other papers, it would be much better to state them in the main paper to make reading easier.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The improvement of the proposed analytical approach is unclear. It seems all the approach achieve a $O(1/\epsilon^2)$ of the communication complexity.
2. When IS is analyzed, Assumption 3 is used, whereas when DELTA is analyzed, it seems that Assumption 3 is not used while the gradient diversity $\zeta_{G,i,t}$ is used, since $\sigma_G$ is not part of Theorem 3.4. Comparing Corollary 3.7 with Theorem 3.1, it seems the advantage of DELTA is to eliminate the $\sigma_G$ term. If that is correct, it seems a little unfair to use different assumptions to analyze the two different approaches.
3. The novelty of the analysis has to be more elaborated. Please clearly state the main differences between the proposed analysis and other analysis in the literature such as IS and many works related to FedAvg. To me, it is a pretty standard "by round" analysis for nonconvex objective functions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discussed the limitations of the work in the end.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer 71Y7 , we would like to thank you for your time spent reviewing our paper and for providing constructive comments. Please kindly find our responses to your raised questions below:
> The novelty of the analysis is unclear. Please clearly state the main differences between the proposed analysis and other analysis in the literature such as IS and many works related to FedAvg.
>
Thanks for your suggestion, we would like to elaborate on the novelty of our analysis. Please see “Reply to All Reviewers” for details.
> Assumption 4 seems strong and can significantly simplify the analysis in general. In addition, it can be violated easily in practice.
>
Thank you for your comment. We would like to explain the soundness of Assumption 4:
- This assumption is essential for IS methods [1,2,3]. Furthermore, it finds common usage in the FL community for convergence analysis [4,5,6,7].
In particular, according to the definition of IS, $E_{q(x)}[h(x)]=E_{p(x)}\left[\frac{q(x)}{p(x)} h(x)\right]$, where $q(x)$ is the given sampling distribution, $p(x)$ is our proposed sampling distribution, and $h(x)$ is the value function. With a little work (similar to our Corollary F.1), one can show that the variance is minimized when $p^*(x) \propto q(x)\|h(x)\|$. If $\|h(x)\|$ is not bounded, this optimal $p(x)$ is ill-defined.
- We would like to clarify that in our practical algorithm of DELTA, the used assumption is gradient heterogeneity bound: $E|| \nabla F_i(x_t)-\nabla f(x_t)||^2 \leq G^2$ instead of Assumption 4, as used in eq (89) of the convergence analysis of the practical algorithm. This is a looser assumption than Assumption 4.
- Figure 1 in the one-page response PDF demonstrates the gradient norm of FedIS on MNIST and FashionMNIST datasets, suggesting gradient can be bounded.
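The variance-minimizing property mentioned above can be checked numerically in the discrete client-sampling case. A toy sketch (made-up values) with $q$ uniform over $n$ clients and $p^*_i \propto \|h_i\|$: for positive scalar $h$, every reweighted sample $h_i \cdot q_i/p^*_i = h_i/(n p^*_i)$ equals the target exactly, so the importance-sampling estimator has zero variance:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
values = np.abs(rng.normal(size=n)) + 0.1  # stand-in for ||h_i|| per client, h > 0
target = values.mean()                     # E_q[h] under uniform q

p_star = values / values.sum()             # optimal p*(i) proportional to q(i)*||h_i||

# Reweighted samples h_i / (n * p*_i): all identical to the target,
# so sampling from p* estimates E_q[h] with zero variance for this h.
reweighted = values / (n * p_star)
assert np.allclose(reweighted, target)

# Under uniform sampling, the per-draw values are just `values`, whose
# spread around `target` is exactly the variance that IS removes here.
uniform_spread = values.var()
```

With vector-valued $h$ the variance cannot be driven to zero, but the same proportionality still minimizes it, which is why boundedness of $\|h(x)\|$ matters for the probabilities to be well-defined.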
> The writing of the paper is not careful enough.
>
We apologize for any confusion caused by the terse notation.
- While we define these terms in Algorithm 1, we will also add the meanings of $K$ and $T$ to Theorem 3.1: $T$ is the total number of communication rounds and $K$ is the number of local epochs.
- Due to the page limit, we present the lemma details in the Appendix. We will include the lemmas in the main paper along with streamlined explanations.
> The improvement of the proposed analytical approach is unclear. It seems all the approach achieve a $\mathcal{O}(\frac{1}{\epsilon^2})$ of the communication complexity.
>
We would like to clarify the concise communication complexity of these approaches is $\mathcal{O}(\frac{1}{\epsilon^2}+\frac{1}{\epsilon})$ or $\mathcal{O}(\frac{1}{\epsilon^2}+\frac{1}{\epsilon}+\frac{1}{\epsilon^{3/2}})$, as shown in Table 2 of [8] and Table 1 of our paper. The relevance of the additional terms becomes significant when the dominant term $\mathcal{O}(\frac{1}{\epsilon^2})$ is the same for all approaches.
We provide a convergence rate table for comparison, highlighting the improvement in our approach when all these algorithms share a dominant communication complexity of $\mathcal{O}(\frac{1}{\epsilon^2})$ ($\mathcal{O}(\frac{1}{\sqrt{T}})$).
| Algorithm | Convergence upper bound $\min_{t \in[T]} E\left[\|\|\nabla f\left(x_{t}\right)\|\|_{2}^{2}\right] \leq$ | Rate improvement |
| --- | --- | --- |
| FedAvg[9] | $\frac{1}{c}\left(\frac{f^{0}-f^{*}}{\sqrt{nKT}}+\frac{\sigma_L^2+3K\sigma_G^2}{2\sqrt{nKT}}+\frac{5(\sigma_L^2+6K\sigma_G^2)^2}{2KT}+\frac{15(\sigma_L^2+6K\sigma_G^2)}{2\sqrt{nKT^3}}\right)$ | -- |
| FedIS(others)[7] | $\frac{1}{c}\left(\frac{(f^{0}-f^{*})B^2}{\sqrt{n K T}}+\frac{2F \sigma_{L}^{2}+2F(1-n/m)K\sigma_G^2}{2\sqrt{n K T}}+\frac{B^2F}{T}+\frac{F^{2/3}\sigma_G}{T^{2/3}}\right)$ | reduce the coefficient of $\sigma_G$ |
| FedIS(ours) | $\frac{1}{c}\left(\frac{f^{0}-f^{*}}{\sqrt{n K T}}+\frac{\sigma_{L}^{2}+K \sigma_G^{2}}{2\sqrt{n K T}}+\frac{5(\sigma_L^2+4K\sigma_G^2)}{2T}\right)$ | remove $\mathcal{O}(\frac{1}{T^{2/3}})$ and reduce the coefficient of $\sigma_G$ |
| DELTA | $\frac{1}{c}\left(\frac{f^{0}-f^{*}}{\sqrt{n K T}}+\frac{\sigma_{L}^{2}}{2\sqrt{n K T}}+\frac{5(\sigma_{L}^{2}+4 K \zeta_{G}^{{2}})}{2K T}\right)$ | further improve the coefficient of $\mathcal{O}(\frac{1}{T^{1/2}})$ |
> When IS is analyzed, Assumption 3 is used, whereas DELTA is analyzed, it seems that Assumption 3 is not used while the gradient diversion $\zeta_G$ is used,it seems that it is a little bit unfair to use different assumptions to analyze the two different approaches.
>
Thank you for your suggestion. We would like to clarify that the term $\zeta_G$ of DELTA can easily be transformed into a $\sigma_G$-related term, so the comparison is fair.
In particular, by taking the expectation on $\zeta_G$, it equates to $E||\nabla F_i(x_t) - \nabla f(x_t)||^2$. As demonstrated in [10], one can derive $E||\nabla F_i(x_t)-\nabla f(x_t)||^2 = E||\nabla F_i(x_t)||^2 -||\nabla f(x_t)||^2 \leq A||\nabla f(x_t)||^2 + \sigma_G^2$. Shifting $A\|\nabla f(x_t)\|^2$ to the left side of the convergence result, $\zeta_G$ can be directly transformed into $\sigma_G$.
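To make the shifting step explicit, here is a sketch of the rearrangement (our illustration; $C_1, C_2$ stand in for the other constants appearing in the convergence bound):

$$\mathbb{E}\,\zeta_G^2 = \mathbb{E}\|\nabla F_i(x_t)-\nabla f(x_t)\|^2 \leq A\|\nabla f(x_t)\|^2 + \sigma_G^2,$$

so a bound of the form $\|\nabla f(x_t)\|^2 \leq C_1 + C_2\left(A\|\nabla f(x_t)\|^2 + \sigma_G^2\right)$ rearranges, whenever $C_2 A < 1$, into

$$\|\nabla f(x_t)\|^2 \leq \frac{C_1 + C_2\,\sigma_G^2}{1 - C_2 A},$$

which replaces $\zeta_G$ with $\sigma_G$ at the cost of a constant factor.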
[1] Stochastic optimization with importance sampling for regularized loss minimization. ICML 2015.
[2] Not all samples are created equal: Deep learning with importance sampling. ICML 2018.
[3] Low-Cost Lipschitz-Independent Adaptive Importance Sampling of Stochastic Gradients. ICPR 2021.
[4] Diverse client selection for federated learning: Submodularity and convergence analysis. ICML 2021.
[5] On the effectiveness of partial variance reduction in federated learning with heterogeneous data. CVPR 2023.
[6] Sharper convergence guarantees for asynchronous SGD for distributed and federated learning. NeurIPS 2022.
[7] Optimal client sampling for federated learning. TMLR 2022.
[8] SCAFFOLD: Stochastic controlled averaging for federated learning. ICML 2020.
[9] Achieving linear speedup with partial worker participation in non-IID federated learning. ICLR 2021.
[10] A unified theory of decentralized SGD with changing topology and local updates. ICML 2020.
---
Rebuttal 2:
Title: To Reviewer 71Y7
Comment: Dear Reviewer 71Y7,
I hope this message finds you well.
We would like to first express our profound gratitude for the time and expertise you've dedicated to the assessment of our submission. We appreciate your insights and have found your comments to be extremely beneficial in refining our work.
Regarding your concerns, we hope that our rebuttal was able to shed light on them. To summarize briefly: we clearly compare our analysis with the existing literature, such as FedAvg and IS, showcasing the advancements of our approach. Meanwhile, we show that the practical DELTA does not need Assumption 4 but only a looser assumption, whereas the practical FedIS relies on Assumption 4, which is a necessary condition for the importance sampling employed in deep learning.
Your constructive feedback recommended that we refine our presentation and pinpoint the limitations of our work. We deeply appreciate your expertise and have revised our work based on your feedback. Thank you once again for your time and consideration.
Best Regards,
authors
---
Rebuttal Comment 2.1:
Comment: I would like to thank the authors for replying to my questions. Most of my concerns were resolved. I have increased my score to 5.
---
Reply to Comment 2.1.1:
Title: Thank you for raising the score
Comment: Dear reviewer 71Y7,
Thank you very much for raising our scores. We have incorporated the discussion of the theoretical analysis novelty and the soundness of Assumption 4 in our revised version. We appreciate your time and efforts in reviewing our rebuttal. | null | null | null | null | null | null |
Adversarial Model for Offline Reinforcement Learning | Accept (poster) | Summary: A fundamental principle in offline RL is pessimism, which however is not free from the performance degradation with respect to the baseline reference policies, i.e., the currently running policies in the system.
In viewing this issue, this paper proposes Adversarial Model for Offline Reinforcement Learning (ARMOR), which can robustly learn policies that improve upon an arbitrary reference policy, regardless of the data quality.
Specifically, in ARMOR, the authors extend the technique of relative pessimism by adversarially training an MDP model, to achieve robust policy improvement (RPI) with arbitrary reference policies, regardless of whether they collected the data or not.
Based on the theory, a scalable implementation of ARMOR is developed, which achieves competitive performance on D4RL benchmarks, while using only a single model rather than the widely-used ensembles.
Strengths: 1. The proposed method is theoretically supported.
2. Practical implementation, including the selection of key hyperparameters, is discussed in details.
3. Achieving competitive performance while using only a single model is worth praising, since this is important for utilizing more sophisticated dynamics models.
Weaknesses: 1. The assumptions, in particular the realizability of the model classes $\mathcal{M}$ and $\mathcal{M}_\alpha$ and the policy class $\Pi$, may be demanding. It is unclear if or how these assumptions hold in practice. It is also unclear if the theoretical results still hold with a misspecified model and/or policy class, e.g., $M^* \notin \mathcal{M}$, $M^* \notin \mathcal{M}_\alpha$, and/or $\pi_{\mathrm{ref}} \notin \Pi$.
2. The authors draw a distinction between the reference policy and the data-collecting behavior policy. But in the experiments, the reference policy is simply set to the behavior-cloned policy on the same or a very similar offline dataset, e.g., expert vs. medium. Therefore, this conceptual distinction seems artificial.
* In practice/experiments, can the learned policy still improve upon the reference when the behavior policy (offline dataset) behaves significantly different from the reference policy?
* And more importantly, why couldn't we simply learn from the data generated by the "reference policy"?
3. The authors claim that the proposed method is robust to hyperparameters within a known set, i.e., the RPI property. But there is no experiment directly showing this. I would love to see more evidence and details pertaining to, for examples,
* How to select this set?
* How large can this set be?
* How many hyperparameters can this set incorporate?
4. Beyond the proposed method itself, the practical implementation (Section 4) differs in many ways from the implementations of the baseline methods, such as MOPO, COMBO, CQL, IQL and so on. It is therefore unclear whether the performance gain is due to the proposed method or to these different implementation choices. This can be seen from the fact that the closely-related model-free method ATAC has performance similar to ARMOR and better than the baselines on several datasets.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Can existing model-based methods, such as MOPO, MOReL or COMBO, utilize the reference policy as well? For example, a quick (and naive) thought may be adding an extra behavior-cloning term towards the reference policy on model-generated states (similar to L153-155). I am curious what prevents them from doing so. Otherwise, with such an augmentation, would they still underperform ARMOR?
2. Could you elaborate more on the distinction between the reference policy and the behavior policy? Are there any practical settings where the reference policy is significantly different from the policy that collects the offline dataset?
3. In Eqn. (1) and (2), why do you introduce the $\alpha$ parameter? Maybe I miss something, but I don't find $\alpha$ being used later on, except something like "for $\alpha$ large enough".
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for the detailed questions. We hope that our answers below would address your concerns and clarify the importance of the contribution we are making.
**Weakness 1**
Thank you for bringing this up. We would like to emphasize that realizability is a standard assumption in the literature (see, e.g., [Uehara and Sun, 2021]), and it is straightforward to handle misspecified model and policy classes by adding additive error terms; this is why we made the assumption. We will add this discussion in the camera-ready version.
**Weakness 2**
The distinction between the reference and the behavior policy is more than just conceptual. There are multiple practical scenarios where we can’t learn from data collected by the reference alone:
1. The data might be collected by a more exploratory policy. A priori, it is unknown whether the reference is good or safe.
2. Data may come from multiple policies (e.g., a system that interacts with the environment over several rounds). The behavior policy is their mixture, but ARMOR allows us to set the reference to be the best of them.
3. A concrete scenario is the following. Consider using offline RL to improve a recommendation system. The product team has a running policy (i.e., the baseline policy) right now and has some offline data, which is collected by previous policies and perhaps some from the current policy. Naturally, we want the policy learned by offline RL to be at least no worse than the current policy under deployment (i.e., the reference policy). Such a scenario of learning to improve a baseline/reference policy using mixed offline data is common in offline RL applications.
In Sec 5.2, we explicitly construct experiments to simulate the scenario where the reference policy is distinct from the behavior policy, and demonstrate that robust policy improvement (RPI) still holds. We found that the learned policy can indeed improve over the reference even when the reference policy (the expert) behaves differently from the behavior policy (mediocre policies); see the gap in returns. There is significant improvement especially in the halfcheetah and adroit domains. In fact, we would argue that these are the major results of the paper; Sec 5.1, where the reference is set to the behavior-cloned policy, is just a demonstration of ARMOR being used as a standard offline RL algorithm when no explicit reference policy is provided.
**Weakness 3**
As per the theoretical results and main claims of the paper, ARMOR is robust to the pessimism hyperparameter $\beta$, not to all hyperparameters. Our results in Sec 5.2 validate this RPI hypothesis. The theory suggests that RPI should hold for small values of $\beta$. Hence, a good general strategy in practice is to start with small $\beta$ values and gradually increase $\beta$ to find the best-performing policy.
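The reason RPI is insensitive to the pessimism level can be made concrete with a minimal toy (our construction, not the paper's implementation): the relative objective $J_M(\pi) - J_M(\pi_{\rm ref})$ vanishes at $\pi = \pi_{\rm ref}$ under every model $M$, so the max-min value is always at least zero.

```python
import numpy as np

# Toy illustration of RPI under relative pessimism: a one-step problem
# where each row of `values` is a candidate model's value for each
# action, and the adversary picks the model minimizing J_M(pi) - J_M(pi_ref).
values = np.array([
    [1.0, 0.2, 0.5],
    [0.1, 0.9, 0.4],
    [0.3, 0.3, 0.8],
])
pi_ref = 0  # reference policy: always take action 0

def worst_case_relative_value(action):
    """min over candidate models of J_M(action) - J_M(pi_ref)."""
    return float(np.min(values[:, action] - values[:, pi_ref]))

best = max(range(values.shape[1]), key=worst_case_relative_value)

# At pi = pi_ref the relative objective is 0 under *every* model, so the
# learner can always secure at least 0 -- i.e., no degradation -- no
# matter how large the adversary's model set (pessimism level) is.
assert worst_case_relative_value(pi_ref) == 0.0
assert worst_case_relative_value(best) >= 0.0
```

Enlarging or shrinking the candidate model set changes which action wins, but never pushes the guaranteed relative value below zero.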
**Weakness 4**
We agree with the reviewer that this is an important question. However, in general it is hard to directly compare model-based and model-free methods over implementation details. More generally, it is difficult to study the effects of implementation details and conceptual ideas in isolation when comparing two very different deep RL algorithms. Each algorithm’s code can make different low-level design choices. Consequently, performing a strict comparison between the conceptual ideas is often impossible, unless we reimplement one algorithm based on another’s low-level details (but one might call this reimplementation yet another new deep RL algorithm?). Therefore, it’s hard to do such an isolated and direct comparison between ARMOR and other baselines like MOPO, COMBO, CQL, IQL as the reviewer mentioned.
Nonetheless, we would like to highlight some (beneficial) effects due to the conceptual difference ARMOR introduces beyond what implementation details can provide.
1. Existing model-based methods such as MOPO and MOReL "have to" use ensembles of dynamics models for uncertainty quantification. But ARMOR does not, because it is based on adversarial training. As the reviewer mentioned, by *not* using an ensemble of models, ARMOR can be more suitable to train large models, when hosting ensembles of large models is too computationally expensive.
2. While ATAC and ARMOR share the same low-level implementation details, we show that they have different properties. ATAC only demonstrates RPI with respect to the behavior policy, whereas ARMOR, due to its model-based nature, has RPI with respect to arbitrary reference policies, including those that are not covered by the dataset.
**Question 1**
This is an interesting idea, but it is unclear how it can be done in a principled manner. The direction the reviewer suggests seems intuitive, but there are missing details (e.g., on which states the BC term is defined and how to generate those states) that can greatly affect performance. Further, it is unclear whether this design would enjoy the same RPI guarantees as ARMOR in theory, as our current proof relies on the properties of adversarial training.
**Question 2**
Please refer to the response for Weakness 2.
**Question 3**
This $\alpha$ parameter accounts for the finite-sample error arising from estimating the true expectation with samples. It is a standard slackness parameter in forming the version space (see, e.g., $\xi$ in Eq. (1) of [Uehara and Sun, 2021]). We note that if we choose $\alpha = 0$, we are only considering the model with the best empirical fit, which may not be the true model. To make sure we include the true model, we need to set $\alpha$ to a non-zero value ("large enough"), which can informally be thought of as a "variance" term that depends on the sample size and the complexity of the model class.
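Concretely, the version space in question can be sketched as (our paraphrase, modeled on Eq. (1) of [Uehara and Sun, 2021], with $\mathcal{E}_D$ denoting the empirical model-fitting loss):

$$\mathcal{M}_\alpha = \left\{ M \in \mathcal{M} \;:\; \mathcal{E}_D(M) \leq \min_{M' \in \mathcal{M}} \mathcal{E}_D(M') + \alpha \right\},$$

so $\alpha = 0$ retains only the empirical best fit, while choosing $\alpha$ on the order of the statistical error ensures $M^\star \in \mathcal{M}_\alpha$ with high probability.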
---
Rebuttal Comment 1.1:
Title: Response to the authors
Comment: Dear authors,
Thank you so much for the responses, which clarify some of my previous concerns.
I do have some follow-up questions pertaining to your responses.
1. Regarding the statement in the global response: *"a policy cloned on the expert dataset ... is the hardest policy to show RPI against - a near-expert policy that lies outside the data support"*. I am a bit confused about why it is the "hardest" to show RPI against. And why does such a near-expert policy lie outside the data support? For the latter, wouldn't "a policy cloned on the expert dataset" automatically lie in the data support?
2. For *"it is straightforward to handle the case of misspecified model and policy classes by additional additive terms"*, could you briefly discuss how would you plan to handle? I understand that there is a word-limit in the response, so it is fine to only give a brief overview.
3. Is the RPI shown in Figure 3 significant? Or, maybe a fairer question is: in what percentage of the settings is the RPI significant over the reference policy? I can see that in many settings, the error bars of the reference policy overlap substantially with ARMOR (under various $\beta$). Examples may be hammer-human, door-human, hopper-med-replay, and so on.
* In any case, I highly recommend the authors to re-draw Figure 3 and set the $y$-range separately for different datasets so that the distinction between REF and the ARMOR variants can be more apparent.
4. *"in general it is hard to directly compare model-based and model-free methods over implementation details"* --- I agree with this. But I would like to kindly point out that both MOPO and COMBO are model-based methods. I think a more apple-to-apple comparison between ARMOR and these baselines can better demonstrate the gain of the proposed method. Otherwise, the current Table 1 looks like the improvement of ARMOR may just come from a better backbone, which ARMOR may not surpass either.
* If an apple-to-apple comparison is impossible, can the authors briefly discuss what are the main obstacles (except for using dynamic ensembles which should not be a major blocker)?
---
Reply to Comment 1.1.1:
Title: Response to the Reviewer (1/2)
Comment: Thank you for the response. Please find our responses to your additional questions below:
**Question 1**
Regarding the statement in the global response: "a policy cloned on the expert dataset ... is the hardest policy to show RPI against - a near-expert policy that lies outside the data support".
Before answering the reviewer’s question, we want to highlight that, in the RPI experiments, the data that ARMOR uses and the data that was used (by BC) to create the expert reference are **different**. In other words, ARMOR needs to learn to be competitive to the reference policy, **without** using the data that created the reference or using data running the reference would generate.
**Why the expert reference is the hardest**
Since the reference is near optimal, the learned policy also needs to become near optimal to demonstrate the RPI property. This is highly nontrivial, because the learner does not have the reference's data (as highlighted above), and we have seen that running standard offline RL on the data the learner has access to cannot reach near-optimal performance. If ARMOR achieves RPI here, it means that it must be doing something special (i.e., being effective in leveraging the reference policy). In contrast, if we were to use a reference policy whose performance is attainable by offline RL (ORL) using the training data, we wouldn't be able to clearly establish whether the improvement over the reference is due to typical ORL effects or due to RPI.
**Why the near-expert policy lies outside the data support**
ARMOR takes as input two things - an offline RL dataset and a reference policy. In the RPI experiments, the dataset is either medium or medium-replay, and the reference is the "policy cloned on the expert dataset". As highlighted above, ARMOR does not have access to the actual expert dataset for training. We can further ascertain that the near-expert reference lies outside the data support by looking at the performance of the best policy extracted by running ORL on the medium and medium-replay datasets. This serves as a rough proxy for coverage, because if the near-expert policy were covered by the dataset, ORL would recover it. Since the best ORL policy is not near-expert, we can approximately say that the reference lies outside the data support. We realize that this is not a mathematically principled statement, but it is only meant to elucidate the overall point.
**Question 2**
The key idea for dealing with misspecified classes is to set the best in-class candidate as the learning goal, and pay for the gap between best in-class candidate and the ground truth (i.e., misspecification).
For adapting to the model misspecification (modifying Theorem 2): define $M_{\rm class}^\star$ to be the best in-class candidate, which has small total variation error from the ground truth $M^\star$. Then it is straightforward to get a slightly different version of Theorem 2, which bounds $J_{M_{\rm class}^\star}(\pi^\dagger) - J_{M_{\rm class}^\star}(\widehat\pi)$ rather than $J_{M^\star}(\pi^\dagger) - J_{M^\star}(\widehat\pi)$ (the original version), and we pay an additional $2\max_{\pi} |J_{M_{\rm class}^\star}(\pi) - J_{M^\star}(\pi)|$ (i.e., the misspecification, bounded by the small total variation error assumed before) to obtain what we want.
Adapting to the policy misspecification (modifying Theorem 3) is similar. We define $\pi_{\sf ref}^{\rm class}$ to be the best in-class behavior estimator and then 1) use $\pi_{\sf ref}^{\rm class}$ as an intermediate comparator (i.e., $J(\pi_{\sf ref}) - J(\widehat\pi) = J(\pi_{\sf ref}) - J(\pi_{\sf ref}^{\rm class}) - [J(\widehat\pi) - J(\pi_{\sf ref}^{\rm class})]$); 2) bound $J(\pi_{\sf ref}) - J(\pi_{\sf ref}^{\rm class})$ by the policy misspecification; 3) since $\pi_{\sf ref}^{\rm class}$ is now in class, follow an argument similar to the model-misspecification case above. | Summary: This paper introduces ARMOR (Adversarial Model for Offline Reinforcement Learning), a novel model-based framework for offline reinforcement learning (RL) that addresses the challenge of performance degradation. Offline RL allows learning decision-making policies from logged data without requiring new data collection. ARMOR robustly learns policies that improve upon a reference policy by adversarially training a Markov decision process (MDP) model. The framework optimizes policies for worst-case performance relative to the reference policy, ensuring robust policy improvement regardless of data coverage. The authors provide theoretical proofs that ARMOR competes with the best policy within data coverage and never degrades the performance of the reference policy, even when the reference policy is not covered by the dataset.
To validate their claims, the authors present a scalable implementation of ARMOR that achieves state-of-the-art performance on D4RL benchmarks without using model ensembles. This makes ARMOR suitable for using high-capacity world models. The empirical results support the robust policy improvement property of ARMOR.
Strengths: Overall, ARMOR offers a promising solution for offline RL, providing robust policy improvement over a broad range of hyperparameter choices and regardless of data coverage. The framework has practical implications for real-world problems where collecting diverse or expert-quality data is expensive or infeasible. The paper contributes theoretical analysis, a scalable implementation, and empirical validation.
Weaknesses: 1. Inadequate Discussion of Existing Limitations: The paper briefly mentions that offline RL algorithms have not been widely adopted beyond academic research, but it fails to provide a comprehensive discussion of the existing limitations and challenges that hinder their practical adoption. Without a clear identification and discussion of the existing limitations, the paper's motivation may appear less compelling.
2. Conflicting results: There is uncertainty regarding the claim that the policy learned by the proposed approach outperforms different hyper-parameter settings. In Figure 1, significant variations in performance can be observed for the different hyper-parameter choices. It is important to address and discuss this variation in more detail to provide a clearer understanding of the robustness and reliability of the proposed approach across different hyper-parameter settings.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time reviewing the paper and providing valuable feedback. We hope our answers below can address your concerns.
**Weakness 1**: “[the paper] fails to provide a comprehensive discussion of the existing limitations and challenges that hinder [offline RL’s] practical adoption”
As with any ML methodology, real-world adoption is a complicated, multi-faceted problem that comes with many challenges. For offline RL, there are many such challenges, e.g., the difficulty of offline model selection, possible confounding in the dataset, and real-world non-stationarity, and the list goes on; it is virtually impossible to have a "comprehensive discussion" of all these issues in a technical paper (this is perhaps better suited for a survey/position paper).
Among all these challenges, we isolate a single important challenge and address it, namely
the risk of performance degradation from the current baseline policy. To illustrate, consider using offline RL to improve a recommendation system. The product team has a running policy (i.e., the baseline policy) right now and has some offline data, which is collected by the current running policy as well as other previous policies. Naturally, we want the policy learned by offline RL to be at least no worse than the current policy under deployment; otherwise, we should just continue running the current one. Such a scenario of learning to improve a baseline policy using mixed offline data is common in offline RL applications. Since these applications are often where making mistakes is costly (e.g., losing real money in this example), performance degradation is not allowed.
The extension of robust policy improvement (RPI) to improvement over any given reference/baseline policy is exactly motivated by this. However, no existing offline RL algorithm has this guarantee to our knowledge (that is, running them with an incorrectly picked hyperparameter can produce a policy worse than the baseline policy).
**Weakness 2**: Conflicting Results
We would like to point out that the empirical results in Fig. 1 and Sec. 5.2 demonstrate that ARMOR can match or outperform the reference policy for a range of $\beta$ values, **consistent** with the theoretical results. The theoretical analysis only states that we can improve upon the reference policy for a range of $\beta$ values. It does not say the policy learned with each $\beta$ would be the same or have similar performance, just that they should all be no worse than the reference policy. In Fig. 1 and Sec 5.2, ARMOR is consistently able to outperform the reference policy for a range of $\beta$, demonstrating insensitivity to $\beta$. Further, we would like to point the reviewer to our global response and the uploaded PDF, where we provide further empirical evidence to support that the RPI property holds for different reference policies as well.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I still don't find the reasoning for the weaknesses convincing, and will keep the score of 4. | Summary: The paper propose a new model-based offline RL framework, called ARMOR, which can robustly learn policies that improve upon an reference policy by adversarially training a Markov decision process (MDP) model. ARMOR aims to optimize for the worst-case relative performance over uncertainty. In experiment, ARMOR implementation achieves good performance on D4RL benchmarks using only a single model.
Strengths: 1. The idea is novel. The paper give us a new perspective to learn a dynamics model in offline RL, which is valuable to the community;
2. The paper is overall well-written and easy to follow for me;
3. The implementation is intuitively reasonable overall.
Weaknesses: 1. The description in Section 4.1 is quite confusing, with many details tucked away in the appendix. This arrangement is not particularly reader-friendly. I suggest that the authors better link the following points: (1) why the objective in Line 228 can substitute for Eq. (1); (2) how Line 228 is transformed into the goal of Line 199 and Algorithm 1. You don't need to elaborate on the relationship, but at least some intuition or motivation should be provided to the reader. Additionally, an optional suggestion: the authors might consider restructuring the overall arrangement by first introducing the part of Line 228 and then describing how it is implemented, which may result in a more natural discourse.
2. The notation used for expressing errors is confusing here, with many instances of $\mathcal{E}$ and $\mathcal{L}$ representing loss, lacking in distinctiveness. A classic example is the symbols in Line 228, $\mathcal{E}\_{D}$ and $\mathcal{E}_{\hat D}$, both error symbols share the same structure, only the parameters differ. But they represent completely different losses, with the former being the maximum likelihood, and the latter the Bellman error. I suggest the authors revise the notations used, making it clear to readers what kind of loss it is without having to inspect the parameter differences.
3. Figure 2 could be further polished: Without reading the main text, readers would not understand why ARMOR chooses the two models on the right, as the figure (including the caption) does not demonstrate that the reference policy is actually moving rightward. I believe the authors could add this information, allowing readers to better grasp the authors' motivation.
4. Although D4RL is a fairly popular benchmark, I think the tasks from D4RL do not adequately highlight the advantages of the proposed algorithm. As the authors mentioned, this algorithm is motivated by real-world application scenarios, like "Usually, the systems we apply RL to have currently running policies, such as an engineered autonomous driving rule or a heuristic-based system for diagnosis, and the goal of applying a learning algorithm is often to further improve upon these baseline reference policies...". However, the data in the selected D4RL datasets mostly come from mixed policies. The medium dataset aligns with this scenario, but the baselines already perform well, making it hard to discern the advantage of the proposed method. The recently released NeoRL [1] dataset might be more suitable for this work, as it was specifically designed for the scenarios the authors propose, with all datasets collected by a single working policy, a concept similar to the authors' reference policy.
[1] NeoRL: A Near Real-World Benchmark for Offline Reinforcement Learning
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The implementation of Equation (4) somewhat contradicts my intuition. Specifically,
- $M$'s initial goal was to enhance $\pi_{\rm ref}$'s $J$ and reduce $\pi$'s $J$. However, in the final implementation, none of $M$'s optimization aspects involve $\pi_{\rm ref}$. This makes it unclear how the losses employed in practice can meet the original optimization objective of Equation (1).
- I'm unclear on why the optimization process for the critic necessitates a pessimistic loss, e.g., $\mathcal{L}_{D_M}$? The need for pessimism about the critic's estimates isn't evident in the objective Equation (1).
In Table 1, I'm curious why ARMOR does not work well on several of the cloned datasets. These seem to be exactly the datasets on which ARMOR should show better performance.
In Figure 3, the algorithm shows sensitivity to $\beta$. It would be beneficial if the authors could elucidate the underlying reasons for this and provide any principles to guide researchers.
The experiments in Section 5.2 could be improved. The authors claim that "ARMOR can robustly learn policies that improve upon an arbitrary reference policy...", which is important. However, in the experiments, only performance improvements under a specific reference policy are demonstrated. It would be more constructive to verify this claim by picking various policies as reference policies and conducting experiments in one or two environments. I think DOPE [1] can be referenced as a work to conduct this experiment, as it provides many policies that can be used as reference policies.
Regarding related work, a discussion could be made with the work [2] that utilizes adversarial approaches for offline environment learning. The optimization objective of that work coincidentally uses an objective opposite to Equation (1) of this paper for model learning, namely "$\max_M \min_\pi \ell(M, \pi)$".
Inspired by the work [2], I would also like to propose an open question for discussion: Can we also consider using adversarial policies further with ARMOR, such as $\max_\pi \min_M \max_{\pi_{\rm ref}}$? Since constructing $\pi_{\rm ref}$ from the offline data isn't a necessity, following Theorem 3, if we can create a sufficient number of $\pi_{\rm ref}$ in an adversarial manner, could we theoretically ensure that the final $\hat \pi$ approximates the optimal policy in any situation?
[1] Benchmarks for Deep Off-Policy Evaluation.
[2] Adversarial Counterfactual Environment Model Learning.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NAN
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback on improving the readability and constructive comments. We hope that our responses below would help resolve the remaining concerns you might have.
**W 1**: We thank the reviewer for pointing this out.
[How Eq.(1) → L228]:
We have a detailed discussion regarding this in Appendix C due to the space limit. The high-level overview is: (1) Relax the constrained optimization to a regularized optimization. (2) Utilize the learned model to generate augmented data to perform relative pessimism properly (this is why ARMOR can have stronger RPI than its model-free counterparts, e.g., ATAC).
[How L228 → L199 and Algorithm 1]:
We would like to clarify that the goal of Algorithm 1 is essentially optimizing L229, whereas L199 denotes how each term on L228 is estimated in Algorithm 1.
**W 2**: Currently, we use $\mathcal{L}$ to denote the term that the policy and the model compete with, and $\mathcal{E}$ to denote all other losses for enforcing data-consistency on the model. Different $\mathcal{E}$ losses have different function signatures and we always include the input parameters the loss takes to make it clear. We will make them more visually different, such as by adding additional superscripts.
**W 3**: Please refer to the Global Response and the updated figure in our uploaded PDF.
**W 4**: Thanks for pointing this out. NeoRL seems like a great suite to test our algorithm on in the future. However, it is relatively new, and getting the infrastructure set up to run experiments along with baselines takes a non-trivial amount of time. Also, please note that our experiments are not limited to classic D4RL experiments. Here we have also constructed modified problems from D4RL to test RPI (Sec. 5.2 and Fig. 3): we specifically chose an expert reference policy that is beyond what the data can cover to showcase the benefit of RPI. In almost all cases, ARMOR using the expert reference can outperform the offline RL baseline and sometimes even the expert reference.
**Q 1**: The model and pessimistic losses are connected to each other; as a result, the model is learned to maximize the performance difference between the two policies. This is implemented in Alg. 1 by *jointly* optimizing the critic and the model to be pessimistic. We let both the critic and the model minimize the Bellman error (the critic: pessimistic loss+ *Bellman error*; the model: *Bellman error*+ model fitting loss). So when the critic minimizes the pessimistic loss (to maximize the performance difference), the model also becomes pessimistic due to the link of Bellman error. We explained this in Lines 210-216 of the main paper, where we discuss how setting the weight on model fitting loss to be zero makes ARMOR equivalent to Imitation Learning (this is also verified empirically in Appendix G). We chose to implement ARMOR this way since it is more computationally efficient than directly optimizing Eq.(1) which would require back-propagation through full length rollouts generated by the model for every policy update.
**Q Table 1**: The performance on the cloned datasets is on par with the baselines. While on some cloned datasets ARMOR is not the best, no algorithm gives meaningful scores there (please see the scale in the -exp version). Since we implement the approximate version with neural networks, it is hard to always ensure perfect consistency between the theory and the empirical results.
**Q Fig 3**: We would like to point out that the results in Fig. 1 and Sec. 5.2 in fact show insensitivity to hyper-parameters for the small $\beta$ regime, *consistent* with the theory. The theory states that we can improve upon the reference policy for a range of beta values. It does not assert that the learned policy with each beta would be the same, or have similar performance, just that they should all be no worse than the reference policy. In Fig 1, ARMOR is consistently able to outperform the reference policy for a range of beta, demonstrating insensitivity to $\beta$.
**Q Sec. 5.2**: Due to space constraints, we chose the hardest reference to compete with, i.e., a near-expert policy that lies outside the data support. This is because we know that running standard offline RL cannot achieve the expert level performance; as a result, if ARMOR achieves the expert performance, it must be due to RPI. Based on the review, we have conducted more experiments to show that ARMOR can achieve RPI with respect to arbitrary policies for a very wide range of $\beta$ values. Please refer to the Global Response for more details.
**Q Related Work [2]**: [2] considers a worst-case model fitting error (their Eq (3)), $\min_M \max_{d^{\pi_\beta}} [ l_M(s,a,s') ]$, where $l_M(s,a,s')$ is the model fitting loss (e.g., the term in the sum of our Eq (3)), and then uses the learned model for policy optimization. In contrast, ARMOR finds an adversarial model for each policy, and optimizes the policy that performs well under its adversarial model. We highlight that when the offline data does not have full coverage, their approach could lead to a poorly performing policy, because it does not consider the quality of model predictions outside the data support, where the model can be arbitrarily bad. This makes their method inapplicable to offline RL under the partial coverage setting considered here.
**Q Open question**: We think your proposal is quite interesting. We suppose you meant modifying our objective in Eq. 1 to $ \hat{\pi} = \arg\max_{\pi} \min_M \max_{\pi_{ref}} J_M(\pi) - J_M(\pi_{ref})$. We actually have a long, dedicated section on this in Appendix E (see line 641) and we call this approach Regret Minimization (as it tries to find the policy that has the smallest worst-case regret). Its worst case regret can be analyzed in a similar fashion as ARMOR. However, this algorithm can be overly conservative and does not necessarily have RPI to a given reference policy. We can discuss more in the final draft due to space limitation in rebuttal.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Most of my concerns have been addressed.
Regarding the open questions:
1. I reviewed Appendix E, but I'm still struggling to understand the relationship between $\max_\pi \min_M \max_{\pi_{ref}} J_M(\pi) - J_M(\pi_{ref})$ and Regret Minimization. How does this objective potentially lead to an overly conservative policy?
2. Assuming that $\max_\pi \min_M \max_{\pi_{ref}} J_M(\pi) - J_M(\pi_{ref})$ results in an overly conservative policy, I'm trying to reconcile this with my understanding of Theorem 3. Specifically, if we can generate a sufficient number of $\pi_{ref}$ adversarially, wouldn't we theoretically ensure that the final policy approximates the optimal policy in any situation? This appears to be a direct implication of your Theorem 3. Have I overlooked something?
---
Reply to Comment 1.1.1:
Comment: Thanks for your further question.
First, apologies. We made a typo. What we intended in the rebuttal was $\arg\max_\pi \min_M {\color{red} \min_{\pi_{ref}}} J_M(\pi) - J_M(\pi_{ref}) = \arg\max_\pi \min_M J_M(\pi) - \max_{\pi_{ref}} J_M(\pi_{ref})$, otherwise the innermost optimization would try to pick the easiest reference policy (i.e., the worst policy in $M$) to compare with and would no longer be adversarial. We hope that this is the same as what the reviewer meant earlier.
After this correction, suppose the innermost $\min_{\pi_{ref}}$ is over a sufficiently rich policy class; then it is easy to see that the minimizer is $\pi_{ref} = \pi_M^\star$, that is,
$\max_\pi \min_M \min_{\pi_{ref}} J_M(\pi) - J_M(\pi_{ref}) = \max_\pi \min_M J_M(\pi) - J_M(\pi_M^\star)$, thus reducing to the regret minimization case discussed in Corollary 10, bullet point 3. We said "this algorithm can be overly conservative" because the term $J_M(\pi) - J_M(\pi_M^\star)$ above is always less than or equal to the performance difference to any fixed reference policy $\pi_{ref}$, i.e., $J_M(\pi) - J_M(\pi_{ref})$. As a result, the policy learned in this way does not necessarily have RPI to a given $\pi_{ref}$.
And to reconcile with Theorem 3, note that Theorem 3 applies to ARMOR which has **a fixed $\pi_{ref}$**, but the $\pi_{ref}$ above corresponds to a policy that changes as we consider different models $M$ in the version space. When $\pi_{ref}$ is fixed, it is easy to show (under the assumptions we make) that the objective value of ARMOR is never negative (i.e., RPI), so we never have degenerate performance. The problem with regret minimization and using multiple $\pi_{ref}$ in general is that the objective value can become negative, which is a sign that the optimization is ill-posed and we are being overly conservative about the performance difference to $\pi_{ref}$.
Related to this point, the reviewer conjectured that "[with a] sufficient number of $\pi_{ref}$, [we can] theoretically ensure that the final policy approximates the optimal policy". This is not true. Here is a minimal counterexample:
Consider a 2-armed multi-armed bandit. Suppose our version space is $(1, 0), (0, 1)$, i.e., we know one arm is good (reward 1) and one arm is bad (reward 0), but we don't know which is which. In this case, if we choose the two deterministic policies as the 2 reference policies and plug them into the formulation above, the optimal objective is $-1/2$ and the best policy is choosing between the two arms uniformly randomly. Such a policy is suboptimal in both model instances, and in general no policy can yield good improvement in such a case. In general, being able to “approximate the optimal policy” without further assumptions is too good to be true in offline RL since there is model uncertainty that we cannot resolve without access to further (online) data. | Summary: The paper introduces the Adversarial Model for Offline Reinforcement Learning (ARMOR), a model-based offline RL framework. ARMOR uses adversarial training to robustly learn and improve policies over any given reference policy, regardless of data quality. The framework utilizes the concept of 'relative pessimism' for worst-case optimization, ensuring that it either maintains or improves upon the performance of the reference policy, a property known as Robust Policy Improvement (RPI). The authors have also shown, theoretically and empirically, that ARMOR is robust to hyperparameter choices, and can outperform or be on par with state-of-the-art offline RL methods.
Strengths: * The paper is well-written and technically sound. The RPI in this paper is stronger than the RPI property in the literature, which only guarantees to be no worse than the behavior policy that collected the data.
* The experimental results show the effectiveness of the proposed method and it outperforms existing baselines in many environments.
Weaknesses: * I think the proposed method highly depends on the quality of the MDP model $M$. For the results in Table 1, in some environments, the proposed method does not outperform baselines. I think some explanation and intuitions are needed. I am wondering if the $M$'s structure is not optimal in those cases.
* The model fitting loss in Equation (3) is not well-defined. Specifically, $r$ is not mentioned. Is it the same as $R^*$ as in Definition 1?
* Minor: Please refrain from only using color to distinguish bars in Figure 3, as it is not friendly to readers with color blindness. Also, in Figure 2, not all illustrations are valid MDPs. I suggest the authors re-draw those toy examples.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the weakness section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback. We will incorporate them in the revision. We have addressed your comments below.
**Weakness 1**
We would like to clarify that ARMOR (conceptually) maintains a set of models (i.e., the version space), not just a single one. We also assume that the model class is rich enough in theory such that it includes the true model $M^*$ (Assumption 1, realizability). Note that ARMOR doesn't need to explicitly recover the true $M^*$; by maintaining a model set that contains $M^*$ and performing worst-case optimization over the set, it automatically enjoys the robustness guarantees (i.e., not being worse than the reference policy). However, robustness here comes at a cost, as worst-case reasoning over *a set of models* may lead to a conservative policy that does not aggressively optimize the return. Hence, there is a fundamental robustness vs. performance trade-off. In Table 1, we see ARMOR is comparable to other baselines across most datasets, with the only exception being halfcheetah domains when compared with RAMBO (in halfcheetah, ARMOR still performs better than other non-RAMBO baselines).
**Weakness 2**
In Eq (3), $R_M$ denotes the predictions made by the model $M$ and $r$ denotes the reward labels in the data, as per the definitions in Sec. 2. We will make this clearer in the final draft.
**Weakness 3**
Fig. 3: Thank you for pointing this out. We will update in the final draft.
Fig. 2: We are unsure what the reviewer means by “in Figure 2, not all illustrations are valid MDPs.” It would be helpful if the reviewer could elaborate on this so we can make the appropriate changes if required. | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for taking the time to review our work and providing their constructive feedback. Here we provide responses to some of the common concerns
**RPI Experiments**
Reviewers “suos”, “pF8E” and “sNsB” had questions about the RPI experiments, specifically regarding sensitivity to hyper-parameter values and the nature of the reference policy. We believe that the primary source of confusion could be a misunderstanding of the theoretical results and the main claims of the paper.
First, the theoretical analysis in Sec 3.2 and discussion in Sec. 6 state that ARMOR can improve upon the reference policy for a range of values for only the pessimism hyper-parameter $\beta$. They do not assert that the policy learned with each $\beta$ would be the same or have similar performance, just that they should all be no worse than the reference policy. Our results in Sec 5.2 and Fig. 1 clearly demonstrate this.
Second, we chose a policy cloned on the expert dataset as the reference in Sec 5.2 because of space limitations and the fact that it is the hardest policy to show RPI against - a near-expert policy that lies outside the data support. Based on reviewer feedback, we have conducted further experiments to demonstrate that RPI holds for other reference policies as well. In our uploaded PDF, we provide empirical evidence for RPI against a policy cloned on the random dataset as the reference, as well as a hand-designed bang-bang control policy for an even wider range of $\beta$ values. These new experiments further bolster the point that ARMOR can obtain RPI with respect to arbitrary reference policies (which might be out of support) and is robust to the pessimism hyper-parameter $\beta$.
**Illustrative Example Figure**
Reviewer “suos” pointed out the lack of clarity about the reference policy in Figure 2 and that the caption was not self contained. We have followed up on this input and have provided an updated figure and caption as part of our response PDF, to clearly denote what the reference policy is (changes in red). We also point the reviewers to a more comprehensive example in the appendix where we show that the same holds even when the reward function is learned.
We hope that our responses and new empirical results have alleviated the concerns the reviewers raised and they will consider updating their ratings accordingly.
Pdf: /pdf/fa6dfc1b10ce3fc4255f484fbf6f3535c0b718c7.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Preference-grounded Token-level Guidance for Language Model Fine-tuning | Accept (poster) | Summary: This paper aims to tackle the misalignment between sequence-level preferences and token-level language model training in NLG. The authors design an iterative training framework that integrates the sequence-level preference into token-level training guidance, mitigating the granularity mismatch. Experiments are conducted on two different LM tasks - discrete-prompt generation and text summarization - indicating its effectiveness.
Strengths: 1. The methodology of decomposing sentence-level preference into token-level preference presented in the paper is both intuitive and rational, providing a feasible approach to address the granularity mismatch in language model training.
2. Through comparative performance with baseline models, comprehensive ablation studies and discussions, the paper demonstrates the proficiency and potential benefits of the proposed model.
Weaknesses: 1. The paper lacks a precise definition of 'preference,' appearing to suggest it reflects human judgments of superior text quality. However, the method's experimental use of the METEOR score as a sequence-level reward contradicts this assumed understanding. This ambiguity may lead to confusion among readers and raises questions about the evaluation methodology. Why does using METEOR as the reward improve the evaluation in terms of ROUGE scores? Human evaluations should be conducted for a more comprehensive assessment.
2. The reported performance on CNNDM and XSum tasks, measured in ROUGE scores, is below benchmark levels. A basic BART model reportedly outperforms the proposed method (44 vs 40 ROUGE-1 score), which questions the effectiveness of the new training process. This raises concerns that the method may not improve upon existing architectures as suggested.
3. The absence of an accompanying code with the paper is a significant drawback. Without this, it is challenging to reproduce the results, especially given the complexity of applying reinforcement learning. This could limit the wider verification and applicability of the presented method.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Can the performance be enhanced by integrating the token-level feedback with RL-based baseline models? These baseline models appear to show promising performance as depicted in Figure 2.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate the reviewer for your thoughtful review. We would like to draw your attention to our **General Response** on human evaluation results on CNN/DM summarization and a discussion on why our summarization results are below the SOTA.
Below, we address your remaining concerns in detail.
> **Q1.** About the definition of “preference” in our paper.
**A.** As discussed in Line 101-103, in this paper we make no assumption about the source of the preference --- it may come from human ranking or task-specific evaluation metrics. Our notion of preference is essentially an ordering of the text-sequences based on the evaluations of full text-sequences, where the evaluations come from automatic evaluation metrics or humans.
Therefore, we believe that using the Meteor score to obtain the sequence-level preference in our experiments aligns with our notion of preference.
We apologize for the confusion and will make this more clear in the next version of our draft.
> **Q2.** Why does using METEOR as the reward improve the evaluation in terms of ROUGE scores?
**A.** We are a bit confused about this question and will appreciate it if any further explanation can be provided. We provide our current best answer below.
Meteor measures the matching between the model-generated string and the reference string, so we believe that it is a valid preference source for the summarization task.
As shown in Section 4.2 and Table B in the added PDF, our method improves over the baselines in terms of several metrics: the ROUGE scores, Meteor score, and BertScore. As discussed in Section 4.2, we attribute our performance gain to our main algorithmic contribution: learning and utilizing the preference-grounded token-level guidance for LM training.
Besides, we note that the baseline results from RL4LMs [1] suggest that using Meteor as the environmental reward for training the RL methods, (Supervised+) PPO/NLPO, can lead to ROUGE scores competitive with or even stronger than using ROUGE-based environmental rewards.
This further validates our use of Meteor as the preference source in the summarization task.
[1] Ramamurthy, Rajkumar, et al. "Is Reinforcement Learning (Not) for Natural Language Processing?: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization." arXiv preprint arXiv:2210.01241 (2022).
> **Q3.** There is no accompanying code.
**A.** As stated in Line 214, to facilitate the reviewing process, our source code has been anonymously released. The clickable link in the submitted paper is under the red word “released”. Since links are not allowed in the rebuttal text, we respectfully refer the reviewer to our paper for the link.
We apologize for the confusion and will make the link more apparent in the next version of our draft.
> **Q4.** Can integrating our token-level feedback with RL-based methods enhance the performance?
**A.** Thank you for the great suggestion!
Since our LM training objectives in Section 2.2 are only minimalist, we believe that more sophisticated training approaches, such as RL-based methods, can further improve our performance.
On the other hand, as discussed in Appendix F.2, due to the granularity mismatch between the native sequence-level feedback and token-level LM training/generation, RL-based LM training can suffer from the delayed feedback issue. As a potential mitigation of this issue, it is promising to integrate our preference-grounded token-level guidance into RL-based methods for LM training.
Nevertheless, we kindly note that since our paper is not aimed at RL-based LM training, due to the page limit, the combination between our token-level feedback and RL-based methods is out of this paper’s scope. As discussed in Line 341, we will certainly pursue this direction in our future work.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I've read the rebuttal and I'll keep my score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 9EYo
Comment: Thank you so much for your time in reading our rebuttal! We wish our response could be helpful. | Summary: To fine-tune an LM, the paper proposes “token-level guidance” by leveraging sequence-level preference. The algorithm alternates between two stages: (1) learning token-level “guidance” (aka reward function) and (2) fine-tuning LM using the “guidance”/reward.
To aggregate token-level rewards, the authors propose aggregation functions that are different from the classical summation: average, soft maximum, and soft minimum.
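A minimal numeric sketch of such aggregators, for illustration only: the exact soft-maximum/soft-minimum form (a log-mean-exp with temperature `beta`) is an assumption, not taken from the paper.

```python
import math

def log_mean_exp(xs):
    """Numerically stable log(mean(exp(x)))."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs) / len(xs))

def aggregate(token_rewards, how="sum", beta=5.0):
    """Combine per-token rewards into one sequence-level score."""
    if how == "sum":
        return sum(token_rewards)
    if how == "avg":
        return sum(token_rewards) / len(token_rewards)
    if how == "softmax":  # smooth approximation of max(token_rewards)
        return log_mean_exp([beta * r for r in token_rewards]) / beta
    if how == "softmin":  # smooth approximation of min(token_rewards)
        return -log_mean_exp([-beta * r for r in token_rewards]) / beta
    raise ValueError(how)
```

Note that as `beta` grows, the soft variants approach the hard max/min, and unlike summation, the average and soft variants keep the score scale roughly independent of sequence length.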
Figure 1 is a good illustration of the learning algorithm. It’s similar to RLHF, except the preference is based on multiple generated sequences, the reward is token-level, and the aggregation function can be different.
Two “minimalist” LM training objectives are proposed:
When there’s no supervised data, the authors use a REINFORCE-style update (with a max-entropy regularizer).
When we have enough data, the authors use a reward-weighted MLE objective – the weight depends on the importance of the token, which is essentially the normalized token-level reward.
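A rough sketch of what a reward-weighted MLE loss of this kind could look like; the softmax normalization of token rewards with temperature `tau` is an assumed form, and the paper's exact weighting may differ.

```python
import math

def reward_weighted_nll(token_log_probs, token_rewards, tau=1.0):
    """Negative log-likelihood where each token's log-prob is weighted by
    its softmax-normalized token-level reward (assumed normalization)."""
    m = max(token_rewards)
    exps = [math.exp((r - m) / tau) for r in token_rewards]
    z = sum(exps)
    weights = [e / z for e in exps]  # weights sum to 1 over the sequence
    return -sum(w * lp for w, lp in zip(weights, token_log_probs))
```

With equal token rewards the weights are uniform, so this reduces to the per-token average NLL; skewed rewards shift the loss mass toward the higher-rewarded tokens.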
Experiments are done on prompt generation (generating discrete prompts so that the accuracy of the corresponding task is good) and text summarization. The performance is competitive with respect to baselines shown in Table 1 and Table 2.
Strengths:
It’s great that the authors have done ablation that removes reward retraining, as shown in Section 4.3 (b). Good to see that the performance, after removing retraining, is still competitive with respect to baselines.
The token-level reward idea is worth pursuing, and I’m glad to see work in this direction. The experimental results should benefit the community.
In addition, I agree with the intuition that summation is not necessarily the only / optimal approach for aggregating different token-level rewards, and it’s great to see that the authors have attempted other aggregation functions.
Weaknesses:
Reward function retraining seems expensive. Is there an analysis on the compute cost?
I just want to make sure that the authors have ensured that their ROUGE computation is fair.
BART paper and the “Is Reinforcement Learning (Not) for Natural Language Processing: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization” paper have different ROUGE scores for lead-3 baseline, which is a bit weird. BART’s lead-3 result is higher. I just want to make sure in Figure 2, for example, the ROUGE scores are all comparable.
Related: Is there a reason why Table 2 results are run using T5-small instead of a larger model? Would the trend still hold when T5 gets a lot larger?
My understanding is that the reward is Meteor in summarization experiments. Although the summaries are evaluated by ROUGE, what are the Meteor scores (the actual rewards being optimized) on dev/test set? This detail seems to be missing, but apologies if I missed it.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: A few questions are in the "weaknesses" section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I don't see significant discussion on potential limitations. Please let me know if I missed those paragraphs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful reading of our paper and insightful comments.
We appreciate it if you can also consider our additional results and clarifications in the **General Response**.
Below are our detailed responses to your questions.
> **Q1.** The reward-retraining scheme seems expensive.
**A.** We thank the reviewer for raising this important concern.
In our algorithmic design, we reduce the computational cost of the reward-retraining scheme by only periodically retraining our reward model during the first half of the LM-training process, rather than over the entire training process.
By contrast, as discussed in Appendix G (Line 1007-1017), the baselines RLPrompt [1] and RL4LMs’ (supervised+) PPO/NLPO [2] retrain their value-functions in every optimization step, throughout the whole LM-training process. This can be more demanding and expensive than our method.
Therefore, compared with the baselines, we believe that our method is efficient and thrifty.
[1] Deng, Mingkai, et al. "Rlprompt: Optimizing discrete text prompts with reinforcement learning." arXiv preprint arXiv:2205.12548 (2022).
[2] Ramamurthy, Rajkumar, et al. "Is Reinforcement Learning (Not) for Natural Language Processing?: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization." arXiv preprint arXiv:2210.01241 (2022).
> **Q2.** Make sure that the ROUGE computation is fair.
**A.** We thank the reviewers for this careful reminder.
We ensure that our ROUGE computation is fair by following the codebase of RL4LMs [2], to have a fair comparison with our baselines.
> **Q3.** Why results in Table 2 are run using T5-small instead of a larger model? Would the performance gain still hold when T5 gets larger?
**A.** As discussed in Line 263, results in Table 2 are run using the standard T5-small model because of our limited computing resources. Furthermore, using T5-small facilitates our comprehensive ablation study in Section 4.3 and Appendix A.2.
We verify the performance of our method under a larger LM by scaling up from T5-small to T5-base in Section 4.2 (Line 285-290), where the model is enlarged by about 3.5 times. The results are shown in Figure 2, with detailed numbers available at Table 4 in Appendix A.1 and Table B in the added PDF. It is clear that our method still performs favorably against the strong baseline results directly cited from RL4LMs [2], when T5 gets a lot larger.
Therefore, we believe that our experiments are sufficient to demonstrate the effectiveness and benefits of our method, based on our main results, ablation studies, and the newly uploaded PDF.
> **Q4.** My understanding is that the reward is Meteor in the summarization task. What are the Meteor scores on this task?
**A.** The reviewer is correct that in the summarization task, we use the Meteor score to obtain the sequence-level preference, which we ground into token-level guidance for LM training by our proposed method.
The Meteor scores on CNN/DM summarization under T5-base LM can be found at Figure 2 (Section 4.2), with detailed numbers at Table 4 (Appendix A.1) and Table B in the added PDF. By standard, these results are on the test set. The baseline results are directly cited from RL4LMs [2].
It is clear that our method outperforms these strong baselines in the Meteor metric as well.
> **Q5.** No significant discussion on limitations.
**A.** Due to the page limit, we defer the discussion on limitations to Appendix I (Line 1031-1041).
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you for the rebuttal. I went through the details. Here are a few more comments.
Q1 (reward retraining): Thanks for the response. It's great that the authors have taken steps to make reward retraining more efficient. I would appreciate it if there's an analysis on
- how much compute is spent on reward retraining vs. other training,
- and how much compute the authors' techniques are saving, with respect to regular/vanilla reward retraining.
Q3: Good to know that the trends hold on T5-base as well. No need for experiments given that the authors say there is a lack of compute, but do the authors think that the trend will generalize to even larger models?
Additionally, I realized that there should be discussion on other weighted MLE approaches in NLP, as I don't think it's an extremely common technique. Just a few examples: RAML in https://arxiv.org/abs/1609.00150, weighted MLE in grammatical error correction for MT in https://arxiv.org/abs/1804.05940, weighted MLE in table-to-text generation in https://aclanthology.org/2021.acl-short.11/, GOLD in https://arxiv.org/abs/2009.07839.
Raising my score from 5 to 6, given that I'm satisfied with the other parts of the response. | Summary: The paper proposes to solve the issue of granularity mismatch in preference-based tuning of LMs (e.g., RLHF): the task-based preference is defined at the sequence level (via pairwise preference learning), while the reward-model training and policy optimization are done at the token level. The paper proposes an extra step to ground the token-level reward into the sequence-level preference. Evaluation is done on two tasks (prompt generation and summarization), and some improvement is shown compared to standard RL-based methods such as PPO and NLPO, as well as vanilla supervised models
Strengths: - The paper is well written and explains the proposed method well and the experimental setup is well documented.
- The studied problem is interesting and relevant especially since RLHF methods are becoming more common nowadays.
- Evaluation is done against strong baselines.
- The proposed method seems to bring some improvements on the two studied tasks
Weaknesses: - **Limited tasks**: Evaluation is done on only two tasks, and it's not clear why these two particular tasks were selected. I imagine other tasks should be used where preference-based learning is relevant. Such tasks include toxicity avoidance and controllable/constrained generation.
- **No human evaluation**: No human evaluation is done on summarization: We know that RL methods are good at hacking metrics and I would expect human evaluation on the summarization task to support the improvements in ROUGE.
- **Some choices are not justified**: For example, in Algorithm 2, why is the reward model retrained only for the first half of the iterations? Why not for the entire LM training? Why is a max-entropy gradient added to REINFORCE in Eq 5? Did you add the same for the baselines?
- **Not clear where improvement compared to the baselines comes from**: For example in Algorithm 2, the reward model is trained alternatively with the LM. Are all baselines trained the same way? If not, then there is no way to know if improvement comes from this method of training or from the sequence-level grounding.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - See last two points in Weaknesses
- In the ablation in section 4.3, (b), how is this done? Do you just replace $f(\lbrace r_\phi(s_t^k, a_t^k)\rbrace)$ with $\mathcal{R}(s^k_T)$?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: I think the authors addressed some of the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and careful review. We first want to bring your attention to our **General Response** for our human evaluation results on CNN/DM summarization.
Below are our detailed responses to your other concerns.
> **Q1.** Evaluation is only on two tasks. It's not clear why these two tasks were chosen. Some other tasks can be used.
**A.** Due to the page limit and our limited computational resources, in this paper, we test our method on two carefully-chosen, distinct, and representative LM tasks.
Specifically, the prompt-generation task represents few/zero-shot learning since there are no ground-truth prompts. Text summarization represents a standard LM task, where there is a set of supervised samples and the generated sequences are of variable-length and are relatively long. Furthermore, on these two tasks, baseline methods/results within our computational budget are easily accessible.
Moreover, we conduct extensive ablation studies in Section 4.3 and Appendix A.2 to comprehensively demonstrate the efficacy and benefit of our method. Therefore, we believe that our experiments are sufficient to validate our method.
We agree with the reviewer that there are many exciting further applications of our method, such as toxicity avoidance and controllable generation that you suggest. We believe that the (potentially) wide applicability of our method is a merit and we will certainly test on these tasks in our future work.
> **Q2.** Why is the reward retraining only conducted in the first half of the iterations? Why not over the entire LM-training process?
**A.** We retrain the reward model only during the first half of the iterations in order to save computational cost. Our preliminary study suggests that retraining the model throughout the entire LM-training process doesn't yield substantial performance gains, making it less computationally worthwhile.
> **Q3.** Why do you add a max-entropy gradient to the REINFORCE-style update Eq. (5)? Did you add the same for the baselines?
**A.** As discussed in Line 161-163, since we want multiple generated texts in typical LM tasks, we add the max-entropy gradient so as to capture multiple good behavior-modes (good generated texts).
The REINFORCE-style update Eq. (5) is adopted in the prompt-generation task. The baseline method RLPrompt [1] is based on soft Q-learning [2], which is a maximum-entropy Q-learning method for text generation. By its design, it naturally contains a max-entropy gradient. We kindly note that the max-entropy gradient may not be applicable to other baselines due to their specific nature/designs. Their results are directly cited from the literature.
Therefore, we believe that the comparisons between our method and the baselines are fair.
[1] Deng, Mingkai, et al. "Rlprompt: Optimizing discrete text prompts with reinforcement learning." arXiv preprint arXiv:2205.12548 (2022).
[2] Guo, Han, et al. "Efficient (soft) q-learning for text generation with limited good data." arXiv preprint arXiv:2106.07704 (2021).
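As an illustration of the role of the max-entropy term, below is a toy sketch of a REINFORCE-style per-step surrogate objective with an entropy bonus over a small categorical policy. The entropy weight `alpha` and all numbers are hypothetical, not values from our paper.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def reinforce_entropy_loss(logits, action, reward, alpha=0.1):
    """Per-step surrogate loss whose gradient combines the REINFORCE term
    with a max-entropy bonus: -R * log pi(a) - alpha * H(pi).
    The entropy term encourages the policy to keep probability mass on
    multiple good behavior modes (multiple good generated texts)."""
    probs = softmax(logits)
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return -reward * math.log(probs[action]) - alpha * entropy
```

With `alpha = 0`, this reduces to the plain REINFORCE surrogate; increasing `alpha` trades off reward maximization against diversity of the generated sequences.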
> **Q4.** Not clear if improvement comes from the reward-retraining scheme or from the main contribution: the preference-grounded token-level guidance.
**A.** We respectfully refer the reviewer to our ablation study in Section 4.3 (b), where we remove the reward-function retraining scheme. It is clear that without the reward-retraining scheme, our method still performs competitively against the strong baselines, which confirms the benefit of our preference-grounded guidance and is acknowledged by **Reviewer gmdH**.
As discussed in Appendix G (Line 1007-1017), the reward-retraining scheme does not give our method an unfair advantage over the baseline methods. In particular, the baselines RLPrompt [1] and RL4LMs’ (supervised+) PPO/NLPO [3] retrain their value-functions in every optimization step, which can be more demanding than our method.
Therefore, we believe that our improvement over the baselines should come from our main algorithmic contribution: the preference-grounded token-level guidance, rather than an unfair training scheme.
[3] Ramamurthy, Rajkumar, et al. "Is Reinforcement Learning (Not) for Natural Language Processing?: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization." arXiv preprint arXiv:2210.01241 (2022).
> **Q5.** How is the ablation in Section 4.3 (b) done?
**A.** In our ablation study in Section 4.3 (b), we remove the reward-function retraining scheme. This can be seen from Algorithm 2 as removing the “if” block containing “Re-train $r_\phi$ by Algo. 1 without re-initialization.” In other words, we train the reward function $r_\phi$ only at the beginning of the LM-training process and fix it thereafter (never retrain it).
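Schematically, this ablation can be pictured with the following toy training loop (function/event names are illustrative, not our actual code): setting `retrain_reward=False` removes the "if" block, so the reward function is trained once up front and then frozen.

```python
def train(num_iters, retrain_reward=True, retrain_interval=10):
    """Toy schematic of the alternating scheme: the reward model is
    retrained (without re-initialization) only during the first half
    of LM training; retrain_reward=False corresponds to the ablation
    in Section 4.3 (b), where the reward function is trained once and
    fixed thereafter. Returns the sequence of training events."""
    events = ["train_reward"]  # initial reward-function training
    for it in range(num_iters):
        if retrain_reward and 0 < it < num_iters // 2 and it % retrain_interval == 0:
            events.append("retrain_reward")  # without re-initialization
        events.append("update_lm")
    return events
```

The interval and iteration counts here are placeholders; only the structure (retraining confined to the first half, or absent entirely) mirrors the described scheme.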
---
Rebuttal Comment 1.1:
Title: response to rebuttal
Comment: I acknowledge reading the author's rebuttal which aimed to address some of my concerns. Here's my response to the author's rebuttal:
> We retrain the reward model only during the first half of the iterations in order to save the computational cost.
I wonder how much compute can actually be saved by this. I'd still argue that this is a non-justified design choice.
> We kindly note that the max-entropy gradient may not be applicable to other baselines due to their specific nature/designs.
In this case, I would expect an ablation showing that your approach still outperforms the baselines without the max-entropy gradient. I understand that achieving a 100% fair comparison with the baselines is hard, but the authors add extra layers of complexity through some design choices, making a fair comparison much harder than it needs to be.
> We respectfully refer the reviewer to our ablation study in Section 4.3 (b), where we remove the reward-function retraining scheme.
Thanks for the reference. I can now see that your approach does not rely on the reward re-training scheme, which begs the question as to why it is a part of your approach, since the improvement it introduces is extremely minor (e.g., 0.08 ROUGE-1 points). Again, this points to some design decisions made by the authors that are not fully justified. Relatedly, it seems like the proposed approach relies on so many moving parts that it has become unclear where the contribution actually is or whether these extra layers of complexity are justified.
Overall, I thank the authors for their rebuttal. However, nothing in the author's response merits raising my score so far.
---
Reply to Comment 1.1.1:
Title: Further Response to Reviewer xDED
Comment: Thank you so much for the quick response.
We respectfully disagree with your comment that our paper has many unjustified design choices. We address your concerns in detail below.
> **Q1.** How much compute can be saved by only retraining the reward model during the first half of the training process?
**A.** In our preliminary study, we observed that “retraining the reward model only during the first half of the LM training process” can save about 25-30% computation, compared to “retraining the model throughout the entire LM-training process.”
Intuitively, “retraining only during the first half” can clearly reduce computation compared to “retraining the model throughout the entire LM-training process.” We believe that the computation-saving nature of this design can sufficiently justify itself. Therefore, we respectfully disagree with the reviewer’s argument that this is an unjustified design choice.
> **Q2.** Expect an ablation showing that your approach still outperforms the baselines without the max-entropy gradient.
**A.** We first kindly note that, as discussed in **Q3** of the previous Rebuttal, our most direct competitor in the prompt task, RLPrompt (Line 246-249), has a max-entropy gradient as well.
As a recap, we test our method on the task of discrete-prompt generation. Apart from RLPrompt, the other discrete-prompt baselines have more demanding designs and requirements than our method (or RLPrompt), such as hand-crafted components and/or other forms of human effort.
Specifically,
* “Manual Prompt” uses hand-crafted prompts.
* “In-Context Demo” requires (human) selecting one training example per class.
* “Instructions” requires manually created task descriptions and label definitions.
* “GrIPS” requires instructions designed for humans.
By contrast, our method does not require any of these demanding human efforts, and while less demanding, can still outperform those methods.
We clarify that the max-entropy gradient is part of our LM-training objective Eq. (5), which targets the few/zero shot setting, where we do not assume the availability of supervised data, e.g., the ground-truth prompts.
Due to this challenging setting, we believe that it would be unfair for us to remove this component and compare with a baseline that has a max-entropy gradient (RLPrompt), or with baselines that use more demanding designs, such as hand-crafted components and/or other forms of human effort (e.g., “Manual Prompt”, “In-Context Demo”, “Instructions”, “GrIPS”).
Based on the above clarification, we believe that our comparison with the baselines is fair.
In addition to the above clarification, we kindly note that RLPrompt also has a max-entropy gradient, and its paper compares with the same set of baselines as ours. Since RLPrompt’s experimental comparison has been perceived as fair by the community, we believe that our comparison with these baselines is fair as well.
> **Q3.** Why is the reward-retraining scheme a part of your approach since the introduced improvement is minor on the summarization task?
**A.** As discussed in Line 314-317 of our main paper, the gain of this scheme depends on the zero-shot ability of the initial LMs. Specifically, in the prompt task where the initial LM has little zero-shot ability, reward-function retraining can particularly be helpful to both improve performance and reduce variance. As a numeric example for Figure 4 in our paper, in the “SST-2” dataset of the prompt task, reward-retraining scheme improves the result from $90 \pm 2.9$ to $92.6 \pm 1.7$.
Meanwhile, we agree with the reviewer that *on the summarization task*, the reward-retraining scheme indeed may not help the results as much, since the initial LM has some zero-shot ability (Line 317-319).
Since our paper wants to build a general framework that is applicable to both of these distinct settings, we apply the reward-retraining scheme to both tasks in our main results. Then, the ablation study in Section 4.3 (b) serves as a further experimental explanation of this scheme.
***
Overall, based on this response and the previous Rebuttal, we believe that our method has similar complexity at least with our direct competitors RLPrompt and RL4LMs. Furthermore, we believe that we have sufficiently validated our method’s building blocks through our comprehensive ablation study in Section 4.3 and Appendix A.2, which is acknowledged by Reviewer **9EYo**.
We hope this additional clarification can address all your concerns and merit raising your score. Please kindly let us know if you have any remaining concerns. | Summary: This paper proposes to break the pairwise sequence-level preference into a token-level guidance signal by iterating between learning a token-level reward from the sequence-level preference and improving the LM with the learned token-level guidance. Experiments are conducted on two language generation tasks and competitive results are reported.
Strengths: - The paper proposes to address the problem of how to effectively ground sequence-level preference into LM finetuning.
- studies different aggregation functions to break the sequence-level preference into a token-level reward function
- the setup of weighting token-level MLE by the token-level reward when supervised data is available is interesting
Weaknesses: - in general the paper is difficult to follow, e.g. Figure 1 is not self-explanatory, and the experimental setups and datasets could use more details.
- a core motivation/hypothesis of the paper, the distinction between sequence- and token-level losses, is questionable. E.g. the abstract mentions "a granularity mismatch between the preference and the LM training losses". It is wrong to categorize the LM loss as a token-level loss, because it is MLE over the target sequence at both the token level and the sequence level. There are many unsupported assumptions around the mismatch of sequence- and token-level losses in the introduction section.
- in PPO, even though the reward function is sequence level, there is a token-level value network. Discussion and ablation of how that is trained is very relevant to choosing the aggregation of your token level reward model.
- the dataset to test your method is not proper. "simulate the sequence-level preference by the classical Meteor score and report the standard ROUGE scores" is questionable because it is not real sequence level feedback. On the other hand, there are datasets readily available, e.g. "Learning to summarize from human feedback" which contains real human preference data on summarization tasks.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: NA
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for raising several important questions. We kindly ask you to first check our **General Response**, which should address your concern about testing against the RLHF summarization paper. We answer your remaining questions in detail below.
> **Q1.** The clarity of our paper.
**A.**
We thank the reviewer for raising this issue. We will carefully revise our paper and include more details into Figure 1 in the next version of our manuscript. Meanwhile, we would like to refer to **Reviewer gmdH**’s Summary section for a good overview of our paper.
We provided details on experimental setups and datasets in Sections 4.1 and 4.2. Due to the page limit, we deferred additional experimental details to Appendix B and additional details about the prompt task to Appendix D.
Finally, we notice that **Reviewer xDED** praised our paper as *“well written … and the experimental setup is well documented”*, while **Reviewer gmdH** thinks that our Figure 1 *“is a good illustration”*. Therefore, we would highly appreciate it if you could explicitly point out the confusing parts of our paper.
> **Q2.** About the validity of our core motivation and categorizing LM loss as a token-level loss.
**A.** We respectfully disagree with the reviewer that the core motivation of our paper is questionable and that it is wrong to categorize LM loss as token-level loss.
As discussed in Line 17, we consider LM (cross-entropy) loss as token-level because each token position has a corresponding term in the overall training loss.
By contrast, the preference/feedback is sequence-level because there is only one feedback for the entire text-sequence, rather than densely at each intermediate timestep.
Intuitively, the granularity mismatch comes from the fact that the LM needs to decide each token, while the preference/feedback is available only after the entire sequence has been generated and is only at the sequence level.
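As a toy numeric illustration of this mismatch (all numbers hypothetical): a three-token generation contributes three token-level loss terms, while the preference side supplies only one scalar for the whole sequence.

```python
import math

# Token-level training loss: one cross-entropy term per token position.
token_probs = [0.9, 0.6, 0.8]          # hypothetical p(token_t | prefix)
per_token_loss = [-math.log(p) for p in token_probs]

# Sequence-level feedback: a single scalar, available only after the
# entire sequence has been generated, with no per-token detail.
sequence_feedback = 0.7                # hypothetical preference/metric score
```

Grounding means distributing that single scalar into per-token guidance so that each of the three positions receives a training signal.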
As discussed in Section 3 (Line 191-199), some prior studies have attempted similar/related problems, but are under more restricted/ideal settings.
Therefore, we believe that our core motivation is valid and sound, which is also supported by the other reviewers.
> **Q3.** In the introduction section, there are many unsupported assumptions on the mismatch of sequence- and token-level losses.
**A.** We would deeply appreciate it if you could explicitly point out the unsupported assumptions in our introduction section, apart from those in **Q2** above that we have clarified.
> **Q4.** Despite the sequence-level feedback, there is a token-level value network in PPO. And how that is trained is very relevant to choosing the aggregation of your token-level reward model.
**A.** We agree with the reviewer that there is a token-level value network in PPO. However, the main problem lies in learning this value function.
Specifically, the mismatch between sequence-level feedback and token-level LM training/generation leads to the problem of delayed feedback (sparse reward-signal) when applying RL methods to LM tasks. It is known in the literature [1, 2] that with only sparse rewards, it can be difficult to estimate the token-level value functions in RL methods, including PPO. In particular, [1] points out that the standard value-function learning can suffer from *“the unstable per-step bootstrapping-style training with sparse reward signals.”*
In Appendix F.2, we provide a detailed discussion on this delayed feedback problem in RL-based LM training, which we believe applies to PPO. In Section 4.2 and Table B in the added PDF, we show that our method outperforms the benchmarking results of (Supervised+) PPO cited from RL4LMs [3].
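For illustration, in the RL view the sequence-level feedback corresponds to a sparse reward vector (zeros everywhere, one scalar at the final step), whereas our grounded guidance is dense. The numbers below are purely hypothetical:

```python
T, seq_score = 5, 0.7                        # hypothetical length and score

# Sparse, delayed feedback: reward arrives only at the final token,
# which makes per-step value bootstrapping hard to estimate.
sparse_rewards = [0.0] * (T - 1) + [seq_score]

# Dense, grounded guidance: a (learned) reward at every token position.
dense_rewards = [0.1, 0.2, 0.1, 0.2, 0.1]    # illustrative values only
```

The value network in PPO must effectively redistribute the single terminal scalar across steps; our method instead learns the dense signal directly.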
We are confused about the second part of your comment. We are unaware of other ways to train the value network in PPO, apart from the standard one implemented in our baseline RL4LMs [3]. We are also confused about why that would be relevant to choosing the aggregation of our token-level reward model. As a gentle reminder, in our paper, the aggregation function is used to train the token-level reward function $r_\phi$ (Line 104-109).
We would highly appreciate it if you could provide more details on your comment so that we can respond better, rather than only taking an educated guess.
[1] Guo, Han, et al. "Efficient (soft) q-learning for text generation with limited good data." arXiv preprint arXiv:2106.07704 (2021).
[2] Sutton, Richard S., and Andrew G. Barto. Reinforcement learning: An introduction. MIT press, 2018.
[3] Ramamurthy, Rajkumar, et al. "Is Reinforcement Learning (Not) for Natural Language Processing?: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization." arXiv preprint arXiv:2210.01241 (2022).
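As a sketch of our reading of this training step (Line 104-109): per-token rewards are aggregated into a sequence-level score, and the scores of a preferred/non-preferred pair enter a Bradley-Terry-style loss. This is one standard formulation and may differ from the paper's exact objective; all names below are illustrative.

```python
import math

def aggregate(token_rewards, how="avg"):
    """Aggregate per-token rewards into one sequence-level score."""
    s = sum(token_rewards)
    return s / len(token_rewards) if how == "avg" else s

def preference_loss(tokens_preferred, tokens_other, how="avg"):
    """Bradley-Terry-style loss: training the token-level reward so that
    the aggregated score of the preferred sequence exceeds that of the
    other sequence, i.e. -log sigmoid(s_preferred - s_other)."""
    sp = aggregate(tokens_preferred, how)
    so = aggregate(tokens_other, how)
    return -math.log(1 / (1 + math.exp(so - sp)))
```

Minimizing this loss over preference pairs pushes the token-level rewards to be consistent with the sequence-level preference, which is the grounding step.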
> **Q5.** Simulating the sequence-level preference by the Meteor score is questionable because it is not real sequence-level feedback.
**A.** We respectfully disagree with the reviewer. Since the Meteor metric scores the entire generated text-sequence, it provides a sequence-level feedback on LM generation.
Therefore, we believe that it is valid to use Meteor to simulate the sequence-level preference, i.e., the higher the score, the better the generated sequence, as discussed in Line 101-103. Hence, our summarization experiments on the CNN/DM and XSum datasets are proper.
In general, we believe that automatic evaluation metrics can be a valid and rich source of sequence-level preference, apart from real humans.
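Concretely, the simulation we describe (Line 101-103) can be sketched as deriving pairwise preferences from metric scores, with a higher score meaning preferred; the scores below are hypothetical:

```python
def simulated_preferences(scores):
    """Turn sequence-level metric scores (e.g. Meteor) into pairwise
    preferences: a higher score means the sequence is preferred.
    Returns (winner_index, loser_index) pairs; ties yield no pair."""
    prefs = []
    n = len(scores)
    for i in range(n):
        for j in range(i + 1, n):
            if scores[i] > scores[j]:
                prefs.append((i, j))
            elif scores[j] > scores[i]:
                prefs.append((j, i))
    return prefs
```

Any sequence-level scorer, automatic metric or human rater, can plug into this interface, which is why we view automatic metrics as a valid source of preference.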
---
Rebuttal Comment 1.1:
Comment: Thank you, authors, for the responses. I increased my scores after reading your responses and the other reviewers' comments.
Q2: It is the autoregressive factoring of the LM that ties sequence probability to token probability.
Viewing LM cross entropy loss as token-level loss is one view of the problem.
An equivalent view is that, if we conduct MLE on the sequence probability, which is autoregressively factored into the product of token probabilities, $p(\text{seq}) = \prod_i p(\text{token}_i)$, we would get the exact same loss function as the summation of the token-level MLE terms.
Using this view, the cross entropy loss can be explained without mention of token-level losses, and also from data perspective, the SFT data we collected are full sequences (one argument could be that we do not collect token/prefixes but always collect full sequences).
Equivalently, any signals that are collected on the sequence level are propagated to token-level because of the autoregressive factoring.
I acknowledge that sequence-level losses and token-level losses are often mentioned in the literature; however, in my opinion, there is no mathematical difference between sequence-level and token-level losses for autoregressive LMs.
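The equivalence described here can be checked numerically: the negative log of the autoregressively factored sequence probability equals the sum of the token-level negative log-likelihood terms (toy numbers below):

```python
import math

token_probs = [0.9, 0.6, 0.8]                       # hypothetical p(token_t | prefix)
seq_prob = math.prod(token_probs)                   # autoregressive factoring
seq_nll = -math.log(seq_prob)                       # MLE loss on the sequence probability
token_nll = sum(-math.log(p) for p in token_probs)  # summed token-level MLE terms
assert abs(seq_nll - token_nll) < 1e-12             # the two views coincide
```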
---
Reply to Comment 1.1.1:
Title: Further response to Reviewer wRKD
Comment: Dear Reviewer wRKD,
Thank you so much for your response and your raising the scores.
We agree with the reviewer that the token probabilities in the LM are connected to the sequence probability. But, as you commented, *"Viewing LM cross entropy loss as token-level loss is one view of the problem"*. Therefore, we believe that our core motivation is solid, and it is valid to categorize the LM loss as a token-level loss. We agree that there are other views of the same problem, but we do not preclude those other views and we do not aim at drawing a dichotomy. We will add a clarification on this in our revised manuscript.
We would like to reiterate that the goal of this paper is to ground sequence-level feedback into (dense) token-level guidance for LM training (Section 1). In our experiments (Section 4), we demonstrate that our viewpoint and method can be beneficial on two distinct representative LM tasks/settings.
Specifically, with supervised data, even though the classical supervised MLE can work, we show in Section 4.2 that our token-level reward-weighted MLE can perform better. Intuitively, the reward/guidance-weighting scheme emphasizes the important tokens in the supervised sequences and downweights the unimportant ones, and therefore can better utilize the LM capacity and the optimization budget (Line 167-171).
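A minimal sketch of this reward-weighted MLE idea (the weights here are illustrative constants; in the paper they come from the learned token-level guidance):

```python
import math

def weighted_mle_loss(token_probs, token_weights):
    """Token-level reward-weighted MLE: each token's negative log-likelihood
    is scaled by its guidance weight, emphasizing important tokens and
    downweighting unimportant ones. With all weights equal to 1 this
    recovers the classical (unweighted) MLE loss."""
    return sum(-w * math.log(p) for w, p in zip(token_weights, token_probs))
```

With `token_weights = [1.0, 1.0]` the function reduces to the standard cross-entropy sum, so the weighting scheme strictly generalizes supervised MLE.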
In the challenging setting without supervised data, the LM needs to discover each token by itself. We show in Section 4.1 that our dense token-level guidance can improve the performance/quality of the generated sequences.
In both settings, we experimentally show that our dense token-level guidance can be more effective for LM training than the delayed/ungrounded *native* sequence-level feedback (Line 246-249 and Line 276-278).
***
We hope this discussion can clarify the contribution of our paper and merit raising your rating further. Please kindly let us know if you have any remaining concerns. | Rebuttal 1:
Rebuttal: ## General Response
We thank all reviewers for the valuable comments. Below are our additional results and responses to common concerns.
> **Q1.** Human evaluation results on the CNN/DM summarization.
**A.** We conduct human evaluation on the quality of the generated summaries on the CNN/DM dataset under the T5-base LM. We generally adopt the protocol in [1] to evaluate the overall summary quality.
Our model is compared with the baselines Supervised, Supervised+PPO, and Supervised+NLPO in RL4LMs [2]. We also present the result of the reference summaries, which is intended for sanity check rather than method comparison.
Specifically, we randomly picked 100 articles in the test split of CNN/DM and showed to 20 qualified evaluators the summaries generated from each method, along with the article. The method names were anonymized. The evaluators were asked to read the article and score each summary.
Table A in the added PDF shows the average human ratings. We see that our method outperforms all baseline methods Supervised and Supervised+PPO/NLPO. This aligns with our results in Section 4.2 and **Reviewer xDED**’s expectation that human evaluation on the summarization task supports the improvements in ROUGE scores by our method.
[1] Stiennon, Nisan, et al. "Learning to summarize with human feedback." Advances in Neural Information Processing Systems 33 (2020): 3008-3021.
[2] Ramamurthy, Rajkumar, et al. "Is Reinforcement Learning (Not) for Natural Language Processing?: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization." arXiv preprint arXiv:2210.01241 (2022).
> **Q2.** Results on an additional evaluation metric: BERTScore [3].
**A.** As suggested by **Reviewer XL7b**, we add an additional metric, BERTScore, which *“correlates better with human judgments and provides stronger model selection performance”* [3].
Table B in the added PDF shows the results expanded from Figure 2 (Section 4.2) and Table 4 (Appendix A.1).
It is clear that our method again outperforms the strong baselines from RL4LMs [2] on BERTScore by a relatively significant margin.
[3] Zhang, Tianyi, et al. "Bertscore: Evaluating text generation with bert." arXiv preprint arXiv:1904.09675 (2019).
> **Q3.** Test against the task/models/results in the paper “Learning to summarize with human feedback” [1].
**A.** In the review, **Reviewer wRKD** suggests us to test our method against the RLHF summarization paper [1].
The main reason for not using this task/dataset is *evaluation*. Specifically, [1] mostly uses large-scale human study to evaluate the models. This requires hiring a large number of human evaluators, which is beyond our budget and scope.
As shown in Section 4 of [1], the (baseline) models in [1] are of sizes ranging from 1.3 billion parameters to 12.9 billion, which are beyond our computational budget for making a fair comparison. In fact, these model sizes are even much larger than the T5-base model used in the RL4LMs paper [2] (220 million parameters).
We are unaware of benchmarking results on this task/dataset under standard automatic metrics that evaluate a wider variety of algorithms and (smaller) LMs. The limited benchmarks further complicate testing our method on this task.
We agree that it is important to test our method using human feedback. We will certainly conduct this study once resources are ready.
> **Q4.** Why are our summarization results below the SOTA?
**A.** We clarify that the goal of this paper is not chasing the SOTA ROUGE-scores on the summarization task. Rather, our goal is to ground sequence-level feedback into token-level guidance for LM training (Section 1). Therefore, we believe that our direct baselines are methods using ungrounded feedback but the same LM backbone, such as RLPrompt and RL4LMs, which we have carefully compared in our experiments.
Further, apart from summarization, we also validate our method on the task of discrete-prompt generation. We believe that our experiments can clearly show the improvement of our method on existing architectures.
**Reviewer 9EYo** mentions that a BART-Large model can give a higher ROUGE score. However, BART-Large is ~7 times the size of our T5-small backbone, and ~2 times that of our T5-base. We kindly argue that it is unfair to compare our results with other methods that use a much larger backbone.
In Section 4.2 (Line 285-290), we provide the results on CNN/DM when scaling up our LM from T5-small to T5-base. It is clear that scaling up the LM size further improves our results, e.g., ROUGE-1 increases from 40.9 to 43.1 while ROUGE-L from 38.2 to 40.0. Detailed results of our method under T5-base LM are in Table 4 (Appendix A.1) or Table B in the added PDF. It is also clear that under a T5-base backbone, our method performs more-significantly better than the strong Lead-3 baseline on CNN/DM.
We respectfully argue that apart from using a much larger LM backbone, SOTA summarization methods can contain some specifically-designed techniques, which may interfere with our method and make it hard to separate out the gain of our method.
For example, the BRIO [4] mentioned by **Reviewer XL7b** proposes a training paradigm that assumes a non-deterministic target distribution, which may not easily integrate into the general paradigm of preference-based LM training that our paper considers.
We would appreciate it if our results were compared against summarization baselines that are fairer and more relevant, such as those in the RL4LMs paper. Meanwhile, we appreciate that **Reviewer xDED** considers our evaluation as *“done against strong baselines;”* and that **Reviewer gmdH** thinks our experimental results *“should benefit the community.”*
Finally, we are unsure about **Reviewer XL7b**’s comment that “it’s easier to boost the performance on a small model.” We would love to know details/references.
[4] Liu, Yixin, et al. "BRIO: Bringing order to abstractive summarization." arXiv preprint arXiv:2203.16804 (2022).
Pdf: /pdf/6636130819eaf580ee2f92c3daf3ddc2a46a0bcf.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper presents a new approach to training language models that address the mismatch between coarse-grained sequence-level preferences and fine-grained token-level rewards. With more fine-grained rewards, the proposed framework reduces the reliance on supervised data. Specifically, given the preference of a set of sequences, it first trains a reward function such that the aggregated rewards over tokens can satisfy the preference. Then, it applies REINFORCE to update the language model. The framework is evaluated on two text generation tasks: generating text prompts for few-shot text classification and text summarization. Compared with baselines, the proposed method improves the performance on both tasks.
Strengths: Revisiting the mismatch between sequence-level and token-level feedback in fine-tuning language models (LLMs) is of particular interest to me. In scenarios where supervised data for a target domain downstream task is scarce, and fine-tuning LLMs is computationally expensive, exploring data-efficient methods becomes important. The proposed method is straightforward and reasonable to me.
Weaknesses: 1. Using discrete-prompt generation to evaluate the proposed method is not convincing. The chosen task itself lacks meaningfulness, and a better way to achieve good performance may be self-training as introduced in [1]. I can understand the goal of the experiment is to evaluate the quality of the generated sequence, but there are other more reasonable generation tasks such as personalized chatbot.
2. The performance of the proposed method of text summarization falls significantly short of the SOTA, which leaves me unconvinced by the results. For instance, the ROUGE-L score for the current SOTA on CNN/DailyMail has surpassed 44, and for XSum it stands at 40.4 [2]. In contrast, the proposed method achieves scores of 38.17 and 26.33 respectively. For CNN/DailyMail, the results are only slightly better than the trivial lead-3 baseline. Considering that the proposed method is complex, I expect more improvement. I can understand the low performance may be partially due to model size. However, without using the model size of a state-of-the-art model, it’s hard to evaluate the effectiveness of the proposed method because It’s easier to boost the performance on a small model. In [3], they challenge the effectiveness of RL and argue that using a few human-curated prompts and responses is more effective for finetuning LLMs.
3. Using ROUGE as the only evaluation metric for summarization is not convincing. At least human evaluation or language model scores such as BERT-score/BART-score should be used. Also, is it possible to obtain preference via language model scores?
4. The proposed method lacks novelty. Previous studies have extensively discussed the mismatch between token-level and sequence-level rewards. Using a token-level reward function to estimate the reward for each token is not a new idea. Several related works are missing [4][5].
[1] Exploiting Cloze Questions for Few-Shot Text Classification and Natural Language Inference, Schick et al.
[2] BRIO: Bringing Order to Abstractive Summarization, Liu et al.
[3] LIMA: Less Is More for Alignment, Zhou et al.
[4] SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient, Yu et al., 2016.
[5] Unsupervised Text Style Transfer using Language Models as Discriminators, Yang et al.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Are there any generated samples for the discrete prompts for zero-shot classification? Qualitative analysis may make the method more convincing.
2. How does the choice of reward function impact performance? Does using different pre-trained models as reward functions significantly impact the performance?
3. Why are the summarization results significantly below the SOTA? The paper mentions that it uses the full training set for finetuning the model, so I expect the performance to at least not be far from SOTA results.
4. How to obtain the preference for discrete prompts of zero-shot classification?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Please refer to weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate the reviewer for the careful comments.
We would like to draw your attention to **General Response** for additional results and common responses. The other questions are answered in detail below.
> **Q1.** About the meaningfulness of discrete-prompt generation task. May test on other tasks.
**A.** We respectfully disagree with your comment that *“the chosen task (discrete-prompt generation) itself lacks meaningfulness.”* We believe that while the self-training in [1] may give good results, it does not preclude other approaches to this problem, such as the recent RL-style method [6] and the recent baselines therein.
We kindly argue that this task is meaningful, at least for the better human understandability of the generated prompts compared to soft prompts. We discuss related work in Appendix E (L857-864).
In this paper, we use discrete-prompt generation to test the applicability of our method to the challenging setting of no supervised examples, i.e., no ground-truth prompts.
As discussed in L191-199, this setting makes it infeasible to apply many prior works on learning token-level guidance for LM training, since they typically require abundant supervised examples.
In Section 4.1, we show that our preference-grounded token-level guidance is not only applicable to this task, but can be more effective than many strong recent methods, such as RLPrompt [6].
As discussed in L342, we agree with you that dialog systems can be a further application of our method. But we believe that discrete-prompt generation is an equally reasonable and important task.
[6] Deng, Mingkai, et al. "Rlprompt: Optimizing discrete text prompts with reinforcement learning." arXiv preprint arXiv:2205.12548 (2022).
> **Q2.** LIMA [3] challenges the effectiveness of RL for finetuning LLMs.
**A.** We want to clarify that our method is not an RL method, though some related works and baselines are.
We thank the reviewer for raising LIMA [3]. However, LIMA was submitted to arXiv on 18 May 2023, which is after the NeurIPS submission deadline (17 May 2023). Thus, a discussion of LIMA is out of scope.
> **Q3.** Is it possible to obtain preference via language model scores?
**A.** Yes, it is possible, as long as LM scores can reliably reflect sequence quality in the specific task.
As discussed in L101-102, we make no assumption on the preference source.
In our summarization task, we obtain preference by Meteor, to have a fair comparison with the benchmarks in Table 17 of RL4LMs [7] and to avoid overfitting the ROUGE evaluation metrics (L258-259).
[7] Ramamurthy, Rajkumar, et al. "Is Reinforcement Learning (Not) for Natural Language Processing?: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization." arXiv preprint arXiv:2210.01241 (2022).
> **Q4.** About lacking novelty and prior works on using token-level rewards, such as [4, 5].
**A.** We thank the reviewer for suggesting [4, 5]. We will certainly include them in our revised draft.
We agree that some prior works have attempted to learn token-level guidance for LM training. But as we discussed in Section 3 (L191-205), those prior works typically require abundant expert data, making them infeasible for the few/zero-shot settings, such as discrete-prompt generation, where there are no ground-truth prompts.
The suggested related works SeqGAN [4] and Unsupervised Text Style Transfer [5] both fall into this category. Specifically, [4] requires a set of “real sequence data” to pre-train the generator by MLE at the beginning, and [5] needs real target-domain sentences to train LM discriminators. Thus, both [4, 5] are infeasible for our few/zero-shot prompt task.
Further, similar to our related work “Lin et al. [64]” (L195), the intermediate guidance in [4] comes from Monte Carlo search, which can have high variance and be computationally demanding.
In [5], the token-level feedback is essentially the token-level probability from the target-domain LM, which can be less general/flexible than our preference-grounded guidance.
Apart from these, both [4, 5] do not consider the preference relation among multiple generated sequences.
By contrast, our method grounds the sequence-level preference into task-specific token-level guidance for LM training; and is suitable for both standard LM tasks and the low-data regime.
> **Q5.** Samples of the generated discrete prompts?
**A.** Some samples of the generated discrete prompt are in Table 3 (Appendix A.1). A brief discussion is on L250-254.
> **Q6.** How does the choice of reward function impact performance?
**A.** In Appendix A.2 (c) (L740-757), we simulate the preference on the summarization task by two other automatic metrics “Rouge-avg” and “Rouge-avg2”, rather than the Meteor in Section 4.2. We show that the efficacy of our method is not tied to using Meteor as the preference source.
We would appreciate it if you could further elaborate on this question: *“Does using different pre-trained models as reward functions significantly impact the performance?”* Overall, we think that the answer depends on how the different pre-trained models are used as reward functions, and on how the performance is measured.
We note that in summarization, our preference comes from Meteor or Rouge-style scores. Neither we nor RL4LMs [7] use pre-trained models as reward functions. Thus, a discussion of that may be beyond the scope of this work.
In the prompt task, we obtain the preference by the stepwise metric in RLPrompt [6] (L226-227), to ensure a fair comparison with RLPrompt and the baselines in its paper. We are unsure how different pre-trained models could be used here, or whether changing this preference-obtaining metric would still allow fair comparisons.
> **Q7.** How do you obtain the preference for discrete prompts?
**A.** As discussed in L224-225, it’s obtained by the stepwise metric in RLPrompt [6], i.e., the higher the metric value the better. Details of this metric are in our Appendix D (L842-851). | null | null | null | null | null | null |
Robust Lipschitz Bandits to Adversarial Corruptions | Accept (poster) | Summary: This work studies Lipschitz bandits robust to adversarial corruption, where both strong and weak adversaries are considered. Robust Zooming along with its cumulative regret upper bound is proposed for strong adversary with known corruption budget. For unknown corruption budget, this work proposes RMEL for weak adversary and a two-layer framework BoB for strong adversary. The lower bounds for both adversaries are also proposed, showing Robust Zooming is theoretically optimal for strong adversary. Experimental results are provided to show the practical performance of RMEL and BoB.
Strengths: - This is the first work that studies robust Lipschitz bandit.
- This work considers multiple settings in terms of adversary type and whether the corruption budget is known.
- Both algorithmic upper bounds and lower bounds are provided.
Weaknesses: - For the case of unknown budget, there is a gap between the upper and lower bounds for both weak and strong adversaries.
- In the experiment section, although RMEL and BoB outperform the baseline, their cumulative regrets do not converge (have large slope) in several settings.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - This work assumed Lipschitz constant one and $\Delta(x)\le 1$. I wonder how the cumulative regret would scale when we have a higher Lipschitz constant and a higher function magnitude.
- In figure 1, even the 'no corruption' trials have positive simple regret near T. Does it imply that all the algorithms fail to find the optimal arm?
- I'm more interested in the practical performance of Robust Zooming, as its cumulative regret upper bound is already theoretically optimal.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The limitations have been discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments on our work. We are happy to know you think our contributions are solid in theory. Please see our response to your concerns as follows:
**Weakness 1: There is a gap between upper and lower bounds for both weak and strong adversaries:**
We also listed this point as a limitation of our work in the Conclusion section. As we mentioned, even for the stochastic multi-armed bandit problem under the weak adversary and unknown corruption budget, a regret gap still exists in terms of $C$ [16]. (The lower bound is $C$, while BARBAR can only attain regret of order $kC$, where $k$ is the number of arms.) Since the Lipschitz bandit is a much more complicated setting with an infinite number of arms ($k = +\infty$), it is very challenging to fully close the gap for this more difficult problem.
**Weakness 2: Do not converge in several settings:**
Thank you very much for your careful review of our results. We cut off the iterations early in Figure 1 since the gaps between different algorithms are often clear. For the triangle mean and sine mean, we can clearly see the fast regret explosion of the Zooming algorithm at the early stage. For the two dim mean, we can already see the better performance of our proposed algorithms at iteration 60,000, and hence we just cut off the time.
We re-ran three algorithms with the most challenging two-dim mean reward functions until 140,000 rounds, and list the cumulative regret of the algorithms at iterations 80,000, 100,000, 120,000 and 140,000. Tables 7-8 in the PDF in the above Global Rebuttal display the long-run results of the algorithms under each attack. We can observe the convergence of the regrets of our algorithms in the long run, while the Zooming algorithm suffers from linear cumulative regret in the end.
**Question 1: I wonder how the cumulative regret would scale when we have a higher Lipschitz constant and a higher function magnitude:**
In our work, we assume the Lipschitz constant is $1$ since it is a common and default assumption in the Lipschitz bandit literature [23]. Given a Lipschitz constant $U$, we could simply divide the observed rewards by $U$ at each round before running the Lipschitz bandit algorithm; the final regret bound of the algorithms would then be multiplied by the constant $U$, which implies that the order of the regret bounds doesn't change.
The magnitude of the reward function doesn't affect the cumulative regret or the implementation of our algorithms. Note that the cumulative regret can be represented as $Regret_T = \sum_{t=1}^{T} \Delta(x_t)$, and $\Delta(x), x \in X$ is independent of the magnitude of the Lipschitz reward function $\mu(\cdot)$, since a higher function magnitude shifts all function values uniformly. For example, $\Delta(x), x \in [-1,1]$ is the same for $\mu_1(x) = 1 - |x|$ and $\mu_2(x) = 1,000-|x|$.
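To make this concrete, here is a small self-contained check (our own illustration, not code from the paper) that the gaps $\Delta(x)$, and hence the regret, are unchanged by a uniform shift in magnitude, and scale exactly by the Lipschitz constant $U$:

```python
# Illustration only: Delta(x) = max_y mu(y) - mu(x) ignores uniform
# shifts in magnitude, and a U-Lipschitz reward rescales the gaps by U.
xs = [i / 100.0 - 1.0 for i in range(201)]          # grid over X = [-1, 1]

mu1 = [1.0 - abs(x) for x in xs]                    # mu_1(x) = 1 - |x|
mu2 = [1000.0 - abs(x) for x in xs]                 # mu_2(x) = 1000 - |x|

d1 = [max(mu1) - v for v in mu1]                    # Delta(x) under mu_1
d2 = [max(mu2) - v for v in mu2]                    # Delta(x) under mu_2
assert all(abs(a - b) < 1e-9 for a, b in zip(d1, d2))      # identical gaps

U = 5.0
muU = [U * (1.0 - abs(x)) for x in xs]              # a U-Lipschitz reward
dU = [max(muU) - v for v in muU]
assert all(abs(a - U * b) < 1e-9 for a, b in zip(dU, d1))  # gaps scale by U
```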
**Question 2: even the 'no corruption' trials have positive simple regret near T. Does it imply that all the algorithms fail to find the optimal arm?**
1. Our problem setting is a Lipschitz bandit with infinitely many arms, and hence it is impossible to identify a single best arm. We can only guarantee that all the pulled arms in the end are very close to the optimal one.
2. All Lipschitz bandit algorithms choose arms with some randomness, e.g., the Zooming algorithm randomly activates new arms to pull, and our RMEL randomly selects one layer to use at each round. This randomness is used to control the exploration-exploitation tradeoff and guarantees the efficiency of the algorithms even under the worst problem instance, but it naturally leads to minor regret even at the end.
3. Under our problem setting, the stochastic rewards are mixed with random noise and malicious attacks, and hence it is inevitable to see some small perturbation in the end.
**Question 3: I'm more interested in the practical performance of Robust Zooming**
Although the Robust Zooming algorithm attains the minimax-optimal regret bound, we didn't include it in the experiments since it requires the value of $C$, which in reality would never be revealed to the agent. During the rebuttal period, we implemented the Robust Zooming algorithm under the identical six settings of Figure 1. For the second term of $r(x)$ in this algorithm, we used $\min${$1, C/n(x)$} as suggested in line 166 for better practical efficiency. Tables 9-11 in the PDF in the above Global Rebuttal display the performance of the Robust Zooming algorithm against the original Zooming algorithm under the same six settings of Figure 1.
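A rough sketch of the enlarged index computation (our reconstruction under assumptions: only the corruption term $\min\{1, C/n(x)\}$ comes from the rebuttal; the first, generic sub-Gaussian confidence term $\sqrt{2\log T / n(x)}$ is an assumption of this sketch, not necessarily the paper's exact constant):

```python
import math

def robust_radius(n_x, C, T):
    # Sketch of the enlarged confidence radius r(x) for Robust Zooming.
    # First term: generic sub-Gaussian confidence width (assumed here).
    # Second term: min{1, C/n(x)}, the practical variant mentioned above.
    return math.sqrt(2.0 * math.log(T) / n_x) + min(1.0, C / n_x)

T = 100_000
# With C = 0 the radius reduces to the uncorrupted confidence width,
# and the corruption term shrinks as the arm is pulled more often.
assert robust_radius(100, 0.0, T) == math.sqrt(2.0 * math.log(T) / 100)
assert robust_radius(10, 3000.0, T) > robust_radius(1000, 3000.0, T)
```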
From the three tables, we can see that the Robust Zooming algorithm performs robustly and is significantly better than the original Zooming algorithm under corruption. Compared with the RMEL and BoB Robust Zooming algorithms (Figure 1 or Table 3 in the Appendix), the Robust Zooming algorithm with known $C$ is slightly better than the BoB Robust Zooming algorithm in general, and RMEL is still the best one overall. We believe this is because, for Lipschitz bandits, elimination-based methods turn out to be more powerful in practice.
Thank you very much for your valuable comments on our work. Please do not hesitate to let us know if you have any other questions or concerns of our work.
---
Rebuttal Comment 1.1:
Title: Table 7-11
Comment: For your convenience, in addition to the PDF in the above global Rebuttal, we also put Table 7-11 corresponding to our responses to your insightful questions in this Comment section:
**Table 7**: Oracle attack (Plot 3 in Figure 1)
| Algorithm | Budget (C) | 60000 | 80000 | 100000 | 120000 | 140000 |
|--------------------|------------|---------|----------|----------|----------|----------|
| Zooming | 0 | 3248.54 | 3616.95 | 3705.41 | 3765.19 | 3800.08 |
| Zooming | 3000 | 8730.73 | 11100.13 | 13355.59 | 15374.61 | 17390.93 |
| Zooming | 4500 | 9496.83 | 11893.03 | 14287.24 | 16622.42 | 19015.25 |
| RMEL | 0 | 2589.32 | 2741.95 | 2874.10 | 2941.18 | 2977.19 |
| RMEL | 3000 | 5660.10 | 6515.12 | 6941.57 | 7135.24 | 7205.52 |
| RMEL | 4500 | 6265.09 | 7135.25 | 7564.51 | 7756.00 | 7800.41 |
| BoB Robust Zooming | 0 | 3831.94 | 4135.14 | 4314.68 | 4399.01 | 4430.20 |
| BoB Robust Zooming | 3000 | 6310.29 | 7035.70 | 7413.45 | 7603.51 | 7656.35 |
| BoB Robust Zooming | 4500 | 6932.09 | 7864.41 | 8216.56 | 8401.66 | 8444.31 |
**Table 8**: Garcelon attack (Plot 6 in Figure 1)
| Algorithm | Budget (C) | 60000 | 80000 | 100000 | 120000 | 140000 |
|--------------------|------------|---------|----------|----------|----------|----------|
| Zooming | 0 | 3248.54 | 3616.95 | 3705.41 | 3765.19 | 3800.08 |
| Zooming | 3000 | 8149.79 | 10199.39 | 11285.41 | 13459.72 | 15230.06 |
| Zooming | 4500 | 13672.00 | 18086.69 | 19441.02 | 22555.63 | 26106.42 |
| RMEL | 0 | 2589.32 | 2741.95 | 2874.10 | 2941.18 | 2977.19 |
| RMEL | 3000 | 2590.77 | 2738.01 | 2879.56 | 2955.07 | 2979.75 |
| RMEL | 4500 | 2872.64 | 3067.71 | 3121.51 | 3193.85 | 3224.00 |
| BoB Robust Zooming | 0 | 3831.94 | 4135.14 | 4314.68 | 4399.01 | 4430.20 |
| BoB Robust Zooming | 3000 | 4217.74 | 4515.63 | 4684.21 | 4751.72 | 4798.32 |
| BoB Robust Zooming | 4500 | 4380.19 | 4645.22 | 4800.05 | 4864.16 | 4901.48 |
**Table 9**: Triangle Mean function:
| Algorithm | Budget (C) | Oracle | Garcelon |
|----------------|------------|----------|----------|
| Zooming | 0 | 366.58 | 366.58 |
| Zooming | 3000 | 10883.51 | 10660.17 |
| Zooming | 4500 | 11154.78 | 11487.59 |
| Robust Zooming | 0 | 366.58 | 366.58 |
| Robust Zooming | 3000 | 510.59 | 509.00 |
| Robust Zooming | 4500 | 1211.09 | 749.43 |
**Table 10**: Sine Mean function:
| Algorithm | Budget (C) | Oracle | Garcelon |
|----------------|------------|---------|----------|
| Zooming | 0 | 315.94 | 315.94 |
| Zooming | 3000 | 5289.65 | 3174.26 |
| Zooming | 4500 | 5720.30 | 3174.29 |
| Robust Zooming | 0 | 315.94 | 315.94 |
| Robust Zooming | 3000 | 539.21 | 822.47 |
| Robust Zooming | 4500 | 1662.58 | 1048.10 |
**Table 11**: Two dim Mean function:
| Algorithm | Budget (C) | Oracle | Garcelon |
|----------------|------------|---------|----------|
| Zooming | 0 | 3248.54 | 3248.54 |
| Zooming | 3000 | 8730.73 | 8149.79 |
| Zooming | 4500 | 9496.83 | 13672.00 |
| Robust Zooming | 0 | 3248.54 | 3248.54 |
| Robust Zooming | 3000 | 5495.21 | 3770.04 |
| Robust Zooming | 4500 | 6439.43 | 3999.21 | | Summary: The article focuses on the problem of Lipschitz bandits in the presence of adversarial corruptions, where an adaptive adversary corrupts the stochastic rewards up to a total budget $C$. Both weak and strong adversaries are considered, where the weak adversary is unaware of the current action before the attack, while the strong one can observe it. This article presents the first line of robust Lipschitz bandit algorithms that can achieve sub-linear regret under both types of adversary, even when the total budget of corruption $C$ is unrevealed to the agent.
Strengths: This work studies Lipschitz bandits in the presence of adversarial corruptions, which are naturally motivated by a broad range of applications with corrupted data. The innovation of the proposed algorithms is that they combine a couple of existing techniques in a way that helps improve performance. For example, the authors adopt a removal procedure with multiple layers, which can adaptively remove regions at different rates so that the algorithm can tolerate corruption at different levels. This technique was first proposed in [29] to improve the efficiency of multi-armed bandit algorithms in the presence of corruption. Enough intuition is provided through the presentation of the algorithms, and I can easily follow the ideas.
The authors show the advantages of the proposals over past algorithms both theoretically and numerically. The analysis and proofs look correct and intuitive to me.
Weaknesses: 1. In addition to the overall budget constraint $C$, the authors require $c_t(x)$ to be bounded by 1. It is strange to put this in Section 3 as an assumption, since the authors claim it can be easily addressed. Moreover, $c_t(x) > 1$ does not mean the adversary always has enough power to make a suboptimal arm optimal, so I feel it is not straightforward to ignore the case $c_t(x) > 1$.
2. The proposed algorithms (at least some important components) rely on existing techniques, which somewhat reduces the novelty of this work.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments on our work. We are pleased to know you think our work is solid and our presentation is clear and intuitive. Please see our response to your concerns as follows:
**Weakness 1: $c_t(x) > 1$?**
The condition $|c_t(x)| < 1$ is used in the proof of Lemma A.7 in the Appendix. We can further relax it to $|c_t(x)| < c$ for any constant $c > 0$, and the same regret bound order in our Theorem 5.1 will still hold. So for simplicity, we just take $c=1$.
Moreover, since we assume that the underlying function $\mu(\cdot)$ is a 1-Lipschitz function and the diameter of space $X$ is no more than 1 (these are common and default assumptions in Lipschitz bandits, e.g. [23]), we know $\Delta(x) \leq 1$ for any $x \in X$. And this fact implies that with $|c_t(x)| = 1$ we could make any arm $x \in X$ optimal or worst in expectation.
In addition, this condition is also used in robust multi-armed bandit literature [16, 29] against adversarial attacks. Specifically, they make a stronger assumption that the corrupted and stochastic rewards always lie in [0,1] at each round, which naturally implies that the attack value $c_t(x)$ can not exceed 1 in magnitude.
Thank you very much for your careful review. We will remark it in the revision.
**Weakness 2: The proposed algorithms (at least some important components) rely on existing techniques:**
To the best of our knowledge, our work is the first to study the robust Lipschitz bandits in the presence of adversarial corruption. We acknowledge that our method relies on some existing techniques, e.g. as you mentioned RMEL borrows the multi-layers elimination idea from [29], but we’d also like to highlight that the extension of our paper is highly non-trivial. As we mentioned in lines 203-207, RMEL has to deal with three potential sources of error, and the error from approximation bias is new and rooted in the difficult nature of Lipschitz bandits. Moreover, we developed lower bounds under two types of adversaries in our work, which laid a solid foundation for the future development of this field.
We really appreciate your valuable insights and comments on our work, and we are more than happy to engage in any further discussion with you.
---
Rebuttal Comment 1.1:
Title: Thanks for your responses
Comment: I do not have further questions. Though I will not increase my current rating based on the assessment of the technical novelty and contributions of this work.
---
Reply to Comment 1.1.1:
Title: Thank you very much for your review
Comment: We sincerely appreciate your careful review and value your insightful feedbacks. | Summary: The paper considers a model of Lipschitz bandits with adversarial corruptions. Two types of adversaries are analysed hand-in-hand: weak adversary which may not have knowledge of the current action of the learner before injecting its corruption into the reward that is actually observed by the learner, and strong adversary which has knowledge of the current action of the learner and is hence more challenging to defend against. Irrespective of its nature, the adversary may only inject a total corruption of $C$ over $T$ rounds of sampling. The learner may or may not have prior knowledge of the value of $C$, and is tasked with the goal of minimizing the cumulative regret over the horizon $T$.
The paper analyzes the following scenarios.
1. The learner has prior knowledge of $C$ (Section 4).
2. The learner has no prior knowledge of $C$, and must deal with a weak adversary (Section 5.1).
3. The learner has no prior knowledge of $C$, and must deal with a strong adversary (Section 5.2).
For each of the above scenarios, the paper presents a lower bound on the cumulative regret and one or more algorithms for cumulative regret minimization. For scenario 1, the paper proposes a robust zooming algorithm, inspired by the UCB algorithm, albeit for a continuum of arms. For scenario 2, a robust multi-layer elimination algorithm is proposed. For scenario 3, three different algorithms are proposed. The paper derives upper bounds on the cumulative regret for each of the algorithms under every scenario. A noteworthy aspect of the lower and the upper bounds are their dependence on the corruption budget $C$, regardless of whether $C$ is known or unknown.
Strengths: 1. The paper extends the analysis of prior works on adversarial corruptions in stochastic multi-armed bandits to handle a continuum of arms; most prior works in bandits deal with only finitely many arms.
2. The authors' presentation of the key results of the paper in Table 1, as a quick, one-stop summary of all the important results of the paper, is commendable. A mere cursory glance of Table 1 suffices for an expert reader to recall the important contributions of the paper.
3. The authors' effort towards extracting the key ideas from several prior works (notably [23], [29], [31], and [11]) and applying these ideas to solve the problem at hand (with a continuum of arms) is truly commendable.
4. The dependence of the lower and the upper bounds on the corruption budget $C$, the zooming dimension $d_z$, and the covering dimension $d$ is noteworthy.
Weaknesses: The section on Preliminaries (Section 3) needs to be supplemented with additional details and rewritten to correct some glaring errors. Below are some points I have mentioned based on my detailed reading of the paper.
1. In the definitions of the covering dimension and the zooming dimension, $\exists c>0$ should be replaced with $\exists \alpha>0$.
2. For any $r \in (0,1]$, the "$r$-covering" of a compact set must be defined formally, noting that it appears frequently in Algorithm 2 (where phrases such as "$1/2$-covering", "$1/2^{m_l+1}$-covering" are used).
3. It is better to denote the covering dimension by $d_c$ instead of $d$; the latter could be referred to as the Euclidean dimension, following the lines of [23]. This avoids strange (yet correct) phrases such as "$d=d$" appearing on line 136, which as per the suggested notations would read "$d_c=d$".
4. It is not quite clear why $d_z \leq d$. The definition of the covering dimension $d$ entails using balls of radius at most $r$ for the covering, whereas that of the zooming dimension $d_z$ entails using balls of radius at most $r/16$ for the covering. The authors must clarify why $d_z\leq d$, as this is crucial for comparing the upper bound of Theorem 5.3 with the lower bound of Theorem 4.2.
5. In continuation to the previous point, the paper [23] alludes to the zooming dimension as "the smallest $q$ such that for every $r>0$, only $O(r^{-q})$ sets of radius $r/16$ are required to cover the set $\lbrace x \in \mathcal{X}: r\leq \Delta(x) \leq 2r \rbrace$ (this is rephrased in terms of the notations used in the current paper); see, for instance, the last paragraph in [23, pp. 30:5]. In contrast, the authors define the zooming dimension as "the smallest $q$ such that for every $r>0$, only $O(r^{-q})$ sets of radius $r/16$ are required to cover the "$r$-optimal set" $\lbrace x \in \mathcal{X}: \Delta(x) \leq r \rbrace$", which differs from the one in [23]. Can the authors elaborate on why they chose to define the zooming dimension differently from [23], and what are the implications of this difference on their results?
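For concreteness, letting $N_\delta(S)$ denote the minimal number of balls of radius $\delta$ needed to cover a set $S$, the two definitions can be written side by side (my rephrasing in the current paper's notation, not a quote from either paper):

```latex
% Zooming dimension as in [23]: cover only the annulus at scale r
d_z^{[23]} \;=\; \min\Bigl\{ q \ge 0 \;:\; \exists\, \alpha > 0,\ \forall r \in (0,1],\;
    N_{r/16}\bigl(\{ x \in \mathcal{X} : r \le \Delta(x) \le 2r \}\bigr) \le \alpha\, r^{-q} \Bigr\}

% Zooming dimension as in the current paper: cover the full r-optimal set
d_z \;=\; \min\Bigl\{ q \ge 0 \;:\; \exists\, \alpha > 0,\ \forall r \in (0,1],\;
    N_{r/16}\bigl(\{ x \in \mathcal{X} : \Delta(x) \le r \}\bigr) \le \alpha\, r^{-q} \Bigr\}
```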
6. I am not very convinced about the nature of adversarial corruptions used for the simulations. I cannot readily see if the "Oracle" and "Garcelon" attacks respect the adversarial attack model $\max_{x \in \mathcal{X}} |c_t(x)| \leq 1$ for all $t$ that forms the heart of the paper. In fact, in Garcelon, using a Gaussian random variable as the corrupted reward implies that the corruption is unbounded, thereby not adhering to the proposed adversarial attack model. In my opinion, it is quite difficult to fairly judge the performance of the proposed algorithms on adversarial attack models that differ from the underlying model proposed in Section 3.
7. Continuing on the above point, why do the authors not consider in their simulations an explicit corruption function that satisfies the requirement $\max_{x \in \mathcal{X}} |c_t(x)| \leq 1$ for all $t$? For instance, say $c_t(x)=\sin(2\pi xt/T)$, $x \in [0,1]$, $t \in \{1, \ldots, T\}$. Is incorporating such functions into the simulations challenging? If so, what exactly are the challenges? Simulating an attack model that matches the description of Section 3 exactly would significantly enhance the purpose of the simulations, lead to a fair evaluation of the proposed algorithms, and demonstrate their strength and utility.
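For concreteness, such a bounded corruption could look as follows (a minimal sketch of my suggestion; the function and variable names are my own, not from the paper):

```python
import math

def corruption(x: float, t: int, T: int) -> float:
    """Suggested explicit corruption c_t(x) = sin(2*pi*x*t/T).

    It satisfies |c_t(x)| <= 1 for every arm x in [0, 1] and every round t,
    and hence conforms to the attack model of Section 3.
    """
    return math.sin(2 * math.pi * x * t / T)

# The corruption is negligible early on (sin(0) = 0) and oscillates faster
# as t approaches the horizon T.
T = 10_000
assert all(abs(corruption(x, t, T)) <= 1.0
           for x in (0.0, 0.3, 1.0) for t in (1, T // 2, T))
```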
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The authors provide separate algorithms for the case when the adversary is weak and when the adversary is strong. While I appreciate the effort put into analysing both of these scenarios separately in the paper, I have the following question: in practice, is the algorithm (more specifically, the learner) aware of the nature of the adversary beforehand? Are there practical scenarios where the learner has prior knowledge about whether the adversary is weak/strong? I would imagine that it is very unlikely that such prior knowledge may actually be available in practice, in which case it would be reasonable for the learner to simply design an algorithm that works against the strong adversary. Can the authors comment about this (if possible also include a sentence or two on this point in the paper)?
2. This repeats point 5 under "Weaknesses": the definition of the zooming dimension in this paper differs from the one alluded to in [23] (covering the set $\lbrace x \in \mathcal{X}: r\leq \Delta(x) \leq 2r \rbrace$ rather than the $r$-optimal set $\lbrace x \in \mathcal{X}: \Delta(x) \leq r \rbrace$). Can the authors elaborate on why they chose to define the zooming dimension differently from [23], and what are the implications of this difference on their results?
3. This is more of an observation than a question. The upper bound on cumulative regret derived in Theorem 5.1 relies on lower bounding the probability of the event $\Phi$ as defined in the proof sketch. In the definition of $\Phi$, the layers whose tolerance values exceed the unknown corruption budget $C$ are considered. To me, this seems like an "artificial" way of introducing $C$ into $\Phi$ in order to derive an upper bound that depends on $C$. However, this also suggests that without bringing $C$ into the definition of $\Phi$, an upper bound on cumulative regret that does not depend on $C$ may be derived. Such an upper bound may be relevant to problems where $C$ is potentially infinite (i.e., the adversary may corrupt the rewards in an unbounded fashion), thereby broadening the scope of the current work (when $C=+\infty$, the upper bounds in the paper become degenerate). However, I envisage that the analysis of the case $C=+\infty$ could be hard. Perhaps, this could be a direction of future work.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See points 6,7 under "weaknesses".
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments on our work. We are happy to know you think our contributions are commendable and our presentation is clear. Please see our response to your concerns as follows:
**Weakness 1-3: Typos and notations:**
We are grateful for your meticulous review. We will correct the typos and clarify our notations in the revision.
**Weakness 4: Why $d_z \leq d$?**
To recall, we restate the relevant notations for a metric space $(X, D)$:
The $r$-zooming number $N_z(r)$ is the minimal number of balls of radius at most $r/16$ required to cover the $r$-optimal region $\lbrace x\in X:\Delta(x) \leq r \rbrace$. The $r$-covering number $N_c(r)$ is the minimal number of balls of radius at most $r$ required to cover $X$. The zooming dimension is $d_z = \min \lbrace q \geq 0: \exists \alpha_1>0, N_z(r) \leq \alpha_1 r^{-q}, \forall r \in (0,1] \rbrace$, and the covering dimension is $d_c = \min \lbrace q \geq 0: \exists \alpha_2>0, N_c(r) \leq \alpha_2 r^{-q}, \forall r \in (0,1] \rbrace$.
First, we know $N_z(16r)$ is no larger than $N_c(r)$ for arbitrary $r > 0$ since {$x\in X:\Delta(x) \leq 16r$} is a subset of $X$. Therefore, by the definition of the covering dimension $d_c$, there exists some $\alpha_2 >0$ s.t. $N_z(16r) \leq \alpha_2 r^{-d_c}, \forall r > 0.$ This is identical to $N_z(16r) \leq \alpha_2 16^{d_c} (16r)^{-d_c}, \forall r > 0,$ which means that
$$N_z(r) \leq \alpha_2 16^{d_c} (r)^{-d_c} = \alpha_1 (r)^{-d_c}, \forall r > 0, \alpha_1 = \alpha_2 16^{d_c}.$$
Note $16^{d_c}$ is also a constant free of $r$, and based on the definition of the zooming dimension $d_z$, it holds that $d_z$ is at most $d_c$.
In conclusion, the zooming dimension remains the same if balls of radius at most $cr$ are used for the covering, for any constant $c>0$. This fact is commonly used in the Lipschitz bandit literature and is directly illustrated in [23] (right after Theorem 1.3), where different radii are used for the covering number and the zooming number.
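As a quick numerical illustration of this scale-invariance (our own sketch, not code from the paper): on $X = [0,1]$ a ball of radius $r$ is an interval of length $2r$, so $N_c(r) = \lceil 1/(2r) \rceil \propto r^{-1}$, and rescaling the radius by a constant only changes the multiplicative constant, not the exponent $d_c = 1$:

```python
import math

def covering_number(r: float, length: float = 1.0) -> int:
    """Minimal number of balls (intervals of length 2r) covering [0, length]."""
    return math.ceil(length / (2 * r))

# d_c = 1 for an interval: halving the radius doubles the covering number.
assert [covering_number(r) for r in (0.25, 0.125, 0.0625)] == [2, 4, 8]

# Shrinking the radius by the constant factor 16 (as in the zooming number)
# inflates the covering number by at most that constant factor, leaving the
# exponent -- the dimension -- unchanged.
c = 16
assert covering_number(0.1 / c) <= c * covering_number(0.1)
```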
Thank you for your insightful question, and we will put this detailed explanation in our revision.
**Weakness 5, Question 2: Different notations about the Zooming dimension:**
First, we explain why we use the set $\lbrace x \in X: \Delta(x) \leq r \rbrace$ in our paper: we use this definition for a clear and easy-to-follow proof. Specifically, based on Lemma A.12 in the Appendix, we can show that any arm played after epoch $m$ incurs a regret of at most $16/2^m$, and hence with this definition we can bound the number of active regions at epoch $m$ by the order $O(2^{d_z m})$. The total regret incurred in that epoch can then be bounded.
Second, we can show that the zooming dimensions under these two different sets are equivalent. To keep consistent with the notations in our paper, we further denote the $r$-zooming number used in [23] as $N_z'(r)$ for the set $\lbrace x \in X: r/2 < \Delta(x) \leq r \rbrace$, and the corresponding zooming dimension for $N_z'(r)$ as $d_z'$. Naturally, it holds that $d_z' \leq d_z$ since $\lbrace x \in X: r/2 < \Delta(x) \leq r \rbrace \subset \lbrace x \in X: \Delta(x) \leq r \rbrace$, and hence it suffices to show that $d_z' \geq d_z$ holds as well.
If $d_z = 0$, it naturally holds that $d_z' = 0 = d_z$. If $d_z > 0$, note by definition there exist $\alpha_1, \alpha_2 >0$ s.t. $N_z(r) \leq \alpha_1 r^{-d_z}$, $N_z'(r) \leq \alpha_2 r^{-d_z'}$. Since $\lbrace x \in X: \Delta(x) \leq r \rbrace = \lbrace x \in X: r/2 < \Delta(x) \leq r \rbrace \bigcup \lbrace x \in X: \Delta(x) \leq r/2 \rbrace$, and a covering of each of the two sets on the right together covers their union, we have $N_z(r) \leq N_z'(r) + N_z(r/2), \forall r \in (0,1]$. This implies that
$$(1/r)^{d_z'-d_z} \geq (\alpha_1/\alpha_2) \times (1-0.5^{d_z}), \forall r \in (0,1], d_z > 0.$$
And hence we have $d_z' \geq d_z$. Conclusively, it holds that $d_z' = d_z$. Note that our definition of the zooming dimension is the same as the "near-optimality dimension" introduced in another widely cited Lipschitz bandit paper [8], where the authors also mention (in Section 5.2 and in their proposed lower bound) the equivalence between the zooming dimension used in [23] and the near-optimality dimension. We will include a remark on this topic in the revision. Thank you very much for helping improve our paper.
[8] X-armed Bandits. S. Bubeck et al., JMLR 2011.
**Weakness 6: Experiments are not consistent with $|c_t(x)| \leq 1$:**
Thank you very much for your suggestion. To be consistent with our theory, we modified both attacks in our simulations and restricted the attack volume to be at most one at each round (if the attack volume was greater than 1, we truncated it to 1). For example, if the stochastic reward is 0.8 and the Garcelon attack generates a corrupted reward of -0.5 (a corruption of -1.3), we truncate the corruption to -1 and use 0.8-1=-0.2 as the observed reward after corruption.
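In code, the truncation we applied amounts to the following (an illustrative sketch with our own variable names):

```python
def truncate_attack(stochastic_reward: float, attacked_reward: float,
                    budget: float = 1.0) -> float:
    """Clip the per-round corruption so that |c_t(x)| <= budget.

    The observed reward is the stochastic reward plus the clipped
    corruption proposed by the attack.
    """
    corruption = attacked_reward - stochastic_reward
    corruption = max(-budget, min(budget, corruption))
    return stochastic_reward + corruption

# The example above: Garcelon proposes -0.5 for a true reward of 0.8,
# a corruption of -1.3; clipped to -1, the observed reward is 0.8 - 1 = -0.2.
assert abs(truncate_attack(0.8, -0.5) - (-0.2)) < 1e-9
```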
We re-ran all six simulations in Figure 1 and report the final cumulative regrets under both the original attacks used in our paper and the modified attacks in Tables 1-3 of the PDF in the Global Rebuttal above. From the tables, we observe that the algorithms behave similarly under the truncated Oracle and Garcelon attacks, and hence we reach the same conclusion as in lines 335-344 of our paper: Zooming becomes futile, while our proposed algorithms, especially RMEL, remain robust in the contaminated environment.
**Due to the space limit and a single allowed Rebuttal, we will respond to your other insightful questions by posting new Comments in the very beginning of Discussion phase. We are sincerely grateful for your valuable comments.**
---
Rebuttal Comment 1.1:
Title: Responses to your remaining valuable questions
Comment: **Weakness 7: Why not $sin()$ attack?**
First, we'd like to explain why we chose the Oracle and Garcelon attacks in our paper: these two attacks are well studied in the previous literature, with theoretical guarantees for multi-armed bandits and linear bandits, and it has been verified that both attacks can destroy the performance of stochastic bandit algorithms in practice. As the Lipschitz bandit is an extension of the multi-armed bandit to infinitely many arms in a metric space, we naturally extend these two attacks to our setting, and we expect them to also be detrimental to stochastic Lipschitz bandits. From Figure 1 and the additional results in our response to Weakness 6 (truncated Oracle and Garcelon attacks), we can also observe the effectiveness of these two attacks, since the Zooming algorithm becomes futile and achieves linear regret under both of them. Intuitively, these two attacks are malicious since they contaminate the stochastic rewards generated from "good arms" by pushing them to the very bottom or modifying them into random noise.
Moreover, we implemented the explicit corruption function $c_t(x) = \sin(2 \pi x t/T)$ as the adversarial attack in our experiments under the three underlying functions in Figure 1, injecting the attack with probability 0.1 at each round. Tables 4-6 in the PDF in the Global Rebuttal above show the cumulative regrets of the different algorithms under the three reward functions. From the tables, we can observe that both of our algorithms, especially RMEL, still yield more robust performance under different volumes of corruption than Zooming does. We can also observe that this type of attack is not as malicious as Oracle and Garcelon, and we believe this is because the attack volume $c_t(x)$ is very small in the beginning ($\sin(0)=0$), and hence the Zooming algorithm has sufficient time to learn the underlying reward function in the early stages.
**Question 1: strong vs weak adversaries in practice:**
From the theoretical side, we study two types of adversaries in our paper to be consistent with the existing literature on stochastic bandits under adversarial corruption.
In practice, we feel we can better understand this problem through an analogy with adversarial attacks in deep learning [10]: a white-box attacker has complete knowledge of the target model before the attack, like the strong adversary in the bandit setting, who knows which arm the target algorithm is going to pull before attacking. On the other hand, a black-box attacker does not know the target model, like the weak adversary, who is unaware of the current action of the target bandit algorithm. In practice, a defender can consider defenses against either black-box or white-box attacks according to whether the learning algorithm can be revealed to the attacker. Most practical systems won't reveal the learning model to the attacker, so in practice one often considers black-box attacks. However, defenses against white-box attacks are also widely studied since they provide a stronger safety guarantee.
Following the above arguments, for the robustness of bandit algorithms, we believe which adversary to use depends on whether the bandit algorithm can be potentially revealed, and how much safety guarantee we want to have. For example, if model developers want to perform robustness testing, then the strong adversary can be used since it can evaluate the worst-case performance of a model. On the contrary, for attackers that are not able to hack into the system and observe which arm our underlying model is going to choose, we can regard them as weak adversaries instead and make corresponding defense.
**Question 2: derive an algorithm whose regret bounds are free of $C$**
Thank you very much for your insightful idea and careful review of our work. We also believe there is room for improvement regarding our Theorem 5.1, where we artificially construct the set $\Phi$, and a direct and natural extension is to study the case where $|c_t(x)|$ is not bounded. We believe a malicious attacker will try to contaminate as many rounds as possible and hence will not allocate too much budget to any single round; in other words, a clever adversary may bound $|c_t(x)|$ itself. Therefore, the unbounded case might actually be "easier". However, our analysis in Theorem 5.1 does not extend to this "easier" case.
We also believe it is interesting to study the case where $C$ is infinitely large, or $C > T$. The first natural question is how to define a reasonable metric, such as a different type of cumulative regret, since the original cumulative regret $Regret_T = T \mu(x_*) - \sum_{t=1}^T \mu(x_t)$ may be unbounded when the adversary can contaminate every round. Due to the time limit, we leave this as interesting future work.
---
Rebuttal Comment 1.2:
Title: Table 1-6
Comment: For your convenience, in addition to the PDF in the global Rebuttal, we also put Table 1-6 w.r.t. our responses to your valuable questions here:
**Table 1**: Triangle (Figure 1, 4)
| Algorithm | Budget (C) | Oracle | Garcelon | Modified Oracle | Modified Garcelon |
|--------------------|------------|----------|----------|------------------|-------------------|
| Zooming | 0 | 366.58 | 366.58 | 366.58 | 366.58 |
| Zooming | 3000 | 10883.51 | 10660.17 | 10824.85 | 10530.29 |
| Zooming | 4500 | 11153.78 | 11487.59 | 11140.29 | 11491.21 |
| RMEL | 0 | 512.46 | 512.46 | 512.46 | 512.46 |
| RMEL | 3000 | 921.95 | 504.78 | 918.29 | 512.10 |
| RMEL | 4500 | 928.27 | 1542.17 | 925.91 | 1540.89 |
| BoB Robust Zooming | 0 | 461.16 | 461.16 | 461.16 | 461.16 |
| BoB Robust Zooming | 3000 | 495.06 | 531.37 | 499.17 | 526.41 |
| BoB Robust Zooming | 4500 | 1323.97 | 736.85 | 1330.02 | 739.61 |
**Table 2**: Sine (Figure 2, 5)
| Algorithm | Budget (C) | Oracle | Garcelon | Modified Oracle | Modified Garcelon |
|--------------------|------------|----------|----------|------------------|-------------------|
| Zooming | 0 | 315.94 | 315.94 | 315.94 | 315.94 |
| Zooming | 3000 | 5289.65 | 3174.26 | 5284.19 | 3169.01 |
| Zooming | 4500 | 5720.30 | 3174.29 | 5710.35 | 3178.96 |
| RMEL | 0 | 289.86 | 289.86 | 289.86 | 289.86 |
| RMEL | 3000 | 442.66 | 289.29 | 438.17 | 294.20 |
| RMEL | 4500 | 862.90 | 332.71 | 855.38 | 335.17 |
| BoB Robust Zooming | 0 | 435.44 | 435.44 | 435.44 | 435.44 |
| BoB Robust Zooming | 3000 | 414.54 | 746.96 | 410.92 | 748.60 |
| BoB Robust Zooming | 4500 | 1887.35 | 1148.09 | 1875.11 | 1158.62 |
**Table 3**: Two Dim (Figure 3, 6)
| Algorithm | Budget (C) | Oracle | Garcelon | Modified Oracle | Modified Garcelon |
|--------------------|------------|---------|----------|------------------|-------------------|
| Zooming | 0 | 3248.54 | 3248.54 | 3248.54 | 3248.54 |
| Zooming | 3000 | 8730.73 | 8149.79 | 8723.18 | 8153.53 |
| Zooming | 4500 | 9496.83 | 13672.00 | 9496.83 | 13672.00 |
| RMEL | 0 | 2589.32 | 2589.32 | 2589.32 | 2589.32 |
| RMEL | 3000 | 5660.10 | 2590.77 | 5665.44 | 2587.09 |
| RMEL | 4500 | 6265.09 | 2872.64 | 6274.96 | 2869.74 |
| BoB Robust Zooming | 0 | 3831.94 | 3831.94 | 3831.94 | 3831.94 |
| BoB Robust Zooming | 3000 | 6310.29 | 4217.74 | 6312.19 | 4222.60 |
| BoB Robust Zooming | 4500 | 6932.09 | 4380.19 | 6928.63 | 4388.97 |
**Table 4**: Triangle:
| Algorithm | Budget (C) | Regret |
|--------------------|------------|--------|
| Zooming | 0 | 366.58 |
| Zooming | 3000 | 588.02 |
| Zooming | 4500 | 1499.11 |
| RMEL | 0 | 512.46 |
| RMEL | 3000 | 508.94 |
| RMEL | 4500 | 642.17 |
| BoB Robust Zooming | 0 | 461.16 |
| BoB Robust Zooming | 3000 | 511.14 |
| BoB Robust Zooming | 4500 | 642.08 |
**Table 5**: Sine:
| Algorithm | Budget (C) | Regret |
|--------------------|------------|--------|
| Zooming | 0 | 315.94 |
| Zooming | 3000 | 539.62 |
| Zooming | 4500 | 1072.21 |
| RMEL | 0 | 289.86 |
| RMEL | 3000 | 284.11 |
| RMEL | 4500 | 296.50 |
| BoB Robust Zooming | 0 | 435.44 |
| BoB Robust Zooming | 3000 | 455.72 |
| BoB Robust Zooming | 4500 | 531.13 |
**Table 6**: Two dim:
| Algorithm | Budget (C) | Regret |
|--------------------|------------|---------|
| Zooming | 0 | 3248.54 |
| Zooming | 3000 | 4412.59 |
| Zooming | 4500 | 5983.09 |
| RMEL | 0 | 2589.32 |
| RMEL | 3000 | 2594.16 |
| RMEL | 4500 | 2670.15 |
| BoB Robust Zooming | 0 | 3831.94 |
| BoB Robust Zooming | 3000 | 3900.31 |
| BoB Robust Zooming | 4500 | 4198.11 |
---
Rebuttal Comment 1.3:
Title: Response to authors' rebuttal
Comment: I thank the authors for their detailed response to my comments and for carrying out additional experiments heeding my suggestions.
Indeed, the new experiments, while conforming to the system model outlined in Section 3, affirm that the algorithms proposed in the paper have superior performance over the classical Zooming algorithm.
I thank the authors for providing a justification for why $d_z$ is independent of the "radius" used in the definition of the zooming number.
In connection with a proof of this fact, I only have the following minor suggestion for the authors.
The authors note that $(1/r)^{d_z^\prime-d_z} \geq (\alpha_1/\alpha_2) \times (1-0.5^{d_z}), \forall r \in (0,1], d_z > 0$, and use this to conclude $d_z^\prime \geq d_z$. However, I do not readily see how their conclusion follows from the preceding relation.
In my opinion, a more straightforward way to arrive at the authors' conclusion is to note that
$$
N_z(r) \leq N_z'(r) + N_z(r/2) \leq \alpha_2 r^{-d_z^\prime} + \alpha_1 (r/2)^{-d_z} \leq (\alpha_2 + \alpha_1 \, 2^{d_z}) r^{-d_z^\prime} \quad \forall r >0,
$$
where the last inequality follows because $d_z \geq d_z^\prime$ trivially. From the inequality above, noting that $\alpha_2 + \alpha_1 \, 2^{d_z}$ is independent of $r$, we get $d_z \leq d_z^\prime$.
---
In light of the authors' detailed rebuttal and the results of the new experiments, I am willing to increase my score from 6 to 7.
---
Reply to Comment 1.3.1:
Title: Thank you for your careful review and valuable suggestion
Comment: We sincerely value the time you've dedicated to reviewing our work and deeply appreciate your thoughtful suggestions, which have significantly enhanced the quality of our paper. | Summary: This paper studies Lipschitz bandits with adversarial corruptions, where the reward of the pulled arm can be maliciously corrupted by an adversary. The authors consider both weak adversary and strong adversary and present robust algorithms for each setting. Under the weak adversary setting, the paper proposes an elimination-based algorithm and derives a regret bound with nearly optimal dependence on the time horizon. For strong adversary, the paper shows that a simple modification of the Zooming algorithm can attain sub-linear regret when the corruption budget is known in advance and applies several model selection methods to handle unknown corruption budget. The paper also provide numerical results to demonstrate the advantage of the proposed algorithm in corrupted scenarios.
Strengths: The paper is the first to study Lipschitz bandits with adversarial corruptions and proposes several algorithms with both sub-linear regret bounds and promising empirical performance.
Weaknesses: 1. For weak adversary, the regret bound of the proposed algorithm has a multiplicative dependence on the corruption budget, which is unsatisfactory and an additive dependence is desired.
2. For strong adversary, the proposed algorithm is mainly a combination of existing methods with little novelty. The derived regret bounds are also weak as they depend on the covering dimension rather than the zooming dimension.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In the experiment with two dim mean function under oracle attack, why the performance of RMEL is better than Zooming?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed the limitations of the proposed algorithms.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable questions, and please see our responses to your concerns below:
**Weakness 1: For weak adversaries, the regret bound of the proposed algorithm has a multiplicative dependence on C, an additive dependence is desired.**
We also state this point as a limitation of our paper in lines 243-245. We believe it is very challenging to design an algorithm whose regret bound has an additive dependence on the corruption budget $C$ for Lipschitz bandits. For the stochastic linear bandit, whose regret is $d \sqrt{T}$, the regret under corruption budget $C$ can be controlled by $d \sqrt{T} + dC$. However, unlike the linear bandit, the regret of the stochastic Lipschitz bandit is $T^{(d_z+1)/(d_z+2)}$, where the zooming dimension appears in the exponent of $T$. Therefore, it is naturally very challenging to separate the zooming dimension $d_z$ and $T$ into two terms in the regret analysis of the corrupted Lipschitz bandit, and whether it is possible to propose an algorithm with an additive dependence on $C$ in a separate term free of $T$ is still unclear.
We'd like to re-emphasize that the regret analysis of our RMEL algorithm is highly non-trivial. As we mention in lines 203-207, it is difficult to apply the elimination idea to the Lipschitz bandit under corruption since there are potentially three sources of error. Besides, both the zooming dimension $d_z$ and the corruption budget $C$ are unknown to the learner, yet our RMEL adaptively attains a regret bound depending on these two terms, and achieves the optimal regret bound in the uncorrupted case up to some logarithmic terms (line 241).
**Weakness 2: For strong adversary, the proposed algorithm is mainly a combination of existing algorithms. Regret depends on the covering dimension rather than the zooming dimension.**
We present a line of algorithms under the strong adversary and unknown budget $C$ for the completeness of our work.
For the strong adversary, we first studied the case when the corruption budget $C$ is known, and propose an algorithm whose regret bound matches the lower bound we deduce in Theorem 4.2, which implies that our proposed algorithm is minimax optimal. For the unknown corruption budget case, we developed the following lower bound under the suggestion from Reviewer DQs2:
Theorem: For any algorithm, when there is no corruption, we denote $R^0_T$ as the upper bound of cumulative regret in T rounds under our problem setting described in Section 3, i.e. $Regret_T \leq R^0_T$ with high probability, and it holds that $R^0_T = o(T)$. Then under the strong adversary and unknown attacking budget $C$, there exists a problem instance on which this algorithm will incur linear regret $\Omega(T)$ with probability at least $0.5$, if $C = \Omega(R^0_T/4^{d_z}) = \Omega(R^0_T)$.
And we will put this theorem and its proof in the next revision. The main contribution of our work is to propose lower bounds for different cases and the RMEL algorithm, which works efficiently in theory and in practice. For completeness, we introduce two algorithms under the strong adversary and unknown budget setting. Introducing algorithms with better regret bounds under this setting will be a challenging future direction.
**Question: In experiment with two dim mean function under oracle attack, why RMEL is better than Zooming?**
We are not sure if we fully understand your question (we apologize for that), and hence we offer two answers based on different interpretations:
Why RMEL is better than Zooming under corruptions:
The Zooming algorithm is designed for the purely stochastic Lipschitz bandit setting, and will become futile in both theory and practice if the stochastic rewards are contaminated by adversaries. This can be observed by all the experimental results in Figure 1, where the Zooming algorithm suffers from linear cumulative regret under a mild volume of attacks. On the contrary, our RMEL algorithm is designed to defend against the corruption from adversaries in the Lipschitz bandit problem, and can achieve a decent sub-linear regret bound that is adaptive to the attacking budget $C$. From Figure 1, we can see that RMEL yields very robust results under various scenarios with different volumes of attacks.
Why RMEL is better than Zooming without corruptions:
Note that, in theory, our RMEL attains the same optimal regret bound as Zooming up to some logarithmic terms when there is no corruption, as we mention in lines 240-242 of our paper. In other words, the regret bound of our RMEL is adaptive and no worse than that of the Zooming algorithm under no corruption, up to some log terms. Therefore, we can expect RMEL and Zooming to perform similarly when there is no attack, which coincides with our experimental results: in the non-corrupted case, Zooming is better in two settings (triangle, sine), while RMEL outperforms Zooming in one setting (two dim).
Thank you very much for your valuable insights. Please let us know if our response has decently resolved your concern and improved your opinion of our work. And we are more than happy to take any additional questions from you.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. It addresses my concerns and I will increase the score accordingly.
---
Reply to Comment 1.1.1:
Title: Thank you for your careful review
Comment: We sincerely appreciate your comprehensive feedback and valuable insights on our study. We are delighted to engage in further discussions with you to address any concerns and enhance your perspective on our work. | Rebuttal 1:
Rebuttal: We sincerely appreciate the time and effort you dedicated to evaluating our manuscript, and we value the insights provided to enhance the quality of our work. We are happy to know that you find our work studies an essential and timely topic, with solid theoretical analysis and promising performance in practice.
The Attached PDF contains the tables used in our Rebuttal.
Specifically,
Tables 1-3 are for Reviewer ecqm (Weakness #6), where we re-ran our experiments in Figure 1, restricting the corruption values from both attacks to be at most $1$ at each round.
Tables 4-6 are for Reviewer ecqm (Weakness #7), where we implement the explicit corruption function $c_t(x) = \sin(2 \pi x t/T)$ as the adversarial attack in our experiments under the three underlying functions in Figure 1.
Tables 7-9 are for Reviewer 4cuU (Weakness #2), where we show the convergence of the regrets of our algorithms in the long run under the two dim mean reward functions.
Tables 10-11 are for Reviewer 4cuU (Question #3), where we show the promising performance of the Robust Zooming algorithm given that the value of $C$ is revealed to the agent.
Pdf: /pdf/a0feca8f82c4eebf9afabf16294bf81fe83f2189.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper studied robust Lipschitz bandits under two types of corruptions - a weak adversary who perturbs the reward function before observing the selected action; and a strong adversary who perturbs the instantiated reward of the selected action. When the attack budget is known, the paper proposed to use enlarged confidence budget, and the regret upper bound of the corresponding algorithms attains the lower bound derived in the paper. When the attack budget is unknown, the paper developed two robust algorithms targeting the weak and strong adversary separately. For the weak adversary, both upper and lower regret bound are derived. There is a gap between these two bounds. For the strong adversary, the paper derived upper bound of the regret. Experiments on synthetic datasets demonstrate the effectiveness of the proposed robust algorithms.
Strengths: (1). The topic of studying robustness in bandits is a timely topic. This extends prior works on robust bandit to the Lipschitz bandit scenario, which pushes the frontier in this domain.
(2). The work is relatively complete - two types of adversaries are considered, and the attack budget are defined accordingly; two threat models are considered, including known and unknown attack budget. Besides that, both upper and lower regret bounds are derived, which shows that the robust algorithm designed in this paper is close to optimal.
Weaknesses: I don't see major weaknesses with the paper. A minor weakness is that for the strong adversary and the unknown attack budget setting, the paper is missing a regret lower bound.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Can the authors provide some discussions on whether it's possible to derive regret lower bound for the strong adversary and unknown attack budget setting?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments on our work. We are happy to know you find our work pushes the frontier on a timely topic and is relatively complete. Please see our response to your question as follows:
**Can the authors provide some discussion on whether it is possible to derive a regret lower bound for the strong adversary and unknown attack budget setting?**
Thank you for your constructive question. Inspired by Theorem 4 in [1], which claims a regret lower bound for linear bandits in two-dimensional space with two arms (d=2, k=2) under a strong adversary and unknown budget C, in the past few days we successfully extended their result to the continuum-armed Lipschitz bandit setting and deduced the following theorem:
Theorem: For any algorithm, when there is no corruption, we denote $R^0_T$ as the upper bound of cumulative regret in T rounds under our problem setting described in Section 3, i.e. $Regret_T \leq R^0_T$ with high probability, and it holds that $R^0_T = o(T)$. Then under the strong adversary and unknown attacking budget $C$, there exists a problem instance on which this algorithm will incur linear regret $\Omega(T)$ with probability at least $0.5$, if $C = \Omega(R^0_T/4^{d_z}) = \Omega(R^0_T)$.
Proof sketch: we use a similar proof structure as in the proofs of our Theorem 4.2 and Theorem 5.2. Specifically, we first study the case $d_z = 0$ and use the problem instance $A_1$ where the Lipschitz function is $f(x) = 0.25-|x-0.25|$ if $x \in [0,0.5]$ and $f(x) = 0$ otherwise on the space $([0,1], |\cdot|)$, and we assume there is no random noise. For any algorithm with $Regret_T \leq R^0_T$, by Markov's inequality we know that $P(\text{the number of pulls in }(0.5,1] \leq 8 R^0_T) \geq 0.5$. We introduce a new problem instance $A_2$: $g(x) = 0.25-|x-0.25|$ if $x \in [0,0.5]$ and $g(x) = 0.5-|x-1|$ if $x \in (0.5,1]$, and we let a strong adversary attack as follows: whenever an arm in $(0.5,1]$ is pulled, the adversary changes the reward to 0 until the total corruption $C = 4 R^0_T = \Omega(R^0_T)$ is used up. Then the agent cannot tell the difference between $A_1$ and $A_2$ until the number of times arms in $(0.5,1]$ are selected reaches at least $C/0.5 = 8 R^0_T$. Therefore, with probability at least $0.5$, the cumulative regret is at least $0.25(T - 8 R^0_T) = \Omega(T)$.
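The Markov step in this sketch can be spelled out as follows, under the (hedged) reading that the no-corruption regret bound holds in expectation. Under $A_1$, every pull in $(0.5,1]$ incurs instantaneous regret $0.25$, so writing $N$ for the number of such pulls,

```latex
\mathbb{E}[0.25\,N] \le \mathbb{E}[\mathrm{Regret}_T] \le R^0_T
\;\Longrightarrow\; \mathbb{E}[N] \le 4R^0_T
\;\Longrightarrow\; \Pr\!\left(N \ge 8R^0_T\right) \le \frac{\mathbb{E}[N]}{8R^0_T} \le \frac{1}{2},
```

which gives $\Pr(N \le 8R^0_T) \ge 0.5$, matching the statement in the sketch.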
For the case $d_z > 0$, we can similarly construct the instance in the space $([0,1]^{d=\lceil 2 d_z \rceil}, ||\cdot||)$ with the norm $||\cdot|| = ||\cdot||_{\infty}$ and the function $f_1(x) = 4^{-\frac{d}{d-d_z}} - ||x-(0.25,\dots,0.25)||^{\frac{d}{d-d_z}}$ for $x_i \in [0,0.5]$. We can then use the pigeonhole principle to argue that, with probability at least $1/2$, the number of times arms in some region are pulled is upper bounded by $O(R^0_T/2^d)$. By constructing a new instance whose reward function takes its maximum value in this region, we can prove our theorem using a similar argument as above.
We know the optimal worst-case regret in the stochastic Lipschitz bandit problem is of order $R^0_T = O(\ln(T)^{1/(d_z+2)} T^{(d_z+1)/(d_z+2)}) = \tilde O(T^{(d_z+1)/(d_z+2)})$, and hence this theorem suggests that any algorithm (e.g., the Zooming algorithm) that attains this regret bound under the non-corruption setting must suffer linear regret if the unknown corruption comes from the strong adversary and is of order $\Omega(\ln(T)^{1/(d_z+2)} T^{(d_z+1)/(d_z+2)})$.
We will put this Theorem and its detailed proof with explanations in the revision. Thank you very much for helping improve our work.
[1] Stochastic Linear Bandits Robust to Adversarial Attacks, I Bogunovic et al., AISTATS 2021. | null | null | null | null | null | null |
ComSL: A Composite Speech-Language Model for End-to-End Speech-to-Text Translation | Accept (poster) | Summary: The authors propose a speech-text pretrained model for spoken language tasks that leverages existing pre-trained speech and language models. Modality mapping/alignment is based on a concatenation of paired speech and text (no need for word-level alignment). The model is evaluated on a speech-to-text translation (S2TT) task (on the CoVoST2 dataset) and slightly outperforms previous models such as Whisper for S2TT.
Strengths: -Good performance reported on CoVoST speech-to-text translation tasks
-An approach that leverages existing pre-trained models (Whisper speech encoder; mBART text2text) to build efficient ST systems
Weaknesses: -The difference/positioning relative to previous speech-text pretrained models should be improved: in what aspects does your method really differ from SpeechT5, SLAM, etc.?
-The improvement over Whisper Large when the model is trained without pseudo ST data is tiny (and maybe not significant), so does the improvement really come from the new approach/losses proposed, or does it come only from the use of pseudo ST data?
-Only the Whisper speech encoder is used; why? The authors could also have chosen the multilingual wav2vec (XLS-R) or HuBERT speech encoders
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: -In what aspects does your method really differ from SpeechT5, SLAM, etc.?
-"Speech Transformer blocks are initialized with the encoder parameters of Whisper model": why use the Whisper speech encoder only and not XLS-R (multilingual wav2vec) or HuBERT?
-"Different from the previous speech-text alignment approaches that rely on externally forced-alignment methods to determine word or other unit boundaries in speech sequences" => Do SpeechT5 & SLAM need external word-level forced alignment? (I don't think so)
-(Tab. 1) Does the improvement really come from the new approach/losses proposed, or does it come only from the use of pseudo ST data?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitation mentioned are the use of mBART encoder decoder instead of large decoder only models (with bigger architectures)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful questions. We have provided our responses right after each question.
1. In what aspect does your method really differ from SpeechT5, SLAM, etc ?
Answers:
Our methods differ from those used in SpeechT5, SLAM, etc. in the following ways:
A) Cross-modality learning (CML) is employed at a different training stage. In previous works such as SpeechT5 and SLAM, CML is implemented in the pre-training stage, while ours is used in the fine-tuning stage together with the fine-tuning task losses.
B) Different CML methods/objectives are utilized. SpeechT5's technique for joint pre-training uses vector-quantized embeddings as a bridge to align speech and text representations through a shared codebook, mixing up contextual representations and quantized latent representations to guide the quantizer to use cross-modal information. SLAM also uses a joint input of speech and text, but it predicts masked text or speech spans with BERT or w2v-BERT on top of the encoder.
In contrast, our approach does not require quantization for speech input or an MLM loss for encoders. Therefore, our approach is completely different from SpeechT5 and SLAM. Our approach is more flexible, requires less data as it is applied during the fine-tuning stage, and can be used for adaptation between any composite transformer-based speech and language model.
2. "Speech Transformer blocks are initialized with the encoder parameters of Whisper model" why using Whisper speech encoder only and not XLS-R (multilingual wav2vec) or HuBERT ?
Answers: We believe that our methods are also applicable to XLS-R or HuBERT. HuBERT was pre-trained on Libri-light (60k hours), a collection of unlabeled spoken English audio. As a result, HuBERT is not a multi-lingual model and would require much more multi-lingual fine-tuning data to produce decent translation performance. XLS-R, on the other hand, was pre-trained on massive multi-lingual data in a self-supervised manner. However, Whisper has recently prevailed in multi-lingual tasks, since it was trained with much more supervised data and achieves better performance. We chose to fine-tune Whisper as a very strong baseline and explore the potential of our approach. Considering that Whisper uses a standard transformer architecture, we believe it is representative of transformer-based speech encoders. Given adequate computational resources, we will try HuBERT or XLS-R as well in the future.
3. "Different from the previous speech-text alignment approaches that rely on externally forced-alignment methods to determine word or other unit boundaries in speech sequences," => Do SpeechT5 & SLAM need external word-level forced-alignment (i don't think so)
Answers: You are correct. SpeechT5 and SLAM do not require external word-level forced alignment. We will rephrase this sentence to make it clear that it refers to the MML loss in USM and the word-aligned contrastive loss in WACO, which are used for comparisons in Table 3.
4. (tab 1) does the improvement really comes from the new approach/losses proposed or does it come only from the use of pseudo ST data?
Answers: The improvement comes from both the new approach/losses and the use of pseudo ST data. Please refer to the attached PDF file in the “global” response to all reviewers jointly for the results of significance tests, which show that, regardless of whether pseudo ST data is added, our ComSL Medium/Large models significantly outperform their corresponding baseline Medium/Large models (i.e., finetuned Whisper) on high-resource and mid-resource languages, with a significance level of p<0.01. In addition, we did not use any pseudo ST data for high-resource languages. For more information on the statistics of the pseudo data, please refer to the appendix.
---
Rebuttal Comment 1.1:
Title: updated score after rebuttal
Comment: hi
tks for your detailed answers to my questions/comments
i updated my score from 5 to 6 (Weak Accept)
best
---
Reply to Comment 1.1.1:
Comment: We are delighted that our response addressed your concerns and are grateful for your insightful feedback and the updated score. | Summary: This paper proposes a composite speech-language model for speech-to-text (ComSL) translation. ComSL first leverages existing pre-trained models for initialization, including Whisper speech recognition model and mBART machine translation model. And then proposed several modality alignment methods that do not require additional tools. Extensive experiments on CoVoST2 demonstrate the superiority of the proposed method, which also achieves a new SOTA performance.
Strengths: 1. This paper provides insight into designing large-scale spoken language models.
2. This paper proposes several methods based on concatenated embeddings to align the representation of speech and text.
3. The proposed method outperforms previous work and achieves a new SOTA performance.
Weaknesses: 1. The training process may be unstable. As shown in the appendix, the multiple losses are combined with various weights from 0.1 to 0.8. The authors do not explain how these hyperparameters are determined or how large their impact is. A solid system is expected.
2. Although this paper achieves a new SOTA performance, it mainly relies on the combination of several common techniques, so the contribution is limited. Moreover, to achieve SOTA performance, ComSL is trained with pseudo ST data, resulting in unfair comparisons.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Not applicable
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful questions. We have provided our responses right after each question.
1. The training process may be unstable. As shown in the appendix, multiple losses are distributed by various weights from 0.1 to 0.8. The authors do not explain how these hyperparameters are determined, and how much the impact is. A solid system is expected.
Answers: Thank you for bringing this to our attention. We will make sure to clarify it in the revised version of our paper. Unlike training from scratch, our ComSL model weights were initialized with pre-trained speech and language models, which greatly stabilized the training process. Although we employed various weights for the multiple losses, we set most weights empirically or referred to values used in other studies. Whether a weight was set relatively high or low depended on the contribution of the corresponding loss. For example, we set a high weight for DDM since the performance gap between AST and MT was large, and relatively smaller weights for other tasks like MT since mBART had already been fine-tuned before being combined with Whisper. We also tried tuning the weights within a small range, such as DDM from 0.8 to 1, and observed that the impact on performance was marginal. We did not perform brute-force hyperparameter tuning, as it was computationally unaffordable.
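As a purely illustrative sketch of the weighting scheme described here, the combined objective is just a fixed weighted sum of per-task losses. The loss names and weight values below are hypothetical placeholders, not the actual hyperparameters from the appendix:

```python
# Illustrative sketch (not the authors' code): combining multiple task
# losses with fixed, empirically chosen weights. All names and values
# here are hypothetical placeholders.

def total_loss(losses: dict, weights: dict) -> float:
    """Weighted sum of per-task losses; tasks absent from `weights` get weight 0."""
    return sum(weights.get(task, 0.0) * value for task, value in losses.items())

# Example: a higher weight on DDM (large AST-vs-MT gap) and a low weight
# on MT (mBART already fine-tuned), per the rationale in the response.
weights = {"ast": 1.0, "asr": 0.5, "mt": 0.1, "ddm": 0.8, "cml": 0.3}
losses = {"ast": 2.0, "asr": 1.0, "mt": 0.5, "ddm": 1.5, "cml": 0.8}
print(total_loss(losses, weights))  # ~3.99
```

Tuning a single weight then amounts to perturbing one entry of `weights` and re-training, which is what the small-range DDM sweep above corresponds to.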
2. Although this paper achieves a new SOTA performance, it mainly relies on the combination of several common techniques. Therefore, the contribution is limited. To achieve SOTA performance, ComSL is trained with pseudo ST data, resulting in unfair comparisons.
Answers:
a) Our contributions are outlined in the final paragraph of the introduction section. With the emergence of pre-trained speech and language models, we proposed an approach that combines two publicly available models and fine-tunes them directly for downstream tasks, along with cross-modality learning, which is typically used during the pre-training stage. We hope that our approach will offer a new avenue for research in academia, where large model pre-training is not feasible due to limited computational resources.
b) The improvement of ComSL comes from both the new approach/losses and the use of pseudo ST data. Please refer to the attached PDF file in the "global" response to all reviewers for the results of significance tests, which show that, regardless of whether pseudo ST data is added, our ComSL Medium/Large models significantly outperform their corresponding baseline Medium/Large models (i.e., finetuned Whisper) on high-resource and mid-resource languages, with a significance level of p<0.01. Nowadays, it is very challenging to conduct apples-to-apples comparisons due to differences in the amount of training data and in model architecture/size; most leaderboards/benchmarks do not require listing the training data. Additionally, the pseudo ST data we used was extracted from the Common Voice corpus, which was also included in USM pre-training in the form of self-supervised learning, according to the data description in the USM paper.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer 7Di5,
We greatly appreciate your time reviewing our paper. We hope that our rebuttal has addressed your concerns. If you have any further questions or comments, we would be more than happy to provide clarification.
Thanks!
Authors
---
Rebuttal Comment 1.2:
Comment: I appreciate the detailed explanations. I will update the score from 5 to 6. | Summary: This work presents a speech-language model built from both speech-only and language-only pretrained models. By compositing pre-trained models from 2 modalities, the authors show that a data-efficiency for spoken language tasks can be achieved. In particular, the authors proposed a few cross-modality loss functions which are designed to build a strong relationship between 2 modalities. The authors demonstrated that their method is able to achieve a new sota BLEU on the CoVoST2 evaluation task.
Strengths: This paper is well written. It explains the motivations and methods clearly. Besides great overall results, it also presents a detailed analysis including ablation study and in-detail examination of usage of the cross-modality.
Weaknesses: - It is a bit disappointing to see from the results that the cross-modality tasks, which are the main contribution of this work, do not give a significant difference in terms of BLEU score (Table 2, 29.40 --> 29.69 due to the CML loss). I believe the follow-up improvement from pseudo data can be obtained without the CML loss.
- It would be better if the authors could clearly state how many parameters are in the pre-trained ASR encoder, the LM (mBART), and the adapters, and which parts are finetuned or fixed during training.
- Though the model is trained with both ASR and AST losses, the authors only evaluate the model on the AST/MT tasks. It would be great if the authors could provide a benchmark on ASR as well.
- It is not immediately clear to me whether the comparison of this work to USM/Whisper is fair. It seems that both Whisper and USM do not finetune on CoVoST2 tasks, while they both use CoVoST2 tasks as an out-of-domain evaluation. On the other hand, Whisper is able to perform both ASR and AST with the same model using just a different prompt, while the authors did not reveal their ASR performance.
There is also a recent paper relevant to this work:
Rubenstein, Paul K., et al. "AudioPaLM: A Large Language Model That Can Speak and Listen." arXiv preprint arXiv:2306.12925 (2023).
Understandably, this paper appeared after the NeurIPS submission deadline. The authors may consider citing it since it achieves a new SOTA on CoVoST2.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - After finetuning on CoVoST2, I think the model can still perform ASR? It would be great if the authors could clearly state that (or explain why it cannot).
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: I think the authors have addressed limitations adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful questions. We have provided our responses right after each question.
1. After the finetuning on CoVoST2, I think the model can still perform ASR ? But it would be great that the authors to clearly state that (or explain why it cannot).
Answers: You are correct. ComSL can still perform ASR after finetuning on CoVoST2. Due to space limitations in the full paper, we reported the ASR performance in the appendix enclosed in the supplementary materials, along with the ST and MT tasks for each of the 21 languages in the CoVoST 2 evaluation set. We also included statistics on the pseudo data and experimental details in the appendix. If the lack of an evaluation of the ASR task affected your score on our paper, we hope you will consider adjusting it.
Table 6 in the appendix shows that ComSL outperforms finetuned Whisper on the ASR task for some languages, but its overall performance is slightly worse. This phenomenon has also been observed in other studies, and as a result, pretrained models are often finetuned for the ASR task only (instead of multi-task such as both ASR and AST tasks) to achieve better performance, as was done on USM (Reference [44]).
2. It is a bit disappointing to see from the result the cross modality tasks, which is the main contribution in this work, does not give significant difference in terms of BLEU score (Table 2, 29.40 --> 29.69 due to CML loss). I believe the follow up improvement from pseudo data can be obtained without CML loss.
Answers:
We have done a significance test comparing with/without CML, which shows that CML has a significant impact on performance for high-resource and mid-resource languages (significance level p<0.05), but not for low-resource languages. During training, multiple tasks, including AST, ASR, MT, and CML, are conducted; the shared model parameters are updated not only by the speech tasks but also by the text tasks. This reduces the modality gap between speech and text to a certain extent, and as a result the CML loss might not be as significant as expected, especially after the DDM (AST and MT distribution matching) loss was employed as a complementary method for reducing the modality gap. In addition, we suspect that the limited training data of low-resource languages might be causing issues, such as limited vocabulary coverage, which could affect the efficiency of our proposed methods.
Our experimental results show that the CML loss can significantly improve performance on high-resource languages, while adding more pseudo-ST data after applying the CML loss does not provide additional benefits. However, we have not yet tested the alternative approach of adding pseudo-ST data first and then applying the CML loss on low-resource languages. We will consider conducting this comparison in future research.
3. It would be better if the authors can clearly state how many parameters are in the pre-trained ASR encoder, the LM (mBART) and in the adapters; and which parts are finetuned or fixed during training.
Answers: The parameter sizes of the different components are as follows: 1) Whisper medium encoder: 0.3B; 2) Whisper large encoder: 0.7B; 3) mBART (encoder+decoder): 0.6B; 4) adapter: 30M. During the first third of the training steps, we fixed only the parameters of the pre-trained ASR encoder (i.e., the Whisper encoder); all other parameters were fine-tuned throughout.
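The freezing schedule described here (speech encoder fixed for the first third of training, everything else trainable) can be sketched as follows; the function and parameter names are hypothetical, not the authors' code:

```python
# Hypothetical sketch of the freezing schedule from the rebuttal: freeze
# the speech-encoder parameters for the first third of training, keep
# everything else trainable. Dicts stand in for framework parameter objects.

def set_trainable(step: int, total_steps: int, params: dict) -> dict:
    """Mark each named parameter group trainable or frozen for this step."""
    freeze_encoder = step < total_steps / 3
    for name, p in params.items():
        p["trainable"] = not (freeze_encoder and name.startswith("speech_encoder"))
    return params

params = {"speech_encoder.layer0": {"trainable": True},
          "mbart.decoder": {"trainable": True}}
set_trainable(step=10, total_steps=90, params=params)
print(params["speech_encoder.layer0"]["trainable"],
      params["mbart.decoder"]["trainable"])  # False True
```

After step `total_steps / 3`, the same call flips the encoder back to trainable, matching the "first third" schedule above.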
4. Though the model is trained with both ASR and AST loss, the authors only evaluate the model on the AST / MT tasks. It would be great if the author can provide a benchmark on ASR as well.
The answer is the same as that to the first question.
5. It is not immediately clear to me whether the comparison of this work to USM/Whisper is fair. It seems that both Whisper and USM do not finetune on CoVoST2 tasks, while they both use CoVoST2 tasks as an out-of-domain evaluation. On the other hand, Whisper is able to perform both ASR and AST with the same model using just a different prompt, while the authors did not reveal their ASR performance.
Answers: Actually, the BLEU scores for Whisper and USM reported in Table 1 were achieved by finetuning the models on the CoVoST 2 training set. We finetuned the Whisper models and achieved BLEU scores of 28.6 and 29.7 for Whisper medium and Whisper large, respectively, which are higher than those reported in the original Whisper paper (Reference [26]). As for the USM model, since it is not a public model, we cited the BLEU scores from its paper (Reference [44]). Section 3.3.2 of the USM paper indicates that the USM model evaluated on CoVoST2 was finetuned using the CoVoST2 training set and text translation data such as WMT or TED talks as available. Therefore, our comparison is fair in the context of in-domain finetuning/evaluation. Additionally, we reported the ASR performance in the appendix enclosed in the supplementary materials.
6. There is also a recent paper relevant to this work: Rubenstein, Paul K., et al. "AudioPaLM: A Large Language Model That Can Speak and Listen." arXiv preprint arXiv:2306.12925 (2023). Understandably, this paper appeared after the NeurIPS submission deadline. The authors may consider citing it since it achieves a new SOTA on CoVoST2.
Answers: Thank you for bringing this to our attention. We will be sure to cite this most recent and relevant work on leveraging LLMs to further improve performance in our paper.
---
Rebuttal Comment 1.1:
Comment: Thanks very much for the detailed response and explanation. I am satisfied with the author's response.
---
Reply to Comment 1.1.1:
Comment: We are joyed that our response has successfully addressed your concerns and appreciative of your valuable feedback. | Summary: With the goal to improve speech translation task, this work, leverages a multi-task training approach optimizing weighted sum of ASR, ST, MT and cross-modality learning (CML) objectives. Using mBART encoder and decoder Transformer blocks, the CML objective is purposed to better align speech and text modalities. CML involves a masked token prediction (MTP), speech to text mapping (STM), and encoder representation matching (ERM) objectives.
Overall, the proposed arc (ComSL) includes a speech and textual transformer blocks, and a two layered bride adapter module. Before the multi-task step, pre-trained models (mBART and Whisper) are fine-tuned using training data for 50 languages, mainly involving CoVoST2 data set. Results are provided for low, medium and high-resource languages show ComSL's performance improvements over recently proposed multi-modal (text and speech) models such as USM and Whisper.
Strengths: - This work is well motivated, suggesting cross-modality learning objectives to address the modality gap, the next frontier for a working composite speech and text model.
- The results for ST are encouraging, performing similarly to or slightly better than recently proposed multi-task-based speech-text models like Whisper and USM.
Weaknesses: - Although ComSL is well motivated to focus on bridging the modality gap, the pre-training and fine-tuning using in-domain data do not seem to deliver significant improvements. With close results and a lack of significance testing, it is rather difficult to tell whether the proposed approach performs better.
- Given the focus on multi-task learning, the provided evaluation (only for the ST task) is unexpected. Adding at least the ASR task to the comparison could have given the report more substance.
- The discussion and analysis part could have focused on ST cases (or examples) where CML improves over other models that do not use the cross-modality objectives. This would show the importance of introducing the objective.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - What was the motivation for not evaluating other tasks? As this model is trained in a multi-task setting, at least an ASR evaluation makes sense.
- Given the number of objectives involved, particularly for the CML part, how do model training and inference complexity compare?
- Table 1: all models ... are fine-tuned with CoVoST2 data; does this include the USM model? If not, what is the justification for a fair comparison?
- Table 1: for Non-E2E ST, does it make more sense to use Whisper + a standard MT model trained separately?
- Given that most numbers are quite close in Tables 1-3, adding statistical significance could make the results more meaningful and support the SOTA assertion.
- L103: fix unfinished sentence
- S3.3: how is the split formulated for z^s and z^x before passing them to the mBART decoder? Obviously the split is necessary to optimize the MTP objective using z^x; have you considered directly using e^x without the concat step?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Limitations are included, reflecting the main content of the work. I suggest authors to consider a separate limitation part and to consider societal impact if available.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful questions. We have provided our responses right after each question.
1. What was the motivation for not evaluating other tasks? As this model is trained in a multi-task setting, at least an ASR evaluation makes sense.
Answers:
Due to space limitations in the full paper, we have reported the ASR performance in the appendix of the supplementary materials. If the lack of evaluation on the ASR task affected your score, we hope you will consider adjusting it.
Table 6 in the appendix shows that ComSL outperforms finetuned Whisper on the ASR task for some languages, but its overall performance is slightly worse. This phenomenon has also been observed in other studies, and as a result, pretrained models are often finetuned for the ASR task only (instead of multi-task such as both ASR and ST tasks) to achieve better performance, as was done on USM (Reference [44]).
2. Given the number of objectives involved, particularly for the CML part, how do model training and inference complexity compare?
Answers:
During training, objectives other than speech-to-text translation loss are included. However, these additional objectives are not necessary during the inference stage, so the complexity remains the same as a conventional speech-to-text translation model. Our observations show that training time increases by a factor of 1.2, from 2.5 hours per epoch to 3 hours per epoch using 32 V100 GPUs, when compared to a model trained without CML losses.
3. Table 1: all models ... are fine-tuned with CoVoST2 data; does this include the USM model? If not, what is the justification for a fair comparison?
Answers:
Yes, the USM model has been finetuned. USM is not a public model, but we have cited the BLEU scores from its paper (Reference [44]). Section 3.3.2 of the USM paper indicates that the USM model evaluated on CoVoST2 was finetuned using the CoVoST2 training set and text translation data as available. Additionally, USM was finetuned separately for the ASR and ST tasks, so the results of the ASR and ST tasks shown in the USM paper were obtained from different finetuned USM models.
4. Table 1: for Non-E2E ST, does it make more sense to use Whisper + a standard MT model trained separately?
Answers:
We are not entirely sure if we understand your question correctly. The mBART-50 model (please check footnote 3 on page 7) that we used is already an MT model. We finetuned this model on CoVoST2 text data. For the first row under Non-E2E ST in Table 1, we fed the ground-truth transcription to the mBART model to measure its performance, simulating a cascade system with perfect ASR output. For the second and third rows, we used Whisper as the ASR model for a fair comparison. To the best of our knowledge, mBART-50 is the best pretrained translation model on CoVoST text data, even better than most LLMs (most LLMs are not finetuned for translation).
5. Given that most numbers are quite close in Tables 1-3, adding statistical significance could make the results more meaningful and support the SOTA assertion.
Answers:
Thank you for your suggestions. We have done significance tests for the results in table 1-3.
For Table 1, please refer to the attached PDF file in the “global” response to all reviewers. Our tests show that, regardless of whether pseudo ST data is added, our ComSL Medium/Large models significantly outperform their corresponding baseline Medium/Large models (i.e., finetuned Whisper) on high-resource and mid-resource languages, with a significance level of p<0.01. For low-resource languages, where the training data for most languages is around or less than two hours, the addition of pseudo ST data has a significant impact on improving ST performance. We suspected that limited training data for a specific language might be causing issues, such as limited vocabulary coverage, which could affect the efficiency of our proposed methods. This is why we added pseudo ST data only to mid-resource and low-resource languages. Please refer to the appendix for statistics on the pseudo data.
Table 2 shows that adding only the MT task can negatively impact performance. So we conducted significance tests for all tasks/losses except for the MT task. The test results show that the CML loss can significantly improve ST performance on high-resource and mid-resource languages, with a significance level of p<0.05, but not on low-resource languages. All other tasks/losses significantly improve performance on all languages.
For Table 3, our significance tests show that minimizing the modality gap has a significant impact on performance for high-resource and mid-resource languages. However, the differences between various methods of modality gap minimization are not significant. In our multi-task learning approach, we conducted several tasks, including AST, ASR, MT, CML, and others. The shared model parameters are updated using both speech and text tasks, which reduces the modality gap between speech and text to a certain extent. As a result, the explicit modality gap minimization loss may not be as significant as expected, especially after the DDM (AST and MT distribution matching) loss was employed as a complementary method for reducing the modality gap.
6. L103 fix unfinished sentence
Answer:
We will fix it.
7. S3.3: how is the split formulated for z^s and z^x before passing them to the mBART decoder? Obviously the split is necessary to optimize the MTP objective using z^x; have you considered directly using e^x without the concat step?
Answers:
Yes, we have considered this. The mBART used in our ComSL has been finetuned using text from the CoVoST 2 training set, so the improvement from adding a text task in ComSL training is marginal. For the MTP task, the goal is to match speech and text representations through self-attention. If we were to directly use e^x, it would become a pure text denoising task and would reduce its contribution to minimizing the modality gap.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer tyJC,
We appreciate the time and effort you have put into reviewing our paper. It is our hope that our rebuttal has successfully addressed all of your concerns. We would like to draw your attention to the fact that we had provided the ASR evaluation results in the appendix of the supplementary materials. We sincerely hope that this clarification will positively influence your opinion on the quality of our submission. If you have any further questions or comments, we would be more than happy to provide further clarification.
Thank you!
Authors | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for taking time and effort to review our paper. We appreciate all your valuable comments and suggestions. To the common concerns and questions, our responses are listed here.
1. ASR evaluation
Due to space constraints in the full paper, we have included the ASR performance results in the appendix of the supplementary materials. This appendix also contains the ST and MT task results for each of the 21 languages in the CoVoST 2 evaluation set. Please review the ASR results in the supplementary materials. We will consider including a summary of these results in the revised version of the full paper. If the absence of an evaluation of the ASR task had an impact on the score you gave our paper, we hope you consider adjusting it accordingly.
Calculating WER for multi-lingual ASR outputs depends on the methods of text normalization and word segmentation for some languages. We were unable to find a standard and publicly available tool for this purpose, so we developed our own method and used it to compare the ASR performance of the models we built, namely Whisper Large finetuned and ComSL Medium/Large. It is important to note that the WER numbers we reported cannot be directly compared with those reported in other publications.
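As background for the WER discussion above, a minimal word-level Levenshtein WER can be sketched as follows. The `wer` helper is illustrative, not the authors' tool; as the rebuttal stresses, real multilingual WER depends heavily on the upstream text normalization and word segmentation:

```python
def wer(ref, hyp):
    """Word error rate: word-level Levenshtein distance divided by the
    number of reference words. Illustrative sketch; real multilingual
    WER depends heavily on the text normalization and word segmentation
    applied first, which is exactly the comparability caveat above."""
    r, h = ref.split(), hyp.split()
    d = list(range(len(h) + 1))          # DP row: distances to hyp prefixes
    for i, rw in enumerate(r, 1):
        prev, d[0] = d[0], i
        for j, hw in enumerate(h, 1):
            cur = min(d[j] + 1,           # deletion
                      d[j - 1] + 1,       # insertion
                      prev + (rw != hw))  # match / substitution
            prev, d[j] = d[j], cur
    return d[len(h)] / len(r)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the cat"))      # one deletion -> 1/3
```

For languages without whitespace word boundaries, the `split()` step would be replaced by a segmenter (or character error rate would be reported instead), which is why cross-paper WER numbers are hard to compare.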
2. Statistical significance test
We have updated Table 1 to include the results of our statistical significance tests, which can be found in the attached PDF file. Our tests show that, regardless of whether pseudo ST data is added, our ComSL Medium/Large models significantly outperform their corresponding baseline Medium/Large models (i.e., finetuned Whisper) on high-resource and mid-resource languages, with a significance level of p<0.01. For low-resource languages, where the training data for most languages is around two hours, the addition of pseudo ST data has a significant impact on improving ST performance. We suspected that limited training data for a specific language might be causing issues, such as limited vocabulary coverage, which could affect the efficiency of our proposed methods. This is why we added pseudo ST data only to mid-resource and low-resource languages. Please refer to the appendix for statistics on the pseudo data.
3. Main contributions/differences
In addition to the main contributions summarized in the last paragraph of the introduction section, we would like to highlight our proposed approach that combines two publicly available pre-trained speech and language models and fine-tunes them directly for downstream tasks. This approach also incorporates cross-modality learning, which is typically used during the pre-training stage. We believe that our approach offers a new avenue for research in academia, where large model pre-training may not be feasible due to limited computational resources.
Pdf: /pdf/456e95b46183be36bcda233460db05996ad97f9c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Probabilistic Weight Fixing: Large-scale training of neural network weight uncertainties for quantisation. | Accept (poster) | Summary: This paper discusses a method of quantizing weights based on Bayesian neural networks (BNN) by clustering them as closely as possible to a set of powers-of-two values. Unlike the weight fix network (WFN) that quantizes by clustering the fixed weights of the network around the nearest centroid, the proposed Probabilistic weight fix network (PWFN) makes use of a probabilistic BNN model. Weights are assumed to follow a gaussian distribution. The $\mu$'s and $\sigma$'s of the weights are learned using Bayes-by-backprop (BBP). Quantization is performed iteratively by clustering around a specific centroid using the learned $\mu$ and $\sigma$ from the model’s codebook.
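To make the powers-of-two target set in the summary concrete, here is a minimal sketch of snapping a weight to its nearest power-of-two value (a hypothetical `nearest_power_of_two` helper, not the paper's full iterative clustering procedure):

```python
import math

def nearest_power_of_two(w):
    """Snap a weight to the nearest power-of-two value (sign preserved).

    Powers-of-two codebooks are hardware-friendly because multiplies
    become bit-shifts; this hypothetical helper only illustrates the
    target set, not the paper's clustering of weights around centroids."""
    if w == 0:
        return 0.0
    e = round(math.log2(abs(w)))
    return math.copysign(2.0 ** e, w)

print(nearest_power_of_two(0.3))   # 0.25
print(nearest_power_of_two(-0.9))  # -1.0
```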
Strengths: By expressing the existing network as a weight-shared BNN, several advantages can be obtained:
- It allows for uncertainty quantification.
- Using the Mahalanobis distance to determine the centroid seems more reasonable than using the Euclidean distance or Relative movement distance.
In many existing papers, codebooks were maintained at the layer or channel level, requiring significant storage capacity. However, this paper achieves higher quantization efficiency by utilizing a codebook at the model level.
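The point about Mahalanobis distance can be made concrete in one dimension: the cost of snapping a weight to a centroid is measured in units of that weight's learned sigma, so high-uncertainty weights are cheaper to move. The `move_cost` helper below is an illustrative sketch, not the paper's exact assignment rule:

```python
def move_cost(weight, sigma, centroid):
    """Uncertainty-scaled (1-D Mahalanobis-style) cost of snapping a
    weight to a centroid: the distance in units of the weight's learned
    sigma. Illustrative sketch, not the paper's exact assignment rule."""
    return abs(weight - centroid) / sigma

# Two weights equally far (0.1) from the centroid 0.5, but the second
# has a larger learned sigma, so it is much cheaper to quantize.
c = 0.5
print(move_cost(0.6, 0.05, c))  # high cost (about 2.0)
print(move_cost(0.4, 0.20, c))  # low cost (about 0.5)
```

Under a plain Euclidean or relative-movement distance the two weights would look identical, which is the advantage the review highlights.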
Weaknesses: - The amount of computation is significantly increased compared to the conventional WFN. It is also harder to train than a regular network, which also increases the time for hyperparameter exploration.
- Because of its complexity (BNN), it is unclear whether it can be applied to large models.
- The choice of prior and the learning of mu and sigma can greatly influence the results. This can lead to inconsistent performance with each training and cases where performance drops significantly. Using ensembles seems like it would require too much inference time.
- Sigma is set as the distance from the actual parameter to the power of two, but this could be different from the actual sensitivity of the parameter and could be a wrong choice of prior.
- The reference format should be easily distinguishable from that of equation numbering. Since both use parentheses (), it looks quite odd at first sight.
- According to the NeurIPS 2023 guide, all table numbers and titles should always appear before the table. https://media.neurips.cc/Conferences/NeurIPS2023/Styles/neurips_2023.pdf
- Some symbols in the equations are insufficiently explained regarding their dimension, type, or meaning. Line 192 Θ, Line 193 S, Line 201 δ. Also, δ is duplicated with the threshold δ on Line 237.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Q1) How was 0.05 determined in equation 4? Is it a value that can be generally applied to all networks, or is it a hyperparameter that should be applied differently for each model?
- Q2) Can you provide the standard deviation of accuracy in ensemble models? And can you provide the maximum and minimum accuracy of training?
- Q3) How is the Top-1 accuracy in Table 1 determined, since the results may vary for each training?
- Q4) It seems like using BNN would greatly increase training time; how much? Can you tell us how much the training time increases for ResNet-18 and DeiT compared to WFN?
- Q5) How much time/MACs are required for training and inference?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations are not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the comprehensive feedback provided on our paper. Herein, we address your concerns systematically:
**Q1) The Determination of 0.05 in Equation 4**
The value 0.05 was empirically determined through experiments on Cifar-10 using ResNet18. While this parameter demonstrated consistent performance across our tested networks, further exploration may optimize it for different architectures.
**Q2)** Can you provide the standard deviation of accuracy in ensemble models? And can you provide the maximum and minimum accuracy of training?
Thank you for highlighting the importance of this metric. Here are the detailed results:
| Model | Max | Min | Std |
|-------------|-------|-------|------|
| ResNet18 | 70.23 | 69.99 | 0.08 |
| ResNet34 | 74.99 | 74.34 | 0.16 |
| ResNet50 | 79.08 | 77.33 | 0.42 |
| DeiT-Small | 78.86 | 78.00 | 0.23 |
| DeiT-Tiny | 71.79 | 71.38 | 0.17 |
**Q3)** How is the Top-1 accuracy in Table 1 determined, since the results may vary for each training?
For the Top-1 accuracy, we fixed the quantized weights to their means (mu values). PWFN allows for both a compressed network for efficient inference and, optionally, an ensemble for uncertainty estimation.
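The two inference modes described here (weights fixed to their mu values vs. sampling for uncertainty) can be sketched with a toy linear model; `ensemble_predict` is a hypothetical illustration of Monte-Carlo weight sampling, not the authors' code:

```python
import random

def ensemble_predict(mus, sigmas, forward, n_samples=8, seed=0):
    """Monte-Carlo ensemble: sample each weight from N(mu, sigma) and
    average the model outputs; spread across samples indicates
    uncertainty. Hypothetical sketch of the optional ensemble mode."""
    rng = random.Random(seed)
    outs = []
    for _ in range(n_samples):
        ws = [rng.gauss(m, s) for m, s in zip(mus, sigmas)]
        outs.append(forward(ws))
    return sum(outs) / len(outs)

# Toy linear "network": y = w . x
x = [1.0, 2.0]
forward = lambda ws: sum(w * xi for w, xi in zip(ws, x))
mus, sigmas = [0.5, -0.25], [0.01, 0.01]

point = forward(mus)                         # fixed-to-mu inference path
mc = ensemble_predict(mus, sigmas, forward)  # sampled ensemble path
print(point, mc)  # the two agree closely when sigmas are small
```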
**Q4)** It seems like using a BNN would greatly increase training time; how much? Can you tell us how much the training time increases for ResNet-18 and DeiT compared to WFN?
Indeed, BNN adds an overhead, but it's manageable with current GPU resources. PWFN regularization also lessens the sigma-collapse challenge, making it feasible even for larger models. We've added a general rebuttal comment to further elaborate.
**Q5)** How much time/MACs are required for training and inference?
For inference in the quantized network, the time matches other quantized networks of similar compression rates. Uncertainty estimations introduce additional costs but only if deemed beneficial by the practitioner. These costs are comparable to running inference multiple times on the same network (aside from the random sampling step), and will likely have much less overhead when compared to ensembling separately trained models.
**On the Complexity of BNNs**
As we show in the general comment, the overhead introduced by training a BNN is manageable on consumer GPUs when using the PWFN regularisation term to avoid sigma-collapse. This means that PWFN can indeed be applied to large models. As far as we are aware, we are the first to successfully train a BNN on ImageNet-scale problems, demonstrating strong performance on large and small models from the ResNet family as well as vision transformer models. We see no reason why this cannot be scaled further, and we're excited to see how far training BNNs in this way can be pushed in future work.
**On Prior Choice and Consistency of Performance**
While different priors might influence results, our choice is backed by previous works and targets optimal compression for hardware accelerators. The notion of inconsistency has been addressed in our response to Q2.
**Clarifications on Symbols**
• Θ – this is the Heaviside step function, defined on Line 192.
• S – a constant as defined in Lines 194-196 and equation 3.
• δ – We've revised the text to clearly differentiate the context of its multiple usages.
**Choice of Prior**
While we acknowledge that other priors might optimize performance further, our choice is informed by previous works and the hardware efficiencies of mapping weights to powers-of-two.
**Table Formatting**
We thank you for pointing out the format discrepancy. It's now corrected as per the NeurIPS 2023 guide.
**Conclusion**
We value the feedback and believe the clarifications offered reinforce the strength and novelty of our methodology. We remain hopeful that our approach's advantages, as detailed in the paper and this rebuttal, and the utility of the work to the wider community can be seen. | Summary: PWFN (Probabilistic Weight Fixing Network) is a technique that combines weight-sharing quantization with Bayesian neural networks to achieve highly compressed and quantized neural networks. It models each weight as a draw from a distribution, allowing for the quantification of uncertainty in weight values. PWFN consists of iterative stages of training and clustering to optimize the weight distributions and determine optimal cluster configurations.
PWFN offers a probabilistic approach to weight-sharing quantization, providing enhanced compressibility, flexibility, and noise resilience in neural networks. It allows for uncertainty quantification and demonstrates improved performance compared to existing methods.
Strengths: (1) By representing weights as probability distributions, PWFN provides flexibility in weight values and enhances noise resilience. The distributions capture uncertainty in weight values, enabling robust performance even in the presence of noise.
(2) PWFN incorporates a regularization term and iterative clustering procedure, leading to improved performance compared to other methods
(3) By sampling weights from their distributions and observing changes in predictions, the model's confidence and uncertainty can be assessed.
Weaknesses: (1) PWFN needs additional complexity in the training process including regularization terms and clustering iterations.
(2) PWFN may require modifications to existing neural network frameworks to incorporate the probabilistic weight-sharing framework. It may also require additional expertise in Bayesian neural networks and variational inference techniques. For reproducibility, the code needs to be released.
(3) Modeling weights as distributions and performing sampling during training can introduce additional computational overhead compared to traditional point estimate-based methods. This may result in increased training time and inference latency.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weaknesses
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and constructive feedback on our submission. We appreciate the time and effort you've taken to review our paper.
**Complexity in Training Process**: We understand your concern about the additional complexity introduced by PWFN during the training process and have added a general comment with regards to this which we hope alleviates your concerns.
**Framework Modifications and Expertise**: We're pleased to share that we'll be releasing the code, and the process of mapping a traditionally trained network to a BNN will be automated - a user need only pass to the code a pre-trained model. This will significantly reduce the barriers to adoption, allowing those unfamiliar with Bayesian neural networks or variational inference techniques to still benefit from PWFN.
**Computational Overhead**: You rightly pointed out the potential computational overhead when modeling weights as distributions. However, it's crucial to note that this overhead is mainly seen during training and isn't any larger than existing re-training quantization approaches.
For standard inference tasks, our method uses quantized point estimates for weights, ensuring no added latency.
When leveraging the BNNs for uncertainty estimation there are additional costs, but these are comparable to running inference multiple times on the same network (aside from the random sampling step), and will likely have much less overhead when compared to ensembling separately trained models.
Once again, thank you for your review, we hope our revisions address your concerns adequately. | Summary: This paper discusses the Weight-sharing quantization technique to reduce DRAM read costs. To address this issue, an iterative weight fixing scheme was employed, which involved alternating between network training and weight clustering. This paper proposes a BNN-based framework that utilizes a new initialization setting and regularization term. With this method, millions of weights can be represented by hundreds of unique values.
Strengths: Probabilistic approach for weight-sharing quantization is novel and interesting.
The method improves the accuracy while minimizing the entropy and reducing the number of unique values used to represent the weights.
The prior initialization method using relative distance from powers-of-two consistently improves the accuracy, especially on resnet architectures.
Weaknesses: There is no explanation in the manuscript about how the weight position described in the abstract was handled.
Lack of experimental support for regularization scheme (x)
Lack of clarification – No explanation about Figure 2 and 3 in the manuscript
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: (49) There is no explicit clarification for the abbreviation of PWFN.
There is no explanation for Figure 2 and 3 in the manuscript.
(220) what is the meaning of j index?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper does not address the limitation of the method and any the negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the feedback and constructive criticisms provided by the reviewer. Your insights are invaluable to improving our manuscript. Here, we address the concerns raised:
**Concern 1: Weight Position Handling**
The weight position is indeed utilised in our methodology when determining the cluster assignments. As you rightly pointed out, each weight is associated with an individual sigma value that aids in determining its cluster assignment. This unique approach ensures that weights, even with the same value but in different positions, can be assigned to distinct clusters. We realized the need for more clarity in our manuscript regarding this aspect and have amended the methodology section to elucidate this further.
**Concern 2: Lack of Experimental Support and Clarification**
Thank you for pointing out the ambiguities associated with Figures 2 and 3. We've now expanded the captions for both figures to provide clarity. Specifically, Figure 2 showcases how an increased alpha value (the scaling of the regularization term) plays a pivotal role in preventing sigma values from collapsing to zero. Avoiding this collapse is important, both for using the network as a Bayesian Neural Network (if sigma is zero, then we have a point estimate) and for our downstream use of sigma to determine the quantization mapping. We hope this clarifies that experiments have been conducted and presented to explore the value of the regularization term in the methodology. Figure 3 has also been revised to enhance readability and comprehension.
**Concern 3: Clarification on Abbreviations**
Our sincere apologies for the oversight. We have now clearly defined the abbreviation as "Probabilistic Weight Fixing Networks (PWFN)" in the relevant sections of the manuscript.
**Conclusion**
We believe our work on the bayesian weight-sharing quantization technique is a valuable contribution to the field, and we're confident that the changes we've made, following your feedback, will further clarify our methodology and its significance. Once again, thank you for your constructive feedback, and we hope our revisions address your concerns adequately.
---
Rebuttal Comment 1.1:
Comment: As the authors have reasonably addressed our concerns and issues, we have no further requests, except for adding the detailed explanation of Figures 2 and 3 to the main body of the manuscript by citing them there, instead of only in the captions.
---
Reply to Comment 1.1.1:
Comment: That's great - thank you for clarification on this, we'll add the references to the figures in the main body as you suggest. | Summary: This paper presents a novel quantization scheme based on iterative training and clustering of the weights into a very limited and finite set of choices. The paper follows a Bayesian approach to assign weights to clusters. Experiments are demonstrated on ImageNet for ResNet and Transformer architectures.
Strengths: The paper addresses an important topic of improving quantization. The paper is very clearly written, easy to understand and the idea is simple and elegant. Experiments seem sufficient to demonstrate the power of the method.
Weaknesses: I do not see any immediate weakness of the paper. One thing which could make the paper stronger is to explore the application to more complicated downstream tasks with, for example, DeiT or DeiT-small, just to understand the scalability of the method. Further, the authors could also provide details on the added training costs as well as the challenges faced in training the approach. Additionally, a detailed ablation is missing on how sensitive the results are w.r.t. the chosen prior, the number of training+clustering combination steps, etc.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses reported above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitation in the context of hardware has been stated by the authors. However, it would be nice to understand additional limitations w.r.t. training time compared to other quantization methods, and other features that play role in the method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and valuable insights on our paper.
**Training Costs**: We appreciate the concern raised about the training costs. Given its recurrent mention in the reviews, we've expanded on this aspect, providing a comparative analysis between PWFN and other prevalent quantization approaches as general comment. We hope this will shed light on the efficiency of our method in terms of training overheads.
**Number of Training+Clustering Steps**: Your observation on the potential optimization of the number of training+clustering steps is apt. While we initially set the number of weight-fixing steps based on the success of WFN, we recognize the scope for fine-tuning. Post your review, we experimented by reducing the steps (down to 7) for CIFAR-10, which yielded encouraging results. However, it remains to be seen if this can be generalized to larger datasets. We acknowledge this as a potential area for future investigation.
**Expansion to Other Domains**: We concur with your suggestion on diversifying the application domains. Our focus on image classification was largely inspired by preceding works, but we did venture into ViT-type architectures. Notably, PWFN stands out as the first approach to train Bayesian neural networks at ImageNet scale. As for quantization, PWFN is one of only a few methods to successfully quantize DeiT-family models.
**Future Research on BNN Quantization**: Your idea of expanding the scope of BNN Quantization to a broader range of tasks is indeed promising. We hope our work paves the way for subsequent research endeavors in this direction.
Thank you again for your constructive feedback.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: Thanks for responding with the explanation. I am convinced and I retain the rating. | Rebuttal 1:
Rebuttal: ### General Comments
Thank you to all the reviewers for their effort in reviewing our paper, kind comments, and suggestions for improvement. We have taken the comments and suggestions into consideration.
As a general confirmation, it is true that our work is the first to train Bayesian Neural Networks (BNNs) for the purpose of Quantization. To the best of our knowledge, we can make the even stronger claim that we are the first to train a BNN on ImageNet and achieve results that match (or exceed) standard training for the ResNet family of models, and exceed the SOTA accuracies for quantized transformer-based vision models.
Before addressing individual comments, we wanted to address a shared concern around the training costs incurred with training a BNN using the PWFN method.
### Training Costs
It is indeed true that PWFN incurs additional training compared to the original baseline, but when we compare it to other post-training quantization methods:
| Method | Num of additional epochs |
|--------|--------------------------|
| ApoT | 120 |
| PWFN | 27 |
| WFN | 27 |
| LSQ | 90 |
| QviT | 300 |
We can see that PWFN requires substantially fewer additional training epochs than all methods but WFN.
WFN uses a regularizer that calculates the relative distance between all free weights and the existing cluster centers, and then penalizes weights depending on the distance to their closest center (in a soft way). This incurs computational costs in the backpropagation calculation for every iteration. In PWFN, we have a much simpler regularization term that encourages sigma to increase - with costs that match those of l0 regularization. We do have memory overhead in terms of the number of parameters at training (sigma and mu) and the random number generation to sample, but this is only at training time. The simplicity of the regularization term also means that we experience a speed-up over the previous BNN training procedure outlined in the original Bayes-by-backprop paper, making much more complex model-dataset pairings tractable.
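A minimal sketch of a regularizer of this kind, with the O(n) per-iteration cost contrasted against WFN's all-pairs distance term; the exact functional form and alpha in the paper may differ, and the `sigma_regularizer` name is hypothetical:

```python
def sigma_regularizer(sigmas, alpha=0.05):
    """A simple O(n) regularizer rewarding larger sigmas (lower loss),
    which discourages the weight uncertainties from collapsing to
    zero. The exact functional form and alpha in the paper may differ;
    this only illustrates the per-iteration cost argument above."""
    return -alpha * sum(sigmas) / len(sigmas)

print(sigma_regularizer([0.1, 0.2, 0.3]))     # about -0.01 (healthy sigmas)
print(sigma_regularizer([0.01, 0.01, 0.01]))  # near 0 (collapsing: higher loss)
```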
This is not to say there are no costs; we find that a single training epoch with ResNet-18 on the ImageNet dataset takes ~1 hour 30mins on 4 consumer GPUs (GTX1080's) for PWFN, compared with 40 minutes for standard training and ~1 hour 20mins for WFN (but for WFN this increases as the number of clusters increases through training).
### Inference
At inference, the weights are fixed to mu values and treated in the usual way; there are no additional computation costs, and quantization speed-up on supported accelerators can be achieved. The core idea of PWFN is to reduce the number of unique weights and weight-space entropy so that the resultant networks can be used on accelerators (such as EIE) which use Huffman coding as a compression scheme.
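The weight-space entropy mentioned here, which lower-bounds the bits per weight a Huffman coder needs, can be sketched as follows (illustrative `weight_entropy` helper, not the authors' metric code):

```python
import math
from collections import Counter

def weight_entropy(cluster_ids):
    """Shannon entropy (bits per weight) of the cluster-index stream;
    a lower bound on what Huffman coding spends per weight.
    Illustrative sketch of the entropy metric referred to above."""
    counts = Counter(cluster_ids)
    n = len(cluster_ids)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Many weights sharing a few values compress well (low entropy).
print(weight_entropy([0, 0, 0, 1]))  # about 0.81 bits per weight
print(weight_entropy([0, 1, 2, 3]))  # 2.0 bits per weight
```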
### Key Points
Re-highlighting the core contributions of this work:
* This is the first work to train BNNs on ImageNet-scale problems
* PWFN reduces the param count and entropies of transformer-based architectures with SOTA quantization performance
* We achieve this with 27 epochs of additional training
* Mapping to a BNN allows for both highly compressed networks and uncertainty estimation probing if required through sampling
* The point-estimates can be used in accelerator designs in conjunction with Huffman encoding to reduce computationally expensive DRAM reads
* We use a single codebook for the entire network whereas most previous works needed multiple codebooks for each layer/in-channel/attention head to maintain performance
Thank you again for taking the time to review our work and we hope to have covered all your points in the individual responses. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper introduces a novel approach to weight-sharing quantization, a technique aimed at reducing energy costs associated with inference in deep neural networks. The authors propose a method that takes into account the context of weights in the network, arguing that this strategy can better preserve the network's representational capacity while reducing its complexity. They employ a probabilistic framework to capture the flexibility in weight values, which guides clustering decisions to reduce the network's entropy and lower the unique parameter count without compromising performance. The authors also suggest a novel initialization setting and a regularization term to prevent the collapse of the weights' distribution variance to zero. The paper demonstrates superior compressibility and accuracy in ResNet family models trained on the ImageNet dataset and transformer-based architectures.
Strengths: To the best of my knowledge, this paper is the first to utilize Bayesian Neural Networks for model quantization.
The authors conduct experiments on powers-of-two and additive powers-of-two quantization, which are more hardware-friendly than other types of quantization.
Weaknesses: The paper's writing style is somewhat difficult to follow, particularly in the method presentation and the explanation of Table 1 (In-ch?).
I recommend the authors use the term 'Quantization' instead of 'Quantisation' for consistency with most literature in the field.
The authors should address the training difficulty associated with Bayesian Neural Networks and discuss whether this approach is simpler for model quantization compared to other methods.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See Weaknesses
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: See Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First and foremost, we genuinely appreciate your constructive feedback on our paper. It is our aim to present our research in the clearest possible manner, and your insights are invaluable in this pursuit.
**Bayesian Neural Networks (BNNs) for Model Quantization**: You're right in pointing out the uniqueness of utilizing BNNs for model quantization. We feel it is essential to highlight this, and we're pleased that you recognized its novelty.
**Explanation of Table 1**: We agree that a more explicit explanation of Table 1 is needed and have amended this: "in-ch" refers to whether the approach uses a different codebook for each input channel. So if we had a layer whose weights have the dimensions (in-ch, out-ch, filter-width, filter-height), then each in-ch would use a different set of unique weights in the quantisation procedure. This both increases the complexity of inference (usually in the form of different scale/shift constants) and increases the memory requirements.
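To make the in-ch distinction concrete, here is a minimal sketch of the difference between one shared codebook and per-input-channel codebooks (the codebook values, shapes, and the `quantize` helper below are made up for illustration, not the paper's method):

```python
import numpy as np

def quantize(weights, codebook):
    """Map each weight to its nearest codebook entry (weight sharing)."""
    idx = np.abs(weights[..., None] - codebook).argmin(axis=-1)
    return codebook[idx]

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8, 3, 3))                 # (in-ch, out-ch, h, w)

shared = np.array([-0.5, 0.0, 0.5])               # one codebook for the layer
per_in_ch = [shared + 0.1 * c for c in range(4)]  # one codebook per in-ch

q_shared = quantize(w, shared)
q_per_ch = np.stack([quantize(w[c], per_in_ch[c]) for c in range(4)])

# per-in-ch quantization keeps more unique weights (4 codebooks x 3 entries)
print(np.unique(q_shared).size, np.unique(q_per_ch).size)  # → 3 12
```

The extra unique values are what drives the higher inference complexity and memory cost mentioned above.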
**Consistency in Terminology**: We had used the British spellings throughout the paper, but will swap this over to the American spelling to be consistent with the literature.
**Training Costs of BNNs**: Given that multiple reviewers have questions around the training costs associated with Bayesian Neural Networks, we've incorporated a response addressing this aspect in the general rebuttal comments.
Once again, thank you for your review, we hope our revisions adequately address your concerns. | null | null | null | null | null | null |
TempME: Towards the Explainability of Temporal Graph Neural Networks via Motif Discovery | Accept (poster) | Summary: The work presents TempME, an explanation methodology for temporal graph neural networks that incorporates the idea of temporal motifs. By incorporating temporal motifs, the method demonstrates better cohesiveness, since motifs are constructed to be cohesive, i.e., connected and localized, as well as better explainability compared to prior works.
Strengths: - The work presents an excellent contribution to TGNN explainability by proposing the use of temporal motifs.
- The work presents solid technical material to support the proposed methodology.
- The work further presents good and extensive results that covers many interesting aspects.
Weaknesses: There are some unclear points in the paper. In particular, the goal of the evaluation and the evaluation setting for the explanation performance are not clear. In particular, the statement "ACC-AUC, which is the AUC value of the proportion of generated explanations that have the same predicted label by the base model over sparsity levels from 0 to 0.3" is hard to comprehend. Currently, the reviewer thinks that:
- "generated explanations" are the explanation subgraphs extracted by the explanation models.
- "sparsity level" indicates the ratio of sizes between the extracted explanation subgraph and the whole temporal graph
- perhaps the "predicted label by the base model" is the prediction of the event/edge that the original target model gives when given the extracted explanation subgraph?
It would be clearer if, in Section 3, it were noted that the budget "K" will be represented as "sparsity", and if the objective of Equation 1 were spelled out more intuitively.
Besides:
- The inline equation for "Fidelity" has typos and the parentheses do not match up. It also lacks an intuitive explanation. Besides, it seems that Fidelity follows the definition constructed by TGNNExplainer, and a proper citation should be given.
- The norm of the gradient being utilized for Grad-CAM is essential to understanding the treatment of this baseline and should be stated in the main paper.
Miscellaneous:
The figures (especially the color shades and the lighter colors, when printed out) are not clear.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Regarding the motivation for presenting an explanation subgraph for TGNN: While with image instances we can verify that the important regions (e.g. highlighted by grad-cam) coincide with semantically meaningful regions in an image using human intuition and understanding, it is unclear if human intuition or logic can be applied to temporal graphs.
- Can you provide examples of human intuition or logic applied to temporal graphs? For example, with certain datasets, observing some part of the temporal graph would lead to an understanding of whether some target event should or should not exist.
- Can you provide examples of the explanation subgraph presented by TempME and show how such explanation subgraph provides "human intelligible explanation" as stated in line 38?
- Can you provide examples of the explanation subgraph presented by TGNNExplainer and Grad-CAM, as well as other baselines, to show how they do not present "human intelligible explanation"?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper presents some technical limitations but the discussions on potential negative societal impact could be strengthened by discussing the potential real-world application of explaining TGNNs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback, which significantly improves the quality of this work. We would like to address the following potential concerns you raise.
>The goal and setting of the evaluation are not clear. Clarification to “AUC value of the proportion of generated explanations that have the same predicted label by the base model over sparsity levels from 0 to 0.3”?
Sorry for any confusion. The sparsity level indicates the ratio of sizes between the extracted explanation and the computational graph of the event to explain. All of your other understanding is correct. To explain more, the accuracy of explanations is sensitive to the sparsity level. The area under the accuracy-sparsity curve is denoted as the ACC-AUC value, which is reported in Table 1.
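As an illustration of how an area under the accuracy-sparsity curve can be computed (a sketch only: the `acc_auc` helper and the accuracy values below are made up, not the authors' code):

```python
def acc_auc(accuracy_at_sparsity):
    """Area under the accuracy-sparsity curve (trapezoidal rule),
    normalized by the sparsity range so that a constant accuracy
    of 1.0 yields an ACC-AUC of 1.0."""
    pts = sorted(accuracy_at_sparsity.items())
    area = sum((s1 - s0) * (a0 + a1) / 2
               for (s0, a0), (s1, a1) in zip(pts, pts[1:]))
    return area / (pts[-1][0] - pts[0][0])

# made-up explanation accuracies measured at sparsity levels 0 .. 0.3
curve = {0.0: 1.0, 0.1: 0.95, 0.2: 0.90, 0.3: 0.85}
print(round(acc_auc(curve), 3))  # → 0.925
```

Aggregating over the whole sparsity range in this way is what makes the metric insensitive to any single choice of sparsity level.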
>It could be clearer if the budget “K” is represented as “sparsity” in Eq.1 and any intuitive explanations to Eq.1?
Thanks for the constructive comments. We will rewrite the budget $K$ as $|\mathcal{G}_{\exp }^e| \leq s|\mathcal{G}(e)|$, where $s$ is the sparsity level, $\mathcal{G}(e)$ is the computational graph of the event $e$. Eq(1) aims to maximize the mutual information between the explanation and the original model prediction.
>Lack of intuitive explanations “Fidelity”.
“Fidelity” measures how valid and faithful the explanations are to the model’s original prediction. If the original prediction is positive, then an explanation leading to an increase in the model’s prediction logit is considered to be more faithful and valid. If the original prediction is negative, then an explanation that decreases the prediction logit is better. We will add more intuitive explanations to this metric in our revised version.
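A minimal sketch of this sign-aware notion of fidelity (illustrative form only; the exact formula is the inline equation in the paper):

```python
def fidelity(orig_logit, exp_logit):
    """Sign-aware fidelity: for an originally positive prediction a
    higher logit under the explanation is better; for a negative
    prediction a lower logit is better (illustrative form only)."""
    if orig_logit >= 0:
        return exp_logit - orig_logit   # positive case
    return orig_logit - exp_logit       # negative case

print(fidelity(2.0, 2.5))    # → 0.5  (strengthens a positive prediction)
print(fidelity(-1.0, -1.8))  # → 0.8  (strengthens a negative prediction)
```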
>Motivation for presenting an explanation subgraph for TGNN. How can human intuition or logic be applied to temporal graphs? How does TempME provide “human-intelligible explanation” for TGNN? How do other baselines fail to provide “human-intelligible explanations” for TGNN?
Thanks for the constructive feedback. Please refer to the real-world examples we provide in the general response. Explanation visualizations are provided in the 1-page pdf. Compared with TGNNExplainer, TempME works better at generating a "cohesive" explanation. Moreover, the explanation generated by TempME provides additional motif-level insights. In our visualization, different colors indicate the different types of temporal motifs that the corresponding event contributes to. Temporal motifs work as the building blocks of the dynamic system and provide a unique perspective on how the combination of events (e.g., preferential attachment, triadic closure, etc.) contributes to the prediction of temporal GNNs. In contrast, we observe that previous methods (e.g., GradCAM, TGNNExplainer) either fail to generate a "cohesive" explanation or fail to provide any model-level insight.
> Others: Typos in the inline equation for “Fidelity” / Citation for Fidelity metric / Gradient Norm utilized for Grad-CAM / unclear color in figures
Thanks for your careful reviews. We will add the Grad-CAM details to the main text, cite for fidelity metric, change the figure colors for better readability, and fix the typos accordingly in our revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you. I am satisfied with your explanations. However, I maintain my score at weak accept based on the impact of this paper. | Summary: This paper provides a method to extract temporal motifs in order to capture correlated building blocks of networks.
Strengths: The method of extracting temporal motifs is interesting; it is based on information theoretic concepts.
Weaknesses: In my view there are two main weaknesses of this paper.
The first weakness is the underlying assumption that network motifs should consist of nodes and edges which are in close time proximity to each other. This assumption is not questioned but often does not hold, as there are often seasonal effects in network time series. These seasonal effects could occur for example only once per week, as Sundays may be special in terms of network behaviour.
The second weakness is the presentation. It is stated that the method is in a continuous-time setting, yet from the description the events occur at separated time points; there does not seem to be the possibility of having more than one event occur simultaneously. The large number of questions below indicates that the paper may not be very clearly written.
The paper aims to provide an explanation of temporal GNNs; it would be good to see a worked example in which the explanation relates to a tangible real-world scenario. Which are the motifs that are found in the different networks? How can they be viewed as building blocks of these networks; do they point towards fundamental differences between, say, the Wikipedia network and the Enron network? There is a start at this question in the supplementary material E.5, but the analysis may strongly depend on the chosen prior.
Finally, the discussion from the supplementary information Section F should have been in the main text.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: It is not clear to me what the mutual information is between a random variable and a graph.
Are temporal motifs allowed to overlap each other?
Why is there no time component in Figure 1?
The elements in the edge set contain an event attribution but the motif definition ignores this event attribution. Why?
In Definition 1, why is u_0 = u_1 set? What is the induced subgraph? What is the graph that this is a subgraph of? If this underlying graph is the union of all temporal graphs, then does it have multiple edges or self-loops?
Definition 2: When do two motif instances have the same topology? Is it their induced subgraph that has the same topology? Again, what happens to multiple edges?
Why can the motif of preferential attachment not be represented as a temporal walk?
What are `surrounding' motifs in Section 4.2.?
Temporal motif encoding: what is T(t - t_{pq})? What is N(p)? Is there an underlying graph with a fixed number of nodes and a fixed topology? Is that a simple graph?
How is an importance score generally obtained? It is a crucial ingredient of the method.
In Section 5.2 it is stated that ``TempME still surpasses all baselines and achieves the highest connectivity levels, primarily due to its ability to extract and utilize self-connected motifs''; how does self-connectedness enter here? What is meant by that?
Why is the discussion from the supplementary information section F not in the main text? Is it not important?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper discusses limitations only in the supplementary material. Clearly the choice of null model may have a major influence on the analysis, as does the choice of prior.
Flag For Ethics Review: ['Ethics review needed: Inadequate Data and Algorithm Evaluation']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback and positive comments on the theoretical contribution. We address below the questions in order.
> The assumption that ... may not hold when there are seasonal effects.
We also believe that considering these seasonal effects is crucial. However, we think developing seasonal motifs might deserve a paper of its own. First, the definition of temporal motifs that represent seasonal effects is nontrivial, since we might need to first recognize the most significant time intervals that reflect the system dynamics. Second, given a time interval, how to sample the target seasonal motifs efficiently is also under-explored, which requires extensive effort. We sincerely appreciate your valuable comments, which point to an interesting future direction for us.
> The method is in a continuous-time setting, but the events occur at separated time points. There does not seem to be the possibility of having more than one event occurring simultaneously.
Sorry for any confusion. Actually, the continuous-time setting we used in this manuscript supports simultaneous events. “Continuous-time” means the timestamp($t_k$) in the event element can be any continuous value in the real number set $\mathbb{R}$, rather than being restricted to a specific value derived from a sequence of time points with a constant time interval. In a continuous-time setting, there can be multiple simultaneous events with the same value of timestamp.
> It would be good to see a worked example related to real-world scenarios.
Thanks for the constructive feedback. Please refer to the real-world example we provide in the general response.
>What is the mutual information between a random variable and a graph?
A graph is also a variable represented by its adjacency matrix, and node/edge attributes. Since the graph follows a certain distribution (either a prior distribution given the dataset, or a posterior distribution predicted by networks), we can calculate the mutual information between a random variable and this graph random variable.
>Are temporal motifs allowed to overlap each other?
Yes. They can overlap each other.
>Why no time component in Figure 1?
Time components are included in Figure 1. Here “1,2,3…” indicates the order of event occurrence.
>Why does motif definition ignore the event attribution?
Network motifs primarily focus on the arrangement and connectivity patterns of nodes within a network. In this work, we also follow this idea for consistency with existing literature.
>Why $u_0=u_1$ in Def. 1? What does “induced subgraph” mean? any multiple edges or self loops?
It means we sample motifs starting from the given node of interest. Therefore, the starting node of the first event should be the same as the given node. “Induced subgraph” means the graph formed from all vertices and events in $I$, which is the subgraph of the original temporal graph. There can be multiple edges between two nodes. Typically, in network motif studies, input networks are considered after all self-loops are discarded.
>What does "the same topology” mean? What about multiple edges?
“The same topology” means the induced subgraph has the same topology and the corresponding events happen in the same order over time (regardless of the absolute time interval). We use a $2l$-digit to represent the identities of sampled nodes in the event order. “Two motif instances are the same” indicates the same $2l$-digit representation. It is able to manage multiple edges. We discussed more details in Appendix B and D.2.
>Why cannot the preferential attachment ... as a temporal walk?
In Temporal Walk Extraction (Alg.1 in [1]), the next event is sampled from historical events that interact with the latest sampled node. For example, suppose we have sampled $w_1\rightarrow w_2\rightarrow w_3$ with $w_1\neq w_3$ in the current node set. To extract a motif for preferential attachment, the next event $(u, v, t)$ should satisfy $u=w_2$. However, since the latest sampled node is $w_3$, $(u,v,t)$ is not a valid historical event that interacts with the latest node. Therefore, the motif of preferential attachment cannot be sampled by the temporal walk extraction algorithm.
[1] Inductive Representation Learning in Temporal Networks via Causal Anonymous Walks
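The sampling constraint described in the answer above can be sketched in a few lines (a toy illustration; the `(u, v, t)` event tuples and the `valid_next_events` helper are assumptions, not the paper's or [1]'s actual Alg. 1):

```python
def valid_next_events(walk, events):
    """Return events that can extend a temporal walk under the stated
    constraint: the next event must involve the LATEST node on the
    walk and occur earlier in time (backward-in-time sampling)."""
    latest_node, latest_t = walk[-1]
    return [(u, v, t) for (u, v, t) in events
            if t < latest_t and latest_node in (u, v)]

# walk w1 -> w2 -> w3 (timestamps decrease going backward in time)
walk = [("w1", 5.0), ("w2", 4.0), ("w3", 3.0)]
events = [("w2", "w4", 2.0),   # preferential attachment to w2: rejected
          ("w3", "w5", 1.0)]   # interacts with latest node w3: accepted
print(valid_next_events(walk, events))  # → [('w3', 'w5', 1.0)]
```

The event attaching to $w_2$ is filtered out because it does not touch the latest node, which is exactly why a preferential-attachment motif cannot arise from this walk.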
>What are “surrounding” motifs in Section 4.2?
We sample $C$ motifs starting from $u$ and from $v$ by Algorithm 1 respectively, which represent the surrounding topology.
>What is $T(t-t_{pq})$? What is $N(p)$?
$t$ is the timestamp of the event to explain, and $t_{pq}$ is the timestamp of the historical event $e_{pq}$ in the temporal motif. $T(\cdot)$ is a time encoder that maps the time interval into a 2d-dimensional vector (Line215). $N(p)$ indicates the set of neighboring nodes of $p$. Here, it is essentially the set of nodes in the motif that have interaction with node $p$.
>Is there an underlying graph with a fixed number of nodes and a fixed topology? Is that a simple graph?
No. We are not assuming an underlying graph with a fixed number of nodes/topology/a simple graph.
>How is an importance score generally obtained?
Given the motif embedding $m_I$ learned by Temporal Motif Encoder (Eq.3), we adopt an MLP for mapping $m_I$ to an importance score $p_I\in[0,1]$ as mentioned in Line 230-231, which is later used to model the distribution of the corresponding motif.
>What does “self-connectedness” mean?
“Self-connectedness” means each motif sampled by TempME is self-connected. It is guaranteed by the sampling algorithm. The next event during motif sampling is from a set of historical events that interact with at least one node in the current node set. Thereby, the self-connectedness of each motif potentially ensures a high “cohesive” level in the generated explanation.
>Discussion from the supplementary material should be in the main text.
We will adopt your suggestions to move part of the discussions to the main text in our revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you. I am satisfied with your explanations. | Summary: TempME is an inductive explainer for temporal graph neural networks (TGNNs) over link prediction tasks. It explains TGNNs using temporal motifs, guaranteeing temporally proximate and spatially adjacent explanations, hence more human interpretable. TempME’s pipeline can be broken down into three parts: sampling, embedding, and explaining. Given a link prediction instance, candidate motifs are sampled starting at either end of the edge. The motifs go through a series of steps to generate rich motif embeddings that encode the event's spatial and temporal roles. An MLP learns a Bernoulli distribution over the embeddings. The distribution is used to mask out the best explanation motifs from the candidates. The MLP is trained using the information bottleneck principle to keep the explanations succinct. The mutual information between the predicted label and the explanation is maximized, while the mutual information between the explanation motifs and the set of candidate motifs is minimized. TempME can also boost a model’s performance. The authors show improved results across datasets by concatenating the aggregation of the motif embeddings around a node of interest prior to a model’s final MLP layer.
Strengths: 1. Unlike prior work that measures the impact of singular events, TempME measures the combined effect of events on the black box prediction.
2. Cohesive explanations are more human-interpretable than non-cohesive ones.
3. It has significantly less computational cost compared to previous TGNN explainers.
4. It can generalize to unseen nodes, which is highly desirable.
5. The identified motifs can be utilized during training to boost model performance.
Weaknesses: 1. The authors have not experimented with synthetic datasets even though they are available. In graphs, the ground truth explanation is often unknown. Hence, synthetic graphs are vital to ascertain that the explanations comply with the ground truth. Please refer to the following for the synthetic datasets and their case study: *Xia, Wenwen, et al. "Explaining temporal graph models through an explorer-navigator framework." The Eleventh International Conference on Learning Representations. 2022.*
2. The authors have cited the following paper as prior work but have not used it as a baseline. *Wenchong He, Minh N Vu, Zhe Jiang, and My T Thai. An explainer for temporal graph neural networks. In GLOBECOM 2022-2022 IEEE Global Communications Conference, pages 462 6384–6389. IEEE, 2022.* Please either compare or provide a justification for not including as a baseline.
3. The authors have compared with GNNExplainer and PGExplainer for comparison with static graph explainers. However, these are no longer state-of-the-art (SOTA) explainers for static graphs. Table 1 and Fig 3 show that GNNExaplainer and PGExplainer are competitive. Using SOTA static graph explainers might have yielded even better results for static explainers and led to different insights. A comparison with the following will be more fruitful:
* *Tan, Juntao, et al. "Learning and evaluating graph neural network explanations based on counterfactual and factual reasoning." Proceedings of the ACM Web Conference 2022. 2022.*
* *Yaochen Xie, Sumeet Katariya, Xianfeng Tang, Edward Huang, Nikhil Rao, Karthik Subbian and Shuiwang Ji. Task-agnostic graph explanations. NeurIPS, 2022.*
4. Please provide the hardware details of the experimental setup.
5. Please provide TempME’s training time as well. It is good to have fast inference, and at the end of the day, inference is what matters, but a user should also have an idea of how long it takes to train.
6. The reference for GINE is correct; however, it does not explicitly use the term GINE in it, which may make it difficult for future readers to refer to. Please add a citation that uses the term GINE explicitly.
7. Pip throws an error when using the author’s requirements.txt to setup the experimental environment.
8. Some typos and grammar mistakes:
* 158, 622: denotes -> denoted
* 190: in this work (unnecessary)
* 278: In meanwhile -> Meanwhile
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Overall, I like the paper. I am open to increasing the score if the authors address the weaknesses listed above and address the questions below.
* Since we are trying to explain the black box and not the data, what if the black box is prioritizing the distant edges? In that case, will TempME force the explanation to be cohesive?
* The authors state the use of a “generative model”. However, it is not found anywhere. The term “generative” brings VAEs and probabilistic graph models to mind, a model that can generate new graph instances that follow a specific distribution or capture the underlying patterns and characteristics of the observed graph data. There is no graph generation involved. Sampling is different from generating. What exactly do the authors mean by a “generative model”? Is it just the ability to generalize to unseen nodes? In that case, the authors should change the terminology as it is misleading.
* In line 194, are C motifs sampled from both u and v, totalling 2C?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes, the authors have addressed the limitations in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback, which significantly improves the quality of this work. We also address below the potential concerns.
>Lack of Experiments on synthetic datasets
Thanks for your constructive comments. The synthetic dataset is a point process where the arrival of an event can affect the arrival rates of other events. However, there are still no ground-truth explanations from the motif perspective in the synthetic dataset. The relevance and applicability of the point process to real-world temporal graphs have been less tested by the community so far. We believe our extensive experiments on real-world datasets provide a more accurate representation of the challenges that our approach aims to address. We acknowledge the potential value of experiments on synthetic datasets. If time allows, we are more than happy to include results on synthetic datasets in our revised version.
>Why not include Reference [21] as a baseline?
Previous work [21] is based on a sequence of static snapshots of a temporal graph, which is a very coarse approximation to real-world temporal graphs with continuous time-stamped events and may lose information by looking only at some snapshots of the graph over time. We did not include [21] in our baselines since it might be problematic to compare two models under different problem settings.
>Static graph explainers are no longer state-of-the-art explainers for static graphs. What about more recent explainers?
Thanks for your constructive suggestions. We agree using more SOTA static graph explainers yields stronger experimental results. [1] proposes $CF^2$ that utilizes counterfactual and factual reasoning to generate factual explanations, while guaranteeing the counterfactual property of the remaining subgraphs.
We refer to the official codes of $CF^2$ and test the explanation performance on Wikipedia and UCI. In our implementation, we generalize the original loss function to explain the link prediction task. The results are reported as follows. We notice that $CF^2$ achieves comparable performance with GNNExplainer, since the intrinsic training procedure of $CF^2$ is the same as GNNExplainer's. We consider that the additional constraint on the counterfactual property may not hold for temporal link prediction tasks, due to complex time dependencies between events (e.g., one event may have a factual contribution at timestamp $t_1$ but a counterfactual contribution at timestamp $t_2$).
| | Wikipedia | | UCI | |
|------------|------------|--------|-------|--------|
| | $CF^2$ | TempME | $CF^2$ | TempME |
| TGAT | 84.46 | 85.81 | 73.24 | 76.47 |
| TGN | 93.23 | 95.80 | 94.16 | 96.34 |
| GraphMixer | 89.77 | 90.15 | 60.38 | 87.06 |
Regarding [2], the main focus is explanations for multi-task prediction. The backbone of the proposed explainer shares a very similar spirit with PGExplainer. Thus, [2] cannot be directly compared with our method since we focus on different problems. Meanwhile, we have included PGExplainer as a baseline.
[1]Learning and Evaluating Graph Neural Network Explanations based on Counterfactual and Factual Reasoning
[2]Task-Agnostic Graph Explanations
>Hardware details of the experimental setup
The implementation of our model and training is based on the NVIDIA Tesla V100 32GB GPU with 5,120 CUDA cores on an HPC cluster. The experimental environment is based on Python 3.8.10, PyTorch 1.10.1 and PyTorch Geometric 2.0.4. We will add hardware details in our revised version.
> Training time of TempME
Thanks for bringing this up. We take Wikipedia as an example (which is a relatively large dataset). We set $C=30, l=3$ in the temporal motif extraction step. It takes around 200 seconds for an epoch and $\approx 60$ epochs to converge. The training time depends on the dataset size. We will discuss this accordingly in our revised version.
>Reference of term GINE
Thanks for the careful suggestions. GINE was first used in [1] as a modified version of GIN to incorporate edge features in the aggregation function. However, we didn’t find any literature that officially proposes the terminology of GINE. We would like to cite both [1] and PyG, which seem to first name this GIN variant as GINE.
[1] Strategies for pre-training graph neural networks
> Experimental environment setup error
Our experimental environment is based on Python 3.8.10. We will refine and update our repository upon publication.
> What if the black box is prioritizing the distant edges? Can TempME still force "cohesive"?
We note that TempME extracts explanatory motifs from the computational graph of a given instance. The number of layers in current temporal GNNs is typically 2 or 3, meaning that the black box considers 2 or 3-hop neighbors. In that case, if the black box is prioritizing the distant edges, TempME is still able to extract a motif that involves the distant edge and the edge to explain, which potentially ensures the “cohesive” property of the explanation.
> Clarification to “generative model”
We agree with your understanding that “a generative model captures a specific distribution and characteristics of the observed graph data and obtains the ability to generate new instances”. TempME is exactly a generative model. We actually frame the explanation task as a generative problem, where the explainer is trained to capture the underlying distribution of explanatory motifs. The importance scores learned by TempME are used to model the Bernoulli distributions of certain motifs. TempME is also able to generate new explanatory subgraphs by sampling from the motif distributions.
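A toy sketch of this generative view, in which explanatory motif sets are sampled from independent Bernoulli distributions parameterized by learned importance scores (the motif names, score values, and `sample_explanation` helper are hypothetical, not learned outputs of TempME):

```python
import random

def sample_explanation(importance, seed=0):
    """Sample an explanatory motif set from independent Bernoulli
    distributions parameterized by importance scores p_I in [0, 1]."""
    rng = random.Random(seed)
    return [m for m, p in importance.items() if rng.random() < p]

# hypothetical motif importance scores produced by the explainer's MLP
scores = {"triadic closure": 0.95, "preferential attachment": 0.80,
          "isolated edge": 0.05}
print(sample_explanation(scores))  # → ['triadic closure', 'preferential attachment']
```

Each fresh seed draws a new explanation from the same learned distribution, which is the sense in which the explainer "generates" new instances rather than merely ranking a fixed set.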
> In line 194, are C motifs sampled from both u and v, totalling 2C?
Yes. We sample $C$ motifs for $u$ and $v$ at both ends of the interaction and obtain $2C$ motifs in total.
>Typos/grammar mistakes.
Thanks for pointing them out. We will revise our manuscript accordingly.
---
Rebuttal Comment 1.1:
Title: Thank you for the clarifications.
Comment: Thank you for the clarifications. I raise my rating to weak accept. | Summary: This paper proposes an approach, Temporal Motifs Explainer (TempME), to find key sub-graph structures in temporal graph neural networks (TGNNs) that most influence the prediction, for a better ability of explanation. It employs a generative approach, including motif extraction and sampling steps, to retrieve the structures with explainability. Experiments show that the proposed TempME improves on state-of-the-art explainable GNNs and TGNNs in prediction.
Strengths: 1. Explainable AI in motif exploration is a promising research field that has not been widely studied.
2. This paper is written in good organization. The motivation and the targeted problem are illustrated clearly.
3. The designed procedure of the method is clearly stated as well.
Weaknesses: 1. The novelty seems not sufficient as claimed.
2. More related research should be compared in the section of related work.
3. The experimental evidence is not sufficient to support the proposed approaches to address the claimed issues in prior research.
Please see the detailed comments in the review block - Questions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The novelty seems not sufficient. It is claimed as the first work that utilizes temporal motifs for explanation. However, the compared state-of-the-art [23] also targeted finding sub-graph structures of TGNNs for explainability enhancement. Therefore, the claimed novelty is questionable.
2. More related research should be compared in the section of related work. Following question 1, it is strongly suggested to state the major differences between the targeted research problem and that of the previous work [23]. In addition, as described in Lines 47-53, there are prior studies employing temporal motifs for underlying generative mechanisms, as in the proposed work. However, the differences in contributions are not clearly illustrated in the manuscript. Hence, the research value of this paper appears inadequate.
3. The experimental evidence is not sufficient to support the proposed approaches to address the claimed issues in prior research. As stated in Lines 34-36, the existing research [20, 21] has not thoroughly studied the characteristics of temporal graphs, but the consequent influence on prediction and explanation is not clearly stated. Moreover, this research is not compared in the experiments. That is, there is no experimental evidence to support the claimed improvement from considering temporal information over these works.
4. What is the time complexity of TempME compared with [13, 14, 23]? Are there any trade-offs between the computational costs and the exploration of key motifs under TempME?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: 1. The proposed approach is validated on graphs with sparsity levels between 0 and 0.3, i.e., dense graphs with richer information. What about its effectiveness on graphs with higher sparsity?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the valuable suggestions and positive comments on the motivation and organization of this work. However, there are some misunderstandings about the novelty and experimental setting of this work. We would like to clarify as follows.
>The novelty seems not sufficient.
Our main novelty lies in the first attempt to utilize temporal motifs as building blocks to analyze and explain the functionality of current temporal GNNs, which has never been explored before. Furthermore, we innovatively resort to the information bottleneck principle in the temporal explanation scenario. As mentioned in Lines 42-46, the sub-graph structures extracted by [23] do not provide any motif-level insight and also entail high computational costs (Table 3). We believe the methodological and theoretical novelty of this paper is substantial and solid.
>More related research should be compared in the section of related work.
In this work, we focus on the same research problem as [23], i.e., the explainability of temporal GNNs. Compared with [23], we are the first to utilize temporal motifs in this field. The temporal explanations generated by TempME provide knowledge of the combined effect of event sets as well as motif-level insight.
In Lines 47-53, we introduce how temporal motifs are used by traditional methods to analyze the underlying generative mechanisms in real-world systems. However, they have never been used to explain temporal GNNs. These previous works inspire us to utilize temporal motifs as more plausible and reliable composition units in explanation tasks. We have clearly stated our contributions in Lines 77-81.
>The experimental evidence is not sufficient to support the proposed approaches to address the claimed issues in prior research. Why not compare references [20,21] in the experiments?
Existing work [20] mainly focuses on extracting a query-relevant subgraph for temporal knowledge graph reasoning. Though it explains the reasoning logic in temporal knowledge graphs to some extent, it requires a key “predicate” to infer a subgraph, which is specific to the knowledge graph setting. Previous work [21] is based on a sequence of static snapshots of a temporal graph, which is a coarse approximation of real-world temporal graphs with continuous time-stamped events. [20] and [21] either cannot process complicated dependencies between massive interactions in real-world scenarios or cannot be applied to explain general temporal GNNs under our problem formulation. We did not include [20,21] among our baselines since it would be problematic to compare models under different problem settings. We believe the extensive experiments (in terms of accuracy, sparsity, connectivity, efficiency, etc.) and strong results over state-of-the-art baselines are sufficient to support the superiority of TempME.
>What is the time complexity of TempME compared with [13,14,23]? Are there any tradeoffs between computational cost and the key motifs exploration?
The time complexity of TempME is discussed in Appendix D.3. In brief, our TempME and PGExplainer [14] are more efficient than GNNExplainer [13] and TGNNExplainer [23]. TempME and PGExplainer learn a neural network to predict the importance score of a given edge/motif, which is shared across all edges in the given graph. In contrast, GNNExplainer and TGNNExplainer require retraining/re-searching individually for each given instance. To infer an explanation for a given instance, the time complexity of GNNExplainer is $\mathcal{O}(T|E|)$, where $|E|$ is the number of edges in the computational graph and $T$ is the number of retraining epochs. The time complexity of PGExplainer is $\mathcal{O}(|E|)$. However, these explainers ignore time information and fail to capture complicated interaction dependencies.
TGNNExplainer is an MCTS-based method, where the search space increases exponentially w.r.t tree depth. The time complexity of TGNNExplainer with navigator acceleration is $\mathcal{O}(NDC)$, where $N$ is the number of rollouts, $D$ is the expansion depth of each rollout and $C$ is a constant including inference time of navigator and other operations. The inference time of TempME is mainly determined by the sampling process, which results in a complexity of $\mathcal{O}(Cl)$ where $C$ is the number of motifs and $l$ is the maximum length of the motif. Empirical results of the runtime are given in Table 3.
We also conduct ablation studies on computational cost, motif exploration, and model performance in Figure 4(a) and Figure 6 in Appendix E.3. By setting appropriate $C$ and $l$, TempME achieves the best tradeoff between performance and efficiency.
>Clarification to “The proposed approach is validated on the graphs with sparsity levels between 0 and 0.3. What about the graphs with higher sparsity?”
There seems to be some misunderstanding here. We would like to clarify that the proposed approach is validated on graphs with any level of sparsity. The sparsity level between 0 and 0.3 is used to control the size of the explanatory subgraph w.r.t. the original graph size, since the explanation accuracy is sensitive to the explanation size. We actually conduct experiments on datasets with a wide range of sparsity levels. We also report the dataset statistics in Table 6 in Appendix E.1 for reference.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. Most of the issues are addressed. However, there are some points that should be clarified.
1. Following the response to Q2, it is suggested to clearly state what kinds of analyses or observations in the studies [24-31] verify the important role of temporal motifs. Meanwhile, it is strongly encouraged to highlight that this work is developed based on those previous verification results. Otherwise, it is confusing whether these prior works also utilize temporal motifs to enhance the explainability of TGNNs.
2. For the response to Q3, I understand the focused problems are subtly different. What I look forward to seeing is the improvement after the TGNNs are incorporated with the proposed temporal information. In my personal view, it is essential to compare the explainability of the graphs with and without the motif-level insights, based on either experimental or theoretical evidence. Otherwise, it may be hard to be convinced that the "found motifs" under the proposed approach substantially enhance explainability over the previous approaches that do not incorporate temporal information, as claimed in the related work section.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer M5gq
Comment: We appreciate your time and effort spent! We will definitely adopt your suggestions to further improve paper writing on previous works and intrinsic insights in our revised version. To address your concern and ensure a clearer understanding, we'd like to clarify below.
1. Previous works [24-31] have successfully demonstrated the pivotal role of temporal motifs in unraveling the complexity and dynamics of real-world networks. These traditional methods typically investigate the frequency and statistical distribution of temporal motifs to gain insights into the behavior and evolution of real-world networks. For example, in biological networks, temporal motifs can help pinpoint critical events in biochemical reactions. In financial networks, the occurrence of specific triadic motifs is a strong indicator of an impending financial crisis, offering predictive insights. Inspired by these real-world applications of temporal motifs, we harnessed their potential to uncover the decision logic of temporal GNNs, thereby improving their transparency and explainability.
2. Compared with previous methods w/o temporal motifs, the enhancements achieved through the incorporation of temporal motifs are mainly three-fold.
- **Accuracy and Fidelity Enhancement:** By integrating temporal motifs, we have observed an explainability improvement. First, the accuracy of generated explanations sees a notable increase. This is substantiated by previous research that demonstrates how temporal motifs can effectively model the evolution of dynamic networks. Moreover, the motif-level information bottleneck principle serves as a foundation, ensuring the generated explanations exhibit both maximized fidelity and minimized sparsity. The empirical evidence presented in *Table 1* and *Figure 3* affirms this positive impact on explanation accuracy and fidelity.
- **Human-Intelligible Explanations:** The introduction of temporal motifs imparts a unique property of "cohesiveness" to our explanations. This attribute arises naturally due to the inherent nature of temporal motifs. Through the extraction of interaction-related motifs, the explanations transcend singular events and provide an encompassing view of motif importance. Empirical results on the “cohesive” property are shown in *Table 2*. Visualizations of the explanations w/. motif and w/o motif are provided in our general response (please refer to the *1-page pdf*).
- **Link Prediction Performance Enhancement:** The benefits of incorporating motif-level insights extend beyond explainability improvements. It also positively influences the link prediction performance of TGNNs. This enhancement is attributed to the appropriate motif encoding that effectively captures both the spatial and temporal roles of each event. The empirical results in *Table 6* and *Table 9* substantiate the consistent link prediction enhancement across various message-passing-based TGNNs, such as TGAT, TGN, and GraphMixer.
Thanks again for your suggestions. We hope that the above clarification improves your confidence in our work. Let us know if you have any further questions/concerns. | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewers for their valuable time and effort in reviewing this manuscript. We have extended our experiments, which we detail below. We appreciate your feedback on this, and we agree that additional empirical verification would better support our proposed framework and thus make the paper stronger.
We provide a visualization of real-world explanation examples on Wikipedia. The base model is TGAT with two layers. We demonstrate the explanations generated by Grad-CAM, TGNNExplainer, and the proposed TempME. We can make the following observations.
- Compared with TGNNExplainer, Grad-CAM tends to generate more "cohesive" explanations. One possible reason is that topologically close events tend to have similar gradients. In contrast, the explanation generated by TGNNExplainer contains more isolated events, which reduces the insight that the explanations can bring us.
- Compared with Grad-CAM and TGNNExplainer, the proposed TempME not only generates more "cohesive" explanations but also provides additional knowledge on the combined effect of explanatory events from the unique perspective of temporal motifs. TempME is capable of capturing the importance of each motif to the dynamics of the temporal graph.
According to the suggestions from the reviewer, we add $CF^2$[1] as a baseline, which is a state-of-the-art explainer on static graphs. We report our preliminary results on Wikipedia and UCI in Table 1. Complete results will be given in the final version.
We hope that the experimental support improves your confidence in our work. Again we thank you for your feedback and we continue to look forward to the coming discussion.
[1]Tan, Juntao, et al. "Learning and evaluating graph neural network explanations based on counterfactual and factual reasoning." Proceedings of the ACM Web Conference 2022. 2022.
Pdf: /pdf/31180555213ca4be9e58d42358325092bdad34c2.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The Explainability of TGNN is essential for human understanding of model prediction results. This paper proposed a framework, TempME, to construct meaningful, cohesive explanations by utilizing temporal motifs. The framework consists of temporal motif extraction, motif encoder, information-bottleneck-based importance score generator, and an explanation process. Benefiting from the information bottleneck principle, TempME can balance the explanation accuracy and compression. Experiments on six real-world temporal graph datasets were conducted with three TGNN backbones. The result shows that the selected temporal motifs not only reason the predictions of the backbones but also enhance their link prediction accuracy.
Strengths: -- Connecting the information bottleneck principle with the temporal motif explanation task is considered novel.
-- The experiment results are promising. The TempME achieves up to an 8.21% increase in explanation accuracy across real-world datasets. Besides, up to 22.96% improvement in the performance of TGNNs shows the effectiveness of discovered temporal motifs.
-- The author provides comprehensive theoretical analysis and extensive experiments.
-- The TempME is efficient in terms of the inference time for producing explanations.
-- The paper is well-organized and easy to follow.
Weaknesses: -- The performance of the tempME might be susceptible to temporal motif extractions in the first stage, but fewer discussions on other motif mining algorithms.
-- The proposed method assumed temporal motifs can capture the dynamics of the graph, which might not hold for real-world scenarios. For instance, external events or user preferences can also affect the explanations.
-- The ACC-AUC of TempME in Table 1. outperforms baselines in most cases. However, the performance improvements of the Enron dataset are not significant. The reviewer also finds the results of the Enron dataset in Table 4. show less improvement in the performance of TGNN models with motif embedding. It would be better to investigate the limitation of the proposed method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: As stated in the weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The proposed method assumed temporal motifs can capture the dynamics of the graph, which might not hold for real-world scenarios. For instance, external events or user preferences can also affect the explanations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback and positive comments on the novelty, experimental results, theoretical analysis, and paper writing in this work. We address below the potential concerns.
>The performance of TempME might be susceptible to temporal motif extraction in the first stage. Discussions on other motif mining algorithms.
Thanks for bringing this up. Exact motif counting incurs high memory usage and large computational complexity. Therefore, most other motif extraction algorithms (e.g., MAVisto, Kavosh) mainly aim to reduce the approximation error of motif counting with less memory and CPU time. However, in TempME, the goal of motif extraction in the first stage is to collect a candidate set of expressive temporal motifs, rather than to approximate exact motif counts. Therefore, the performance of TempME is not sensitive to the temporal motif extraction algorithm. We will add more discussion of other motif mining algorithms in our revised version.
>Clarification to “The proposed method assumed temporal motifs can capture the dynamics of the graph, which might not hold for real-world scenarios”
We totally agree that dynamic systems can be influenced by a multitude of external factors or emergent behaviors. However, many works have demonstrated that temporal motifs are essential building blocks of complex dynamic systems in real-world scenarios [1,2], which help us to gain a deeper understanding of the system’s functioning. We believe that this manuscript has a unique value in investigating how temporal motifs facilitate explainability. It is also a necessary first step before one can further extend to a multitude of factors in future works.
[1]Temporal motifs in time-dependent networks
[2] Motifs in Temporal Networks
>The performance improvements of the Enron dataset are not significant compared with other datasets in Table 1 and Table 4.
Thanks for the interesting question. We also noticed that the performance improvement on Enron is not as significant as on other datasets. One possible reason is that temporal interactions are densest in Enron compared with the other datasets in our experiments. We refer to Table 6 in Appendix E.1 for dataset information. Here we define the interaction density as the ratio of #links to #nodes (regardless of time durations). It might be more challenging to analyze temporal graphs with high interaction density using motifs, since there could be complex interactions and mixed effects of multiple motifs. We are more than happy to further analyze and discuss potential limitations in our final version.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarifications. I am satisfied with the explanations for the first two concerns. However, I am not sure whether the limited performance improvements in the Enron dataset can be simply attributed to dense temporal interactions. In Table 4, US Legis also has a higher interaction density compared to Wikipedia, UCI, and Reddit, but the improvement is significant. The statement “It might be more challenging to analyze temporal graphs with high interaction density with motifs…” would be more convincing if more evidence were provided.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer uwpb
Comment: Thanks for keeping up the discussion! Your question has motivated us to further investigate the characteristics of the Enron dataset compared with other datasets used in our experiments.
Enron is an email network where interaction events are emails exchanged among employees. Further statistical analysis showed that there are multiple identical interactions in the Enron dataset. For example, the interaction between Node115 and Node124 at timestamp 533820 recurs four times. On average, each distinct interaction occurs **3.284** times within this dataset. (Refer to the following table for a comparison of the average occurrences of identical interactions across all datasets.)
| UCI | USLegis | Wikipedia | Reddit | Canparl | Enron |
|-------|---------|-----------|--------|---------|-----------|
| 1.001 | 1.007 | 1.000 | 1.000 | 1.000 | **3.284** |
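The multiplicity statistic above can be computed by counting exact duplicates in the (source, destination, timestamp) edge list; a small sketch with toy events (the node IDs and timestamps below are illustrative only):

```python
from collections import Counter

# Toy edge list of (src, dst, timestamp) events. In an email network,
# the same triple can recur when identical emails are sent at once.
events = [
    (115, 124, 533820),
    (115, 124, 533820),
    (115, 124, 533820),
    (115, 124, 533820),
    (7, 9, 1000),
    (7, 11, 1001),
]

counts = Counter(events)
# Average number of occurrences per *distinct* interaction:
# 6 events over 3 distinct triples -> 2.0 for this toy list.
avg_multiplicity = sum(counts.values()) / len(counts)
```

A value close to 1.0 means interactions are essentially unique, as in the other datasets; a value like 3.284 indicates heavy duplication of identical events.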
While this phenomenon might be deemed reasonable within the context of email networks, wherein multiple emails are dispatched to the same recipient at identical timestamps, it is not common in other datasets and many real-world scenarios. For consistency with the existing literature [1,2], we restrict the timestamp of the next sampled event to be strictly earlier than that of the previous event, which is also a necessary condition for underlying causality between interactions [1]. Consequently, many identical interactions in the Enron dataset are not sampled within a temporal motif, thereby potentially degrading the performance improvement of TempME on this specific dataset.
It is crucial to emphasize that **our proposed algorithm adeptly handles scenarios involving multiple interactions at the same timestamp between different node pairs, as well as multiple interactions between the same node pair but at different timestamps.** However, we restate the limitations we articulated earlier: “One limitation of TempME is analyzing temporal graphs characterized by **high interaction density between the same node pairs at the same timestamp**, such as Enron dataset”.
We will add more discussion on limitations. Again, thanks for your attention and discussion.
[1]Temporal motifs in time-dependent networks
[2]Motifs in Temporal Networks | null | null | null | null | null | null |
Decision-Aware Actor-Critic with Function Approximation and Theoretical Guarantees | Accept (poster) | Summary: The authors present a decision aware actor-critic algorithm and then try to analyse the same. Subsequently they also provide some numerical results.
Strengths: The authors try and provide an analysis of an actor-critic algorithm. However, there are a number of questions that are unclear to me that I write in detail below.
Weaknesses: I am writing down the entire list of comments on the paper below. One can call these weaknesses as these points have not been explained well or there are issues in the explanations provided.
1. The authors say in the introduction that the objective of the critic is to minimize the value estimation error across all states and actions. This is incorrect since a critic's job is to estimate the expected TD error for any given policy provided by the actor and the actor's job is to find the policy that minimizes this expected TD error over policies. The authors seem to mention this at multiple places.
2. Weird notation: J_s(\pi) is well defined but what is J_s(\rho)? Note that \rho is the initial state distribution and then you say J(\pi) = E[J_s(\rho)]!
3. Note that d^\pi and \mu^\pi are not valid probability distributions. How do you then sample states or state-action tuples from these?
4. What does it mean to say that since p^\pi(s,a) is a probability, you can write it in the Boltzmann form as you have written? Is it always possible to express any probability distribution as some Boltzmann distribution? Later you introduce a parameterization \theta to this distributional form. Why not directly give a parameterized Boltzmann probability, if that is what you want for the randomized policy, instead of doing this in such a roundabout manner?
5. In the definition of \pi_{t+1} in Section 3.1, obviously \eta plays a major role. What values can it take?
6. For the blue term on page 4, you mention that as c decreases, the critic error decreases. I don't see how. As c decreases, the first term inside D_{\Phi^*}(.,.) decreases, but that term is multiplied by 1/c, which blows up as c decreases.
7. Algorithm 1: How do you compute $\nabla_v L_t (v_k) and \hat{g}_t?
8. Step-sizes and stability of the algorithm: You use \alpha_c and \alpha_a as the step-sizes for the critic and actor respectively. I don't see any conditions written on these step-sizes. The reason is that actor-critic algorithms are meant to track policy iteration. This requires that the critic recursion moves on a faster timescale as compared to the actor recursion. When diminishing step-sizes are used, one requires that \alpha_a go to zero at a rate faster than \alpha_c in order to get this effect of asymptotic convergence. Furthermore, the authors do not mention anything about the stability of their procedure. How does one ensure that this algorithm is stable - a precondition to ensuring convergence?
9. Since you do not assume anything about mixing times of the underlying Markov process under any policy, how do you ensure that Markov noise does not create any problems in the convergence of the scheme? See, for instance, Sajad Khodadadian, Zaiwei Chen, Siva Theja Maguluri, "Finite-Sample Analysis of Off-Policy Natural Actor-Critic Algorithm," ICML 2021.
10. In Proposition 2, what are the terms \tilde{H}_t^\dagger and [\hat{g}_t]_{s,a}?
11. How does one interpret Proposition 3? Not clear at what rate the second term decreases and whether it even does so? In the absence of a proof of decrease and the rate at which it happens, the result is meaningless.
12. In Section 4, you give a monotonic policy improvement result. TRPO also has a similar result. Which one is better - the improvement provided by your algorithm or TRPO?
13. Mistakes: Section 5.1 - you say \nabla_\pi J(\pi) = d^\pi(s) Q^\pi(s,a). What is s and what is (s,a) on the RHS since the LHS does not have any s or (s,a) tuple? Also, in Section 5.2, you say \nabla_\pi(J(\pi)) = d^\pi(s) A^\pi(s,a) p^\pi(a|s)? What are s, a on RHS. There is clearly a mistake since LHS does not depend on these quantities. Also, how do you reconcile these two definitions for the same object?
14. The bandit Propositions 5 and 7 suddenly come in and appear crude, with such assumptions as deterministic rewards, deterministic Q-value updates, etc. It is not clear why these results have been provided.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: I have given my questions in detail above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: I think the analysis of convergence is flawed since there are no assumptions or results regarding (a) Lipschitz continuity of the objectives, (b) step-sizes used in the actor-critic scheme, (c) stability of the two coupled recursions, (d) fast mixing nature of the Markov noise, etc., have been made or shown. Moreover, from my comments mentioned above, the paper lacks on several fronts and will require significant revision and re-review in order to be publishable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Part 2 of the rebuttal (please refer to the global rebuttal for Part 1)**
[4] *...don't see any conditions written on the step sizes...*
Both step-sizes $\alpha_c$ and $\alpha_a$ can be set according to the smoothness of the critic ($L_t(\omega)$) and actor ($\ell_t(\theta)$) objectives. Using such step-sizes is enough for Proposition 3 to hold. Setting these step-sizes is orthogonal to the main message of the paper, and we relegated these details to the Appendix (see Lines 705-707) where we directly make use of the result and associated step sizes in [26]. In practice, we set both step-sizes adaptively using an Armijo line-search which, by definition, guarantees ascent (descent) for the actor (critic) objectives respectively.
[5] *How does one interpret Proposition 3?*
Proposition 3 proves an $O(1/T)$ convergence of the policy to the *neighborhood* of a stationary point of $J$. Such results proving convergence to a neighborhood that depends on the critic error are common in the literature (see [2,23,60,61] in our references). As we explain in Lines 245-255, the second term is the critic error, which in turn, depends on the optimization error and the bias due to the function approximation. When using a large value of $m_c$ and a sufficiently expressive critic parameterization, the critic error in the second term can be driven to zero.
[6] *...Is it always possible to express any probability distribution as some Boltzmann distribution? Later you introduce a parameterization $\theta$ to this distributional form. Why not directly give a parameterized Boltzmann probability...?*
Indeed, any distribution can be represented using a softmax transformation such that $p^\pi(a|s) \propto \exp(z^\pi(s, a))$. As explained in Section 2 (Functional representation vs Policy parameterization), we explicitly distinguish between the functional representation of a policy and its parameterization. Specifically, we can use any function approximation to parameterize either the distribution $p^\pi(a|s)$ (the direct representation of a policy) or its logits $z^\pi(s, a)$ (the softmax representation of a policy). This abstraction is important because it enables us to prove monotonic policy improvement results for *any* actor/critic parameterization (Proposition 2).
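As a sketch of this distinction (not code from the paper): one can parameterize the logits $z^\pi(s, a)$ with any function approximator, and the softmax transformation always recovers a valid distribution, which is why the two representations can be treated separately from the parameterization:

```python
import math

def softmax(logits):
    # Numerically stable softmax: any real-valued logits map to a
    # valid probability distribution (positive entries summing to 1).
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits z(s, a) for three actions in some state s.
logits = [2.0, 0.5, -1.0]
policy = softmax(logits)  # the induced p(a|s)
```

Under the direct representation one would instead parameterize `policy` itself (with a simplex constraint); the softmax representation trades that constraint for unconstrained logits.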
[7] *What values can $\eta$ take in the definition of $\pi_{t+1}$*
Section 3 considers a general functional representation of the policy. For Proposition 1 to hold, $\eta$ needs to be set such that $J + \frac{1}{\eta} \Phi$ is convex (please see the Proposition 1 statement). In Section 5, we instantiate the generic framework and this gives more concrete requirements on $\eta$. For example, for the softmax representation in Proposition 6, $\eta$ needs to be set to $1 - \gamma$.
[8] *For the blue term on page 4, you mention that as c decreases, critic error decreases. I don't see how?*
As mentioned in Lines 194-195, in order to gain some intuition about the effect of $c$, we use a second-order Taylor series expansion (around $c = 0$) which shows that the $D_{\Phi^*}$ term is proportional to $c$. As a simple example, choose $\Phi$ to be the Euclidean mirror map, in which case the $D_{\Phi^*}$ term becomes equal to $\frac{c}{2} || \nabla J(\pi_t) - \hat{g}_t ||^{2}$.
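For concreteness, here is the computation behind that Euclidean example (a sketch, assuming the two arguments of $D_{\Phi^*}$ differ by $c(\nabla J(\pi_t) - \hat{g}_t)$):

```latex
% Euclidean mirror map: \Phi(\cdot) = \tfrac{1}{2}\|\cdot\|^2, so \Phi^* = \Phi
% and D_{\Phi^*}(x, y) = \tfrac{1}{2}\|x - y\|^2. With x - y = c(\nabla J(\pi_t) - \hat{g}_t):
\frac{1}{c}\, D_{\Phi^*}(x, y)
  = \frac{1}{2c}\, \big\| c \big( \nabla J(\pi_t) - \hat{g}_t \big) \big\|^2
  = \frac{c}{2}\, \big\| \nabla J(\pi_t) - \hat{g}_t \big\|^2,
% which is proportional to c, so the term vanishes as c \to 0 despite the 1/c factor.
```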
[9] *Algorithm 1: How do you compute $\nabla_v L_t (v_k)$ and $g_t$?*
Algorithm 1 is a *generic* algorithm and the computation of $\nabla_v L_t (v_k)$ and $g_{t}$ depends on the choice of the functional representation. We instantiate both these quantities for the direct and softmax functional representations in Section 5. For example, for the direct representation (Line 278), $[g_t]_{s, a} = d^{\pi_t}(s) \hat{Q}^{\pi_t}(s, a)$. $L_t (\omega)$ is defined in Proposition 4 and corresponds to the blue term. In practice, we use a sample-based approximation of $L_t(\omega)$ and compute its gradient using automatic differentiation.
[10] *The bandit proposition 5 and also 7 suddenly come in...*
As explained in Lines 104-105 and 283-284, we aim to demonstrate the effectiveness of using the proposed decision-aware critic loss over the squared critic loss typically used in practice. Consequently, we consider bandit examples to demonstrate that even for extremely simple examples (a 2-armed bandit with deterministic rewards, a special case of the multi-armed bandit, and hence an RL problem), the **typical actor-critic algorithm that relies on minimizing the squared loss can fail to converge to the optimal policy, whereas the proposed decision-aware actor-critic algorithm converges to the optimal policy**.
[11] *Weird notation: $J_s(\pi)$ is well defined but what is $J_s(\rho)$?*
This is a typo, and it should be $J(\pi) = E_{s \sim \rho}[J_s(\pi)]$.
[12] *Note that $d^\pi$ and $\mu^\pi$ are not valid probability distributions.*
The definition of $d^\pi$ has a missing normalization factor of $1 - \gamma$, which makes $d^\pi$ a valid probability distribution. With this correction, $\mu^\pi$ is also a valid distribution.
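As a quick numerical check of this correction (a toy sketch with a made-up transition matrix, not from the paper): with the $1 - \gamma$ factor, the discounted state-occupancy measure sums to one.

```python
import numpy as np

gamma = 0.9
S = 3
P = np.array([[0.5, 0.5, 0.0],     # transition matrix under a fixed policy
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
rho = np.array([1.0, 0.0, 0.0])    # initial state distribution

# d^pi = (1 - gamma) * rho^T (I - gamma P)^{-1}: the (1 - gamma) factor
# normalizes the geometric series sum_t gamma^t = 1 / (1 - gamma).
d_pi = (1 - gamma) * rho @ np.linalg.inv(np.eye(S) - gamma * P)
```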
[13] *In Proposition 2, what are the terms $\tilde{H}_t^\dagger$ and $g_t$?*
As explained in Section 3, $g_t \in \mathbb{R}^{SA}$ is any gradient estimator used to approximate $\nabla J(\pi_t)$, and $[g_t]_{s,a}$ is the $(s,a)$ component of this vector. The matrix $\tilde{H}_t$ is defined in Proposition 2, and $\tilde{H}_t^\dagger$ denotes the matrix pseudo-inverse. We will clarify this in the final version of the paper.
[14] *...you say $\nabla_\pi J(\pi) = d^\pi(s) Q^\pi(s,a)$. What is s and what is (s, a) on the RHS...*
$\nabla J(\pi) \in \mathbb{R}^{SA}$ and the statement in Section 5.1 should be fixed to $[\nabla J(\pi)]_{s, a}$ $= d^\pi(s) Q^\pi(s, a)$.
Similarly, in Section 5.2, $[\nabla J(\pi)]_{s, a}$ $= d^{\pi}(s) A^{\pi}(s, a) p^{\pi}(a|s)$ where the (s,a) subscript denotes the component corresponding to state $s$ and action $a$.
---
Rebuttal Comment 1.1:
Title: response to rebuttal to uLAb
Comment: I appreciate the authors' responses. However, many of my comments have been ignored. The most important of these concerns the stability of the procedure.
1. Since the resulting scheme is a stochastic approximation algorithm, proving stability of such a procedure is important and there is no result in the paper which talks about stability. In other words, how do you ensure that the resulting stochastic iterates remain uniformly bounded almost surely on any sample trajectory?
2. Also, the response concerning the choice of step-size is not convincing enough from a theoretical point of view. I say this because in actor-critic algorithms such as those of Konda and Borkar, SIAM J. Control and Optimization (1999) or Konda and Tsitsiklis, SIAM J. Control and Optimization (2003), it is assumed that the critic step-size converges to zero slower than the actor's.
3. Such an argument is missing here as I also don't see how the noise is being treated here? Is it a martingale difference sequence or is it also Markovian? It should be Markovian because you are looking at samples coming from a Markov process. The convergence of the noisy scheme is not clear from the arguments.
These are important concerns that cannot simply be wished away. I am not convinced with the responses to my questions and so I shall retain my rating of 3.
---
Reply to Comment 1.1.1:
Title: Further clarifications
Comment: We thank the reviewer for engaging with the rebuttal. We believe that we have addressed all of the reviewer's comments, but that there is still some misunderstanding.
[1] *...Since the resulting scheme is a stochastic approximation algorithm...*
As we have explained in the rebuttal (see the global response to all reviewers), **Alg. 1 is not a stochastic approximation algorithm similar to [Konda and Borkar, 1999]**. In order to clarify the difference between Alg 1. and the two time-scale setting in [Konda and Borkar, 1999], [59] (in our references), we can think of actor-critic as solving a bi-level optimization problem. Since the actor uses the $Q^\pi, A^\pi$ estimates from the critic in order to compute its loss and update the policy, the outer-level objective corresponds to the actor loss, whereas the inner-level objective is the critic loss. Similar to [23,60, a,b], Alg 1. aims to solve the inner-level optimization problem *using multiple critic updates* (Lines 5-8). On the other hand, the two time-scale algorithm in [59] performs one step of gradient descent (critic update) on the inner-level objective, followed by one gradient ascent step (actor update) on the outer-level objective.
[2] *...there is no result in the paper which talks about stability...*
Even though Alg. 1 involves multiple critic updates, it is not required to *exactly* minimize the critic loss at iteration $t$. Specifically, the advantage of using the proposed joint objective for the actor and critic is that Proposition 2 guarantees monotonic improvement in $J(\pi)$ as long as the critic loss is less than a certain threshold. This result only depends on the magnitude of the critic loss (after $m_c$ updates). The form of the critic update (including the step-size) required to achieve that loss is irrelevant. Importantly, **the actor and critic are coupled via the threshold that the critic loss needs to achieve in order to guarantee policy improvement. Specifically, this threshold depends on the norm of the functional policy gradient (see Line 218)**. Hence, if this threshold on the critic loss is satisfied, Alg. 1 can result in convergence to a stationary point of $J(\pi)$. On the other hand, if the critic loss does not satisfy such a threshold, Proposition 3 still guarantees the convergence of Alg. 1 to a neighborhood of a stationary point, i.e. for the special case of the Euclidean mirror map, we can obtain a policy $\bar{\pi}_T$ such that $||\nabla J(\bar{\pi}_T)||^2 \leq O(\frac{1}{T} + \epsilon_{\text{critic}})$ where $\epsilon_{\text{critic}}$ is the critic error.
On the other hand, in order to prove guarantees, the analysis in [59] requires reasoning about both the form and step-size of the critic update. **Importantly, in [59], the actor and critic are coupled through the step-size. Specifically, the critic step-size converges to zero at a slower rate than the actor's, effectively enabling more critic updates.**
Unlike conventional actor-critic analyses including [Konda and Borkar, 1999], [59], our theoretical results hold for non-linear function approximation and can handle off-policy updates. **Hence, our theoretical guarantees are much stronger than the stability results that the reviewer is alluding to.**
[3] *Also, the response concerning the choice of step-size is not convincing enough from a theoretical point of view... I also don't see how the noise is being treated here...*
Again, we refer to the global response to all reviewers. We reiterate this: compared to the two time-scale updates in [59], the advantages of our approach are that (i) we *do not need* to explicitly reason about the relative step-sizes for the actor/critic updates. This makes the resulting algorithm more stable, while retaining the theoretical guarantees in Proposition 3. (ii) Since our analysis abstracts out how the critic loss is minimized (similar to [23,60]), we do not need to make assumptions about the noise.
We hope that our response has clarified the reviewer's misunderstandings and better placed our algorithm in the context of existing analyses of actor-critic. We will explicitly include these comparisons and explanations in the final version of the paper.
[a] Kumar et al, On the sample complexity of actor-critic method for reinforcement learning with function approximation, 2019.
[b] Qiu et al, On the finite-time convergence of actor-critic algorithm, 2019. | Summary: The paper addresses the issue of objective mismatch in Actor-Critic methods by designing a joint objective that enables training the actor and critic in a decision-aware manner. The proposed algorithm ensures monotonic policy improvement, irrespective of the chosen policy and critic parameterization.
Strengths: The paper presents an algorithm that guarantees monotonic policy improvement and has solid theoretical foundations.
Weaknesses: The experimentation in the paper is somewhat limited, which may raise concerns about the algorithm's performance in more complex environments.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In complex real-world environments, $\nabla_\pi J(\pi)$ can only be estimated, e.g. using MC sampling. And for large MDPs, function approximation error in $Q$ is unavoidable. The article provides experiments on two simple environments, Cliff World and Frozen Lake, using a linear/tabular actor and a linear critic. Thus, my questions are:
(1) Can the algorithm be applied to more challenging environments, such as the standard benchmark tasks in MuJoCo? If so, how well does it perform in those environments? If there are limitations or challenges in applying the algorithm to more difficult tasks, it would be helpful to elaborate on those limitations.
(2) Does the algorithm introduce significant extra time overhead compared to other methods? It would be useful to report the additional time required by the algorithm on the Cliff World and Frozen Lake environments.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Potential Challenges in Complex Environments: The applicability of the proposed algorithm to more complex environments, such as those found in the MuJoCo benchmark tasks, remains uncertain.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback and address their questions below.
[1] *Can the algorithm be applied to more challenging environments, such as the standard benchmark tasks in MuJoCo? If so, how well does it perform in those environments? If there are limitations or challenges in applying the algorithm to more difficult tasks, it would be helpful to elaborate on those limitations.*
We emphasize that the aim of this paper is to lay down the theoretical foundations of decision-aware actor-critic algorithms. Our experiments are designed to isolate and study the effect of the critic loss, without non-convexity or optimization issues acting as confounders. While we intend to evaluate the proposed algorithm on an extensive deep RL benchmark in the future, these experiments are beyond the scope of the current work.
However, we note that the algorithm can be directly used for more challenging environments. In order to do so, we would require more complex actor/critic parameterization in order to better generalize across states/actions. Theoretically, the monotonic policy improvement guarantees in Proposition 2 would still hold.
From a practical perspective, Alg. 1 can be directly used with any actor/critic parameterization. With respect to hyper-parameter tuning, $\eta$ is only dependent on the smoothness of $J$ w.r.t $\pi$ and does not depend on the actor/critic parameterization. On the other hand, $\alpha_{a}$ and $\alpha_{c}$ are set adaptively using an Armijo line search. The line-search procedure only requires the smoothness of the actor/critic objectives and does not rely on these objectives being convex. Finally, since our actor-critic framework does not involve a two time-scale algorithm, the relative scales of $m_a$ and $m_c$ (the number of actor/critic updates) do not influence our results, and the performance improves with increasing $m_a$ and/or $m_c$ (see the discussion in lines 253-255 following Proposition 3).
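For readers unfamiliar with it, a standard backtracking Armijo line search can be sketched as follows (an illustrative implementation, not the paper's code; it only assumes the objective is smooth):

```python
import numpy as np

def armijo_step(f, grad_f, x, alpha0=1.0, c=1e-4, beta=0.5, max_backtracks=50):
    """Backtracking (Armijo) line search for a gradient-descent step on f.

    Halves the candidate step-size alpha until the sufficient-decrease
    condition f(x - alpha * g) <= f(x) - c * alpha * ||g||^2 holds.
    """
    g = grad_f(x)
    alpha = alpha0
    for _ in range(max_backtracks):
        if f(x - alpha * g) <= f(x) - c * alpha * np.dot(g, g):
            break
        alpha *= beta
    return alpha

# Usage on a smooth quadratic (a stand-in for an actor/critic objective).
x0 = np.array([1.0, -2.0])
f = lambda v: np.dot(v, v)
grad_f = lambda v: 2 * v
alpha = armijo_step(f, grad_f, x0)
```

The same routine can set either $\alpha_a$ or $\alpha_c$; no convexity of the objective is needed, only smoothness.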
One limitation of the proposed algorithm is setting the hyper-parameter $c$. While we have a heuristic to set $c$ (see Appendix F.1), its effect on the performance for challenging environments is unclear. However, we do note that such hyper-parameter tuning is a major issue even for the standard algorithms in practice.
[2] *Does the algorithm introduce significant extra time overhead compared to other methods? It would be useful to report the additional time required by the algorithm on the Cliff World and Frozen Lake environments.*
No; compared to the other methods, the proposed algorithm simply minimizes a different loss function with respect to the critic and does not introduce significant extra time overhead.
---
Rebuttal 2:
Comment: I thank the authors for their rebuttal. I did not carefully review your theorems; as such, my perspective on that particular aspect is limited.
However, theory and experimentation are not conflicting endeavors. Therefore, I recommend that the authors include more compelling experiments to enhance the quality of the work. This is not beyond the scope of your work.
I respond quickly because I hope that perhaps during this discussion period, you could attempt some experiments using MuJoCo, as continuous control problems are crucial application scenarios. I'm not asking for results in all scenarios of MuJoCo; providing results in one or two scenarios would suffice.
Furthermore, regarding the time complexity here, I'd like to know the wall time clock data if possible. Because using a different loss function might come with additional costs.
Of course, those mentioned above are just some suggestions on my part. If you're unable to provide them, it's not a significant issue.
---
Rebuttal Comment 2.1:
Title: Experimental evaluation
Comment: [1] *I hope that perhaps during this discussion period, you could attempt some experiments using MuJoCo*
We thank the reviewer for engaging with the rebuttal. We agree that theory and experimentation are not conflicting endeavors, and consequently, we did provide experimental evidence that validates our theoretical results. As briefly alluded to in the rebuttal, there are several reasons why we did not consider Mujoco experiments in the current paper:
1. Mujoco experiments typically require over-parameterized deep neural networks for the critic. Since these models are highly expressive and can interpolate the data [a], it is possible to estimate all state-action values for simpler environments in the Mujoco suite. In this case, the choice of the critic loss becomes irrelevant. This effect can already be seen in our experiments -- In Figure 1, for $d = 80$ (corresponding to using a highly expressive model), the choice of the critic loss does not matter and all methods have similar performance.
2. For more complex environments, the experimental results for Mujoco are significantly affected by the choice of hyper-parameters [b], and secondary implementation-level factors have a major impact on the algorithm performance [c]. Unless we do careful ablations controlling for these factors, these experiments will not effectively test the paper's contribution. If the reviewer can suggest an experimental protocol that could help isolate the effect of the critic loss, we would be grateful and could test it.
3. Finally, our work is in line with similar theoretical papers that provide the necessary framework for developing algorithms with performance guarantees. Aware of the massive amount of engineering and finetuning required to develop state-of-the-art RL methods from these theoretical frameworks, we felt that it would be more honest to restrict our experiments to evaluate the specific advantages offered by our approach, highlighting the issues of standard approaches, without claiming to have developed a full-fledged RL algorithm.
That being said, your point is well taken. We agree that more experiments will indeed enhance the quality of our work. Consequently, we are currently running experiments on Cart Pole, one of the simpler continuous control environments. Specifically, we are considering a linear critic with tile-coded features (following the experimental protocol in [d]). By varying the dimension of these features, we hope to study the effect of the critic loss. Because of time and computational constraints, we are not sure if we will be able to finish this set of experiments (with proper ablations) before the discussion period finishes. We will definitely add these experiments to the final version of the paper.
[a]. Zhang et al, "Understanding deep learning requires rethinking generalization", 2016
[b]. Henderson et al, Deep reinforcement learning that matters, 2018
[c]. Engstrom et al, "Implementation matters in deep policy gradients: a case study on PPO and TRPO", 2020
[d]. Jain et al, "Towards Painless Policy Optimization for Constrained MDPs", 2022
2. *Furthermore, regarding the time complexity here, I'd like to know the wall time clock data if possible. Because using a different loss function might come with additional costs.*
All methods have similar wall clock times. In particular, we report the average (across $1000$ outer iterations of Alg 1) time taken to minimize the critic loss corresponding to each method (TD, Adv-TD, and Ours) for the linear critic parameterization (with $d = 60$). Specifically, we use gradient descent with Armijo line-search (to automatically set the step-size $\alpha_c$) and terminate the algorithm when the gradient norm decreases below $10^{-6}$.
a. On the Cliff World environment, the wall clock times for the Decision-aware, TD, and Adv-TD methods are $0.0550$, $0.0779$, and $0.0427$ respectively.
b. On the Frozen Lake environment, the wall clock times for the Decision-aware, TD, and Adv-TD methods are $0.0020$, $0.0051$, and $0.0017$ respectively. | Summary: The authors develop a generic decision-award AC algorithm where both the actor and the critic take steps iteratively to optimize some “policy improvement lower bound” under the FMAPG framework. In essence, each step takes the gradient estimation error as the critic error, and characterizes the policy improvement in terms of the estimated gradient $\hat g$. This result seems to come from the Taylor expansions of the Bregman divergence. The authors argue that since both the actor and the critic act in a cooperative manner, the critic therefore only focuses on parts of the state-action pairs that have the largest impact on the actor’s performance improvement. The authors provide the conditions for monotonic policy improvement and show that the algorithms converge to some stable points. Examples and experiments show that the proposed algorithm outperforms TD/AdvTD methods on certain tasks.
Strengths: The algorithm is interesting and well-motivated. Although the idea is not new to the RL community, the theoretical results in this work are sound and in general support the authors’ claims. Moreover, the paper is well written and I find most of the statements clearly presented.
Weaknesses: In order to perform GD/SGD, you at least need to roll out trajectories collected under $d^{\pi_t}$. I didn't find evidence supporting the sample efficiency of the proposed algorithm as you have claimed. Questions include:
1. How many samples are used for each GD step in your experiment?
2. How large is the variance of the proposed algorithm compared to TD/AdvTD when the sampling budget is limited, especially when the samples do not have sufficient coverage?
3. How do the relative scales of $m_c$ and $m_a$ influence your results?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. In practice, buffers containing trajectories collected under history policies are commonly used. Is there any evidence showing that your algorithm can handle the distribution shift issue?
2. Can you perform gradient steps for the actor/critic simultaneously? If so, what are the relative step sizes for the inner and the outer loops?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: No concerns here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback and address their questions below.
[1] *How many samples are used for each GD step in your experiment?*
We varied the number of samples -- $\{1000, 5000\}$ for Cliff World and $\{1000, 10000\}$ for Frozen Lake in order to estimate the $Q^\pi, A^\pi$ functions for a linear critic. These $Q^\pi, A^\pi$ estimates were then used to update the parameters of the linear actor. All details about the number of samples, step-sizes, and number of inner-loops are presented in Appendix F.
[2] *How large is the variance of the proposed algorithm compared to TD/AdvTD when the sampling budget is limited, especially when the samples do not have sufficient coverage?*
Our preliminary experiments reveal that the variance in the gradient estimator $\hat{g}_t$ (and hence in the $Q^\pi, A^\pi$ functions) affects the performance of all the compared methods in a similar manner. While it is important to control the variance and evaluate the methods when the samples do not have sufficient coverage in practice, our experiments are designed to study and isolate the effect of the critic loss. The form of the critic loss becomes important when the bias (because of using function approximation with limited capacity) dominates the variance. Hence, we did not evaluate the methods in the small sample regime, but rather used the same number of samples for all methods, and compared their relative performance. We will clearly explain this in the final version of the paper.
[3] *How do the relative scales of $m_a$ and $m_c$ influence your results?*
Note that **we do not update the actor and critic in a two time-scale setting (one environment interaction and update to the critic followed by an actor update)**. Hence, the *relative* scales of $m_a$ and $m_c$ do not influence our results.
The proposed actor-critic algorithm is similar to the protocol in [2,60] in our references. Specifically, the critic interacts with the environment in *batch* and uses these interactions to compute $\nabla \hat{J}(\pi)$ and form the decision-aware critic loss (Line 4 in Alg. 1). The critic is then trained (using $m_c$ inner-loops) to minimize the critic loss and form the $\hat{Q}^\pi, \hat{A}^\pi$ estimates (Lines 5-8 in Alg. 1). These estimates are used to train the actor (using $m_a$ inner-loops) to maximize the surrogate function and update the policy. The performance of Alg. 1 monotonically improves with increasing $m_c$ and $m_a$ -- increasing $m_c$ ensures a smaller critic error while increasing $m_a$ ensures better surrogate optimization for the actor. This effect is captured in theory by Proposition 3 (please see lines 253-255).
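To make the protocol concrete, here is a minimal schematic in Python: batch collection, then $m_c$ critic inner updates, then $m_a$ actor inner updates, with no interleaved two time-scale stepping. All components below are hypothetical stand-ins (toy quadratic objectives), not the paper's implementation.

```python
import numpy as np

def actor_critic_round(collect_batch, critic_loss_grad, surrogate_grad,
                       omega, theta, m_c=10, m_a=10, alpha_c=0.1, alpha_a=0.1):
    """One outer iteration: batch collection, then m_c critic updates,
    then m_a actor updates (no interleaved two time-scale stepping)."""
    batch = collect_batch(theta)            # batch environment interaction
    for _ in range(m_c):                    # inner loop: minimize critic loss
        omega = omega - alpha_c * critic_loss_grad(omega, batch)
    for _ in range(m_a):                    # inner loop: maximize actor surrogate
        theta = theta + alpha_a * surrogate_grad(theta, omega, batch)
    return omega, theta

# Toy stand-ins: the critic fits omega to the batch mean, and the actor then
# climbs a concave surrogate centred at the critic's estimate.
collect = lambda th: np.array([1.0, 2.0, 3.0])
critic_grad = lambda om, b: om - b.mean()   # grad of (om - mean(b))^2 / 2
actor_grad = lambda th, om, b: -(th - om)   # grad of -(th - om)^2 / 2
omega, theta = actor_critic_round(collect, critic_grad, actor_grad, 0.0, 0.0)
```

Increasing `m_c` drives the critic error down and increasing `m_a` improves the surrogate optimization, mirroring the monotonic-improvement discussion above.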
[4] *In practice, buffers containing trajectories collected under history policies are commonly used. Is there any evidence showing that your algorithm can handle the distribution shift issue?*
For the current paper, our experiments are designed to isolate and study the effect of the critic loss, without non-convexity, optimization, and distribution shift issues acting as confounders. Consequently, we did not perform large-scale experiments that require the use of replay buffers.
However, we note that the proposed framework can systematically incorporate the use of a replay buffer. In particular, in our framework, the policy at which the gradient is computed can be any policy, not just the current one. Hence, a buffer containing trajectories from multiple history policies can be used to construct the surrogate $\ell_t(\theta)$ around that mixture of policies, which is perfectly valid. In order to incorporate a replay buffer with our framework, we can also make use of alternative algorithms such as dual-averaging in the policy space. This would enable us to utilize the gradient estimates from history policies and make use of the replay buffer. We aim to pursue this interesting direction in the future.
[5] *Can you perform gradient steps for the actor/critic simultaneously? If so, what are the relative step sizes for the inner and the outer loops?*
As explained above, we do not perform gradient steps for the actor/critic simultaneously in a two time-scale setting. Hence, their relative step-sizes do not influence our results. In practice, we set $\alpha_a$ and $\alpha_c$ using an Armijo line search, and vary $\eta$, the outer step-size used to instantiate the surrogate function for the actor. Please refer to Appendix F for further details.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns. While your explanation of the relative step-sizes helps clarify things, I still have some reservations. I understand your intention to isolate the effect of the critic loss, and the monotonic improvement with increasing $m_c$ and $m_a$ does make sense for enhanced performance. However, it would be greatly beneficial if you could provide additional support for the theoretical analysis in Proposition 3. For example, by showcasing empirical evidence of your algorithm's superior performance compared to its counterparts in scenarios with insufficient critic updates, such as a small $m_c$ or a larger stochastic error, you could offer a more robust validation of your approach. My evaluation of the work remains unchanged.
---
Reply to Comment 1.1.1:
Title: Experimental evaluation
Comment: Thank you for engaging with the rebuttal, and for your suggestion. There are three sources of error for an insufficient critic -- the bias (due to the limited capacity of the critic), the optimization error (due to small $m_c$) and the variance (the stochastic error due to insufficient samples). Our experiments demonstrate that when the bias dominates, using a decision-aware critic loss does indeed result in better performance. For the final version of the paper, we will include ablation studies varying $m_c$ and the number of samples, and hence compare the different methods when the optimization error or variance dominates. | Summary: This research addresses the mismatched objectives in actor-critic (AC) methods used in reinforcement learning (RL). By introducing a joint objective for training the actor and critic in a decision-aware fashion, a generic AC algorithm is developed. The algorithm ensures monotonic policy improvement regardless of the policy and critic parameterization. The proposed approach offers advantages over traditional methods, as demonstrated through rigorous analysis and empirical evaluations.
Strengths: I think this work really pushes the RL community's research efforts further by answering:
> can we design a generic actor critic algorithm with joint objective?
The main contribution of a generic actor-critic algorithm with a joint objective is a really nice idea, worthy of publication at NeurIPS.
Weaknesses: I have only one minor weakness for this work, as follows:
> Using linear representations as "general function approximation" is a bit weak. I presume this is the reason why only simple RL problems have been demonstrated in this work.
I am open to discussions with the authors and reviewers to increase my score. All the best for future decisions!
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: na
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: na
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback and address their questions below.
[1] *Using linear representations as general function approximation is a bit weak. I presume this is the reason why only simple RL problems have been demonstrated in this work*
We emphasize that the main contribution of this work is to develop a theoretically principled framework for jointly training the actor and critic while being able to handle general function approximation. Consequently, our main theoretical results (Propositions 1-3) hold for function approximation schemes beyond linear models. For example, Proposition 2 holds for *any* actor or critic parameterization and demonstrates monotonic policy improvement for general function approximation. This is in contrast with existing work that focuses on tabular or linear parameterization for the actor and/or critic.
From an experimental perspective, we chose to use simple RL problems to demonstrate the importance of being decision-aware. These simple experiments enable us to isolate and study the effect of the critic loss (the standard squared TD loss vs the decision-aware loss instantiated in Prop 4,6), without non-convexity or optimization issues acting as confounders.
---
Rebuttal 2:
Title: Are you satisfied by the answers?
Comment: Dear reviewer,
Would you please indicate whether the authors' response is satisfactory for you? If not, please engage with the authors, so we can get a better assessment of this work.
Thank you,
Area Chair
---
Rebuttal Comment 2.1:
Comment: Hi, I am following up on this!
Thank you,
Area Chair | Rebuttal 1:
Rebuttal: **We respond to the major comments in the review of Rev. uLAb here. We believe that it would be helpful for all the reviewers to go through this response as it highlights the paper's key contributions, addressing possible misunderstandings**
We thank the Rev. uLAb for their feedback. However, we note that **the weaknesses highlighted by the reviewer stem from a misunderstanding of the paper's setting, or consist of notational typos that do not affect the results. In this part, we address their major misunderstanding about the actor-critic setting studied in our paper. Due to constraints on the rebuttal length, we address the notational typos and other questions in Part 2 (the direct rebuttal to Rev. uLAb).**
[1] *... a critic's job is to estimate the expected TD error for any given policy provided by the actor and the actor's job is to find the policy that minimizes this expected TD error over policies...*
The reviewer's understanding is incorrect -- the critic's job is to estimate/learn a policy's value, whereas TD methods are one way to do so. For example, please refer to [Konda et al, Actor-Critic Algorithms], the original actor-critic paper, which clearly states: *The critic uses an approximation architecture and simulation to learn a value function*.
[2] *I think the analysis of convergence is flawed since there are no assumptions or results regarding (a) Lipschitz continuity of the objectives, (b) step-sizes used in the actor-critic scheme, (c) stability of the two coupled recursions, (d) fast mixing nature of the Markov noise, etc., have been made or shown.*
First, note that **Alg. 1 does not update the actor and critic in a two time-scale setting (one environment interaction and update to the critic followed by an actor update)**. The proposed actor-critic algorithm is similar to the protocol in [2,60] in our references. Specifically, the critic interacts with the environment in *batch* and uses these interactions to compute $\nabla \hat{J}(\pi)$ and form the decision-aware critic loss (Line 4 in Alg. 1). The critic is trained (using $m_c$ inner-loops) to minimize the critic loss and form the $\hat{Q}^\pi, \hat{A}^\pi$ estimates (Lines 5-8 in Alg. 1). These estimates are used to train the actor (using $m_a$ inner-loops) to maximize the surrogate function and update the policy. **Since we are not in the two time-scale setting, the relative scales of the actor/critic step-sizes or the number of inner loops do not influence the performance of Alg. 1**. We now compare our results to the two time-scale analyses (for example, in [59] in our references) of actor-critic.
1. Since we do not update or analyze the actor and critic in a two time-scale setting, we do not need to consider coupled recursions, nor do we need to set the two step-sizes at different scales.
2. We do not require Lipschitz continuity of either the actor or critic objectives, but assume that the actor objective is smooth for our theoretical results in Proposition 3. Proposition 2 does not require any such smoothness assumption for either the actor or the critic.
3. **Both Propositions 2 and 3 are independent of how the critic loss is minimized**, i.e., we could use TD or any other policy evaluation method to estimate the value function. Proposition 2 provides necessary and sufficient conditions on the magnitude of the critic error to ensure monotonic policy improvement (and hence convergence to a stationary point). On the other hand, Proposition 3 guarantees convergence to a *neighborhood* of a stationary point, where the neighborhood depends on the critic error. This result holds for *any* critic error and is similar to the results obtained in [2,23,60] (see Lines 259-262 for a discussion). Since these results are agnostic to how the critic error is minimized, we do not need to model the mixing of the Markov chain, and consequently make no assumptions about it.
4. **Compared to the existing actor-critic analyses (for example [59,61]), our theoretical results require fewer assumptions on the function approximation.** For example, the monotonic policy improvement result in Proposition 2 holds for *any* actor or critic parameterization (including complex neural networks). In contrast, the typical analyses of actor-critic that utilize a two time-scale update only work with tabular or linear function approximation for the actor/critic.
5. Since we do not explicitly model how the critic error is minimized, we can only prove convergence to the neighborhood of a stationary point. This is in contrast to the existing two time-scale analyses that jointly analyze the actor and critic, and show convergence to a stationary point (see [59] for example). Hence, we prove weaker results for a more general class of function approximation. We briefly explain this in Lines 255-265 and will add a more thorough comparison in the final version of the paper.
6. Finally, **our analysis supports off-policy updates, i.e., the actor can re-use the value estimates (from the critic) to update the policy multiple times (corresponding to Lines 10-13 in Alg. 1)**. This is possible because we explicitly distinguish between a policy's functional representation and its parameterization. This is in contrast to existing analyses of actor-critic methods that require interacting with the environment and gathering new data after each policy update.
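For illustration only, the single time-scale protocol described above (fit the critic on a batch for $m_c$ inner steps, then update the actor for $m_a$ steps against the frozen critic estimates) can be sketched with toy scalar objectives. Everything here is a placeholder: the quadratic losses, scalar parameters, and learning rate are invented for the sketch and are not the paper's actual objectives.

```python
import numpy as np

def train_actor_critic(batch_targets, m_c=50, m_a=50, lr=0.1):
    """Toy single time-scale actor-critic loop (placeholder objectives)."""
    critic = 0.0  # scalar stand-in for the critic's value estimate
    actor = 0.0   # scalar stand-in for the policy parameter
    # Critic inner loop: minimize a squared critic loss on the fixed batch.
    target = float(np.mean(batch_targets))
    for _ in range(m_c):
        critic -= lr * 2.0 * (critic - target)  # gradient step on (critic - target)^2
    # Actor inner loop: ascend a concave surrogate built from the *frozen*
    # critic estimate, here -(actor - critic)^2 (illustrative only).
    for _ in range(m_a):
        actor -= lr * 2.0 * (actor - critic)
    return critic, actor

critic, actor = train_actor_critic([1.0, 2.0, 3.0])
```

Because the critic loop finishes before the actor loop starts (rather than the two recursions being interleaved), no relative step-size condition between the two updates is needed; the sketch mirrors the structure of the protocol, not its content.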
[3] *You give a monotonic policy improvement result. Which one is better - the improvement provided by your algorithm or TRPO?*
**The monotonic policy improvement result for TRPO only holds in the tabular setting; it handles neither function approximation nor the critic error**. As explained in Line 225, Proposition 2 presents a condition that ensures monotonic policy improvement *regardless of the policy representation and the parameterization of the policy or critic*. Compared to TRPO, our result can thus handle the critic error and *any* function approximation for the actor/critic. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Polyhedron Attention Module: Learning Adaptive-order Interactions | Accept (poster) | Summary: This paper presents a feature interaction learning module called the polyhedral attention module (PAM).
The authors show that for any fully ReLU-activated DNN, any input x is transformed with respect to a polyhedron defined as the intersection of the half-spaces of each layer. Because the input space is divided into distinct polyhedrons, the output of a DNN can be written in terms of these polyhedrons, which the authors interpret as an attention mechanism.
The authors use this insight to define PAM, which incorporates interaction effects by multiplying the distance from x to each polyhedron. This gives a DNN using PAM an interpretation as a piecewise polynomial function: when m activated segments are multiplied, they form an m-degree monomial representing a k-way interaction between the activated segments.
The authors then provide a conceptual framework for interpreting the learned interaction effects from the PAM. The authors validate PAM theoretically and empirically on the Criteo, Avazu, and UK Biobank datasets, with PAM achieving the best overall results compared to other models in binary classification for Criteo and Avazu, and brain age prediction on UK Biobank, beating the next best model’s AUC score by 2.2%.
Strengths: * The authors provide an intuitive generalization of the geometric interpretation of DNN into piecewise polynomial function. The theoretical analysis is sound and validated with empirical results.
* Paper is well organized and self-contained, with the theoretical concepts being clearly discussed with intuitive supporting figures
* The authors provide a general and expressive feature interaction learning model that achieves commendable results across all datasets in classification and regression tasks.
Weaknesses: * Reviewer notices no apparent weaknesses
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: * Reviewer has no suggestions for authors
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately address the limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your positive and encouraging review of our work, and we are open to any additional insights you may have.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. After reading the other reviews and the authors' responses, I remain confident in my assessment. | Summary: The paper proposes a Polyhedron Attention Module (PAM) to create piecewise polynomial models to learn feature interactions in multivariate predictive modeling. Specifically, the input space is split into polyhedrons, which define the different pieces, and on each piece the hyperplanes that define the polyhedron boundary multiply to form the interactive terms, resulting in interactions of adaptive order on each piece. Theoretical analysis and experimental verification are provided to demonstrate the superiority of PAM.
Strengths: 1. The paper is written carefully with mathematical details. The tables and figures are quite clear.
2. The paper provides both theoretical and experimental verifications for the proposed method.
Weaknesses: 1. The paper is quite difficult to follow. I would suggest the authors use more natural language for the analysis of the proposed method.
2. A figure for an overview description of the proposed method is lacking.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. The intuition of PAM is that the authors believe data instances belonging to different pieces may endorse interactions in different ways. Why should this be the case? It would be more convincing if the authors could provide real-world examples to illustrate why we should consider data interactions at a local scale.
2. The common drawback of local methods is that inference can be expensive. For PAM's case, the heavy cost may lie in the step of generating polyhedrons for each data instance. How does tree search help reduce this cost?
3. Why the polyhedrons should overlap? An ablation study between overlapping and non-overlapping polyhedrons will be beneficial.
4. (Minor) The paragraph right after Eq. (5) is unfinished.
5. The improvements on Criteo and Avazu datasets are incremental with around 0.1% AUC increase. Although the authors have compared with the improvements of the second-best method, it would be more convincing if the authors could conduct statistical significance tests for PAM's improvements.
6. Overall, the standard deviations of PAM in the experiments are lower than other methods. Can the authors provide a thorough explanation for this improvement?
7. An ablation study on training & inference time of PAM compared to other methods may be beneficial.
I would happily increase my score if the authors carefully tackled my concerns above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have not included a limitation section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your follow-up questions and comments. Below we address each question and minor comment in a point-by-point fashion.
Weaknesses:
Response: The mathematical derivation develops the approach rigorously and helps demonstrate the solid foundation of PAM, but we agree that it makes our paper harder to comprehend. We will rewrite Sections 2 and 3 (with a notation table in the appendix to link the different symbols) and, importantly, we will use plainer explanations, revise the current Figure 1 for better demonstration, and make it Figure 2. Then, as suggested by the reviewer, we will add an overall figure (new Figure 1) to demonstrate the main idea of PAM. (Please see Fig. S1 in the attached pdf; we welcome more suggestions on revising this overall figure.)
Questions:
Response to 1: We utilize the real-world example in [1] to illustrate why different input regions (pieces or polyhedrons) may endorse different interactions.
[1] Li, Zeyu, et al. "Interpretable click-through rate prediction through hierarchical attention." Proceedings of the 13th International Conference on Web Search and Data Mining. 2020.
A record movie genre = horror, user age = young, time = morning has conflicting factors: the combination of the first two encourages the user to watch the movie, whereas the combination of the latter two discourages it because movie-watching usually happens at night. Therefore, rather than capturing the global interaction among these three features, in which user age impacts people's movie-watching behavior in opposite directions, categorizing individuals into two local groups based on their preference for movie-watching time or genre, and fitting the interaction weights of "movie genre $\times$ user age" and "time $\times$ user age" separately, will achieve better performance.
Response to 2: We employ the oblique tree to search for appropriate polyhedrons in PAM because it could generate polyhedrons which cannot be obtained via simply using independent hyperplanes. As the example shown in Figs. S3a-S3d in the attached pdf, the oblique tree shown in Fig. S3a could divide the input space into three polyhedrons (P1, P2 and P3 in Fig. S3b). As shown in Fig. S3c, this division can not be obtained by two independent hyperplanes. If we split the input space using the hyperplanes H1 and H2, we will get four polyhedrons (see P1, P2, P3 and P4), which is different from those in Fig. S3b. On the contrary, as proved in our paper's Base Case of Appendix B, the oblique tree can identify all possible polyhedrons created by separating hyperplanes but not vice versa. For example, polyhedrons in Fig. S3c can be generated by the oblique tree demonstrated in Fig. S3d.
With $L-1$ hyperplanes (more precisely, truncated hyperplanes with $M$ parameters each), an oblique tree can generate $2L-2$ polyhedrons. With $L$ value functions ($M$ parameters each, $L-1$ for the oblique tree and $1$ for the global value function; see Eq. 4 and Remark 1 in the main text), an oblique tree results in $(2L-1)M$ trainable parameters. If we use the same hyperplanes found by the oblique tree to split the input space, it creates $2^{L-1}$ polyhedrons and requires a total of $(2^{L-1}+L)M$ trainable parameters to train the model. In this sense, the oblique tree does reduce the trainable parameters and computation cost.
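As a quick numeric sanity check (the values of $L$ and $M$ below are arbitrary examples), the two parameter totals quoted above, $(2L-1)M$ for the oblique tree versus $(2^{L-1}+L)M$ for independent hyperplane splits, can be compared directly:

```python
def tree_params(L, M):
    # Oblique tree: (2L - 1) value-function/hyperplane blocks of M parameters each.
    return (2 * L - 1) * M

def independent_hyperplane_params(L, M):
    # Independent splits: 2^(L-1) polyhedrons plus L value functions, M parameters each.
    return (2 ** (L - 1) + L) * M

L, M = 6, 10
print(tree_params(L, M), independent_hyperplane_params(L, M))  # 110 380
```

The gap grows exponentially in $L$, which is the sense in which the tree reduces trainable parameters.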
Response to 3: We have already conducted the ablation study comparing overlapping and non-overlapping polyhedrons in our paper (refer to Fig. 4c bar for "w/o OP"). We observed that overlapping polyhedrons did improve performance. This might be partially due to two reasons. Firstly, a sample can belong to multiple clusters simultaneously in real life, like a red and green apple may be in both the red and green groups. Treating a polyhedron as a cluster of input instances, the overlapping ones lead to a fuzzy clustering framework. There is also a practical issue when a sample can only belong to one polyhedron, which results in one and only one non-zero attention score (as per Eq. 3). Model parameters are trained based on gradients of the loss function, and a zero $a_{\Delta}$ causes zero gradients for the value function $V_{\Delta}$ in PAM (as per Eq. 4 and the chain rule), so each training iteration will update solely the value functions present in a leaf and its ancestor nodes in the oblique tree. Secondly, identifying the exact boundary between two polyhedrons is challenging and prone to errors. With overlapping polyhedrons, the actual boundary may lie in that overlapping buffer zone, making the method more robust.
Response to 4: Thank you for pointing out the unfinished sentence after Eq. 5. That half sentence should be removed.
Response to 5: It is a great suggestion to conduct statistical significance tests. We performed more runs (5 additional) of PAM and baseline methods so we have enough data to conduct unpaired t-tests. This test result indicates that PAM significantly outperforms baseline algorithms on all Criteo, Avazu and UK Biobank datasets ($P<0.05$, respectively, as shown in Table S2 in the attached pdf). We will include this result to Appendix.
Response to 6: We did observe this interesting result. It did perform more stably than many of the baselines. It may be because the same PAM module (the same architecture) is stacked (see Figure 1 in Appendix G), which could reduce the variance of the model’s performance according to existing research [2].
[2] Fahmy, Hesham Ahmed, et al. "An ensemble multi-stream classifier for infant needs detection." Heliyon 9.4 (2023).
Response to 7: Following the reviewer's suggestion, we now added a new comparison of training and inference time. The new results are included in Table S1 in the attached PDF file to compare the inference time of PAM with those of baseline methods. The training time comparison can be found in Table 2 in the original paper.
---
Rebuttal Comment 1.1:
Title: Score Update
Comment: I am satisfied with the authors' careful responses to my concerns and have increased my score accordingly. Good luck! | Summary: The ReLU-activated DNN will split the input space into pieces such as polyhedrons. The author proposes a polyhedron attention module (PAM) to capture the interaction between different pieces of input spaces. And they propose an approximation theorem for PAM. The polyhedrons are generated via an oblique tree, and each tree node means a sub-space of a polyhedron. And for each node, two functions will be learned: the splitting function which decides how to further split the hyperplane, and the value function. The authors also show that such a module needs fewer parameters than the plain DNN.
Strengths: 1: The author provides a detailed and clear explanation on how the proposed works.
2: The general idea is novel. The split and attention step can dynamically capture the adaptive interaction between different data instances.
3: The author also provides necessary justification for the methods.
Weaknesses: 1: Since I am not familiar with the baseline proposed in the paper, in section 6.3, I want to see whether other methods can find the same effects and interactions or not.
2: Table 2 still shows that the PAM has more parameters and more running time compared to other methods. It will be helpful if the author can add a few explanations.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your follow-up questions and comments. Below we address each question and minor comment in a point-by-point fashion.
Weaknesses:
Response to 1: The reviewer asked about the baseline methods in Section 6.3 and whether those methods can find the same effects and interactions. As far as we know, our interpretation framework introduced in Section 4 is the first algorithm to extract the main and interaction effects from DNN models. We do stress that although it was developed for PAM, it can be applied to DNN architectures where the activation functions are piecewise linear (e.g., ReLU, ReLU6, and HardTanh; please also consult the proof in Appendix D).
After carefully checking all baseline methods in Section 6, we found that our interpretation framework could be used to extract interactions from AOANet and DCN. The main effects of precentral gyrus and thalamus identified by PAM were found by DCN and AOANet, respectively (Fig. S2 in the attached pdf), and all three algorithms found the two-way interactions between the left and right frontal pole. In addition to the shared top-5 brain regions identified by these three algorithms, PAM additionally identified the main effect of the lateral occipital cortex and frontal pole and two-way interaction effects related to the insular and subcallosal cortex regions.
We will put these results into section 6.3.
Response to 2: It does make sense to explain why PAM has more parameters and a longer run time in the experiments. Note that our theoretical analysis shows that PAM has higher parameter efficiency for universal approximation than a ReLU-activated DNN. In other words, a ReLU-activated DNN may need more parameters to reach universal approximation. However, this does not prevent overfitting of a larger DNN on a specific task. In our experiments, the standard DNN model (ReLU-activated), if using more parameters following Theorems 3 and 4, produced worse test performance than the one we reported. For a fair comparison, we reported on the architectures of all the other models that gave the best performance. This shows that these models used slightly smaller numbers of parameters, but their best tuned performance was worse than that of PAM (Fig. 4a).
For experiments on Criteo and Avazu, although PAM has slightly more trainable parameters, its run time is comparable with that of the baselines with fewer trainable parameters (i.e., DESTINE and DCN-V2) because all value functions within the red box in Fig.1 in Appendix G have the same input $x$ and can thus be calculated in parallel.
We would also like to point out that we used pre-existing modules within the PyTorch package to implement PAM. These modules have well-optimized CUDA kernels for forward and backward functions, leading to efficient execution of DNN. However, the oblique tree data structure is not included in PyTorch. We believe that a customized CUDA kernel would further enhance the run time of PAM.
We will add these explanations to a supplemental section.
---
Rebuttal Comment 1.1:
Title: Rebuttal read
Comment: Thanks for your response and clear explanation. They addressed my questions. I would like to keep my score. | Summary: The paper introduces a more general nonlinearity, PAM, to better capture the data features' interactions. Theoretical justification and interoperability are performed along with empirical results.
Strengths: The method seems to be inspired by sharp mathematical observations. It has a mathematical interpretation and a sound rationale connecting existing methods.
Weaknesses: The explanation and intuition behind the methods are unclear; for example, the introduction of k-way interaction is put in front of the article but lacks a description of what it means and why it is essential.
It is unclear how the method generalizes to complicated models and what the complexity will be.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: It will be nice to simplify the notations and convey the concepts more plainly. The article can improve by reorganization. More experiments can also help to validate the method.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations have been properly addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your follow-up questions and comments. Below we address each question and minor comment in a point-by-point fashion.
Weakness:
Response: We further explain the rationale of the presentation of our paper here and will revise the paper to explain better with examples. Feature interactions can occur in many different forms. The k-way interaction is defined first (according to existing work) to provide a mathematical basis from which we can derive the quantitative algorithm in the subsequent sections. K-way interactions mean any kind of interaction among k different input features. A deep learning model with a highly nonlinear activation function, for a simple example, $sigmoid(\sum w_i x_i)$ with $i = 1, \cdots, d$, specifies a d-way interaction of any arbitrary order (as $sigmoid$ has derivatives of arbitrary order) and lacks interpretability. Instead, a lower-order interaction among fewer features (lower way) can significantly improve the model explainability. For a simple example, by incorporating the two-way interaction effect between gender (0/1: female and male) and age into the linear regression model $height\sim w_1\cdot gender+w_2\cdot age+w_3\cdot gender\times age+w_0$ (where $w_0,w_1,w_2$ and $w_3$ are trainable parameters), the effect of age on height will differ between females and males ($w_2$ vs. $w_2+w_3$). Our approach seeks to identify such low-way and low-order interactions for better interpretability.
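The gender-by-age example above can be reproduced numerically. The data and ground-truth weights below are synthetic, made up purely to show that the interaction term gives females and males different age effects ($w_2$ vs. $w_2+w_3$); they are not from the paper's experiments.

```python
import numpy as np

# Synthetic data for: height ~ w0 + w1*gender + w2*age + w3*gender*age
rng = np.random.default_rng(0)
gender = rng.integers(0, 2, size=200)      # 0 = female, 1 = male
age = rng.uniform(5, 18, size=200)
w0, w1, w2, w3 = 100.0, 5.0, 4.0, 1.5      # made-up ground-truth weights
height = w0 + w1 * gender + w2 * age + w3 * gender * age

# Fit the linear model with the two-way interaction column gender*age.
X = np.column_stack([np.ones_like(age), gender, age, gender * age])
coef, *_ = np.linalg.lstsq(X, height, rcond=None)
# The age effect is coef[2] for females and coef[2] + coef[3] for males.
```

With noiseless data the least-squares fit recovers the four weights exactly, and the recovered interaction weight `coef[3]` is precisely the gap between the male and female age effects.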
The most straightforward way to define k-way interactions is to form multiplicative terms among the k features (e.g., $x_1, x_2, \cdots x_k$). Our attention module PAM adaptively derives multiplications of linear functions (corresponding to the boundary hyperplanes of a polyhedron) when learning the partition of the input space into polyhedrons. Thus, the linear function multiplication generates multiplicative terms among various features (being adaptive to different input regions/polyhedrons) in generally lower order polynomials.
Response: The following answers the second critique about how to generalize PAM to complicated models and the complexity of PAM.
As mentioned in Section 6 and Appendix G, the proposed PAM can be used as a basic module in a stack to generalize to complicated models. Each module maps its own input (output from early PAM blocks) according to Eq.4. PAM has already been used as the basic module in our experiments to build a complicated model whose architecture is given in Appendix G.
The computation complexity of other deep learning methods has been previously discussed. We follow the principle of Yan et al. 2022 [1] to now discuss the complexity of PAM. As shown in Eq. 4, the dimension of PAM's input is $p$. In our experiment, we consider two kinds of value functions in this work, i.e., $V(x;\theta_n)=W_nx+b_n, W_n\in\mathbb{R}^{p\times 1}, b_n\in\mathbb{R}$ and $V(x;\theta_n)=b_n$, $b_n\in\mathbb{R}^p$. Since a PAM with a D-depth oblique tree has $2^{D-1}$ value functions (1 global value function following Eq. 3 and $2^{D-1}-1$ value functions for polyhedrons following Remark 1), the memory complexity of these two kinds of value functions is $\mathcal{O}(2^{D-1}p)$. In addition to the value function, a D-depth oblique tree has $2^{D-1}-1$ hyperplanes with $\mathcal{O}(2^{D-1}p)$ trainable parameters. Therefore, the total MEMORY complexity of PAM is $\mathcal{O}(2^{D}p)$.
As for the time complexity, PAM needs to 1) calculate the attention scores following Eq. 9, 2) generate the corresponding values via the value functions mentioned above, and 3) output $f_{PAM}$ by multiplying the attention with the values following Eq. 8. The computation complexity is shown as follows:
| Value function | Step 1 |Step 2 |Step 3 |
| ---- | ----------- |----------- |----------- |
| $W_nx+b_n$ |$ \mathcal{O}(2^{D-1}(p+D)) $ |$\mathcal{O}(2^{D-1}p)$ |$\mathcal{O}(2^{D-1})$ |
| $b_n$ | $\mathcal{O}(2^{D-1}(p+D))$ |- |$\mathcal{O}(2^{D-1}p)$ |
Thus, the TIME complexity of PAM is $\mathcal{O}(2^{D-1}(2p+D))$. Note that in our experiment $p\gg D$. We will add this analysis to Remark 1 to illustrate the complexity of our model.
[1] Yan, Bencheng, et al. "APG: Adaptive parameter generation network for click-through rate prediction." Advances in Neural Information Processing Systems 35 (2022): 24740-24752.
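For concreteness, the memory count above can be restated numerically; the depth $D$ and input dimension $p$ below are arbitrary illustrative values, not the configurations used in the experiments.

```python
def pam_memory_params(D, p):
    # 2^(D-1) value functions plus (2^(D-1) - 1) hyperplanes, each O(p) parameters.
    value_functions = (2 ** (D - 1)) * p
    hyperplanes = (2 ** (D - 1) - 1) * p
    return value_functions + hyperplanes

D, p = 4, 100
print(pam_memory_params(D, p))  # 1500, which is below 2**D * p = 1600
```

This matches the stated total memory complexity of $\mathcal{O}(2^{D}p)$.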
Questions:
Response: We will simplify the notations and provide a notation table to link the different symbols. We will demonstrate the concepts with an overall figure as suggested by Reviewer wPpx (please see Fig. S1 in the attached pdf and response to Reviewer wPpx).
More experiments have been performed and added to validate our method (also in response to other reviewers' suggestions):
a) As shown in Table S1 in the attached pdf, the inference time of PAM and other baseline methods is compared.
b) As shown in Table S2 in the attached pdf, the unpaired t-test has been conducted to evaluate the statistical significance of PAM's improvement over baselines (using the performance numbers in Figure 4a and additional runs of the different models). | Rebuttal 1:
Rebuttal: We appreciate all four reviewers for their positive and encouraging summaries of our paper's strengths. All have noted that our paper provides a sound, novel approach with theoretical justification and empirical verification (e.g., from reviewer 61Ki, "sharp mathematical observations", "mathematical interpretation and a sound rationale connecting existing methods"; from reviewer 8xSi, "the idea is novel"; from reviewer wPpx, "provides both theoretical and experimental verification"; from reviewer fykg, "theoretical concepts being clearly discussed with intuitive supporting figures", "achieve commendable results"). Three reviewers suggested more experiments to either clarify baseline methods or further validate the proposed approach. We have performed all advised experiments and explain the results in the following point-by-point responses to all weaknesses raised in the critiques, which help us improve the quality of our paper.
Pdf: /pdf/c829cab2a87395b325cf71b4cedca7e44e2399a4.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Learning World Models with Identifiable Factorization | Accept (poster) | Summary: Learning efficient world models requires architectural priors to learn better representations that can capture different aspects of the environment. However, existing methods like Dreamer do not focus on learning disentangled representations and lack the ability to separate noise from the reward-relevant information in the observations. In this work, a new method is proposed where the latent state is divided into four blocks based on its dependence on the action and the reward function. The paper shows that the blocks of states that affect the reward function can be used to learn policies. Experiments show the efficacy of the proposed algorithm on Robodesk and DMControl tasks with distractions and noise in the form of background or based on a sensor/camera. Moreover, ablation studies show that the model learns to disentangle the factors in the environment. Lastly, the paper also provides a theoretical justification for the proposed architecture.
Strengths: 1. The idea of disentangling the learned representations to different factors based on actions and rewards is interesting and nicely explained in the paper.
2. The experiments are thorough and support the claims made in the paper that the agent is able to differentiate between parts of the state that determines the reward and parts of the state that change based on the action.
3. The paper presents a theoretical derivation of the identifiability of the latent state representations.
Weaknesses: 1. World Models have been extended to scenarios with noisy distractions in [1][2][3]. They have shown improvements over the Dreamer method for such tasks. However, these methods are not used as baselines in the experiments.
2. Some ablations and experiments are required to further understand the scenarios where the proposed method works (More on this in Questions below)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. For learning efficient latent representations, recent works like DreamerPro have used auxiliary losses to extract relevant concepts from the observation [1]. How does the proposed method compare with reconstruction-free model-based methods?
2. For experiments on control tasks, the noise used is the background noise. There can be other types of noises as shown in [2] where distractor objects are moving in the environment. Will the proposed method be able to disentangle such noises too?
3. Were there any experiments conducted on a few Atari games (like MsPacman, PrivateEye, and Montezuma’s Revenge) to observe what the latent embeddings are learning? On the current tasks, the whole agent is visible in the observation; what happens when the agent gets only a partial view of the environment?
4. Related work should include [4] as they also plan to learn latent representations that disentangle different objects. Furthermore, how will the proposed method work in such scenarios where different rewards are associated with different objects?
5. Will the proposed method work in multi-task learning or meta-learning scenarios where the rewards are changing because there is a part of latent space that depends on the reward function which may change with different reward functions?
6. The current method assumes the presence of a reward function. Will this assumption make it difficult to learn unsupervised world models? Here the latent space cannot be conditioned on the intrinsic reward as it changes with training.
### References
[1] Deng, Fei, Ingook Jang, and Sungjin Ahn. "Dreamerpro: Reconstruction-free model-based reinforcement learning with prototypical representations." International Conference on Machine Learning. PMLR, 2022.
[2] Jain, Arnav Kumar, et al. "Learning robust dynamics through variational sparse gating." Advances in Neural Information Processing Systems 35 (2022): 1612-1626.
[3] Nguyen, Tung D., et al. "Temporal predictive coding for model-based planning in latent space." International Conference on Machine Learning. PMLR, 2021.
[4] Kipf, Thomas, Elise Van der Pol, and Max Welling. "Contrastive learning of structured world models." arXiv preprint arXiv:1911.12247 (2019).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper briefly discusses a few limitations between lines 346-350; it needs to elaborate more on that discussion and on why they are challenging.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Dear Reviewer,**
Thank you for your valuable feedback. Below, we address your concerns in a point-by-point manner and have added various experiments, following your suggestions.
**Weaknesses:**
1. Lack of recent world-model works such as [4,5,6] as baselines
*Response:* Thank you for bringing our attention to the related works. In our updated experiments, we have included DreamerPro [4] as a baseline for DMC variants. The results, available in the general response and newly uploaded pdf file, consistently show IFactor outperforming DreamerPro. Variational Sparse Gating (VSG, [5]) introduces a distinct latent dynamics model that deviates from the typical RSSM structure. We have not included [5] in our baseline comparisons because IFactor and VSG actually address different aspects, and we are currently working on combining them for potential performance improvement. Moreover, [6] proposes an alternative approach using temporal predictive coding, where latent states may not be identifiable due to the non-invertibility of the mixing function f. We have not included [6] in our baselines because its implementation isn't yet publicly available.
2. Requirements for experiments to further understand the scenarios where the proposed method works.
*Response:* Thank you for your suggestions. We offer detailed responses to each question below.
**Questions:**
**Q1:** Comparative Analysis with reconstruction-free model-based methods, such as DreamerPro:
**A1:** We've expanded our experiments to incorporate DreamerPro as a baseline for DMC variants. Results are shown at the end of the general response and in Figure 4 of the newly uploaded pdf file. Notably, IFactor consistently surpasses DreamerPro's performance.
**Q2:** Disentanglement of Noises such as distractor objects in the environment:
**A2:** Thank you for your insightful thoughts. We have indeed addressed the concept of distractor objects moving within the environment in Section 5.1.2, where we considered a modified Cartpole environment with a distracting cart that is neither reward-relevant nor action controllable. The identifiability score ($R^2$) is given in Figure 2 of the newly uploaded pdf file. Following your suggestions, we are now working on the environment presented in [5].
**Q3:** Experiments on Atari games and Partial-view Environments:
**A3:** Thank you! Following your comments, we are now testing our approach on Atari games. Considering the achievements of Dreamer (v2/v3) in similar settings and the flexibility of our method to work with different representation types and training techniques, we believe that combining IFactor with Dreamer (v2/v3) holds promise. Our visualizations in Figures 4 and 5 of the main paper, and Figures 1 and 6 in the appendix, offer insights into how latent features are organized.
**Q4:** Inclusion of Related Work [7] and performance of IFactor in scenarios where different rewards are associated with different objects:
**A4:** Thank you for bringing the related work [7] to our attention. We will make sure to include the proper citation in our revised version. Moreover, when dealing with multiple rewards, our proposed framework can be directly extended to identify the state representation or object associated with each reward signal. To illustrate, suppose there are two reward signals r1 and r2 that are associated with object 1 and object 2, respectively. With our framework, we can then recover the representation $s^{r1}$ that is relevant to reward r1 and $s^{r2}$ that is relevant to reward r2.
**Q5:** Functionality in Multi-task or Meta-learning:
**A5:** Yes! In fact, our framework can be easily extended to cover heterogeneous/nonstationary environments, with the change of reward function, observation function, or transition dynamics. Basically, we can introduce a (latent) low dimensional change factor to characterize those changes, a similar strategy as AdaRL. This is the direction we plan to take in our next steps.
*Reference:* *[AdaRL] Huang, Feng, Lu, Magliacane, Zhang. AdaRL: What, Where, and How to Adapt in Transfer Reinforcement Learning. ICLR, 2022.*
**Q6:** Application in Unsupervised World Models:
**A6:** Our framework is versatile when it comes to different supervision signals. Utilizing the reward and action signals in reinforcement learning systems is merely one specific application of its capabilities. In purely unsupervised world models without any supervision signals, we are unable to distinguish different categories of state representations. However, if intrinsic rewards are present, we can learn the state representation relevant to these intrinsic rewards.
**Limitations:**
In-depth discussion: While recent studies suggest that causal variables can be reconstructed from temporal sequences of observations, assuming no instantaneous causal relations, practical challenges arise. Specifically, if our measurement or frame rate lags behind the speed of causal effects, it can inadvertently introduce “instantaneous” effects, compromising prior identifiability conclusions. Our work's foundational limitation rests on the assumption that latent processes lack instantaneous causal relations. This assumption is evident in Figure 1(c) where no edges exist between concurrent latent states. For a deeper dive, we direct readers to two pivotal papers: [8] introduces iCITRIS, a method allowing instantaneous effects in intervened temporal sequences when intervention targets are observable. Meanwhile, [9] articulates that neglecting instantaneous dependence can lead to subpar policy learning in MBRL. We commit to elaborating on these limitations in the camera-ready edition of our paper.
Thank you once again for your rigorous review and constructive feedback.
The reference index can be found at the end of the general response.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. Regarding A6: Intrinsic rewards are non-stationary, will the method adapt to changing rewards? For the same state, the reward will change with time depending on how many times the state has been visited.
Overall, I am convinced by the response and hope the authors will make the changes to the paper. I have updated my score.
---
Reply to Comment 1.1.1:
Title: Thank you very much for carefully reviewing our response
Comment: Thank you very much for carefully reviewing our response and promptly updating us. Your attention is highly appreciated! In regard to the intrinsic reward scenario you proposed, one potential approach could be to include the number of times the state has been visited in the reward function. This may help capture the type of nonstationarity you mentioned.
Thank you! | Summary: - The paper introduces a framework called IFactor for modeling latent state variables in reinforcement learning (RL) systems. The framework categorizes these variables into four distinct types based on their interactions with actions and rewards. The paper further establishes block-wise identifiability of these latent variables, which provides a stable and compact representation and discloses that all reward-relevant factors are significant for policy learning. Overall, the paper contributes a comprehensive framework, IFactor, that provides representations of different aspects of information in RL systems and highlights the importance of considering states that influence the reward during decision-making.
Strengths: - IFactor models different latent state variables based on their interactions with actions and rewards in RL systems. These categories include reward-relevant and controllable parts, reward-relevant but uncontrollable parts, controllable but reward-irrelevant parts, and unrelated noise.
- The paper defines blockwise identifiability as the existence of a one-to-one mapping between a latent variable and its estimated value, such that the estimated value can be recovered from the latent variable and vice versa. This means that each latent variable can be uniquely identified and separated from other variables in the model, allowing for a more stable and compact representation of the environment.
- The paper includes extensive experiments and ablation studies to validate the effectiveness of the proposed framework and to analyze the impact of different components. Meanwhile, the paper includes a thorough evaluation of the proposed framework on several benchmark tasks.
Weaknesses: - Missing related work. The separate modeling of controllable and uncontrollable components has been explored in previous works such as InfoPower [1] and Iso-Dream [2], which are missing from the related work. It would be better if the authors could discuss the differences between the proposed method and this line of work.
- [1] INFOrmation Prioritization through EmPOWERment in Visual Model-Based RL. ICLR 2022.
- [2] Iso-Dream: Isolating and Leveraging Noncontrollable Visual Dynamics in World Models. NeurIPS 2022.
- The error bars of the baselines are not included.
- The performance of TIA on variants of DMC sometimes encounters a sudden drop during training, which is worth a further check for the reason.
- Except for DenoisedMDP, the compared approaches are somewhat "out-of-date". It would be nice if the authors could include stronger baseline methods for model comparison, especially the approaches that show impressive performance on generalization in noisy visual observations.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - I understand that blockwise identifiability refers to the existence of a one-to-one mapping between a latent variable and its estimated value. However, I lack a thorough understanding of how decoupling the four latent components relates to the concept of block-wise identifiability. Could you please explain further? What are the benefits of making these components blockwise identifiable?
- Additionally, if this mapping is not blockwise identifiable, what impact will it have on the decoupling performance and how will it influence the model performance?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Negative societal impact is not discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your valuable feedback. Please see our responses to your questions point-by-point below.
1. **Missing related work on separate modeling**:
Thank you for bringing our attention to the related works, including InfoPower [1] and Iso-Dream [2]. We will make sure to give proper citations and provide comparisons in our appendix for the revised version. A brief discussion about differences is provided below.
Main differences: IFactor learns different categories of state representations according to their relation with action and reward, which is different from InfoPower and IsoDream, and moreover, IFactor emphasizes block-wise identifiability for the four categories of representations while InfoPower and Iso-Dream do not. Specifically, InfoPower learns 3 types of latent variables, including $s^{ar}_t, s^{\bar{a}r}_t$ and $s^{\bar{a}\bar{r}}_t$. IsoDream uses three branches for latent dynamics, distinguishing controllable, noncontrollable, and static parts.
- Other differences with InfoPower:
- **Reconstruction Basis**: InfoPower is reconstruction-free, while IFactor is reconstruction-based.
- **Objective Functions**: InfoPower prioritizes mutual information and empowerment, while IFactor utilizes reconstruction, KL constraints, and mutual information constraints for disentanglement. InfoPower formulates policy using task reward value estimates and the empowerment objective, while IFactor learns policy by maximizing the estimated Q value and dynamics backpropagating.
- Other differences with IsoDream:
- **Objective Functions and Dynamics Modeling**: IsoDream models controllable transitions using inverse dynamics, while IFactor disentangles with multiple mutual information and KL divergence constraints. IsoDream learns policy using a future-state attention mechanism rooted in present controllable and future uncontrollable states. In contrast, IFactor focuses on reward-relevant states, ensuring $s^r_t$ variables are optimal for policy optimization.
2. **Absence of error bars for baselines**:
Thank you for your careful checking. We didn't include error bars for baselines for the following reasons. Initially, we tried to replicate the results of Denoised MDP [3] using their available code. However, the outcomes we obtained were considerably poorer than those reported in their paper (refer to Figure 4 in the appendix). To maintain a fair comparison, we directly incorporated the reported results from Denoised MDP's image plots. The absence of error bars is because we were unable to extract them from these image plots. In light of your feedback, we have made the following changes.
1. We have included error bars for Figure 3 in the newly uploaded pdf file. They will be included in the final version.
2. We have also listed the mean and standard deviation of the return value for all baselines, when policy optimization gets converged, at the end of the general response (these values are directly copied from the reported results). These tables will also be added to the appendix of the revised manuscript.
3. **Performance inconsistencies of TIA**:
To maintain fair comparisons, the results of TIA were directly copied from Denoised MDP. We will replicate and provide detailed explanations of TIA's results in our revised manuscript.
4. **Comparison with contemporary baselines**:
Thank you for the suggestion. Considering the continuous progress in RL, we've added results from DreamerPro, a top model-based RL method that doesn't require reconstruction and can handle noisy visuals effectively, as a baseline. You can find the updated results in Figure 4 of the newly uploaded pdf file and at the end of the general response, where IFactor achieves superior performance.
## **Questions:**
Q1: Block-wise Identifiability & Decoupling:
A1: Basically, block-wise identifiability ensures the theoretical guarantee of the estimations of the four categories in relation to their ground truth values. With identifiability, we can ensure asymptotic correctness. Moreover, it is important to note that decoupling alone does not guarantee identifiability. Specifically, solely relying on decoupling may lead to missing reward-relevant information in the estimation $\hat{s}^r_t$, as well as the presence of redundant information.
Q2: Benefits of block-wise identifiability and consequences of non-blockwise identifiability
A2: Since block-identifiability guarantees the recovery of ground truth variables, it additionally offers the advantages of improved interpretability and facilitating more efficient and effective policy optimization, which are also the downsides of missing identifiability.
- **Interpretability**: It guarantees the recovery of underlying causal latent variables under certain conditions, thereby enhancing the clarity of decision-making processes by distinguishing latent state variables based on controllability and reward relevance.
- **Policy Optimization**: By focusing on $s^r_t$, IFactor streamlines policy learning, keeping only essential information and discarding redundancy. If the latent variables lack block-wise identifiability, $s^r_t$ may have redundant or insufficient information, hindering efficient and effective policy learning.
We deeply appreciate the depth and breadth of your review. Your insights are crucial in improving our manuscript. We will integrate your feedback into our revised version.
The reference index can be found at the end of the general response.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed explanations which have clarified my concerns. I am willing to raise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thorough review and subsequent feedback on our response. We truly value your insights as they significantly contribute to the improvement of our paper. Thank you!
---
Rebuttal 2:
Comment: Thank you for your insightful review of our paper. We've diligently addressed the feedback in our rebuttal and observed updates from two other reviewers. Could you kindly let us know if our response has addressed your concerns? If any issues remain, please don't hesitate to inform us. We'd appreciate it if you could review our rebuttal and update your feedback or score when convenient. Your time and effort are greatly valued. | Summary: The authors propose an alternative way to create world models in an RL system by separating them into blocks dependant on their casual effects on future observations, states and rewards. They theoretically show that under some assumptions it is possible to identify such classes of variables, even if specific instances cannot be identified. They go on to show how their approach can increase robustness and disentanglement in several control tasks.
Strengths: 1. The proposed method is well motivated. I especially liked that instead of focusing on variable level disentanglement the authors have opted for block level disentanglement. This is a good way to relax the assumptions of the model without being completely unconstrained.
2. The theoretical motivation seems good (but see below). It is always a bonus when researchers can provide some theoretical justification for their work.
3. Experimental evidence is convincing. I especially liked the analysis of the disentanglement of structured datasets and the results look extremely promising.
Weaknesses: No serious weakness. I have some comments regarding how section 2 and 3 are written but otherwise it is a solid paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In general my questions revolve around the quality of the plots and some results that I think are missing.
1. I would like to see disentanglement scores for the comparison models in Figure 3. While it is true that these do not possess explicit block-level disentanglement, it could potentially emerge from training. I personally don't believe this is likely, but it is still important that the authors show that it is not the case.
2. Why are there no disentanglement scores for the cart pole dataset? The plot is also very small; it needs to be larger, otherwise it is not easy to read. Figure 5 has the same issue.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: 1. Section 2 could use a slight rewrite. Each of the propositions in lines 98 to 113 starts the same way. It seems like it should be possible to use one heading and enumerate the propositions afterward.
2. I would prefer if the authors gave more intuition in section 3 and left the theorem for the appendix. I find that when reading articles it's much easier to reason about the approaches if a good intuition is given than if I am presented with a technical derivation of a theorem. Speaking of which, it is not clear why property A2 in Theorem 1 is needed. Also, I cannot find the definition of $\hat{s}$ in the text before it is used in Definition 1.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Dear Reviewer,**
Thank you for your detailed feedback on our paper. Your helpful comments have helped us further improve the paper. We've worked hard to deal with all the things you pointed out, and here, we explain the changes we made based on your advice.
**Q1: Disentanglement scores for comparison models in Figure 3**
*Response:* Thanks for your comments. To clarify, we provided averaged disentanglement (or, as we prefer, 'identifiability') scores ($R^2$) in the right two panels of Figure 3, which are derived as the mean of the four diagonal values. This was intended to show how the identifiability scores change during training. Following your feedback, we have now computed and appended the identifiability scores for the optimal model iteration for all comparison models. You can find the results in the newly uploaded pdf file, and they will also be included in the appendix of the final version.
**Q2: Disentanglement scores for the cart pole dataset and legibility issues in Figures**
*Response:* Thanks for your careful reading. We have included the identifiability scores for the cart pole dataset in Figure 2 of the newly uploaded pdf file. Unlike the synthetic dataset, where there are clear categories for latent state variables, real-world situations pose a challenge due to the potential ambiguity in categorizing true variables. In our modified Cartpole environment, we defined the ground-truth $s^{ar}_t$ as the cart's position, $s^{\bar{a}r}_t$ as the pole's angle, $s^{a\bar{r}}_t$ as the light's greenness, and $s^{\bar{a}\bar{r}}_t$ as the distractor Cartpole's state (including cart position and pole angle). The disentanglement scores for individual $s^{ar}_t$ and $s^{\bar{a}r}_t$, as well as the combined $s^r_t$, are shown in Figure 2 of the supplementary file. We can obviously see that the true latent variables can be clearly identified. Additionally, we have enlarged Figures 4 and 5 in the revised version to improve readability.
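For concreteness, such an $R^2$ identifiability score can be obtained by regressing each true latent block on the corresponding estimated block and reporting the coefficient of determination. Below is a minimal linear-regression sketch (function names are illustrative, and the actual procedure may use a different regressor):

```python
import numpy as np

def blockwise_r2(s_true, s_hat):
    # Fit s_true ≈ [s_hat, 1] @ W by least squares and report the
    # coefficient of determination, pooled over the block's dimensions.
    X = np.hstack([s_hat, np.ones((len(s_hat), 1))])
    W, *_ = np.linalg.lstsq(X, s_true, rcond=None)
    resid = s_true - X @ W
    ss_res = (resid ** 2).sum()
    ss_tot = ((s_true - s_true.mean(axis=0)) ** 2).sum()
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
s_true = rng.normal(size=(1000, 2))              # ground-truth latent block
s_hat = s_true @ rng.normal(size=(2, 2)) + 0.3   # invertible linear mix
print(round(blockwise_r2(s_true, s_hat), 3))     # → 1.0 (noise-free linear mix)
```

A score near 1 indicates the estimated block carries the true block's information up to an invertible transformation, while a score near 0 indicates that information is missing.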
**Limitations:**
1. **Structure of Propositions**
*Response:* We really appreciate your careful observation regarding the repetitive structure between lines 98 and 113. In line with your suggestion, we will simplify this section by consolidating it under a single heading and providing numerical listings for the propositions. This modification will be present in the paper's final version.
2. **Intuition in Section 3 and Clarifications on Theorem 1**
Thanks for your great suggestion. Intuitively, property A2 in Theorem 1 requires that the Jacobian varies “enough” so that it cannot be contained in a proper subspace of R. This requirement is necessary to avoid undesirable situations where the problem becomes ill-posed and is essential for identifiability. A special case when this property does not hold is when the function f is linear, as the Jacobian remains constant in such cases. We will give a more detailed explanation in the main text, accompanied by a more intuitive outline of the proof.
We sincerely hope our revisions and clarifications align with your expectations. Your invaluable insights have definitely improved the quality of our research.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. My concerns have been addressed and I look forward to their updated version with the proposed changes.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for carefully reviewing our response. We really appreciate your feedback and will make sure to include the changes into our updated version.
Thank you! | null | null | Rebuttal 1:
Rebuttal: Dear Reviewers,
We sincerely thank the reviewers for their effort and helpful comments regarding our paper. We have carefully revised the manuscript according to your comments. A summary of the primary changes we've made is outlined below:
1. **Identifiability Scores**: We've incorporated Identifiability Scores ($R^2$) for the compared models in the synthetic dataset experiments, as well as Identifiability Scores for IFactor in the modified Cartpole environment. Both can be referenced in the newly uploaded pdf file.
2. **Structural Changes**: Propositions from lines 98 to 113 have been organized under a singular heading, followed by an enumeration for improved clarity.
3. **Theoretical Clarification**: An intuitive elaboration for Assumption 2 within Theorem 1 has been added.
4. **Literature Citations**: Proper attributions to related works, namely InfoPower [1], Iso-Dream [2], DreamerPro [4], VSG [5], TPC [6] and C-SWMs [7] have been integrated. We ensure a thorough comparison with these in the revised manuscript.
5. **Experimental Enhancements**:
- DreamerPro is now incorporated as an additional baseline in the DMC variants experiments. The results can be found in the tables at the end of this response and in Figure 4 of the newly uploaded pdf file.
- The middle panel of Figure 5 now displays error bars for baselines.
- For a comprehensive understanding, three tables elucidating the mean and standard deviation values of the performance of the converged policy in DMC variants experiments have been appended.
6. **Discussion Expansion**: A more in-depth discussion on the limitations between lines 346-350 is now available.
To get a thorough look at these changes, we direct your attention to the newly uploaded pdf file and the specific responses to the reviewers.
We hope our revisions and detailed responses satisfactorily address the concerns raised. Once again, thank you for your generous contribution of time and expertise to the community.
## Evaluation of Policy Performance in DMC Variants
**cheetah**
| | IFactor (Ours) | DreamerPro | Denoised MDP | Dreamer | TIA | DBC | CURL | PI-SAC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Noiseless | **874.0±39.0** | 802.5±75.0 | 770.7±59.9 | 713.7±347.8 | 765.9±29.3 | 181.8±49.7 | 171.2±34.2 | 369.4±27.7 |
| Video Background | **572.5±192.5** | 366.0±96.4 | 430.6±111.0 | 171.0±44.9 | 387.8±297.3 | 247.9±71.3 | 70.3±47.6 | 111.9±31.0 |
| Noisy Sensor | **455.5±87.9** | 134.3±41.1 | 400.3±190.0 | 166.0±41.7 | 226.8±13.2 | 140.6±13.0 | 198.9±7.1 | 150.9±14.1 |
| Camera Jittering | **417.7±22.4** | 150.0±104.4 | 293.9±99.6 | 160.1±32.2 | 202.1±93.2 | 141.3±35.4 | 168.6±21.6 | 155.8±19.6 |
**walker**
| | IFactor (Ours) | DreamerPro | Denoised MDP | Dreamer | TIA | DBC | CURL | PI-SAC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Noiseless | **966.0±4.5** | 941.4±16.4 | 947.4±13.3 | 954.9±6.0 | 955.3±5.2 | 613.9±110.7 | 416.5±295.8 | 202.9±92.0 |
| Video Background | **916.8±51.7** | 909.1±48.5 | 790.2±113.3 | 247.2±134.6 | 685.4±336.6 | 198.6±67.3 | 607.8±99.7 | 200.3±18.2 |
| Noisy Sensor | **700.5±174.3** | 242.1±64.9 | 660.7±120.1 | 269.6±145.0 | 424.7±281.2 | 95.1±53.7 | 338.4±91.7 | 221.9±20.8 |
| Camera Jittering | **523.7±194.2** | 367.8±300.9 | 290.5±103.9 | 105.9±22.1 | 229.5±331.9 | 62.0±17.3 | 447.5±69.5 | 115.6±5.9 |
**Reacher**
| | IFactor (Ours) | DreamerPro | Denoised MDP | Dreamer | TIA | DBC | CURL | PI-SAC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Noiseless | 923.7±37.2 | **924.2±61.2** | 685.7±216.4 | 876.1±57.1 | 587.1±256.3 | 95.2±57.5 | 663.4±220.5 | 166.2±235.4 |
| Video Background | **962.7±9.5** | 555.1±91.5 | 543.9±121.3 | 252.8±127.0 | 123.2±21.3 | 101.6±57.7 | 751.1±188.9 | 76.2±34.6 |
| Noisy Sensor | **839.3±50.9** | 675.1±137.2 | 561.1±182.4 | 201.8±81.6 | 263.7±279.8 | 96.8±38.6 | 606.8±259.7 | 84.7±4.5 |
| Camera Jittering | **735.7±52.6** | 674.5±81.6 | 213.1±105.9 | 108.9±18.9 | 89.3±25.7 | 86.7±50.6 | 631.9±96.0 | 84.3±13.1 |
[1] Bharadhwaj, Homanga, et al. "Information prioritization through empowerment in visual model-based rl.” ICLR 2022.
[2] Pan, Minting, et al. "Iso-dream: Isolating and leveraging noncontrollable visual dynamics in world models." *Advances in Neural Information Processing Systems* 35 (2022): 23178-23191.
[3] Wang, Tongzhou, et al. "Denoised mdps: Learning world models better than the world itself." ICML 2022: 22591-22612
[4] Deng, Fei, Ingook Jang, and Sungjin Ahn. "Dreamerpro: Reconstruction-free model-based reinforcement learning with prototypical representations." International Conference on Machine Learning. PMLR, 2022.
[5] Jain, Arnav Kumar, et al. "Learning robust dynamics through variational sparse gating." Advances in Neural Information Processing Systems 35 (2022): 1612-1626.
[6] Nguyen, Tung D., et al. "Temporal predictive coding for model-based planning in latent space." International Conference on Machine Learning. PMLR, 2021.
[7] Kipf, Thomas, Elise Van der Pol, and Max Welling. "Contrastive learning of structured world models." arXiv preprint arXiv:1911.12247 (2019).
[8] Lippe, Phillip, et al. "Causal representation learning for instantaneous and temporal effects in interactive systems." *The Eleventh International Conference on Learning Representations*. 2022.
[9] Zhu, Zhengmao, et al. "Beware of Instantaneous Dependence in Reinforcement Learning." *arXiv preprint arXiv:2303.05458*(2023).
Pdf: /pdf/9cfad5f9abe191ccc0d3a0e86f3a6273bb737f8d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Hierarchical Randomized Smoothing | Accept (poster) | Summary: The paper proposes a new threat model for the adversarial robustness - intersection of $\ell_0$ and $\ell_2$ ones where the $\ell_0$ is measured as the number of modified rows of a matrix. To certify robustness to this threat models, the authors propose a variant of randomized smoothing - hierarchical smoothing scheme. First, some rows are selected at random and then they are smoothed with Gaussian smoothing. Robustness certificate is provided for this kind of smoothing distribution and the superiority over baselines is demonstrated on some graph-network tasks.
Strengths: * The paper proposes a new threat model that makes sense for modeling adversaries on graphs.
* The proposed certification method on this threat model (actually intersection of threat models) outperforms the baselines (methods designed for the individual threat models).
In general, this would be enough for me to vote for acceptance, however there are some problems as I list below.
Weaknesses: * The technical novelty of the paper is limited; the paper focuses on a new threat model - an intersection of two standard threat models $\ell_0$ and $\ell_2$ - and the provided certification method is randomized smoothing where the smoothing distribution is essentially a product of the distribution used for $\ell_0$ (random mask) and $\ell_2$ (Gaussian noise). The certification is then analogous to (Cohen et al.).
* Please check all the claims to see if they actually say what you want to say. There are two main problems here - you consider a fixed $\hat{X}$ and provide a certificate with respect to this particular point (e.g., in Coro 1 and also in the text), whereas you obviously want to certify robustness to all points within this distance. And second, you assume that the perturbed point is at $\ell_0$ distance exactly $r$, while it should be at distance at most $r$ (e.g., lines 142-144 and from this point on). These things should be very easy to fix, but as it stands the text is lax and wrong in places, and this is the reason for rejection at this point. **If this is fixed, then the paper becomes borderline acceptable for me**.
* The exposition is maybe unnecessarily complicated. The actual mathematical machinery in the paper boils down to decomposing the smoothing distribution as a mixture of two distributions (over $R_1, R_2$ or $R_2, R_3$ respectively in Propo 2), and then the NP lemma for two Gaussians is used for the distributions over $R_2$ (this is the standard NP certification of (Cohen et al.)). It takes quite some effort to even understand "what is done" in the paper before I can focus on "why does it work". I suggest the authors provide pseudocode (e.g., as in (Cohen et al.)).
### minor
* The method has some parameters, but there is no suggestion on how to select them. They are selected randomly here and one can see a huge variance in the performance.
* The datasets are not explained, e.g., their dimensionality and range of values. The discrete datasets involve some new parameters that are unexplained ($r_d, r_a, p_d, p_a$, ...).
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: -
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review!
Please note that we cannot update the paper during the rebuttal period according to the rebuttal policy, and we therefore carefully describe all changes directly here in the rebuttal.
### Concerning the technical contribution (Comment 1)
Please consider that our certificates hold under adversarial perturbations that are bounded by arbitrary $\ell_p$-norms, unlike the result of Cohen et al., 2019 [1] that is limited to the $\ell_2$-norm. So far there are no randomized smoothing certificates specifically designed against our threat model and we experimentally demonstrate that simply using previous works is not enough when considering the robustness-accuracy trade-off. Deriving new certificates against this threat model is technically challenging: One would have to propose a new smoothing distribution and prove robustness certificates for each possible $\ell_p$-norm.
As the result of a rigorous theoretical proof, we can provide robustness certificates that are more modular and in fact orthogonal to all existing ones. With hierarchical smoothing we also obtain stronger guarantees compared to the baselines. We believe that this is a sufficient contribution to the scientific community since it not only impacts the community of machine learning on graphs but also the (certifiable) robustness community in general.
### Concerning theoretical claims (Comment 2)
Thank you for pointing out potential for using a more precise formulation in Section 4. In response to your comment, we carefully went over all theoretical statements to ensure their quality and further improve their clarity.
Specifically, we clarified the statement regarding a fixed $\tilde{\mathbf{X}}$ and a fixed distance $r$ and rewrote Section 4 as follows: We first derive the NP-lower bound for a fixed point in the entire ball, yielding a point-wise certificate. Then we introduce guarantees against the entire threat model by including an additional proposition stating that the minimum of Theorem 1 (line 174) is attained when exactly $r$ rows are perturbed (and not fewer). To see this, note that $\Delta=1-p^{|\mathcal{C}|}$ is strictly monotonically increasing in the number of perturbed rows $|\mathcal{C}|$ for $p\in(0,1)$. We agree that this argument was missing, and we therefore fixed the corresponding parts. Thank you for helping us to further improve the clarity of our theoretical claims.
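As a quick sanity check, the monotonicity claim ($\Delta$ grows strictly with the number of perturbed rows) can be verified numerically; the following is an illustrative sketch only, not code from the paper:

```python
# Check that Delta = 1 - p**c is strictly increasing in the number of
# perturbed rows c, for a selection probability p in (0, 1).
def delta(p, c):
    return 1 - p**c

p = 0.7
deltas = [delta(p, c) for c in range(0, 6)]
assert all(a < b for a, b in zip(deltas, deltas[1:]))  # strictly increasing
```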
Please further note that Corollary 1 is already correct and assumes any perturbed $\tilde{X}$ in the ball (line 207).
### Concerning the exposition (Comment 3)
Thank you for helping us to further improve the exposition of our paper. In response to your comment, we simplified the explanations of the regions in Section 4 and additionally provide a Figure for clarification (see Figure 1 in the rebuttal PDF and also our response to reviewer 7yDS). Following your suggestion, we will also include pseudocode in the camera-ready version to further improve the clarity of our methodology.
### Concerning parameters of the smoothing distribution (Comment 4)
The randomized smoothing framework already introduces parameters of the smoothing distribution as hyperparameters (see e.g. [1]). In general, increasing the probabilities to add noise increases the robustness guarantees but also decreases the accuracy. We introduce an additional parameter allowing to better control the robustness-accuracy trade-off.
As you correctly pointed out, in our exhaustive experiments we randomly explore the entire space of possible parameter combinations. The reason for this is to demonstrate Pareto-optimality, i.e. to find all ranges for which we can offer both better robustness and accuracy. In practice one would have to conduct significantly fewer experiments to find suitable parameters. Please also consider that the task of efficiently finding the optimal parameters is a research direction orthogonal to deriving robustness certificates.
### Concerning further explanations (Comment 5)
Please note that we thoroughly describe the graph datasets including number of nodes, edges and the feature dimensionality in lines 256-261 (Section 6). For experiments on discrete data we carefully introduce $r_d, r_a, p_d, p_a$ in lines 275-284 (Section 6) and we also elaborate on this in Appendix B.
We hope that we could address all your questions to your satisfaction. Please let us know if you have any additional comments or questions.
### References
[1] Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter. Certified Adversarial Robustness via Randomized Smoothing. ICML 2019.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response.
Regarding the generality of p-norms - I read the paper two months ago, so I might not recall things precisely, but from what I remember, the method basically masks out something and then applies standard $\ell_p$ certification to the rest. Thus, I can use a smoothing distribution of my choice (and the certificates are straightforward, as everything is just the NP lemma in one way or another) in order to provide the guarantees I want. If you do something more general than this, I would stress it in the text.
I did not realize that there is no updated pdf, so these changes would be good. With Coro 1 I still have the problem that it is stated as a certificate for one point with some failure probability, so in order to have a certificate for all points uniformly, using this corollary I would need to union bound over all points, which I clearly don't want; but clearly we can lower bound the probability once and that works for all points uniformly (which is exactly what everyone in randomized smoothing does). So I would consider rewriting Coro 1 so that you don't fix the perturbation but instead state that the result holds for every perturbation.
The $r_a, r_b, p_a, p_b$ thing - I checked the pdf again and I could find $r_a, r_b$, but could not find $p_a, p_b$. I am not familiar with the datasets, so it is pretty hard to interpret the figures; maybe if there were a link from the captions to the description/notation, it would be clearer.
Anyway, thanks again for the response. I updated my score, and the things I wrote are suggestions that you need not act on. | Summary: A randomized smoothing based robust certification approach is presented for machine learning under test-time/inference attacks, when only a subset of data is under attack e.g. a subset of nodes in graphical data. Robustness certificates are derived for discrete and continuous domains, and empirically certified and clean accuracies are computed and compared with prior work for certified robustness for graph neural networks (GNNs).
Strengths: - The studied GNN model where only a subset of nodes of the graph may be attacked by the adversary is interesting and of practical significance.
- The proposed robust certification approach could achieve better trade-off of clean and certified accuracy than existing approaches.
Weaknesses: - The overall approach seems like a simple extension to (Cohen et al. 2019), with a suitable adjustment factor $1-\Delta$ in the analysis corresponding to selecting a subset of rows. The theoretical analysis also follows similar arguments and it is not clear if any novel insights are obtained.
- A key tuning parameter for the approach is $p$, it is not clear how to set $p$ in a principled way.
- Missing several closely related prior research works e.g. [1] and [2].
- Results seem specific to bounded $\ell_2$ norm attacks.
[1] Wang, Binghui, Jinyuan Jia, Xiaoyu Cao, and Neil Zhenqiang Gong. "Certified robustness of graph neural networks against adversarial structural perturbation." In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 1645-1653. 2021.
[2] Gao, Zhidong, Rui Hu, and Yanmin Gong. "Certified robustness of graph classification against topology attack with randomized smoothing." In GLOBECOM 2020-2020 IEEE Global Communications Conference, pp. 1-6. IEEE, 2020.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - What is $\alpha$ in line 82? How is its value related to the certificates?
- Why is the noise applied independently to all matrix entries (e.g. line 97)?
- How do the theoretical results compare to Scholten et al. 2022?
- If $r=N$, i.e. all rows are under attack, how do current certificates compare to previously known certificates?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors could investigate further the limitations of the proposed approach and add a discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review!
Please note that we cannot update the paper during the rebuttal period according to the rebuttal policy, and we therefore carefully describe all changes directly here in the rebuttal.
### Concerning the overall approach (Comment 1)
So far there are no randomized smoothing certificates designed against our threat model, and we demonstrate that simply using previous works is not enough when considering the robustness-accuracy trade-off. Deriving new certificates is technically challenging: one would have to propose a new smoothing distribution and prove robustness certificates for each $\ell_p$-norm. The modularity of our approach and the adjustment by $1-\Delta$ are the result of our particular smoothing distribution followed by a rigorous proof, and represent a novel contribution: our framework is highly flexible, since existing certificates can be integrated into it. We believe that our derivations represent an important contribution to the scientific community, since our certificates provide better ways of assessing the robustness of GNNs and machine learning models in general.
### Concerning hyperparameter p (Comment 2)
As you correctly pointed out, $p$ is a key parameter of our method and represents the probability of adding noise to the rows of a matrix. In response to your comment, we now provide additional intuition about $p$ in the paper: Specifically, larger $p$ adds more noise and increases the robustness but also decreases accuracy. By introducing this additional parameter we can better control the robustness-accuracy trade-off. Please consider that using hyperparameters of the smoothing distribution to control this trade-off is standard in randomized smoothing, and finding the optimal parameters efficiently is a research direction orthogonal to deriving robustness certificates.
### Concerning additional related work (Comment 3)
In response to your comment, we revised the related work section and included the work [1] and [2] that you mentioned: The works of [1] and [2] derive robustness certificates against structural perturbations of the graph. While their certificates can be technically integrated into our framework, we experiment with the sparsity-preserving certificates proposed in [3] since it represents the current state-of-the-art in GNN certification.
### Concerning the threat model (Comment 4)
Please note that our certificates hold under arbitrary $\ell_p$-norms (we just instantiate our framework with three different norms later in the experiments). In response to your comment, we rephrased the corresponding parts in the theoretical analysis (Section 4) for clarification.
### Concerning the parameter $\alpha$ (Question 1)
Randomized smoothing certificates are probabilistic and $1-\alpha$ represents the confidence level used when estimating the smoothed classifier with Monte-Carlo samples. Please note that this is an inherent property of probabilistic certification and not a limitation of our work specifically. In response to your comment, we added further clarifications in the background section when introducing the randomized smoothing framework.
### Concerning independent noise (Question 2)
As you correctly pointed out, we derive certificates for a smoothing distribution that adds random noise independently on the matrix elements. This is the most studied class of smoothing distributions and allows us to integrate a whole suite of existing distributions. Although there are first approaches adding non-independent noise, the corresponding certificates are still in their infancy [4]. Please note that non-independent noise is a research direction orthogonal to ours. Integrating such smoothing distributions is an interesting idea for future work.
### Concerning the theoretical comparison (Question 3)
The certificate proposed in [5] is based on ablation smoothing and can be considered as a special case of our framework. In contrast, our paper represents a novel contribution that goes beyond simple ablation: We first select rows but instead of ablating them we add random noise to them. As we show experimentally, our method yields stronger results under our threat model when compared to [5].
### Concerning the special case $r=N$ (Question 4)
For $r=N$ (all rows are under attack) we recover the threat model already studied in the literature (see e.g. [3]). In this special case our certificates are exactly as strong as existing methods (see lines 195-201). Please consider that we are proposing novel certificates specifically designed against the threat model of $r \ll N$ (multiple but not all rows are under attack), which is a more realistic assumption for graphs and already actively exploited in several adversarial attacks against GNNs [6].
### Concerning the discussion (Comment 5)
Please consider that we already discuss the limitations of our approach in Section 5. In response to your review, we further extended the discussion: Specifically, we included the hyperparameter discussion and the non-independent noise discussion (see our response to Comment 2 and Question 2). Thank you for helping us to further improve the paper.
We hope that we could address all your questions to your satisfaction. Please let us know if you have any additional comments or questions.
### References
[3] Aleksandar Bojchevski, Johannes Klicpera, Stephan Günnemann: Efficient Robustness Certificates for Discrete Data. Sparsity-Aware Randomized Smoothing for Graphs, Images and More. ICML 2020.
[4] Peter Súkeník, Aleksei Kuvshinov, Stephan Günnemann. Intriguing Properties of Input-Dependent Randomized Smoothing. ICML 2022.
[5] Yan Scholten, Jan Schuchardt, Simon Geisler, Aleksandar Bojchevski, Stephan Günnemann. Randomized Message-Interception Smoothing: Gray-box Certificates for Graph Neural Networks. NeurIPS 2022.
[6] Jiaqi Ma, Shuangrui Ding, Qiaozhu Mei. Towards More Practical Adversarial Attacks on Graph Neural Networks. NeurIPS 2020.
---
Rebuttal Comment 1.1:
Title: Reply to rebuttal
Comment: Thank you for the detailed response. In particular, thanks for providing the clarifications for my questions.
Overall, there are a lot of edits needed to improve the readability of the theoretical results and make the paper ready for publication in my opinion. Also further related work needs to be discussed (e.g. [4] and [6] in the rebuttal) and compared with. This is particularly important since the work proposes a new attack model for GNNs.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer u4ou
Comment: Thank you for your response!
For the revised version of our manuscript, we would be grateful if you could clarify which further edits are needed based on your original review.
In response to your questions, we suggested in our rebuttal:
- Adding a few clarifications regarding the background of randomized smoothing, and
- Including the related work from our discussion.
Regarding the works [4] and [6], we are happy to include them in the related work section. However, it is important to point out that these are orthogonal to our core research question. Specifically, [4] represents a negative result concerning input-dependent randomized smoothing (which we do not work on), and [6] proposes an adversarial attack. In contrast, we develop novel robustness certificates. | Summary: The paper introduces a new variant of the randomized smoothing algorithm for certified robustness. The paper considers the setting where the input data is split into multiple parts or sites (e.g., a graph) and an adversary can perturb at most $r$ sites at the same time.
To obtain robustness certificates for this setting, the authors introduce hierarchical randomized smoothing.
In standard randomized smoothing, (Gaussian) noise is added to the input, a classification model is invoked on the noisy input, and the most likely output class is taken as the final prediction.
In practice this is done via a Monte Carlo estimate: noise is sampled multiple times and the majority vote is returned.
Hierarchical randomized smoothing mimics this process but splits the sampling of the noise into two steps: i) first a random indicator variable determining the affected sites is sampled, ii) noise is added to those sites as before.
To obtain a guarantee in this setting, the theoretical derivation combines the standard form for randomized smoothing for continuous (Gaussian) with a variant for discrete noise. Ultimately, this approach recovers the same certificate as the standard randomized smoothing algorithm, up to a scaling factor depending on the number of sites that can be perturbed at the same time, thus allowing for improvements.
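The two-step sampling and majority vote described in this summary could be sketched roughly as follows. This is a hedged illustration only, not the authors' implementation; `classifier`, `p`, `sigma`, and `n_samples` are assumed names:

```python
import numpy as np

def hierarchical_smooth_predict(classifier, X, p=0.8, sigma=0.5,
                                n_samples=1000, rng=None):
    """Monte Carlo estimate of a hierarchically smoothed classifier:
    i) sample a random indicator selecting which rows (sites) receive noise,
    ii) add Gaussian noise only to the selected rows,
    then return the majority vote over all noisy samples."""
    rng = np.random.default_rng(rng)
    votes = {}
    for _ in range(n_samples):
        mask = rng.random(X.shape[0]) < p          # step i: row selection
        noisy = X.copy()
        noisy[mask] += rng.normal(0.0, sigma,      # step ii: Gaussian noise
                                  size=(int(mask.sum()), X.shape[1]))
        label = classifier(noisy)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)               # majority vote
```

In a real certification pipeline the vote counts would additionally be turned into a confidence lower bound, as in Cohen et al. (2019).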
Strengths: - Strong and original contribution
- Well written
- Rigorous mathematical derivation
- Good empirical results
Weaknesses: I have no major concerns, other than maybe the applicability of the threat model considered in the paper (see questions).
However, I have some minor comments mostly on the presentation of the paper:
- In the example around line 153, I found the set $\mathcal{R}_0$ confusing as the explanation only comes on the next page.
- Similarly the construction of Propositions 1 and 2 might be easier to follow if it was visualized.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - Does the row selection probability $p$ need to be uniform? What is the impact on the certificate if it is used to encode a prior on the likelihood of change/attack for different parts?
- Can you further comment on the threat model? Not mathematically, but rather on the settings in which the proposed combination of $\ell_p$-constraints over $r$ site changes is applicable.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations and broader impact are adequately discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review!
Please note that we cannot update the paper during the rebuttal period according to the rebuttal policy, and we therefore carefully describe all changes directly here in the rebuttal.
### Concerning a visualization of Propositions 1 and 2 (Comment 1 and 2)
Thank you for your suggestion to further improve the clarity of our claims. In response to your comment, we rewrote Section 4 to make our derivations more accessible. In this context we also included a new Figure to visualize the regions and the propositions (see rebuttal PDF in the global response). We also followed your suggestion and explain the regions and $\mathcal{R}_0$ earlier.
### Concerning row-selection probability $p$ (Question 1)
In our paper we currently select rows independently from each other with a row-selection probability $p$. Although there are first non-hierarchical smoothing distributions adding data-dependent noise, the corresponding certificates are still in their infancy (see e.g. [1]). The challenge of making our hierarchical smoothing data-dependent is that using higher $p$ for some parts of the input means adversaries could simply attack the other parts. Please consider that deriving robustness certificates for data-dependent smoothing distributions represents a research direction orthogonal to ours. Thank you for pointing us to this interesting idea for future work.
### Concerning further applications for our threat model (Question 2)
As you correctly pointed out, we consider the threat model where adversaries can perturb at most $r$ entities of an object. This is a realistic assumption for graphs such as social networks and is already actively exploited in several recent adversarial attacks against graph neural networks (see e.g. [2]). For example, suppose you want to attack advertisements on Facebook: you will probably not be able to control the entire Facebook graph, but you may be able to buy a few hundred accounts.
Since we propose certificates for data where objects can be decomposed into multiple entities, there are numerous applications beyond graphs: for example, in Natural Language Processing, adversaries may be restricted to perturbing at most $r$ documents in a collection, or at most $r$ paragraphs in a document. Further real-world examples include applications where reliability and security are crucial, especially in the medical and financial domains. Since graphs are ubiquitous data structures, the applicability of our certificates is broad, with far-reaching impact.
Beyond their real-life applicability, robustness certificates are also generally useful tools that allow us to assess the robustness of models beyond adversarial perturbations (for example against noisy signals or incomplete data). Robustness certificates can also provide insights to design more robust models in the future.
We hope that we could address all your questions to your satisfaction. Please let us know if you have any additional comments or questions.
### References
[1] Peter Súkeník, Aleksei Kuvshinov, Stephan Günnemann. Intriguing Properties of Input-Dependent Randomized Smoothing. ICML 2022.
[2] Jiaqi Ma, Shuangrui Ding, Qiaozhu Mei. Towards More Practical Adversarial Attacks on Graph Neural Networks. NeurIPS 2020.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: I thank the authors for their rebuttal. I greatly appreciate the new Figure.
With regards to Question 1: My suggestion was not (necessarily) to make the distribution data-dependent, which I view as quite problematic, but rather to have a non-uniform a-priori $p$ for all nodes, depending on some key property of the graph/nodes that the adversary does not control. Especially in a distributed setting (as might be realistic for many of the discussed scenarios), certain nodes may depend more or less on external input and therefore might be exposed to differing levels of attack.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 7yDS
Comment: Thank you for your response!
Regarding Question 1: In general we can use a non-uniform $p$.
Let $p_i$ denote a node-specific (non-uniform) selection probability for node $i$ that the adversary does not control. To get robustness certificates we have to make the worst-case assumption, i.e. the nodes with lowest selection probability will be attacked first by the adversary. For the certificate this means we have to find the largest possible $\Delta$ (compare Theorem 1). Specifically, we have to choose $\Delta = 1-\prod_{i\in\mathcal{I}} p_i$ where $\mathcal{I}$ contains the indices $i$ of the $r$ smallest probabilities $p_i$. Then we can proceed as in our paper to compute certificates for the smoothing distribution with non-uniform $p$.
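The worst-case $\Delta$ described above could be computed as in the following sketch; this is illustrative only, and `worst_case_delta` and `probs` are assumed names, not the authors' code:

```python
import math

def worst_case_delta(probs, r):
    """Largest possible Delta when the adversary perturbs the r rows whose
    node-specific selection probabilities p_i are smallest (the worst case
    for the defender): Delta = 1 - prod of the r smallest p_i."""
    smallest = sorted(probs)[:r]
    return 1 - math.prod(smallest)
```

With uniform probabilities this reduces to the $\Delta = 1-p^r$ of the uniform scheme, e.g. `worst_case_delta([0.7] * 5, 3)` equals `1 - 0.7**3`.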
If we additionally know that certain nodes will be attacked before other nodes, we do not have to consider the first $r$ nodes with smallest $p_i$ but rather the first $r$ nodes with smallest $p_i$ under an attack order. This can be useful for example if one cluster of nodes is attacked before another, then we can increase the selection probability for nodes in the first cluster and obtain stronger certificates.
As you suggested in your response, some nodes might be exposed to different levels of attacks than others. Here, making the lower-level smoothing distribution node-specific might also be beneficial: This would allow us to add less noise for nodes that are less exposed to attacks, potentially leading to even better robustness-accuracy trade-offs.
We will add this discussion to the paper. Please let us know if you have any additional comments or questions. | Summary: This paper proposes hierarchical randomized smoothing, a variant of randomized smoothing that not only randomly perturbs input data at test time but also randomly selects which rows of the input to perturb. This is motivated by a threat model in which an adversary only selects a subset of rows of matrix-valued data to attack. The authors show that their hierarchical smoothing scheme inherits certified robustness by the lower-level smoothing scheme used (i.e., the randomization method on the rows to be perturbed), and also show that the hierarchical scheme generalizes randomized ablation (random masking of elements rather than random additive noise). Experiments are conducted on benchmark node classification datasets that show the increased certification power over prior smoothing methods that do not account for the row-selection limitations of the adversary.
Strengths: 1. The paper is easy to follow, well-written, and appears to have some nice novel ideas for exploiting the structure of adversarial attacks on graph neural networks and related "hierarchical" models in order to get tighter robustness certificates than structure-blind methods.
2. It is interesting that the proposed approach generalizes both conventional additive noise certificates as well as ablation (masking) certificates.
3. The proposed hierarchical approach seems to be quite modular, allowing for new lower-level certification methods to be incorporated as they are developed in the future.
Weaknesses: See below "Questions."
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In my opinion, the overarching structure of the paper could be improved. I'd recommend moving the Discussion section to the end of the paper with the Conclusion, and also moving at least the more "general" parts of the Related Work section to the end of the Introduction.
2. Line 276: "discrete, binary features: $\mathcal{X} = [0,1]$." Do you mean $\mathcal{X} = \\{0,1\\}$?
3. Line 302: Should "(2) evaluating certificates" be "(3)"?
4. The experimental comparisons seem a bit misleading. In particular, you are not making an entirely fair comparison to conventional RS, since those certificates are tailored to threat models where the adversary has access to perturb every entry in your input data. It is therefore no surprise that your certified radii are larger, since you are considering a weaker adversary altogether. How do your certificates perform in the extreme case where the adversary does have access to all entries in the matrix? Do your certificates still outperform conventional RS?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review!
Please note that we cannot update the paper during the rebuttal period according to the rebuttal policy, and we therefore carefully describe all changes directly here in the rebuttal.
### Concerning the paper structure (Question 1)
We followed your suggestion and moved the (improved) related work after the introduction, and the discussion to the end. We found that this way we also better highlight the novelty of our contributions and the contrast to related work. Thank you for helping us to further improve the paper.
### Concerning the typos in line 276 and 302 (Question 2 and 3)
Yes you are right, we fixed the corresponding lines.
### Concerning the experimental comparison (Question 4)
Since our certificates generalize ablation and additive noise certificates into a new common framework, our method performs either better or on-par with the existing methods. To see this, for $r=N$ (all rows are under attack) we recover the threat model already studied in the literature (see e.g. [1,2]). In the special case of choosing $p=1$, our certificate boils down to the standard certification methods and therefore performs on-par with the baselines (see discussion in lines 195-201).
Please consider that the threat model where adversaries can only attack a subset of all rows ($r \ll N$) is a more realistic assumption for graphs such as social networks and already actively exploited in several recent adversarial attacks against graph neural networks (see e.g. [3]). Since we propose the first smoothing distribution specifically tailored against the described threat model, we believe our experimental comparison to existing certificates is fair and scientifically sound.
We hope that we could address all your questions to your satisfaction. Please let us know if you have any additional comments or questions.
### References
[1] Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter. Certified Adversarial Robustness via Randomized Smoothing. ICML 2019.
[2] Aleksandar Bojchevski, Johannes Klicpera, Stephan Günnemann: Efficient Robustness Certificates for Discrete Data. Sparsity-Aware Randomized Smoothing for Graphs, Images and More. ICML 2020.
[3] Jiaqi Ma, Shuangrui Ding, Qiaozhu Mei. Towards More Practical Adversarial Attacks on Graph Neural Networks. NeurIPS 2020.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their responses and updates to the paper. To better motivate your threat model, you should consider citing [3] in the paper, which I did not see in the original submission.
Given the unsurprising comparison to conventional RS (which is designed for a more general threat model), I maintain my original score. Are there no other certification techniques (perhaps even non-smoothing based ones) that directly consider your specific threat model?
---
Reply to Comment 1.1.1:
Title: Response to Reviewer FfTs
Comment: Thank you for your response, we will add the citation to the revised paper.
So far there are no robustness certificates specifically designed against our threat model, which again motivates the need for our certificates and the novelty of our method. Please consider that there are also no non-smoothing certificates against our threat model. We therefore demonstrate Pareto-optimality of our method when compared to the three existing state-of-the-art baselines for robustness certification of graph neural networks. | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their valuable feedback.
Please note that we cannot update the paper during the rebuttal period according to the rebuttal policy, and we therefore carefully describe all revisions directly in the rebuttals. In the rebuttal PDF we further include a Figure as clarification of the regions and propositions as suggested by the reviewers (see PDF attached).
Pdf: /pdf/d86ffe17abcf61a76f863b654f285977c6fa2321.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Certified Minimax Unlearning with Generalization Rates and Deletion Capacity | Accept (poster) | Summary: This paper proposes using hessian-based updates to solve the unlearning problem for strongly-convex-strongly-concave minimax learners. They also provide theoretical analyses of the error of such an unlearning algorithm as well as its sensitivity to deleted data.
Strengths: 1. This paper is well-written and structured. The narrative is very easy to follow, and the analysis is strong.
2. The unlearning update is simple.
3. While in a restricted setting, the analysis is coherent.
4. The motivation is clear and well done, given the connections to privacy.
This paper operates in a restricted setting with some small issues with the presentation. The issues with presentation are fixable, and I expect the authors will go through and better annotate their notation for the camera-ready version. Moreover, the setting is restricted and not practical for unlearning in modern settings. However, this paper achieves its goal and is a good step in the right direction. I, therefore, advocate for this paper's acceptance.
Weaknesses: 1. There is a notational mistake for Equation (5). You are missing a closing parenthesis.
2. In equation 6, you should specify that $m$ is the size of $U$.
3. You miswrote Assumption 2 in Lemma 2.
4. I feel some work is needed on the motivation of the setting of this paper. I will discuss more of this in limitations. However, this paper's approach seems to work only for strongly-convex-strongly-concave loss functions (or suffers bad constants when converting to the convex-concave case). Unlearning, however, is meant to be a practical tool for modern learning, where this condition rarely holds.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. What is $\mathcal{T}$? I understand that it is the set of memory variables, but that should be more formally defined, and I don't find that it is formally defined elsewhere in the paper.
2. It seems that most of the variables in the statement of Lemma 2 are difficult to understand; it is only after going through the proof that I was able to understand them. This should be made clearer. For example, what is $\rho$ or $l$ here? The same suggestion applies to Theorem 5.
3. I would be interested if the authors of this paper have thought about weakening the strongly-convex-strongly-concave conditions instead assuming gradient-dominance holds (see "Solving Robust MDPs through No Regret Dynamics" or "Global Optimality Guarantees For Policy Gradient Methods" for examples).
4. Have this paper's authors considered including some small toy experiments to verify their theoretical results? It would be interesting to see if they can achieve strong unlearning for linear classifiers with regularization under this unlearning framework.
I also believe this paper is missing a relevant citation: "Accelerated Perceptrons and Beyond" analyzes the generalization/margin of minimax learners in linear or convex-concave settings. This, however, will not impact my score.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, they have sufficiently addressed the limitations of their works.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for his/her careful reading of our manuscript and numerous constructive remarks and questions.
**Notational mistake, size of $U$, and miswrote Assumption 2 in Lemma 2:**
Thank you for your careful reading and pointing out these issues. We will fix them and thoroughly proofread our paper to avoid other potential representational issues.
**Regarding the SC-SC/C-C assumptions:**
Yes, we acknowledge that the SC-SC/C-C assumptions can limit the applicability of our proposed method. However, we would like to emphasize that existing certified unlearning methods, especially those focused on establishing theoretical results, have largely focused on convex/strongly convex settings. As a nascent research field, we expect to develop advanced certified unlearning methods that rely on relaxed assumptions to be applicable to more general tasks in the future. That being said, it does not diminish the significance of our current work as the first certified minimax unlearning algorithms.
**Regarding the set of memory variables $T$:**
The set of memory variables $T$ corresponds to variables that can be computed based on a trained model and memorized to facilitate the unlearning algorithms. When it comes to our certified minimax unlearning algorithm (e.g., Algorithm 2), this set contains the total Hessian matrices with respect to $\omega$ and $\nu$, which are inputs to Algorithm 2. We will formally define the set of memory variables for each algorithm in our paper, as suggested by the reviewer.
**Regarding making variables in Lemma 2 and Theorem 5 more clear:**
Thank you for pointing it out. We will provide all necessary descriptions to clearly introduce the variables that appear in all lemmas and theorems.
**I would be interested if the authors of this paper have thought about weakening the strongly-convex-strongly-concave conditions instead assuming gradient-dominance holds (see "Solving Robust MDPs through No Regret Dynamics" or "Global Optimality Guarantees For Policy Gradient Methods" for examples):**
We appreciate your suggestion and agree that it is a valuable direction for future research. We sincerely thank the reviewer for pointing us in this future direction.
**Regarding inclusion of some small toy experiments:**
We appreciate your suggestion and agree that including toy experiments could strengthen our paper. We will consider machine unlearning under the pre-training and fine-tuning setting described in [1]. In this setting, all network parameters are fixed except for the loss layer, which will be fine-tuned on a dataset. During the fine-tuning stage, we effectively consider a SC-SC/C-C model. We will test the performance of our proposed methods with deletion requests made only from this dataset.
[1] Golatkar, Aditya, et al. "Mixed-privacy forgetting in deep networks." CVPR'21.
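To illustrate the kind of toy experiment we have in mind, here is a minimal sketch of a Hessian-based one-step Newton unlearning update for an $\ell_2$-regularized linear model (a standard-learning special case where the update is exact, not our full minimax algorithm; all variable names are illustrative, and the Gaussian noise scale is a placeholder rather than the calibrated scale from our analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 50, 5, 1.0
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def ridge_fit(X, y, lam):
    # exact minimizer of sum_i 0.5*(x_i^T th - y_i)^2 + 0.5*lam*||th||^2
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

theta_S = ridge_fit(X, y, lam)

j = 7  # index of the sample to be deleted
Xr, yr = np.delete(X, j, axis=0), np.delete(y, j)

# one-step Newton unlearning: theta_u = theta_S + H_{S\j}^{-1} grad_j(theta_S)
H_rest = Xr.T @ Xr + lam * np.eye(d)
g_j = X[j] * (X[j] @ theta_S - y[j])
theta_u = theta_S + np.linalg.solve(H_rest, g_j)

# for a quadratic loss the Newton step recovers exact retraining
theta_retrain = ridge_fit(Xr, yr, lam)
assert np.allclose(theta_u, theta_retrain)

# Gaussian-mechanism perturbation (sigma is a placeholder here; certified
# unlearning calibrates it to the closeness upper bound)
sigma = 0.01
theta_tilde = theta_u + sigma * rng.normal(size=d)
```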
**Missing a relevant citation:**
Thank you for pointing us to this missing reference. We will introduce and cite it in our paper.
---
Rebuttal Comment 1.1:
Title: Thank you for your rebuttal
Comment: I appreciate the authors' rebuttal. I maintain my score and vote to accept. | Summary: Machine unlearning is a privacy-inspired area that removes certain training samples of users' data from a well-trained model, making those samples uninfluential, without retraining the model from scratch (and thus without the full computational cost) to match the baseline trained on the remaining data. Conventional machine unlearning usually applies to standard statistical learning models with a single variable, but there is limited work on minimax models. Borrowing from the Gaussian mechanism of differential privacy, this work also focuses on $(\epsilon,\delta)$-certification of machine unlearning. Moreover, generalization rates and deletion capacity (the number of deleted samples) are crucial quantities to study in this framework.
Strengths: 1. Machine unlearning is a quite new field in the privacy of machine learning. This work smoothly introduces the field with related work (many citations) and preliminaries. It also clearly states why the minimax model is important for machine unlearning.
2. Certified minimax unlearning can be well extended to more general loss functions with reasonable generalization rates and deletion capacity results.
3. This work supports successive and online deletion requests, which is computationally efficient for unlearning phase with its minimax unlearning update.
Weaknesses: 1. Although this work focuses more on the theoretical side of unlearning, it would still be better to provide some preliminary results on real datasets. At least in one paragraph of the introduction or conclusion, the authors could discuss scenarios or applications showing how to make certified minimax unlearning practical.
2. This work does not theoretically discuss the relationship between certified unlearning for STL models and for minimax models (i.e., how they differ).
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Compared to other approaches to machine-learning privacy, such as differential privacy, what is the advantage of machine unlearning for privacy? Since this work focuses on the minimax model, the question can be narrowed to that specific model.
2. On Line 98, is it intentionally “extract” or should it be “exact”?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: 1. As in weakness 1, the authors could discuss implications for applications by adding a paragraph. For example, as in the beginning of the introduction, how does this work help GANs, ARL, or RL?
2. As in weakness 2, this work emphasizes how important certified unlearning is for the minimax model, but it does not say much about the connection to unlearning for STL models.
3. Instead of putting every proof in the appendix, it would be better to add one or two sentences for each theorem or lemma sketching the proof idea (what methods are used and how the proof proceeds) to help readers who are interested in the topic but not familiar with it.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for his/her careful reading of our manuscript and numerous constructive remarks and questions.
**Regarding 1) providing some preliminary results on some datasets and 2) discussing scenarios or applications:**
Thank you for the valuable suggestions. For 1), we will consider the machine unlearning under the pre-training and fine-tuning setting described in [1]. In this setting, all network parameters are fixed except for the loss layer, which will be fine-tuned on a dataset. During the fine-tuning stage, we effectively consider a SC-SC/C-C model. We will test the performance of our proposed methods with deletion requests made only from this dataset. For 2), we will add discussions about the applications (e.g., machine unlearning for GAN, fairness learning, RL, adversarial training) in the introduction section.
[1] Golatkar, Aditya, et al. "Mixed-privacy forgetting in deep networks." CVPR'21.
**Regarding differences of certified unlearning between SLT model and minimax model theoretically:**
Thank you for your valuable suggestion and constructive comments. Due to the character limit, we provide detailed discussions of their differences in General Response. We will revise the introduction section and Sec 4.1 by adding these discussions to clearly convey the differences and challenges, as suggested by the reviewer.
**Regarding advantage over differential privacy:**
The main advantage of certified unlearning over differential privacy is its better deletion capacity. This has two interpretations: First, for a given level of generalization performance, certified unlearning supports a larger number of data deletions while maintaining that generalization performance. Second, for a given number of deletions, certified unlearning yields better generalization performance than differential privacy. The intuition behind this is that certified unlearning has smaller sensitivity than differential privacy, and therefore requires less random perturbation to maintain utility.
**Consider some implications for the applications by adding a paragraph. Like in the beginning of the introduction, how does this work help GAN, ARL or RL:**
Thank you for your constructive comment. We will add a paragraph to the introduction section discussing how our certified minimax unlearning algorithm can be applied to existing minimax models to facilitate effective unlearning with certified unlearning endurance, along with generalization and performance guarantees.
**Regarding adding one or two sentences for each theorem or lemma about the clue to prove:**
Thank you for the valuable suggestion. We will provide the essential ideas and clues used to derive our main results including Lemma 1 to 3 and Theorem 2 to 4.
**Miswrote "exact" on Line 98:**
Thank you for your careful reading and pointing out the miswriting. We will fix it and thoroughly proofread our paper to address other potential representational issues.
---
Rebuttal Comment 1.1:
Title: Thanks for your rebuttal
Comment: The authors answered all my concerns and I agree with their explanations. Thank you! | Summary: The paper proposes a Newton-based differentially private algorithm for stochastic minimax problems and analyses the generalization rate and deletion capacity of the algorithm.
They analyse the generalization bound for the weak primal-dual risk in the SC-SC, C-SC, and SC-C cases. The deletion capacity they derive is $O(n/d^{1/4})$, which is better than the baseline result $O(n/d^{1/2})$. The generalization bound and deletion capacity results match the bounds in the pure minimization case.
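To make the improvement in deletion capacity concrete, a quick back-of-the-envelope comparison (illustrative numbers only, constants ignored):

```python
n, d = 10**6, 10**4  # hypothetical sample count and feature dimension
baseline = n / d ** 0.5   # O(n / d^{1/2}): ~10,000 deletions
improved = n / d ** 0.25  # O(n / d^{1/4}): ~100,000 deletions
# the improvement factor grows as d^{1/4} with the dimension
assert abs(improved / baseline - d ** 0.25) < 1e-9
```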
Strengths: The paper is the first one analysing certified machine unlearning for minimax problems and provide certain generalization and deletion capacity guarantee.
Weaknesses: There are several weaknesses and issues:
1. The reviewer is not convinced that the weak primal-dual risk is enough to capture the generalization behavior of minimax problems. In recent literature, researchers have begun to study the strong primal-dual risk for minimax problems, even without strong convexity.
For example, the authors of ``What is a Good Metric to Study Generalization of Minimax Learners'' derive a result on the generalization of the strong primal-dual risk in the convex-concave setting.
Therefore, the reviewer thinks the paper can be improved if the authors extend their results to the strong PD risk.
2. Suggestions for writing: Certified machine unlearning and deletion capacity might not be well known to a general audience. However, in the abstract and the beginning of the paper, the authors use a lot of terminology specific to machine unlearning, deletion, etc., which is not friendly to that audience. The reviewer suggests that the authors discuss more of the motivation for this paper and give more explanation of the intuition behind these terms.
3. Missing reference: ''Uniform convergence and generalization for nonconvex stochastic minimax problems'', ''What is a Good Metric to Study Generalization of Minimax Learners''.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: None
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for his/her careful reading of our manuscript and numerous constructive remarks and questions.
**Regarding strong primal-dual risk vs weak primal-dual risk:**
Thank you for the valuable suggestion. We have derived new population performance results in terms of the strong primal-dual risk for our minimax unlearning algorithm, as can be found in General Response and the attached PDF.
**Regarding specific terminology for machine unlearning and deletion in the abstract and beginning of the paper:**
Thank you for your valuable suggestions. We will revise the introduction section to include the following content.
1) Introduce unlearning-related terminologies and their intuition: We will provide clear definitions and explanations of key unlearning-related terms and concepts to help readers better understand the motivation and significance of our work.
2) Motivation of our algorithm: We will discuss the challenges and limitations of existing unlearning methods and explain how our algorithm and analysis address these challenges to provide effective certified minimax unlearning algorithms with generalization and deletion capacity guarantees.
3) Meaning and significance of our results: We will provide a clear and concise summary of our main results and their implications, as well as meanings to potential applications.
**Missing references:**
Thank you for pointing us to these references. We will introduce and cite them in our paper.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors for addressing most of my inquiries and providing a generalization bound for strong PD-risk in the strongly-convex case. I'd like to update my rating to 6. However, this paper also examines the convex-concave scenario, lacking a generalization bound for strong PD risk in that case. The authors' response only furnishes a proof for the strongly convex case. In the paper "What is a Good Metric...," the authors present a generalization bound for strong PD risk in the convex-concave situation. I'm curious whether the findings from this paper can be extended to the convex-concave case.
---
Reply to Comment 1.1.1:
Title: Generalization bound for strong PD risk in the convex-concave situation
Comment: Thank you very much for engaging with our rebuttal and raising the initial score. We have also extended the strong primal-dual risk analysis to the general convex concave case, which was not presented in the previous rebuttal due to limited space. In this follow-up response, we provide this result and its sketched proof.
With an application of Lemma 23 in the attached pdf, we first extend Lemma 10 in terms of population strong primal-dual risk: $\mathbb E[\max_{\nu\in\mathcal V}F(\omega^\*\_S,\nu)-\min_{\omega\in\mathcal W}F(\omega,\nu^\*\_S)] \leq \frac{128\sqrt 2 L^2(2\ell+\lambda)}{\lambda^2 n}+\frac{\lambda}{2}(B_\omega^2+B_\nu^2)$. Then, together with eq.(133) we have that $\triangle^s(\omega^u,\nu^u) \leq \mathbb E[L\\|\omega^u-\omega^\*\_S\\|]+\mathbb E[L\\|\nu^u-\nu^\*\_S\\|]+\frac{128\sqrt 2 L^2(2\ell+\lambda)}{\lambda^2 n}+\frac{\lambda}{2}(B_\omega^2+B_\nu^2)$. Plugging eq.(96) and eq.(97) into the equation above with noise scales given in Lemma 12, we can get the generalization guarantee for population strong PD risk in the convex-concave situation: $\triangle^s(\omega^u,\nu^u) \leq O \bigg( (L^3 \ell^3 \rho / \lambda^6 + L^2 \ell^2/\lambda^3)\cdot \frac{m^2 \sqrt{d\log(1/\delta)}}{n^2 \epsilon} + \frac{mL^2}{\lambda n} + \frac{L^2\ell}{\lambda^2 n} + \lambda(B_{\omega}^2 + B_{\nu}^2) \bigg)$. In order to ensure the population strong PD risk is bounded by $\gamma$, it suffices to let the deletion capacity $m^{A,\bar A}_{\epsilon,\delta,\gamma}(d_1,d_2,n) \geq c \cdot \frac{n\sqrt{\epsilon}}{(d\log(1/\delta))^{1/4}}$.
Thank you again and we will add these new results in the final version of our paper. | Summary: This paper studies the problem of machine unlearning for the minimax model. By using a Newton step update with the Hessian information of the leftover data and the Gaussian mechanism, the proposed method improves the deletion capacity from $O(n/d^{1/2})$ to $O(n/d^{1/4})$.
Strengths: - Given the rise of GAN and the increasing concern for the privacy of participants in datasets used by GAN/ML models, minimax unlearning is an important problem to study and understand.
- The proposed algorithm shows improvement in both strongly convex-strongly convex and the more general convex-concave settings.
- The improvement of the deletion capacity from $O(n/d^{1/2})$ to $O(n/d^{1/4})$ could be quite significant for practical problems with high-dimensional features.
- The efficient algorithm that dispenses the need for recomputing some of the Hessian information is a nice touch since computing the Hessian matrix can be very computationally expensive.
- The paper is well-written overall.
Weaknesses: - Some of the technical details are used before being defined properly. For example, in equation (7), $\kappa$ is defined as $\kappa = l/\mu$ even though $l$ and $\mu$ haven't been defined yet. These quantities are later mentioned in section 4 but it would be better for the readers if those quantities are defined right away.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - I'm a bit confused about the intuition part where $\nu^\star_{S^\setminus} - \nu^\star_S \approx -\partial_{\nu\nu}^{-1}F_{S^\setminus}(w_S^\star,\nu_S^\star)\cdot\partial_{\nu w}F_{S^\setminus}(w_S^\star,\nu_S^\star)(w^\star_{S^\setminus} - w^\star_S)$. Can the authors explain how we can use the linear approximation and the implicit function theorem to get this result? Also, why do we want to leave out the second term in eq. 60?
- Is there any benefit to doing the update in Algorithm 2 compared to Algorithm 3? As the authors have shown in the proofs of Lemma 1 and Lemma 19, the analyses are basically identical, so I'm just curious if there is anything good about the update of Algorithm 2 and why we want to do it in the first place.
- Seems like there's no improvement in the result in SC-SC case compared to C-C case. Is this the artifact of the algorithm or is this something that usually happens in minimax learning?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for his/her careful reading of our manuscript and numerous constructive remarks and questions.
**Some of the technical details are used before being defined properly:**
Thank you for your valuable suggestion. We will move Assumptions 1 and 2 from Sec 4 to Sec 3.3.
Additionally, we will carefully check all definitions of terms in the paper to ensure that they are defined before being used.
**Regarding the intuition part in Sec 4.1:**
We apologize for causing the confusion. We will provide more detailed explanations of the intuition and the derivation of Fact 1 in our paper, as follows.
The derivation of $\nu^\*\_{S^{\setminus}} - \nu\_{S}^\* \approx -\partial^{-1}\_{\nu\nu} F\_{S^{\setminus}}(\omega\_S^\*,\nu\_S^\*) \partial\_{\nu\omega} F\_{S^{\setminus}}(\omega\_S^\*,\nu\_S^\*) (\omega\_{S^{\setminus}}^\* - \omega\_{S}^\*)$ via the linear approximation and the implicit function theorem is as follows. Following Eq.(61), we have $\nu^\*\_{S^{\setminus}} - \nu^\*\_{S} \approx V\_{S^{\setminus}}(\omega^\*\_{S^{\setminus}}) - V\_{S^{\setminus}}(\omega ^\*\_{S}) \approx \Big{(}\frac{d V\_{S^{\setminus}}(\omega^\*\_{S})}{d \omega} \Big{|}\_{\omega = \omega^\*\_{S}}\Big{)}\cdot(\omega^\*\_{S^{\setminus}} - \omega^\*\_{S})$, where the second ''$\approx$'' is the linear approximation step and $\frac{d V\_{S^{\setminus}}(\omega^\*\_{S})}{d \omega} \big{|}\_{\omega = \omega^\*\_{S}}$ is the response Jacobian of the auxiliary function $V$ defined in Definition 8. Next, by the implicit function theorem, we further have $\Big{(}\frac{d V\_{S^{\setminus}}(\omega^\*\_{S})}{d \omega} \Big{|}\_{\omega = \omega^\*\_{S}}\Big{)} = -\partial^{-1}\_{\nu\nu} F\_{S^{\setminus}}(\omega\_S^\*,\nu\_S^\*) \partial\_{\nu\omega} F\_{S^{\setminus}}(\omega\_S^\*,\nu\_S^\*)$, which leads to the result. We emphasize that this only provides the intuition for introducing the total Hessian (rather than the conventional Hessian) in minimax unlearning and is not meant to be rigorous. We rigorously derive the $(\epsilon,\delta)$-certified minimax unlearning algorithm based on the one-step complete Newton update and the Gaussian mechanism. Doing so requires bounding the closeness upper bound in Lemma 1, which is more involved than in certified machine unlearning for standard learning models.
Regarding the purpose of leaving out $V\_{S^{\setminus}}(\omega^\*\_{S})-\nu\_S^\*$: This is based on the observation that we already have the direct and partial Hessians of $F\_{S^{\setminus}}$ in Eq.(10), both for the loss $F\_{S^{\setminus}}$. Thus, when dealing with the remaining term $\nu^\*\_{S^{\setminus}} - \nu\_{S}^\*$, we want to convert it into a form closely tied to $F_{S^{\setminus}}$ (rather than $F_{S}$). First, according to Definition 8, we notice that $\nu^\*\_{S^{\setminus}} - \nu\_{S}^\*$ can be equivalently represented as $V\_{S^{\setminus}}(\omega^\*\_{S^{\setminus}}) - V\_{S}(\omega^\*\_{S})$, where the term $V\_{S}(\omega^\*\_{S})$ is not related to $F_{S^{\setminus}}$. We further convert it to $[V\_{S^{\setminus}}(\omega^\*\_{S^{\setminus}}) -V\_{S^{\setminus}}(\omega^\*\_{S})] + [V\_{S^{\setminus}}(\omega^\*\_{S}) - V\_{S}(\omega^\*\_{S})]$. Then, the first square bracket can be converted into a term involving the total Hessian of $F_{S^{\setminus}}$, while the second bracket still contains $V\_{S}(\omega^\*\_{S})$, which is related to $F_{S}$ rather than $F_{S^{\setminus}}$. We have shown in Lemma 2 that the second bracket can be bounded and that the added approximation error affects neither the overall certified unlearning guarantee nor the order of generalization and deletion capacity. As a result, this term can be safely left out.
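This decomposition can be checked numerically on a scalar SC-SC quadratic toy problem, where the best response $V$ is linear and the response-Jacobian step is exact (an illustrative sketch with invented per-sample losses, not an experiment from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
# per-sample SC-SC quadratics: f_i(w,v) = a_i/2 w^2 + b_i w v - c_i/2 v^2 + g_i w + h_i v
a = rng.uniform(1.0, 2.0, n); c = rng.uniform(1.0, 2.0, n)
b = rng.normal(size=n); g = rng.normal(size=n); h = rng.normal(size=n)

def saddle(idx):
    # first-order conditions: A w + B v + G = 0 and B w - C v + H = 0
    A, B, C = a[idx].mean(), b[idx].mean(), c[idx].mean()
    G, H = g[idx].mean(), h[idx].mean()
    w = -(G * C + B * H) / (A * C + B ** 2)
    return w, (B * w + H) / C

full = np.arange(n)
rest = full[1:]                      # delete sample 0
w_S, v_S = saddle(full)              # saddle point on S
w_r, v_r = saddle(rest)              # saddle point on S without sample 0

B_r, C_r, H_r = b[rest].mean(), c[rest].mean(), h[rest].mean()
V_rest = lambda w: (B_r * w + H_r) / C_r     # best response on the leave-one-out set
jac_term = (B_r / C_r) * (w_r - w_S)         # response-Jacobian term: -d_vv^{-1} d_vw (w_r - w_S)
leftout = V_rest(w_S) - v_S                  # the term that is bounded and left out

# for quadratics the two-term decomposition of v_r - v_S is exact
assert np.isclose(v_r - v_S, jac_term + leftout)
```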
**Why we want Algorithm 2:**
We choose to present Algorithm 2 in detail first in the paper mainly due to the following two considerations.
First, the minimax unlearning update in Algorithm 2 is a direct consequence of the intuition in Sec 4.1 of introducing the total Hessian of $F_{S^{\setminus}}(\omega,\nu)$ (rather than the conventional Hessian in prior SLT unlearning updates). Thus, Algorithm 2 has a closer relation to the intuition than Algorithm 3, which uses the total Hessian of $F_{S}(\omega,\nu)$ and does not need to update the total Hessian. The latter cannot be directly derived from the intuition; it is only after formally bounding the closeness upper bound and deriving the generalization and deletion capacity results that we can ensure that replacing $F_{S^{\setminus}}(\omega,\nu)$ with $F_{S}(\omega,\nu)$ still yields the same utility but better efficiency.
Second, sequentially introducing Algorithms 2 and 3 is aligned with the chronological development of certified unlearning methods in SLT, which facilitates direct comparison between certified unlearning for minimax models and SLT models. That is, early works on certified unlearning for SLT proposed to use the conventional Hessian of $F_{S^{\setminus}}(\omega)$ first, while later works discovered that replacing it with the conventional Hessian of $F_{S}(\omega)$ can still yield certified unlearning guarantee as well as generalization and deletion capacity guarantees. Thus, we chose to present Algorithms 2 and 3 according to this chronological development so that both minimax unlearning updates can be directly compared with their counterparts in SLT unlearning updates.
**Regarding improvement in results in SC-SC case compared to C-C case:**
The generalization performance in the C-C case is worse than in the SC-SC case: the former is $O(\sqrt{\frac{m}{n}} + (\frac{m}{n})^{2/7})$ while the latter is $O(\frac{1}{n} + \frac{m^2}{n^2})$. In terms of deletion capacity, the two cases indeed have the same order $\frac{n}{d^{1/4}}$, which indicates that both cases support the same number of deleted samples. This order of deletion capacity matches that of SLT unlearning, which can be regarded as a special case of minimax unlearning. Therefore, it does not seem to be an artifact of our certified minimax unlearning algorithm.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response! I will keep my score. | Rebuttal 1:
Rebuttal: In this general response, we would like to first thank all reviewers for their careful review and valuable comments. Next, we provide the responses to two common comments that are shared by at least two reviewers. We provide the remaining point-to-point responses to each reviewer in our individual responses.
**Regarding the comparison of certified unlearning between minimax models and SLT, and additional challenges overcome in relation to prior works (Reviewers DtDD and fuZ1):**
Comparing the certified unlearning between minimax models and SLT, there are three main differences:
1) Different designs of the unlearning update: minimax unlearning introduces the total Hessian (i.e., direct Hessian plus indirect Hessian) to sufficiently capture the data influence for unlearning, which is tailor-made to the minimax structure that has the inter-influencing min and max variables. In contrast, SLT unlearning requires only the direct Hessian (also known as the conventional Hessian).
2) Different calibrations of the random perturbation: In order to properly calibrate the random perturbation to guarantee $(\epsilon,\delta)$-certified unlearning, it is essential to analyze the closeness upper bound (e.g., Lemma 1), which signifies the magnitude of the randomized perturbation. Three factors make the analysis of the closeness upper bounds very different between minimax unlearning and SLT unlearning: i) extra approximation terms are introduced during the derivation of the minimax unlearning update formula; ii) the minimax unlearning update utilizes the total Hessian-based unlearning update rather than the conventional Hessian; iii) the auxiliary functions (as defined in Definition 8) that arise due to the minimax structure appear in the analysis.
3) Different generalization performance metrics: SLT unlearning uses the excess population risk to measure generalization performance. In our minimax unlearning, we measure generalization via the weak primal-dual risk. Additionally, we derive new generalization results in terms of the strong primal-dual risk during the rebuttal period.
Due to these differences, certified minimax unlearning presents additional challenges:
1) it is nontrivial to design the certified minimax unlearning update scheme, which requires a series of conversions and proper approximations to come up with a closed-form update in the form of the total Hessian-based complete Newton step. This challenge corresponds to Equations (9) to (11) in the paper and the development of Fact 1 in Appendix A.2.
2) it is challenging to derive the closeness upper bound to guarantee certified minimax unlearning, which requires dealing with i) the extra approximation terms left out during the minimax unlearning update design, ii) the total Hessian that has different characteristics compared to the conventional Hessian, and iii) best response auxiliary functions that are unique to the minimax structure and do not appear in SLT unlearning.
3) it was previously unknown whether it was possible to achieve ideal generalization performance and deletion capacity for certified minimax unlearning. Our work is the first to show that the rates of generalization and deletion capacity match the state-of-the-art rates derived previously for SLT unlearning, which are special cases of minimax unlearning. This indicates that certified minimax unlearning can indeed achieve ideal generalization performance and deletion capacity.
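As a hedged numerical illustration of the total Hessian discussed in the first difference above (a toy sketch under our own assumptions, not the paper's setting or code): for a quadratic strongly-convex-strongly-concave objective $F(\omega,\nu)=\frac{1}{2}\omega^\top A\omega+\omega^\top B\nu-\frac{1}{2}\nu^\top C\nu$, the total Hessian $\partial_{\omega\omega}F-\partial_{\omega\nu}F\,\partial^{-1}_{\nu\nu}F\,\partial_{\nu\omega}F$ (direct plus indirect contribution) coincides with the Hessian of the primal function $\omega\mapsto\max_\nu F(\omega,\nu)$, which can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2 = 3, 2
A = np.diag([2.0, 3.0, 4.0])          # SPD: F strongly convex in w (our toy choice)
B = rng.standard_normal((d1, d2))
C = np.diag([2.0, 5.0])               # SPD: F strongly concave in v (our toy choice)

# F(w, v) = 0.5 w'Aw + w'Bv - 0.5 v'Cv  (assumed toy quadratic)
# Best response maximizing over v: C v = B'w  ->  v*(w) = C^{-1} B' w,
# so the primal function is f(w) = F(w, v*(w)) = 0.5 w'(A + B C^{-1} B')w.
hess_primal = A + B @ np.linalg.inv(C) @ B.T

# Total Hessian = direct Hessian minus  d_wv F (d_vv F)^{-1} d_vw F
d_ww, d_wv, d_vv, d_vw = A, B, -C, B.T
total_hessian = d_ww - d_wv @ np.linalg.inv(d_vv) @ d_vw

# The conventional (direct) Hessian A alone would miss the indirect term.
assert np.allclose(hess_primal, total_hessian)
```

This is only a consistency check of the total-Hessian identity on a toy problem; the matrices and the quadratic form are assumptions for illustration.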
**Regarding strong primal-dual risk (Reviewers DtDD and sP3P):**
Thank you for your valuable suggestion. During the rebuttal period, we have derived new generalization performance results for our certified minimax unlearning algorithms in terms of strong primal-dual risk. In the general response below, we summarize our main results derived. In the attached PDF file, we provide more detailed results, along with a sketch of the proof highlighting the main differences in derivation compared to the weak primal-dual risk counterpart. Due to space limits, we provide the new results for Algorithm 2 as an example in this response. We will provide the counterpart results for all proposed algorithms to our paper.
Denote $d=\max\\{d_1,d_2\\}$. Under the same settings of Lemma 1, the population strong primal-dual risk for the certified minimax unlearning variables $(\omega^u,\nu^u)$ returned by Algorithm 2 is $ \triangle^s (\omega^u,\nu^u) = O \left( (L^3 \ell^3 \rho / \mu^6 + L^2 \ell^2/\mu^3)\cdot \frac{m^2 \sqrt{d\log(1/\delta)}}{n^2 \epsilon} + \frac{mL^2}{\mu n} + \frac{L^2\ell}{\mu^2 n}\right)$. Define the deletion capacity $m^{A,\bar A}_{\epsilon,\delta,\gamma}(d_1,d_2,n)$ as the maximum number of samples $U$ that can be unlearned while still ensuring the population strong primal-dual risk is at most $\gamma$. Then it suffices to let $m ^ { A,\bar A} _ {\epsilon,\delta,\gamma}(d_1,d_2,n) \geq c \cdot \frac{n\sqrt{\epsilon}}{(d\log(1/\delta))^{1/4}}$, where the constant $c$ depends on $L, l, \rho,$ and $\mu$ of the loss function $f$.
Pdf: /pdf/c2f18f44127e898d82535a7e46dfc6aaa8dc7593.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper studies approximate unlearning for minimax problems. They design learning and unlearning procedures and provide bounds on deletion capacity in terms of generalization performance (weak gap). Akin to minimization (statistical learning), the deletion capacity for the strongly convex-strongly concave setting is shown to be $n/d^{1/4}$, where $n$ is the number of samples and $d$ the dimension. The authors also provide extensions to non-strongly convex/concave settings and efficient updates.
Strengths: The problem of machine unlearning has gathered a lot of interest recently, owing to various privacy regulations. Further, the minimax formulation, is widely applicable, especially in robust adversarial learning and reinforcement learning. The paper is the first to study unlearning for minimax settings. Therefore, the topic of the work is natural and timely.
Weaknesses: 1. The paper very closely follows the outline and techniques in the prior work of Sekhari et al. The extensions to non-strongly convex settings (via regularization) and efficient updates also use techniques directly from prior work. If there are additional challenges due to the minimax (as opposed to min) structure, then I don't think they are communicated well in the write-up. The only relevant section is Section 4.1 "Intuition for Minimax Unlearning Update"; however, I found it too raw to convey the intuition -- for instance, how does Eqn. 10 follow? In the current state of the write-up, it is difficult to evaluate if there are significant challenges overcome in relation to prior works.
2. Comparison between Section 4.3 and 5.2: It seems to me that both are in the same setting, and achieve the same guarantees, yet Section 5.2 provides a more efficient update. If indeed Section 5.2 is a strict improvement over 4.3, then what is the point of devoting considerable space to the weaker result in Section 4.3. The authors should re-organize and present the strongest result in the main paper. The space should be used to explain the challenges compared to the minimization setting. If this is not the case, then please explain the differences.
3. Strong gap vs weak gap: The generalization performance considered in the paper is the weak primal-dual gap. In the non-private setting, "strong" primal-dual gap (as opposed to weak) is what is typically considered. Further, as explained in Bassily et al. 2023, the strong gap criterion has game-theoretic interpretation and motivation, and moreover, the weak and strong gaps may be arbitrarily apart. Seemingly, the consideration of the weak gap in the paper primarily stems from challenges of studying strong gap under privacy settings, until recently. However, given that the work Bassily et al. 2023 has established optimal rates for strong gap under privacy, can the authors, perhaps borrowing techniques from the aforementioned work, also provide guarantees in terms of the strong gap?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Please answer the question posed in weakness 1.
2. Please answer the question posed in weakness 2.
3. Please answer the question posed in weakness 3.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: 1. The work is limited to the approximate unlearning setting, as opposed to exact unlearning.
2. The authors study generalization performance in terms of weak gap as opposed to strong gap.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for his/her careful reading of our manuscript and numerous constructive remarks and questions.
**Regarding additional challenges due to the minimax structure and comparison to prior works:**
Thank you for your valuable suggestion and constructive comments. Due to the character limit, we provide explanations of additional challenges due to the minimax structure in General Response. We will revise Sec 4.1 by adding these explanations to clearly convey the challenges, as suggested by the reviewer.
**Regarding Equation (10):**
The left-hand side of Equation (10) contains two terms. The first term is the direct Hessian (also known as the conventional Hessian) of $F_{S^{\setminus}}(\omega,\nu)$, which has already appeared in the unlearning update for SLT models. The second term is unique to minimax unlearning: it captures the inter-influence between the min and max variables and leads to the total Hessian used in the minimax unlearning update. In detail, the second term spells out as follows. To get
$\nu^\*\_{S^{\setminus}} - \nu_{S}^\* \approx -\partial^{-1}\_{\nu\nu} F_{S^{\setminus}}(\omega_S^\*,\nu_S^\*) \partial_{\nu\omega} F_{S^{\setminus}}(\omega_S^\*,\nu_S^\*) (\omega_{S^{\setminus}}^\* - \omega_{S}^\*)$, we begin with $\nu^\*\_{S^{\setminus}} - \nu^\*\_{S} \approx V\_{S^{\setminus}}(\omega^\*\_{S^{\setminus}}) - V\_{S^{\setminus}}(\omega ^\*\_{S}) \approx \Big{(}\frac{d V\_{S^{\setminus}}(\omega^\*\_{S})}{d \omega} \Big{|}\_{\omega = \omega^\*\_{S}}\Big{)}\cdot(\omega^\*\_{S^{\setminus}} - \omega^\*\_{S})$, where the second ''$\approx$'' is the linear approximation step and the bracketed derivative is the response Jacobian of the auxiliary function $V$ defined in Definition 8. Next, by the implicit function theorem, we further have $\Big{(}\frac{d V\_{S^{\setminus}}(\omega^\*\_{S})}{d \omega} \Big{|}\_{\omega = \omega^\*\_{S}}\Big{)} = -\partial^{-1}\_{\nu\nu} F\_{S^{\setminus}}(\omega\_S^\*,\nu\_S^\*) \partial\_{\nu\omega} F\_{S^{\setminus}}(\omega_S^\*,\nu_S^\*)$, which leads to the result. We emphasize that this only provides the intuition to introduce the total Hessian (rather than the conventional Hessian) in minimax unlearning, and is not meant to be rigorous. We rigorously derive the $(\epsilon,\delta)$-certified minimax unlearning algorithm based on the one-step complete Newton update and the Gaussian mechanism.
We will move the above derivation from Appendix to Sec 4.1, as suggested by the reviewer.
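The implicit-function-theorem step above can be sanity-checked numerically on a toy quadratic minimax objective (a minimal sketch; the matrices and the quadratic form are our assumptions for illustration, not the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(1)
d1, d2 = 4, 3
B = rng.standard_normal((d1, d2))
C = np.eye(d2) * 2.0                 # SPD: F strongly concave in v (toy choice)

# Toy F(w, v) = 0.5||w||^2 + w'Bv - 0.5 v'Cv  (assumed quadratic objective).
# Best response: V(w) = argmax_v F satisfies B'w - Cv = 0, so
# V(w) = C^{-1} B' w and the response Jacobian is dV/dw = C^{-1} B'.
jac_direct = np.linalg.inv(C) @ B.T

# Implicit function theorem:  dV/dw = -(d_vv F)^{-1} (d_vw F)
d_vv = -C        # Hessian of F in v
d_vw = B.T       # mixed partial of F
jac_ift = -np.linalg.inv(d_vv) @ d_vw

assert np.allclose(jac_direct, jac_ift)
```

Here the closed-form best response is available, so both sides of the identity can be compared exactly; in the paper's general setting the IFT expression is the linear-approximation tool.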
**Regarding the comparison between Section 4.3 and 5.2 (Algorithm 2 and Algorithm 3):**
We thank the reviewer for the valuable suggestion. Algorithm 3 in Sec 5.2 is indeed a strict improvement over Algorithm 2 in Sec 4.3, with better efficiency and similar guarantees in terms of generalization and deletion capacity. We choose to present Algorithm 2 in detail first in the paper mainly due to the following two considerations.
First, the minimax unlearning update in Algorithm 2 is a direct consequence of the intuition in Sec 4.1 to introduce the total Hessian of $F_{S^{\setminus}}(\omega,\nu)$ (rather than the conventional Hessian in prior SLT unlearning updates). Thus, Algorithm 2 has a closer relation to the intuition than Algorithm 3, which uses the total Hessian of $F_{S}(\omega,\nu)$ and does not need to update the total Hessian. This cannot be directly derived from the intuition, and it is only after formally deriving the closeness upper bound and the generalization and deletion capacity results that we can ensure that replacing $F_{S^{\setminus}}(\omega,\nu)$ with $F_{S}(\omega,\nu)$ still yields the same utility with better efficiency.
Second, sequentially introducing Algorithms 2 and 3 is aligned with the chronological development of certified unlearning methods in SLT, which facilitates direct comparison between certified unlearning for minimax models and SLT models. That is, early works on certified unlearning for SLT proposed to use the conventional Hessian of $F_{S^{\setminus}}(\omega)$ first, while later works discovered that replacing it with the conventional Hessian of $F_{S}(\omega)$ can still yield certified unlearning guarantee as well as generalization and deletion capacity guarantees. Thus, we chose to present Algorithms 2 and 3 according to this chronological development so that both minimax unlearning updates can be directly compared with their counterparts in SLT unlearning updates.
We will revise our manuscript according to the reviewer's constructive comments. Specifically, we will: 1) relegate the detailed analysis for Algorithm 2 in Sec 4.3 to the appendix; 2) move Algorithm 3 and its analysis from Sec 5.2 to Sec 4.3; 3) highlight the additional challenges of certified minimax unlearning over SLT unlearning in Sec 4.1; 4) add discussions in Sec 4.2 to compare the differences between minimax unlearning and SLT unlearning and to motivate the changes made to Algorithm 2 to yield Algorithm 3.
**Regarding Strong gap vs weak gap:**
Thank you for the valuable suggestion. We have derived new population performance results in terms of the strong primal-dual risk for our minimax unlearning algorithm, as can be found in General Response and the attached PDF.
**Regarding limited to the approximate unlearning setting, as opposed to exact unlearning:**
In this paper, we focus on the approximate unlearning that comes with $(\epsilon,\delta)$-certified unlearning guarantee, generalization, and deletion capacity guarantees. We will add discussions in the future work part about the potential for developing minimax unlearning methods under the exact unlearning setting.
---
Rebuttal Comment 1.1:
Title: Thanks!
Comment: I thank the authors for their detailed response, and in particular their derivation of guarantee on strong gap. I encourage the authors to include this, and the other parts of their response, to the revised paper. I increase my score to 6. | null | null | null | null | null | null |
Revisiting Scalarization in Multi-Task Learning: A Theoretical Perspective | Accept (poster) | Summary: In recent years, there has been a surge in papers suggesting Specialized Multi-Task Optimizers (SMTOs). These papers show the empirical advantage of using SMTOs compared to linear scalarization (LS). However, recently, several papers have suggested that LS with proper tuning can match SMTO performance. The paper analyzes scalarization from a theoretical perspective, and studies whether scalarization is capable of fully exploring the Pareto front in linear MTL. It reveals some inherent limitations of LS for MTL optimization.
Strengths: 1. The paper addresses an important open question in MTL optimization.
1. The paper is (mostly) well-written and well-structured.
1. The paper provides an in-depth theoretical analysis with implications for both MTL researchers and practitioners.
Weaknesses: 1. A primary argument is that LS cannot reach a point in the intersection of surfaces. However, it is not clear from Section 3.2 why this is the case. Please elaborate on this point and revise the manuscript accordingly.
1. Please discuss the main results and novelty w.r.t the known result ([1]), which states that LS cannot reach non-convex parts of the Pareto front.
1. It would be beneficial to revise Section 3, by either breaking it into two sections or summarizing the main results at the beginning of the Section.
1. Please add an experiment with the proposed randomization approach to empirically verify that it can explore the entire Pareto front.
1. Missing citations in related work:
1. SMTOs:
- Multi-Task Learning as a Bargaining Game, ICML 2022.
- Towards Impartial Multi-task Learning, ICLR 2021.
- Gradient Vaccine: Investigating and Improving Multi-task Optimization in Massively Multilingual Models, ICLR 2021.
1. Exploration of the Pareto front: There’s a line of work for methods trying to learn the entire Pareto front, so it is important to mention it. The two pioneering papers for Pareto front learning are:
- Learning the Pareto Front with Hypernetworks, ICLR 2021.
- Controllable Pareto Multi-Task Learning, 2021.
[1] S. Boyd, S. P. Boyd, and L. Vandenberghe. Convex optimization. 2004.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. How realistic is the case $q<k$ for real-world applications?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank Reviewer aGcP for their constructive feedback. We are grateful that the reviewer appreciates the significance of the problem we tackled as well as the theoretical contributions. We hope to address all points in the review below, following the order they were made.
**Why scalarization cannot explore intersection points.** Please refer to point 3 of the general response, in which we provide detailed reasoning steps and explanations. We will elaborate on this point in the revision.
**Comparison with [1].** We have carefully checked the reference by searching the keyword 'scalarization', but were unable to find the result that you mentioned. Could you kindly refer to the relevant content in the book? As far as we know, [1] only states that, when the objective functions are convex, the Pareto front can be fully explored by LS. Figure 4.9 in [1] provides an example of PO ($f_0(x_3)$) that cannot be achieved by LS, but it is unclear how it corresponds to the 'non-convex part' of the PF, and how $f_0(x_1)$ and $f_0(x_2)$ correspond to 'convex parts' of the PF. We believe that these terms (convex or non-convex part of a set) are not well-defined, as one can only talk about the convexity or non-convexity of the PF as a whole.
In our work, we note that the PF is only a subset of the feasible region. Proving the PF is convex does not directly yield the result of full-exploration. As a consequence, we approach this problem from a different perspective. After revealing the multi-surface structure of the feasible region, we examine when the PF lies on a single surface instead of checking when it is convex. This allows us to derive a both necessary and sufficient condition for full-exploration.
**Structure of Section 3.** Thanks for the suggestion! We agree that the current Section 3 is too long, and will break it into two sections to increase the readability.
**Experiments on randomization.** We conduct experiments based on the equation derived before Line 300. Specifically, the region achievable by randomization can be expressed as the collection of the convex combination of two points from the feasible region. To this end, we randomly sample 100000 weight pairs (the same number as in the original experiment). For each weight pair $(w_1,w_2)$, we uniformly draw $t \sim U(0,1)$, and get two corresponding optimal networks $f_1$ and $f_2$ by SVD. For each sample, with probability $t$, the model uses $f_1$, otherwise $f_2$, to calculate the MSE. Finally, we plot the set of all MSEs in Figure B (see the attached PDF in the general response). It is straightforward to see that randomization allows scalarization to trace out the PF since there is no hole within the blue region, thus validating our theoretical analysis. We additionally comment that randomization convexifies the feasible region, as such, the solutions found by MGDA and MGDA-UB are dominated.
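For concreteness, the sampling procedure described above can be sketched as follows (a hedged toy sketch: `losses_for_weights` is a hypothetical stand-in for the SVD-based computation of the optimal networks $f_1, f_2$, and the toy trade-off curve is our assumption, not the paper's feasible region):

```python
import numpy as np

rng = np.random.default_rng(0)

def losses_for_weights(w1, w2):
    """Hypothetical stand-in for the SVD step: returns per-task MSE
    vectors of two optimal models for the weight pair (w1, w2)."""
    f1 = np.array([w2 ** 2, 1.0 - w2 ** 2])   # toy trade-off points
    f2 = np.array([1.0 - w1 ** 2, w1 ** 2])
    return f1, f2

achieved = []
for _ in range(1000):                          # 100000 pairs in the rebuttal
    w1 = rng.uniform()
    w2 = 1.0 - w1
    f1, f2 = losses_for_weights(w1, w2)
    t = rng.uniform()                          # t ~ U(0,1)
    # With probability t the randomized model uses f1, otherwise f2,
    # so its expected per-task MSE is the convex combination below.
    achieved.append(t * f1 + (1.0 - t) * f2)
achieved = np.array(achieved)

# Randomization traces out convex combinations of feasible points,
# i.e., it convexifies the achievable region.
assert achieved.shape == (1000, 2)
assert 0.0 <= achieved.min() and achieved.max() <= 1.0
```

The plotted set of expected MSEs then fills the convex hull of the sampled feasible points, matching the rebuttal's observation that randomization leaves no holes in the region.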
**Missing references.** We thank the reviewer for pointing out the references, and will for sure incorporate them in the revision.
**The $q<k$ assumption.** Please refer to the first point of the general response.
**References**
[1] Boyd, Stephen P., and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response and for providing additional results.
- **Comparison with [1]:** I meant the results in Ch 4.7 of [1], specifically, please see Ch. 4.7.4. Note that Linear Scalarization is referred to as Scalarization.
- **Point raised by Reviewer nBcV w.r.t Xin et al. (2022):** I agree with the point raised by Reviewer nBcV regarding the results w.r.t Xin et al. (2022). The theoretical results here are limited in terms of applicability to real-world scenarios and modern, over-parametrized MTL networks. This does not diminish or damage the value and importance of the analysis in the paper, but the limitations in immediate empirical implication should be made clear.
[1] S. Boyd, S. P. Boyd, and L. Vandenberghe. Convex optimization. 2004.
---
Reply to Comment 1.1.1:
Comment: Thanks for your response.
Regarding the first point, we believe the results in Ch. 4.7.4 have been well summarized in Xin et al. (2022) (see Theorem (Informal) in P3). Specifically, 1) any solution to the scalarization objective with positive coefficients is Pareto optimal; 2) when the objective functions are convex, the Pareto front can be fully explored by linear scalarization. In contrast, our theoretical results differ significantly from prior works because 1) they are established in the non-convex setting, which demands fundamentally different techniques compared to those used in convex analysis; 2) the conditions for full exploration that we have uncovered are both necessary and sufficient, which helps us understand the weakness of scalarization; 3) finally, we build up a theory (the multi-surface structure and the phenomenon of gradient disagreement) to explain why scalarization fails. This significantly advances previous studies by offering new insights into the failure modes of scalarization, going beyond mere hypothetical examples (e.g., Figure 4.9 in the textbook).
We agree with the reviewer that it is important to contrast our results with prior art in order to showcase their significance and novelty. We will make sure that the above points are adequately discussed in the revision.
Regarding the second point, we would like to clarify that we do not attempt to completely refute the claims and results in Xin et al. (2022), and we understand the difference in the settings makes the results in these two papers not directly comparable. Instead, below are our standing points:
- We feel there are some reasoning gaps in Xin et al. (2022) that we would like to point out and clarify. Specifically, as shown in our paper, the positive results in the convex setting do not transfer. Therefore, the theory they developed cannot be used to support their experiments, which are performed with deep neural networks. By pointing this out, we call for a need from the research community to develop a new theory to explain the empirical success of linear scalarization.
- We don't think one single SMTO algorithm can fix the issue of linear scalarization, and it is not our goal to advocate for a particular algorithm like MGDA. Instead, we hope to refute the claim that linear scalarization is sufficient for MTL, and bolster research in the development of SMTO algorithms. One of our central goals is to promote a healthier and more balanced algorithmic development in the field of MTL.
- We view our work as a complement to Xin et al. (2022) (as well as a few other works). Putting together, they provide a more comprehensive view on the strengths and weakness of linear scalarization and SMTO.
We thank the reviewer for emphasizing this point. We will make sure to discuss the above points as well as the limitation of our empirical evaluations in the revision.
Again, we would like to thank the reviewer for taking the time to review our paper and joining the discussion. We will be happy to answer any further questions. | Summary: This paper is related to multi-objective optimization area.
They post a research question: if linearly weighting multiple objectives can fully explore the Pareto front?
Through theoretical analysis and a simple experiment, the answer is negative.
Hence, this might prove that the multiple gradient descent algorithm is inherently better than linear objective weighting.
Strengths: Recently, many works report that linear weighting is no worse, and even better, than gradient balancing algorithms like the multiple gradient descent algorithm (MGDA).
However, these findings are empirical.
This paper targets an important question: whether the multiple gradient descent algorithm is inherently better than simple linear weighting when conflicting gradients are present.
Through theoretical analysis and a simple experiment, the answer is yes, which lays a good foundation for this area.
Weaknesses: The MTL model in this paper is very simple. I am not sure if this reflects the real problems in industry systems.
This model has only one shared layer and one task-head layer, and it is a linear neural network (I am not sure if a purely linear model can be called a neural network).
So, it seems this MTL model amounts to a convex optimization problem.
However, in real industry systems, there are many layers (maybe millions of parameters) with non-linear transformations, and hence a non-convex optimization problem.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. If MTL has different objectives, like one regression task and one classification task, will your conclusion hold?
2. If MTL has non-linear transformation like Sigmoid, will your conclusion hold?
3. Do we have practical usage of randomization for linear scalarization?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No negative social impact
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank Reviewer CFe6 for taking the time to review our paper. We appreciate that they found the problem of our study important. Below we attempt to address the reviewer’s concerns, following the order they were made.
**Weaknesses.** Please refer to point 2 of the general response. We emphasize that even for two-layer linear networks, the loss function is *not* convex w.r.t. the model parameters, and that both the techniques and results in this paper are novel (to the best of our knowledge) and greatly advance those in the literature of convex analysis.
**Question 1 and 2.** The honest answer to the first two questions is 'we don't know'. We understand that a more sophisticated setting or model architecture will be more appealing to the reviewer. However, we would like to share a few points in this aspect:
- While a more complicated setting or model generally necessitates more advanced techniques, the converse is not true. For instance, ridge regression is probably one of the simplest models in machine learning, yet it is still being extensively studied in the literature [1,2,3]. By analogy, while we focus on linear MTL for regression, we are targeting an important question in MTL from a unique perspective, and the techniques that we have developed in this paper are highly non-trivial (e.g., the innovative application of the Perron-Frobenius theorem, bridging full-exploration with doubly non-negative matrices, etc.). We therefore believe the theoretical contribution of this paper is significant, and shouldn't be dimmed by the setting.
- We carefully compile a list of implications derived through our theoretical analysis. We believe that sharing such insights with the broader community is more important than proving similar results in variations of the original setting.
- Building up a strong and robust theory requires collaborative effort from the research community. As stated at the end of our paper, we hope this work can initiate a line of work that provides theoretical justifications of the usage of specific algorithms in MTL. We therefore believe that follow-up works will answer the reviewer's questions.
We sincerely hope the reviewer can reconsider their evaluation of the paper based on the above response.
**Question 3.** We are not aware of existing literature on using randomization in linear scalarization. But the caveat here is that randomization helps to enlarge the feasible regions beyond the ones that could be achieved by any deterministic models. Specifically, the new PF is going to be the convex hull of the original one, and this implies that 1) it can lead to better solution points (in the Pareto sense) overall; 2) it guarantees that the new PF can be fully explored by linear scalarization according to standard results in convex analysis (e.g., [4]). In essence, randomization increases the representation power of the original neural network.
We additionally comment that randomization is a powerful tool that has wide applicability in machine learning. For instance, it has achieved great success in online learning, specifically in the Follow-the-Perturbed-Leader algorithm (see [5]). It also appears in the literature of fairness to achieve the optimal regressor or classifier (see [6]). In our work, we further demonstrate that it can be applied to MTL, specifically, to convexify the feasible region and allow linear scalarization to fully explore the PF.
**References**
[1] Richards, Dominic, Jaouad Mourtada, and Lorenzo Rosasco. "Asymptotics of ridge (less) regression under general source condition." International Conference on Artificial Intelligence and Statistics. PMLR, 2021.
[2] Cheng, Chen, and Andrea Montanari. "Dimension free ridge regression." arXiv preprint arXiv:2210.08571 (2022).
[3] Tsigler, Alexander, and Peter L. Bartlett. "Benign overfitting in ridge regression." J. Mach. Learn. Res. 24 (2023): 123-1.
[4] Boyd, Stephen P., and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.
[5] https://users.soe.ucsc.edu/~sesh/Teaching/2020/CSE290A/Slides/Lecture18.pdf
[6] Zhao, Han, and Geoffrey J. Gordon. "Inherent tradeoffs in learning fair representations." The Journal of Machine Learning Research 23.1 (2022): 2527-2552.
---
Rebuttal Comment 1.1:
Title: reply to rebuttal
Comment: Thanks for the reply.
I agree, although the network is simple in this paper, this is still an important step towards explaining the advantage of SMTO over linear weighting
---
Rebuttal Comment 1.2:
Comment: So I raised the rating from 4 to 5
---
Reply to Comment 1.2.1:
Comment: Thank you for taking the time to review our paper! | Summary: This paper revisits linear scalarization from a theoretical perspective. The authors study multi-task learning with a two-layer linear network and reveal a multi-surface structure of the feasible region. They show the necessary and sufficient conditions for full exploration in the under-parameterized regime. Their theoretical results imply that linear scalarization has fundamental limitations and that scalarization tends to overfit a small fraction of tasks. They answer some open questions proposed in Xin et at.(2022) and also provide experiments on a three-task learning problem to verify the theoretical results.
Strengths: - It is a novel and interesting problem to study whether linear scalarization can fully explore the Pareto front. The authors provide necessary and sufficient conditions for this in the under-parametrized regime, which is the main contribution of this paper.
- The notations and presentations in this paper are clear and the proofs seem to be correct.
- They empirically study the validity of the theoretical results, and the experiments are reasonable, and the results are convincing to me.
Weaknesses: - The authors study only two cases (q = 1 and q = k - 1) in the under-parametrized regime, but are these two extreme cases representative of other cases in the under-parametrized regime?
- This paper discusses linear scalarization in the under-parametrized regime. However, over-parameterization is common in deep learning, where the network has sufficient capacity to adapt to the target tasks. Does this mean that linear scalarization in the overparameterized regime does not have the fundamental limitations discussed in this paper?
- I commend the authors for attempting to answer some open questions in Xin et al. (2022), but it lacks an explanation for why SMTOs have no improvement over linear scalarization in deep multitask learning, which is found in Xin et al. (2022).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The theoretical analysis in this paper is only applied to the linear network and under-parametrized regime, which is a limitation of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank Reviewer 2hHW for taking the time to review our paper. We appreciate that they found our work interesting and novel. Below we attempt to answer the reviewer’s questions, following the order they were made.
**Are $q=1$ and $q=k-1$ representative?** A rigorous study of the general under-parametrized regime is challenging, primarily due to the hardness of characterizing the feasible region. As demonstrated in Section 3, our analysis crucially relies on the algebraic form of the surfaces, but we are unable to obtain a closed form for general $q$. Nevertheless, we have thought deeply about this question and would like to share some of our thoughts here:
- The capacity of the network (reflected by $q$) is the most important quantity in this problem, as different tasks compete for the network's capacity for better prediction. We note that $q=1$ and $q=k-1$ represent two extreme cases, corresponding to extreme under-parameterization (least capacity) and mild under-parameterization (close-to-optimal capacity). Since the results for these two cases are similar, and it is reasonable to assume a *smooth* change as $q$ varies, we expect the same conclusions to hold for the $q$'s in between.
- We make a conjecture regarding the general conclusion. We hypothesize that there is a function $F$ that transforms the Gram matrix into its inverse as we increase $q$. Specifically, $F$ takes $q$ as input and outputs a matrix, with $F(1)$ equal to the Gram matrix and $F(k-1)$ equal to the inverse of the Gram matrix. The necessary and sufficient condition for full exploration at general $q$ would then be: '$F(q)$ is doubly non-negative'.
- Finally, we strongly believe that the multi-surface structure that we have revealed is both universal and fundamental. As a matter of fact, we observed a similar phenomenon when working on a toy example in a generalized setting, in which different tasks have different input $X$. This is strong evidence showing that our conclusion can have broad applicability.
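For concreteness, 'doubly non-negative' means positive semidefinite with entrywise non-negative entries. A minimal numerical check of this property (our own sketch; the interpolating function $F$ itself is only conjectured above) might look like:

```python
import numpy as np

def is_doubly_nonnegative(M, tol=1e-10):
    """Check that M is doubly non-negative:
    symmetric positive semidefinite AND entrywise non-negative."""
    M = np.asarray(M, dtype=float)
    if not np.allclose(M, M.T, atol=tol):
        return False
    psd = np.all(np.linalg.eigvalsh(M) >= -tol)  # PSD: all eigenvalues >= 0
    entrywise = np.all(M >= -tol)                # non-negative entries
    return bool(psd and entrywise)

# A Gram matrix of vectors with pairwise non-negative inner products qualifies.
print(is_doubly_nonnegative([[1.0, 0.2], [0.2, 1.0]]))    # True

# PSD but with a negative entry: not doubly non-negative.
print(is_doubly_nonnegative([[1.0, -0.5], [-0.5, 1.0]]))  # False
```

Such a check makes it easy to probe, e.g., how often the condition holds for randomly drawn Gram matrices, in the spirit of the probabilistic model of Section 3.3.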
**The under-parametrized regime.** Please refer to the first paragraph in point 1 of the general response. In short, while the over-parameterized regime is indeed more practical, we find it important to identify *certain* problem settings, in which we can demonstrate the weakness of linear scalarization while revealing potential benefits of SMTO methods. This helps to counter the recent arguments in the field which suggest linear scalarization is sufficient for MTL, and encourages further research in developing novel SMTO methods.
**Comparison with Xin et al. (2022).** Please refer to the second paragraph in point 1 of the general response. In short, the results in these two works are not directly comparable and do not necessarily contradict each other; instead, they complement each other by providing a more complete view on the strengths and weaknesses of linear scalarization and SMTO methods.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response, I believe it is an interesting paper, and I still have some concerns:
1. Is the under-parametrized regime commonly used in recent works on multi-task learning? Is it meaningful to show that linear scalarization is not sufficient in this regime?
2. The authors study only two cases ($q = 1$ and $q = k-1$) in the underparametrized regime. I understand that it is difficult to study the case where $q$ is not equal to $1$ or $k-1$ in general, so the authors assume that the result changes slightly when $q$ varies. However, can you state this assumption explicitly in your theoretical analysis? Why is this assumption reasonable?
---
Reply to Comment 1.1.1:
Comment: Thank you for the response.
Regarding your first concern, we have two comments:
- From a theoretical perspective, studying the under-parametrized regime is necessary for linear MTL. Otherwise, the model will have sufficient capacity to fit all tasks perfectly, and there is essentially no conflict between different tasks. In other words, it is only within the under-parametrized regime that the 'competition among different tasks' can be accurately reflected, which we believe to be at the core of MTL. We refer the reviewer to a prominent work [1], whose main result (Proposition 3) is derived within the under-parametrized regime.
- On the empirical side, it is pointed out in the literature (see [2]) that under-parameterized models are actually favorable in MTL, as they can help with information transfer and lead to better generalization.
Based on the above points, we believe our study will be of interest to both theorists and practitioners in MTL.
Regarding your second concern, we will elaborate further on our previous response:
- We believe a larger $q$ (which implies larger model capacity) should intuitively help in our problem. We have two observations to support this argument: 1) when the model has sufficient capacity to fit all tasks (i.e., $q \ge k$), the Pareto front reduces to a singleton and can be explored by scalarization; 2) the necessary and sufficient condition under $q=1$ ($C_1$) is more restrictive than the condition under $q=k-1$ ($C_{k-1}$). Specifically, under the simplified probabilistic model discussed in Section 3.3, we can prove that the probability that $C_1$ holds is strictly smaller than that of $C_{k-1}$.
- Therefore, for the case of general $q$, we expect the necessary and sufficient condition to be more restrictive than $C_{k-1}$ while being less restrictive than $C_1$. Since $C_{k-1}$ still does not hold in general, we expect the hardness of full exploration to be true for all $q \in [1,k-1]$.
- Finally, we believe it is more important to reveal 'why' scalarization fails in terms of full exploration. Our key observation is that the multi-surface structure of the feasible region, along with the associated phenomenon of 'gradient disagreement', leads to the failure of full exploration. While it may not be obvious that the conclusion should change smoothly with $q$, we believe it is reasonable to expect a smooth change of the feasible region from a geometric perspective (i.e., that the feasible region will not abruptly collapse into a single surface as we increase $q$). This hypothesis, combined with our observation that gradient disagreement leads to the failure of full exploration, yields the conclusion for general $q$. We leave the rigorous analysis to future work.
Again, we would like to thank Reviewer 2hHW for engaging in the discussion. We will make sure that the above points are adequately addressed in the revision, and will be happy to answer further questions.
References
[1] Wu, Sen, Hongyang R. Zhang, and Christopher Ré. "Understanding and improving information transfer in multi-task learning." In ICLR 2020
[2] Wang et al., "Can Small Heads Help? Understanding and Improving Multi-Task Generalization". In WWW 2022 | Summary: This paper studies the linear scalarization approach in multi-task learning (MTL). It shows theoretically that the linear scalarization is not able to fully capture the Pareto optimal (PO) solutions. It also identifies necessary and sufficient conditions of full exploration of the PO solutions for under-parameterized two-layer linear MTL models.
This explains why scalarization fails in certain cases. Experiments are performed to further verify the theoretical findings.
Strengths: 1. The paper studies an important and timely problem, focusing on whether the popular approach -- linear scalarization for MTL are able to fully capture the Pareto front.
2. The perspective of this paper and techniques to develop the theory under the specific two-layer linear models are unique.
3. Paper is well written and easy to follow.
Weaknesses: 1. There is a lack of discussion of related works that also identifies the failure cases of linear scalarization fully capturing the Pareto front. See [1,2,3,4].
What is the unique contribution of this paper compared to these prior works? Some discussions need to be provided to distinguish with the prior works.
---
[1] Lin, Jiguan G. “Three Methods for Determining Pareto-Optimal Solutions of Multiple-Objective Problems”
[2] Zadeh LA, "Optimality and non-scalar-valued performance criteria"
[3] Goicoechea A, Hansen DR, Duckstein L, "Multiobjective decision analysis with engineering and business applications".
[4] R. Timothy Marler, Jasbir S. Arora, "The weighted sum method for multi-objective optimization: new insights"
[5] Bo Liu, Xingchao Liu, Xiaojie Jin, Peter Stone, Qiang Liu, "Conflict-Averse Gradient Descent for Multi-task Learning"
---
2. In line 56-57, it is too restrictive to claim that "if scalarization cannot fully explore the Pareto front, there is no inherent advantage of SMTOs over scalarization." In fact, there are other benefits of SMTOs beyond this. For SMTO methods like MGDA, the motivation is to find the steepest common descent direction for all objectives at each iteration, and the benefit sometimes lies in the optimization procedure itself, beyond the solutions found. See the toy example in [5]
3. Even though the paper shows that linear scalarization approach is not able to fully explore the Pareto set in some settings, there is no discussion on whether SMTOs are able to fully explore the Pareto set in the same settings.
4. Some questions need to be clarified. See **Questions**
-----------------------------update post rebuttal-------------------------------
I have read the rebuttal and participated in the discussion.
The rebuttal partially addressed my concerns so I remained relatively positive about this work.
It could further improve with a more thorough discussion of the classical works and more precise positioning of their contributions.
--------------------------------------------------------------------------------------
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I do not agree with the claimed contribution 2, line 63-65, where the authors claimed to give the "first guarantee for full exploration in the presence of non-convexity."
- Please see reference [1], where necessary conditions of Pareto optimality are given for the linear scalarization method with "p-directionally convex" requirement of the criterion space, which is not convex.
- Please see reference [2,3], where sufficient conditions of (weak) Pareto optimality are given for the linear scalarization method.
- Please discuss the difference and relations with the listed prior works. Perhaps you need to add the specific settings, such as "for under-parameterized two-layer linear networks".
2. Since this paper is focused on the underparameterized model with $q < k$, how does the theory in this paper help in understanding the question posed in Xin et al. (2022), which concerns deep models that are usually overparameterized?
3. What are the implementation details for visualization in Figure 1?
4. In line 170, what does "gradients disagree" mean? Does it mean they are not in exactly the same direction, or that the angle between the two gradients is larger than a threshold? There should be a formal definition.
---
[1] Lin, Jiguan G. “Three Methods for Determining Pareto-Optimal Solutions of Multiple-Objective Problems”
[2] Zadeh LA, "Optimality and non-scalar-valued performance criteria"
[3] Goicoechea A, Hansen DR, Duckstein L, "Multiobjective decision analysis with engineering and business applications".
---
### Minor
Typos and grammar errors:
Line 2: "since its inception" -> "because of its inception"
Line 96: "vecotrs" -> "vectors"
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have adequately addressed the limitations in Section 5.
I expect the authors to conduct a thorough review with the most relevant works, see Weaknesses-1.
Overall, this paper studies an important problem with a unique perspective.
I am willing to increase my score if the authors can address these questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We express our sincere gratitude to Reviewer c3Pi for taking the time to review our paper, providing valuable and constructive feedback, and pointing out useful references. We hope to address all comments in the review below, following the order they were made.
**Comparison with prior works (Weakness 1 and Question 1).** Thank you for pointing out the references! After carefully checking these papers, we find that [1] does provide a sufficient condition (p-directionally convex) for full exploration (rather than a necessary condition), which we were previously unaware of. We are grateful to the reviewer for pointing this out and will certainly 1) incorporate it into the related work and 2) revise the second point of contribution accordingly. However, we are unable to find necessary or sufficient conditions for full exploration in [2,3,4]. We kindly ask the reviewer to point us to the relevant content in these papers.
One obvious difference between our work and prior works, as the reviewer has mentioned, is that we focus on a specific setting with a concrete model (linear networks), while prior works provide more general (and therefore much weaker) results, concerning the properties of the objective functions. But we believe the most important difference that allows our work to stand out is that the conditions we uncover are **both necessary and sufficient**, so they provide a complete picture as to when scalarization can fully explore the PF in our setting. Importantly, this facilitates the understanding of the *hardness* of full-exploration. By examining the restrictiveness of the condition, we come to the conclusion that such weakness of scalarization is fundamental. In contrast, to the best of our knowledge, there is no prior work that identifies a both necessary and sufficient condition in any non-convex setting. A sufficient condition alone is not as indicative of the hardness of full-exploration or the fundamental weakness of scalarization.
**Overclaiming (Weakness 2).** We agree with the reviewer's comment and will revise this sentence. We understand that there are other aspects in comparing scalarization and SMTOs aside from the one we take, and have pointed out in the future direction (Line 356-357) that a rigorous analysis of the advantages of SMTOs is an important future work.
**Advantages of SMTO methods (Weakness 3).** We agree this is a valid point, and have replied in point 4 of the general response. In short, we don’t think one SMTO method can fix the issue of linear scalarization in every scenario and we were not advocating for one particular method such as MGDA. On the other hand, we do believe that developing novel SMTO methods is a valuable line of research as they provide great flexibility in exploring different parts of the feasible region, and the argument that linear scalarization is sufficient for MTL should be rejected. We also perform additional experiments on MGDA and its variants to showcase that their capabilities of finding balanced solutions are not affected by the choice of random seed (see Figure C in the attached PDF in the general response).
**The under-parametrized regime (Question 2).** Please refer to point 1 of the general response. In short, our results do not have direct implications on the results of Xin et al. (2022), as they are established under different settings. Rather, they complement each other by providing a more complete view on the strengths and weaknesses of linear scalarization and SMTO methods.
**Implementation details of Figure 1 (Question 3).** Figure 1 is generated from a simple three-task linear MTL problem that we constructed, using Eq. (4). Specifically, we set $\hat y_1\approx(0.98,0,0.2), \hat y_2\approx(-0.49,-0.85,0.2), \hat y_3\approx(-0.49,0.85,0.2)$ (the number of data points is $n=3$; this is a rotated version of the equiangular tight frame), set $q=1$ (the width of the network is one, i.e., under-parameterized), and plotted the achievable points of Eq. (4) by sweeping $P_Z$ (the set of rank-1 projection matrices). The software we used is Mathematica. We will include the details in the Appendix.
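The sweep described above can be sketched numerically (our own Python illustration, not the authors' Mathematica code; Eq. (4) is not reproduced in this thread, so we use the squared projection residual of each target as a hypothetical stand-in for the per-task objective):

```python
import numpy as np

# Targets from the rebuttal (a rotated equiangular tight frame, n = 3).
y_hat = np.array([
    [0.98,  0.00, 0.2],
    [-0.49, -0.85, 0.2],
    [-0.49,  0.85, 0.2],
])

def rank1_projection(z):
    """Orthogonal projection onto span{z} (a rank-1 projection matrix P_Z)."""
    z = z / np.linalg.norm(z)
    return np.outer(z, z)

def task_losses(P):
    """Stand-in per-task objective: squared residual after projecting each target.
    (An assumed surrogate; the paper's Eq. (4) is not given in this excerpt.)"""
    return np.array([np.linalg.norm(y - P @ y) ** 2 for y in y_hat])

# Sweep rank-1 projections by sampling directions z and record achievable points.
rng = np.random.default_rng(0)
points = np.array([task_losses(rank1_projection(z))
                   for z in rng.normal(size=(2000, 3))])
print(points.shape)  # (2000, 3): one loss triple per sampled projection
```

Plotting the resulting loss triples in 3D should reveal the multi-surface structure of the feasible region discussed in the paper.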
**Gradient disagreement (Question 4).** We are sorry for the confusion. 'Gradient disagreement' refers to a situation where a point lies at the intersection of two surfaces, and the tangent planes to the two surfaces at that point differ. Equivalently, it means that the two gradients are not in exactly the same direction, so your first hypothesis is correct. We will formally define this concept in the revision. The reviewer can also refer to point 3 of the general response for more explanations.
**Minor.** Thanks for pointing them out, we will revise accordingly.
**References**
[1] Lin, Jiguan G. "Three methods for determining Pareto-optimal solutions of multiple-objective problems." Directions in Large-Scale Systems: Many-Person Optimization and Decentralized Control. Boston, MA: Springer US, 1976. 117-138.
[2] Zadeh, Lotfi. "Optimality and non-scalar-valued performance criteria." IEEE Transactions on Automatic Control 8.1 (1963): 59-60.
[3] Goicoechea, Ambrose, Don R. Hansen, and Lucien Duckstein. "Multiobjective decision analysis with engineering and business applications." Wiley, 1982.
[4] Marler, R. Timothy, and Jasbir S. Arora. "The weighted sum method for multi-objective optimization: new insights." Structural and multidisciplinary optimization 41 (2010): 853-862.
---
Rebuttal Comment 1.1:
Title: Discussion of related works
Comment: Thanks for the rebuttal.
Regarding the concern of **Comparison with prior works (Weakness 1 and Question 1)**, there seems to be some confusion in definitions.
I will first clarify some definitions of **sufficient and necessary conditions for (weak) Pareto optimality (PO)** that are used in prior works.
For easy reference, there is a survey paper [5, Section 2.3] that provides all the definitions.
1. If a formulation (e.g. linear scalarization) provides a **sufficient condition for PO**, its solutions (given all possible scalarizations) are always Pareto optimal, though it may not cover all Pareto optimal points. The solution set is a subset of the Pareto optimal set.
2. If a formulation provides a **necessary condition for PO**, then a Pareto optimal point must be a solution to the formulation, though some solutions may not be Pareto optimal. The Pareto optimal set is a subset of the solution set.
To ensure linear scalarization fully explores the Pareto front (sufficient condition for full exploration), it means the PO set is a subset of the solutions of linear scalarization, the **necessary condition for PO**.
[1] provides necessary conditions of PO, or sufficient condition for full exploration, and [2,3] provide sufficient conditions for PO.
So I think the conditions in [2,3] are not the same as the condition studied in this paper. Thank you for your response! And it would be better if you could have more discussion of these prior works.
---
[5] R.T. Marler and J.S. Arora, "Survey of multi-objective optimization methods for engineering"
---
Reply to Comment 1.1.1:
Comment: Thank you for the clarification. We now have a better understanding of what "sufficient/necessary conditions for PO" mean. Apart from [1], which was already included, we will further add these results to the related work section---the discussion below Eq. (3) in [2], Theorem 1 in [5], and the discussion below Eq. (4.60) in [6] (another sufficient condition for PO we are aware of). Unfortunately, we were unable to access [3] online; we would appreciate it if the reviewer could point us to an accessible source. We thank the reviewer once more for bringing these classical results to our attention, and will make sure to give them proper credit in our revision.
Among the vast literature that concerns Pareto optimality in multi-task learning, one thing that we find missing is a systematic and rigorous study of *when and why scalarization fails to achieve a specific Pareto optimum*. Prior work typically resorts to *hypothetical examples* (e.g., Figure 4.9 in [6]), or applies intuitive yet theoretically-unjustified descriptions (e.g., "the non-convex regions of Pareto fronts" in [Zhang 2023](https://openreview.net/pdf?id=M8rwWdaGa6x)) to demonstrate the failure modes of scalarization. Our study---identifying the multi-surface structure of the feasible region and uncovering the phenomenon of gradient disagreement---serves as a first step towards filling this important gap in the literature. We hope this further underscores the significance of our work, beyond providing a more balanced and comprehensive view on the strengths and weaknesses of scalarization and SMTO.
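As a side note, the classic failure mode on a concave front can be reproduced in a few lines (our own toy illustration, unrelated to the paper's linear-network setting): sweeping all positive weight vectors over a quarter-circle Pareto front recovers only its two endpoints.

```python
import numpy as np

# Feasible boundary for two minimized objectives: a concave (quarter-circle) front.
theta = np.linspace(0.0, np.pi / 2, 201)
front = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # rows are (f1, f2)

# Sweep scalarization weights w = (a, 1 - a) and record which front point wins.
achieved = set()
for a in np.linspace(0.01, 0.99, 99):
    w = np.array([a, 1.0 - a])
    achieved.add(int(np.argmin(front @ w)))  # minimizer of the weighted sum

print(sorted(achieved))  # [0, 200]: only the two extreme points are reachable
```

Every interior point of this front is Pareto optimal, yet no positive weighting selects it, which matches the hypothetical examples cited above.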
Reference
[6] Boyd, Stephen P., and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004. | Rebuttal 1:
Rebuttal: Here we address some common concerns raised by multiple reviewers.
**1. The under-parameterized regime.** (Reviewer J52d, c3Pi, 2hHW, aGcP)
We acknowledge that the over-parameterized regime is more practical, and the conclusions in this paper may not directly generalize. As a matter of fact, even for linear over-parameterized models, scalarization can fully explore the Pareto front (PF), and the limitation that we have revealed no longer holds. Nevertheless, our main focus is to show that under *certain* problem settings, scalarization has fundamental limitations, while SMTO methods can provide benefits (e.g., finding balanced solutions). In doing so, we strive to bolster research in SMTO by addressing and countering recent arguments found in [1,2], which excessively laud the merits of scalarization, consequently diminishing the perceived value of SMTO. Our aim is to provide a more balanced perspective on the subject and clarify the importance of SMTO in the context of relevant research.
We emphasize that we take a different perspective—the full exploration of PF—in comparing scalarization and SMTO methods than [1], which focuses on the *accuracy*. As such, the conclusions in these two works are not directly comparable. In contrast, they complement each other by providing different views on the strengths and weaknesses of these two lines of work, which we believe is of great significance to both researchers and practitioners.
**2. Simple models.** (Reviewer J52d, CFe6)
Generalization to the non-linear setting is an important future work that goes beyond the scope of our paper. Additionally,
- Even for linear networks, the objective functions are not convex w.r.t. the model parameters, so the theoretical analysis demands fundamentally different techniques than those in convex analysis (see [3]).
- As stated in the paper, our results are strong in the sense that they are inherent, i.e., our conclusions are independent of the specific optimization algorithms used to minimize the scalarization objective. Furthermore, our results can be easily generalized to multi-layer linear networks, as they have the same representation power as their two-layer counterparts.
**3. Why scalarization cannot explore intersection points?** (Reviewer nBcV, c3Pi, aGcP)
We apologize for the confusion, and provide detailed explanations in what follows.
We use Figure 4.9 in [3] for ease of illustration (see [this link](https://drive.google.com/file/d/1d-5PWLSs-MJSGpxgLr_sDdCHS2IVQpyR/view?usp=drive_link)). A point $P$ lying on the boundary of the feasible region can be achieved by scalarization if and only if there exists a hyperplane through $P$ such that the feasible region lies entirely above it (a supporting hyperplane). In Figure 4.9, $f_0(x_1)$ and $f_0(x_2)$ are positive examples, as the dashed lines do not intersect the shaded region, while $f_0(x_3)$ is a negative example. The normal vector of the hyperplane is proportional to the scalarization coefficients.
By definition, if a hyperplane at $P$ lies below the feasible region, its normal vector must be a subgradient of the surface. When the boundary of the feasible region is differentiable and the subdifferential set is non-empty, the normal vector must be the gradient of the surface, and the hyperplane becomes the tangent plane at $P$. So the mathematical characterization of ‘gradient’ would be the normal vector of the tangent plane.
Now suppose $P$ lies at the intersection of two differentiable surfaces $S_1$ and $S_2$, and that $P$ can be achieved by scalarization. Applying the above argument to $S_1$ and $P$, we know that the scalarization coefficients are proportional to the gradient w.r.t. $S_1$. Similarly, applying the above argument to $S_2$ and $P$ yields that the scalarization coefficients are proportional to the gradient w.r.t. $S_2$. This will result in a contradiction if the two gradients w.r.t. $S_1$ and $S_2$ at $P$ lie in different directions, a phenomenon we refer to as ‘gradient disagreement’.
We hope the above explanation clarifies the reviewers’ concern.
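For reference, the argument can be condensed in symbols (our notation): a boundary point $P$ with loss vector $f(P)$ is achieved by scalarization with weights $w \ge 0$, $w \neq 0$, if and only if

$$w^\top f(P) \;\le\; w^\top f(Q) \quad \text{for all feasible } Q,$$

i.e., the hyperplane $\{f : w^\top f = w^\top f(P)\}$ supports the feasible region at $P$. If $P \in S_1 \cap S_2$ with (unit) normals $n_1$ and $n_2$, supporting $S_1$ forces $w \propto n_1$ and supporting $S_2$ forces $w \propto n_2$; when $n_1 \not\parallel n_2$ (gradient disagreement), no such $w$ exists.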
**4. Advantages of SMTO methods.** (Reviewers nBcV, c3Pi)
We acknowledge this is a limitation of our work, and have mentioned it in the future work section. Additionally,
- We hypothesize that there does not exist a *single* SMTO method that can trace out the PF in every case and therefore dominate the others. But the abundance of SMTO methods does provide flexibility for practitioners: depending on the problem structure and specific requirement, one may select proper SMTO methods that are capable of exploring *some* part of the PF. For instance, when the balancedness of solution is of utmost importance, [4] is a good fit.
- As a consequence, it is not our goal to advocate for any specific SMTO method like MGDA. Instead, by pointing out a fundamental limitation of scalarization, we hope to reject the prevalent claim that scalarization is sufficient for MTL, and bolster research in the development of novel SMTO methods. We believe our paper, combined with [1,2], provides a more comprehensive understanding of the strengths and limitations of scalarization and SMTO methods, and contributes to a healthier and more balanced understanding of different MTL paradigms.
- Finally, we conduct additional experiments on MGDA and MGDA-UB. As reflected in Figure C in the attached PDF, their capabilities of finding balanced solutions are not affected by the choice of random seed. This further strengthens our argument on the potential benefit of some SMTO methods.
**References**
[1] Xin, Derrick, et al. Do Current Multi-Task Optimization Methods in Deep Learning Even Help? NeurIPS 2022
[2] Kurin, Vitaly, et al. In defense of the unitary scalarization for deep multi-task learning. NeurIPS 2022
[3] Boyd, Stephen P., and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004
[4] Navon, Aviv, et al. Multi-task learning as a bargaining game. arXiv:2202.01017
Pdf: /pdf/c7a76df6556b5d10a6f04f6f0d38fad928b6e647.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper revisits linear scalarization in multi-task learning from a theoretical perspective.
The authors reveal that scalarization is, in general, incapable of tracing out the Pareto front.
Specifically, when the model is under-parametrized, a multi-surface structure of the feasible region is revealed by the authors and it can be used to identify necessary and sufficient conditions for full exploration.
Experimental results further verify the theoretical findings.
Strengths: 1. This paper provides rigorous analysis to answer an important question in multi-task learning: whether linear scalarization is capable of fully exploring the Pareto front.
The authors provide insightful remarks for the theories, and the figures provide intuitive explanations for the multi-surface structure of the feasible region.
2. The paper is well-written and easy to follow. Sufficient preliminaries and explanations are provided.
Weaknesses: 1. The paper only studies the under-parametrized regime. It remains unclear whether the conclusion still holds in over-parametrized regimes, where the model capacity is large enough that task competition does not exist.
2. The paper mainly focuses on a two-layer neural network. Therefore, it remains unclear whether the conclusion still holds for non-convex functions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please check the weakness.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank Reviewer J52d for taking the time to review our paper. We are grateful that the reviewer appreciates our theoretical contributions. The concerns raised by the reviewer are addressed in the general response. Specifically,
**The under-parameterized regime.** Please refer to point 1 of the general response. In short, while the over-parameterized regime is indeed more practical and our results might not directly transfer, we still find it important to identify *certain* problem settings in which we can demonstrate the weakness of linear scalarization while revealing potential benefits of SMTO methods. This helps to counter the recent arguments in the field which suggest linear scalarization is sufficient for MTL, and encourages further research in developing novel SMTO methods.
**Toy models.** Please refer to point 2 of the general response. We emphasize that even for two-layer linear networks, the loss function is *not* convex w.r.t. the model parameters, and that both the techniques and results in this paper are novel (to the best of our knowledge) and greatly advance those in the literature of convex analysis.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for addressing my concerns. Although the results may not hold in the over-parameterized regime, this paper still provides some new insights into the weakness of linear scalarization and the strength of SMTO methods. I prefer to increase my score.
---
Reply to Comment 1.1.1:
Comment: Thanks for your acknowledgment of our response and taking the time to review our paper! | Summary: In this paper, the authors introduce a novel mathematical framework to answer the question "Under what conditions can (or cannot) linear scalarization recover the Pareto front of a given multi-objective optimization problem?". To this end, the paper considers the multi-task learning setting with a simple linear multi-task learning model in the under-parameterized regime, introduces new concepts such as "feasible regions" and "gradient disagreement", and states that the intersection points of the aforementioned feasible regions are unattainable by optimizing any linear scalarization of the multiple objectives. Accordingly, the paper derives necessary and sufficient conditions for when optimizing a linear scalarization of the multiple objectives can recover the Pareto front, by checking for situations where the Pareto front of the problem lies entirely in one so-called feasible region. The paper then provides empirical validation of the theory, suggesting that linear scalarization indeed fails to cover the Pareto front, especially the "balanced" Pareto optimal points. The empirical results further suggest that Specialized Multi-Task Optimizers (SMTOs) like MGDA and MGDA-UB can achieve more "balanced" Pareto optimal points compared to linear scalarization.
Strengths: * The paper is aimed at finding necessary and sufficient conditions for when scalarization cannot recover the Pareto front, which is an important and pertinent problem in the field of multi-objective optimization.
* This paper provides a mathematical framework for analyzing the aforementioned problem, which may provide insights into how to select a method of solution for a multi-objective optimization problem based on the nature of the problem.
Weaknesses: * The necessary and sufficient conditions derived for linear scalarization to recover the Pareto front in this paper require Pareto optimal objective vectors to be "on the same feasible surface", rather than at an "intersection point of feasible surfaces", as described in Lemmas 3.4 and 3.5. Yet, it is unclear why these intersection points cannot be attained by some scalarization of the objectives. Specifically, the "gradient disagreement" concept is unclear; for example, how to mathematically characterize this gradient in the objective space, and why this gradient is relevant to the optimization of the multiple objectives.
* It is unclear where the origin (the (0, 0, 0) point) of the objective spaces used in Figures 1 and 3 lies. This makes it a bit hard to understand the provided illustrations of feasible regions in the objective space.
* While the paper identifies a weakness of scalarization (i.e., the inability to recover the whole Pareto front of a problem) in the under-parameterized regime, it does not give theoretical validation of whether SMTOs can overcome this weakness. The empirical validation showing that SMTOs can overcome the aforementioned weakness has some limitations, as described in the following points. Thus, the conclusion of the paper does not seem justified by the theoretical and empirical results.
* The experiments done to verify the theory seem inconsistent between SMTOs and linear scalarization methods. Specifically, SMTOs are optimized using iterative methods for 100 epochs, while scalarization solutions are obtained from the corresponding closed-form expression for the optimal solution. A fair comparison of methods would be either running SMTO methods for a very large number of iterations, or using an iterative updating scheme for scalarizations for the same number of epochs.
* By nature of the MGDA algorithm, if deterministic gradients are used, the algorithm should terminate at the first encounter of a Pareto stationary point (a notion of stationarity for multiple objectives, similar to stationarity in single-objective optimization), which is often a point where either at least one objective has achieved stationarity, or the objectives begin to conflict with one another (see some discussion on this here [1]). Thus, such a point usually favors one objective over the others (unless the objectives are perfectly aligned). In this sense, it is unclear how MGDA has achieved an interior point of the Pareto front, as implied by Figure 3.
* At the beginning of the paper (such as in the abstract), the authors pose the weakness of scalarization relative to SMTOs as the inability to "fully recover" the Pareto front, while the empirical evaluations suggest that SMTOs can also only recover a single Pareto optimal point, which seems not to align with the initial message.
Minor comments
* The definition of the Pareto front introduced in the paper does not seem standard (line 119). Specifically, in this paper it is referred to as the collection of Pareto optimal (PO) points $\theta^*$, while usually the Pareto front is the set of objective values $\\{L_i(\theta^*)\\}_{i\in[k]}$ corresponding to Pareto optimal points.
[1] Liu, Xingchao, Xin Tong, and Qiang Liu. "Profiling pareto front with multi-objective stein variational gradient descent." Advances in Neural Information Processing Systems 34 (2021): 14721-14733.
Edit: Added missing reference.
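For concreteness, the MGDA stopping behavior discussed in the weaknesses above (deterministic MGDA halts where the min-norm convex combination of task gradients vanishes, i.e., at a Pareto stationary point) can be sketched with the two-task closed form of Désidéri's min-norm subproblem. This is an illustrative sketch, not code from the paper under review; the function name `mgda_direction` is made up here.

```python
import numpy as np

def mgda_direction(g1, g2):
    """Two-task closed form of the MGDA min-norm subproblem:
    argmin over gamma in [0, 1] of ||gamma*g1 + (1-gamma)*g2||^2.
    The returned common descent direction is zero exactly at a
    Pareto stationary point, which is where deterministic MGDA stops."""
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom == 0.0:
        gamma = 0.5  # identical gradients: any convex weight gives the same direction
    else:
        gamma = float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return gamma * g1 + (1.0 - gamma) * g2
```

For orthogonal gradients the direction splits the difference; for exactly opposing gradients it is zero, illustrating why MGDA can terminate as soon as objectives conflict.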
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: * What is the mathematical definition of the gradients depicted in Figure 3, and what is the precise relationship between the “gradient disagreement” and the inability to recover this point by optimizing a linear scalarization of the objectives?
* Could the authors provide some intuition or some physical meaning of the feasible regions described in the paper, and how they relate to the multiple objectives?
* Would an implementation change of the experiments as described in the previous section (point 4) change the solution distribution of each algorithm compared to Figure 3?
* Given the nature of MGDA algorithm as described in previous section (point 5), could the authors provide the reason for the convergence of MGDA like algorithms for the interior point of the Pareto front of the problem?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations of the proposed framework are discussed in the paper. In addition to these limitations, the paper has limited validation as to whether SMTOs can overcome the weaknesses of linear scalarization, as described in previous sections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer nBcV for their insightful comments and constructive feedback. We address concerns and answer questions posed by the reviewer below, following the order they were made.
**Why scalarization cannot explore intersection points (Weakness 1 and Question 1).** Please refer to point 3 of the general response.
**The origin point in the figures (Weakness 2).** Figures 1 and 3 are generated from MTL problems with three tasks, and the feasible regions indicate the achievable set of MSEs (each axis corresponds to the MSE on a task; the smaller the better). Hence, the origin point (0,0,0) indicates solving the MTL problem perfectly, whereas the point (1,1,1) corresponds to a bad solution and is not Pareto optimal. We will regenerate Figure 3 to ensure consistency with Figure 1 by setting the origin (0,0,0) on the front and the point (1,1,1) on the back.
**Advantages of SMTO methods (Weakness 3).** We agree this is a valid point, and have replied in point 4 of the general response. In short, we don’t think one SMTO method can fix the issue of scalarization and we were not advocating for any specific method. On the other hand, we do believe that developing novel SMTO methods is a valuable line of research as they provide flexibility in exploring different parts of the feasible region, and the prevalent argument that scalarization is sufficient for MTL does not hold in all the cases, at least in the under-parametrized regimes. We also perform additional experiments on MGDA and its variants to show that their capabilities of finding balanced solutions are not affected by the choice of random seed (see Figure C in the attached PDF in the general response).
**Unfair comparison (Weakness 4 and Question 3).** We respectfully disagree with this point since 1) using closed-form solutions will only favor scalarization. Therefore, if our conclusion is valid in the current setup, it’s going to hold with finite iterations as well; 2) in this paper, we study the weakness of scalarization on the *representation* level. Using finite iteration methods without resorting to the existing closed-form solutions will unnecessarily introduce artifacts from the *optimization* procedure.
**Why does MGDA converge to interior points? (Weakness 5 and Question 4)** After checking the reference, we failed to find a precise statement/theorem that claims or implies that MGDA will converge to a point on the periphery of the Pareto front. We would appreciate it if the reviewer could elaborate further on this point, or point to the precise location in the reference.
To further strengthen our empirical findings, we perform some additional experiments. First, we plot the optimization trajectories of MGDA and MGDA-UB in Figure A in the attached PDF. Both algorithms first overfit task 2 and approach scalarization optimal points, but then turn sharply twice and finally reach more balanced solutions. Second, Figure C demonstrates that the balancedness of the final solution is not affected by the random seed.
**Deviation from the initial message (Weakness 6).** We apologize for the confusion here. We did not intend to show that a particular SMTO method can overcome the weakness of scalarization. Instead, here are some high-level messages that we would like to deliver:
- Xin et al. (2022) observed that scalarization is capable of tracing out the PF when the objective functions are convex, and used it to support their hypothesis that there is no inherent advantage of SMTO methods and scalarization is sufficient for MTL. Our theoretical analysis demonstrates that the reasoning is untenable. Specifically, even with some mild non-convexity, scalarization is incapable of fully exploring the PF. This calls for further research on explaining the empirical success of scalarization and reconciling with the theoretical limit that we have revealed.
- For the experiment on MGDA, we did not intend to show that it can overcome the limitation of scalarization. As a matter of fact, we do not think there exists a single SMTO method that can fully explore the PF in every scenario and can be taken as the gold standard in MTL. Rather, we would like to demonstrate that certain parts of the PF, which are not achievable by scalarization yet are of potential interest (e.g., a region that contains more balanced solutions), can be achieved by certain SMTO methods. This implies that research on SMTO is not futile, and helps to reject the claim that scalarization is sufficient for MTL. In doing so, we hope to foster a healthier and more balanced development in the field of MTL.
**Explanation of the feasible region (Question 2).** The feasible region is originally defined as the set of all possible points in $\mathbb{R}^k$ (each dimension is a quadratic function, as shown in Eq. (4)) that can be achieved by varying $P_Z$, a projection matrix determined by the weight of the hidden layer $W$. Intuitively, the feasible region is a reflection of the network's representation power. In our work, we slightly modify the definition of the feasible region by imposing some restrictions on $P_Z$ (see Eq. (5)). This helps to simplify the subsequent analysis while keeping the PF intact, meaning that the original PF is still a subset of our defined feasible region.
The multi-surface structure arose when we were performing a fine-grained analysis of the feasible region. A possible hypothesis is related to game theory—viewing each task as a player, the number of surfaces equals the possible coalitions formed by the $k$ players ($2^k$). Each surface corresponds to one particular coalition and represents a set of possible outcomes the coalition can obtain, potentially favorable to the players within. Finally, the intersection of different surfaces indicates conflict among coalitions.
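The feasible-region explanation above can be made concrete numerically: under an under-parameterized linear model, each choice of a low-rank projection yields one vector of per-task MSEs, and sweeping over projections traces out the feasible region in $\mathbb{R}^k$. The sketch below is illustrative only (random data, rank-1 bottleneck, three tasks); it is not the paper's construction and the dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r, n = 4, 3, 1, 200           # input dim, tasks, bottleneck rank, samples
X = rng.standard_normal((n, d))
A = rng.standard_normal((k, d))     # ground-truth linear task maps
Y = X @ A.T

def task_mses(P):
    # Per-task MSE after forcing inputs through a rank-r projector P (d x d),
    # playing the role of P_Z: each P gives one achievable point in R^k.
    Xp = X @ P
    mses = []
    for i in range(k):
        w, *_ = np.linalg.lstsq(Xp, Y[:, i], rcond=None)
        resid = Xp @ w - Y[:, i]
        mses.append(resid @ resid / n)
    return np.array(mses)

# Sample random rank-r projectors and collect achievable MSE vectors:
# the resulting cloud lies inside the feasible region in R^k.
pts = []
for _ in range(100):
    U, *_ = np.linalg.svd(rng.standard_normal((d, r)), full_matrices=False)
    pts.append(task_mses(U @ U.T))
pts = np.array(pts)
```

Plotting `pts` for three tasks gives a picture analogous to the shaded achievable set $\mathcal{O}$ discussed above, with the Pareto front on its lower-left surface.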
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the clarifications, especially on the concepts of the feasible regions and the gradient disagreement. I have a few follow-up questions and comments based on these responses.
* In the 2D objective space figure provided in the link, is it possible to point out the corresponding feasible regions and the corresponding intersection point of these feasible regions?
* As mentioned in point 4 of the response, did the authors consider “artifacts of optimization procedure” might also affect (possibly in a favorable way) the SMTO methods?
* In the caption of Figure C in the PDF, it says "Both algorithms tend to find solutions located in the interior of the feasible region". Does this mean these are not necessarily on the Pareto front?
* Regarding point 5 of the response, the reference is not for theoretical results on why MGDA can't converge to an interior point of the Pareto front, but rather for empirical results, similar to what this paper is trying to provide. Some of the locations in the paper that discuss and demonstrate this issue are: Figure 1 and Example 1 (a simple toy example), and Figure 2 (b) (the stochastic version of MGDA, applied to several datasets).
* I feel the setting used in this paper is too different from that of Xin et al. (2022) for its results to dispute the claims in Xin et al. (2022), as suggested in point 6.
---
Reply to Comment 1.1.1:
Comment: Thanks for your response. We will answer your questions in a point-by-point manner:
- **Feasible region.** As stated in the caption of the figure, the shaded region (denoted as $\mathcal{O}$) is the set of achievable values. Rigorously speaking, the feasible region that we have defined is a subset of $\mathcal{O}$, but these two can be treated equally in our setting. The reason is that the feasible region contains the entire Pareto front (PF) (explained at the beginning of Section 3.1), and since our interest lies in the exploration of PF, ignoring the points that are Pareto dominated will not affect our analysis.
- **Intersection point.** The figure comes from a classical textbook [1] and demonstrates a *hypothetical* failure case of scalarization. In contrast, the intersection point is a new discovery in a concrete setting and hence cannot be reflected in the figure. As stated in the paper, what we have done partly is to 'penetrate through the surface and gain a holistic view of the feasible region'. By solely examining the surface, it is not possible to identify the intersection points or the phenomenon of gradient disagreement. We believe this is one of our core contributions: our work advances previous studies by offering new insights into the failure modes of scalarization, going beyond mere hypothetical examples.
- **Artifacts of optimization procedure.** This is not likely, especially in light of the additional experiments that we have performed. Specifically, we stick to the original implementation of MGDA and MGDA-UB with no modifications at all. To eliminate the effect of random initialization, we performed additional experiments and observed similar results (see Figure C of the attached PDF). Finally, we observed consistent results when varying the tasks, further consolidating the effectiveness of MGDA and MGDA-UB in our setting.
- **Interior point.** We are sorry for the confusion. We intended to claim that the two algorithms tend to find solutions located at the interior of the PF, which is a part of the *surface* of the feasible region. As stated previously, the feasible region contains the entire PF, and the PF must lie on the surface of the feasible region. Furthermore, MGDA and MGDA-UB are guaranteed to converge to Pareto optimal solutions (Theorem 2.2 of [2] and Theorem 1 in [3]). Therefore, the two convergent points must lie on the surface of the feasible region and belong to the PF. In our experiments, we also plot non-negative orthants at the two convergent points and observed that they do not intersect with the feasible region, meaning that they are indeed Pareto optimal.
- **Convergence of MGDA.** We agree it is possible to construct examples where MGDA converges to bad solutions. As stated in our previous response, we don't think one SMTO algorithm can work in every scenario, and the choice of algorithm should depend on the problem structure and specific requirements. Nevertheless, since our experiments are performed in a more practical scenario using real (albeit simple) neural networks, and the experimental results are consistent across different random seeds and tasks, we believe it is reasonable to claim that 'certain SMTO methods have the potential to find balanced solutions'. After all, it is not our goal to advocate for a particular algorithm; what we hope is to bolster research in SMTO and promote a healthier and more balanced development in MTL.
- **Xin et al. (2022).** We do realize that the difference of settings makes the results in these two works not directly comparable. Nevertheless, we would like to clarify a few points:
- We do not intend to refute all the claims and conclusions in Xin et al. (2022), but we do feel that there are *some* reasoning gaps in their work that we would like to point out and clarify. Specifically, it is marked bold in their paper that 'in the convex setting it is provable that no algorithm can outperform scalarization', which the authors used to set the tone for their paper. However, their experiments are performed with deep neural networks, and the authors simply connect the theory and experiments by listing a number of open questions. Through our theoretical analysis, we found that the conclusions in the convex setting do not transfer. Therefore, the theoretical results in Xin et al. (2022) cannot be used to support their empirical findings. Instead, the community needs to develop a new theory to explain the empirical success of linear scalarization.
- We see our work as a complement to Xin et al. (2022). Putting together, they provide a more comprehensive view on the strengths and weaknesses of linear scalarization and SMTO.
We thank the reviewer for pointing this out and will tune down our tone in drawing comparison with Xin et al. (2022). The remaining points will also be addressed accordingly. We appreciate the reviewer's constructive feedback for our work, and will be happy to answer any further questions. | null | null | null | null |
LinkerNet: Fragment Poses and Linker Co-Design with 3D Equivariant Diffusion | Accept (spotlight) | Summary: In this paper, the authors formulated a new linker design task where the fragment poses are unknown. The authors proposed a 3D equivariant diffusion model which enables the co-design of fragment poses and the linker structure in a unified framework.
Strengths: 1. The paper is well-written and easy to follow.
2. The authors developed an effective fragment pose prediction module inspired by the Newton-Euler equations in rigid body mechanics, allowing for the accurate adjustment of fragment center positions and rotations. This is the main technical contribution compared with the previous work DiffLinker.
3. Comprehensive experiments on ZINC and PROTAC-DB datasets demonstrate the superiority of LinkerNet over other baseline methods in both unconstrained and constrained generation settings.
Weaknesses: 1. The authors do not show the application of LinkerNet for molecule generation conditioned on the target protein.
2. The code is not provided.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: please see the weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have adequately discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback and suggestions. Please see below for our responses to the comments.
**Q1: The authors do not show the application of LinkerNet for molecule generation conditioned on the target protein.**
A1: Thank you for pointing this out! This is indeed one limitation as we discussed in the Conclusion section. We will consider the protein context and apply our model in a more realistic scenario in future work.
**Q2: The code is not provided.**
A2: We are committed to open-sourcing the data, training/inference code, and the model checkpoint once the paper is published.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: I have read the reply and appreciate the author's reply. My concerns are mostly resolved. Thanks! | Summary: The article discusses the problem of designing linkers to connect different molecular fragments in order to form stable drug-candidate molecules, specifically in targeted protein degradation techniques such as PROteolysis TArgeting Chimeras (PROTACs). One significant challenge in these techniques is that existing models for linker design assume that the relative positions of the fragments are known, which may not be the case in real scenarios. This problem is addressed by the authors through the development of a 3D equivariant diffusion model that jointly learns the generative process of both fragment poses and the 3D structure of the linker, viewing fragments as rigid bodies and designing a fragment pose prediction module inspired by the Newton-Euler equations in rigid body mechanics.
The proposed 3D equivariant diffusion model, called LinkerNet, jointly learns the generative process of both fragment poses and the 3D structure of the linker. Fragments are viewed as rigid bodies and designed with a fragment pose prediction module inspired by the Newton-Euler equations in rigid body mechanics. To address the problem of designing linkers when fragment poses are unknown in 3D space, LinkerNet is able to co-design fragment poses and the linker. The model represents each fragment as a rigid body and its pose as the position of its center and the rotation. The linker is a 3D graph with atom positions, atom types, and bond types. LinkerNet is a diffusion model for the fragment poses and linker co-design task, involving an equivariant network and physics-inspired prediction module for denoising fragment poses and the linker. The model is able to generate chemically valid, synthetically-accessible, and low-energy molecules under both unconstrained and constrained generation settings.
Strengths: First, the proposed method, LinkerNet, differs from previous methods in linker generation by co-designing fragment poses and the linker in a 3D equivariant diffusion model, instead of assuming that the relative positions of the fragments are known. Previous methods mostly focus on 2D or 3D linker design with fixed fragment poses, while LinkerNet addresses the more general linker design problem where fragment poses are unknown in 3D space. This perspective is new and can be a good guidance for future works on linker design.
The authors also introduce a physics-based neural network for their task, which is also novel. The physics-inspired neural network in LinkerNet is advantageous in co-designing fragment poses and the 3D linker for stable drug-candidate molecules. It allows for predicting fragment poses in a way that takes into account the molecular geometry and leverages Newton-Euler equations in rigid body mechanics, making it more effective than simply predicting an invariant update in the local coordinate system. The neural network predicts forces on each fragment atom to compute the total force and torque on each fragment, which is more effective than predicting only the change in translation in the local frame.
Experimentally, the authors demonstrate that their proposed model can generate valid, synthetic-accessible, and low-energy molecules under both unconstrained and constrained generation settings, outperforming other existing generative models.
Overall, the paper presents a novel good-quality work with good motivation and theoretical justification. For experiments, LinkerNet achieves satisfying results on benchmark datasets compared with existing methods.
Weaknesses: The experimental results are not reported with error bars (standard deviation), and the training complexity, training time, and sampling time are not compared between different methods.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Question:
1. How is the training/sampling complexity compared with existing methods, especially with DiffLinker?
2. I would like to see the error bars (standard deviation) presented in the table results.
Typo:
1. Line 154, draw samples and computer $\rightarrow$ draw samples and compute.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback and suggestions. Please see below for our responses to the comments.
**Q1: How is the training/sampling complexity compared with existing methods, especially with DiffLinker?**
A1: For the training complexity, DiffLinker converges within 300 epochs and takes 76 hrs with one V100 GPU, as the original paper reported. Our model converges within 50 epochs and takes 48 hrs with the same type of GPU. For the sampling complexity, DiffLinker finished sampling linkers for 43 PROTAC fragment pairs (100 linkers for each pair) in 132 min, while our model took 761 min with the same NVIDIA 1080 GPU. We thank the reviewer for the suggestion and will add the training/sampling complexity analysis to the updated manuscript.
**Q2: I would like to see the error bars (standard deviation) presented in the table results.**
A2: We have computed the error bars for the main results by running the sampling procedure 3 times with different random seeds. The results are shown in Table 1/2 in the general response.
---
Rebuttal Comment 1.1:
Title: Response by Reviewer
Comment: Overall I find this paper well-motivated and properly justified. I will keep my score for a weak accept. | Summary: The paper presents a diffusion model for molecular linker design. Given the 3D structures of two molecular fragments, the model generates a pose for each of the fragments, as well as types and positions of atoms that can be added to link the two fragments in a single molecule.
Strengths: The method and evaluation seem sound.
On the whole, the work and its relation to prior works are clearly explained.
If you are interested in designing PROTACs, then it is a nice practical contribution.
Weaknesses: In section 3.3 I found the description of the fragment pose prediction module hard to understand. I didn’t understand the description leading up to equation (12) and I would like to know the architecture of the neural net used for fragment pose prediction. Is it another GNN the same as the linker GNN?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: ‘in scenarios involving new drug targets… relative position between fragments may not be readily available’ – why is this the case for new targets and not older ones?
How much freedom is there really in relative fragment poses for PROTACs? It would be nice to see real pairs of examples with same heads, different relative poses, and both having high experimentally verified bioactivity. In line 274 I have no idea if the stddev of 0.1 fragment distance is realistic.
Line 83 ‘this is always the case’ should be ‘this is not always the case’
Line 120: What are ‘atom features’ as in ‘number of atom features’?
Line 116: say what Na, Nb, NF are first, then give the other definitions that use them.
Why do linker bond types need to be explicitly generated? Can’t they be inferred from linker atom positions?
Did you experiment with other samplers besides the one described in equation (2)?
How did you decide on the noise schedules for all the different things – fragment rotation, linker atom types, linker atom positions and so on?
Line 147 and line 154 should say ‘compute’, not ‘computer’.
Line 178: what is V_M under the summation? All the atoms in the molecule?
Is the ‘baselines’ comparison, where fragment poses are randomly corrupted, fair to DiffLinker etc.? If the fragments are presented in a relative pose for which no sensible linker can be designed, then not only does this give DiffLinker an impossible task, it also presents DiffLinker with an input that is presumably out-of-distribution relative to the partially obscured real molecules that it was trained on.
Are there always 2 fragments? Do you sample position and rotation for both? You could fix one without loss of generality.
Line 258 and table 1: is ‘Recovery rate’ how frequently a single sample is identical to the reference, or is it how often the reference appears in the list of 250 or 100 generated samples?
Line 321 and Table 4: is row (a) of table 4 just the same as row ‘ours’ of table 1, except that one is for ZINC and the other for PROTAC-DB?
Line 326: ‘liker’ should be ‘linker’.
Why is the fragment pose prediction module separate? Why not use a single GNN to predict ‘forces’ on all fragment and linker atoms, with a special readout head to do the update in equation (16)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The methodology seems very specific to this problem.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed feedback and suggestions. Please see below for our responses to the comments.
**Q1: “In section 3.3 I found the description of the fragment pose prediction module hard to understand...”**
A1: Equation (12) describes a way to update poses by predicting the pose change and applying it to the current pose, i.e. $(R_t, p_t) \circ (R_{t \rightarrow 0}, p_{t \rightarrow 0}) = (R_t R_{t \rightarrow 0}, p_t + R_t p_{t \rightarrow 0})$, which is the same as the formula in Supplementary 1.8 (Structure Module) of the AlphaFold2 paper [Jumper et al., 2021]. The neural network predicts the rotation and translation changes $R_{t \rightarrow 0} = \phi_R$ and $p_{t \rightarrow 0} = \phi_p$, which are invariant to global rigid transformations, and applies them to the current noisy rotation and translation $R_t$ and $p_t$ to obtain the denoised ones $\hat{R}_0$ and $\hat{p}_0$. We have added more explanation before introducing Equation (12). It's also worth noting that this update formula is suboptimal and is *not* the one we used. We compare it with our proposed physics-inspired pose prediction module in Table 2 to show the superiority of our method.
The fragment pose prediction module is one graph attention layer built on top of the linker GNN, not a separate GNN. As Figure 2 shows, we add one graph attention layer ($\phi_f$) on top of $h^L$ to predict the force and torque. We have clarified this in the updated manuscript.
**Q2: “‘in scenarios involving new drug targets… relative position between fragments may not be readily available’ – why is this the case for new targets and not older ones?”**
A2: “Involving new drug targets” means the molecular fragments are bound to two protein targets instead of one. In this case, the relative position between fragments becomes unknown, since the protein-protein binding pose is flexible (see the example in Fig. 1 of the general response). We have clarified this point in the revised manuscript.
**Q3: “How much freedom is there really in relative fragment poses for PROTACs?”**
A3: We found two PROTACs whose fragments bind to the same protein target (BRD4) and E3 ligase (CRBN), as Fig. 1 in the general response shows. The linkers differ by a ‘COC’ motif. As can be seen, the fragment poses differ substantially even with this slightly different linker.
**Q4: “Why do linker bond types need to be explicitly generated? Can’t they be inferred from linker atom positions?”**
A4: The heuristic of inferring bond types from atom positions is sensitive to the choice of hyperparameters and to the atom positions themselves (the bond-distance distribution is very sharp), and is not always reliable. Thus, we choose to predict bond types simultaneously.
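The fragility of that heuristic can be seen from a minimal single-cutoff version (illustrative only; the cutoff value and function name are our own, and real covalent radii vary by element and bond order):

```python
import numpy as np

def infer_bonds(pos, cutoff=1.8):
    """Naive bond inference: declare a bond between any pair of atoms
    closer than `cutoff` (in angstroms). A slight shift in `cutoff` or
    in the noisy generated positions flips bonds on and off, which is
    the fragility discussed above."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    i, j = np.nonzero((d > 0) & (d < cutoff))
    return {(a, b) for a, b in zip(i.tolist(), j.tolist()) if a < b}

# Three collinear atoms spaced 1.5 apart: neighbours bond, the ends do not.
atoms = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.0, 0.0]])
bonds = infer_bonds(atoms)
```

Lowering the cutoff from 1.8 to 1.4 in this example removes both bonds, illustrating how sharply the result depends on the threshold.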
**Q5: “Did you experiment with other samplers besides the one described in equation (2)?”**
A5: No, this is the standard choice of the diffusion process since we can directly draw samples from $q(x_t | x_0)$ and compute the posterior in closed-form.
**Q6: “How did you decide on the noise schedules for all the different things – fragment rotation, linker atom types, linker atom positions and so on?”**
A6: We use the cosine schedule [Nichol and Dhariwal, 2021] with s = 0.01 for all noise schedules. This is a common choice for DDPMs; we follow this practice and find it works well empirically.
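As a sketch, the cosine schedule of Nichol and Dhariwal (2021) with this offset can be written as follows (our own minimal reimplementation, not the paper's training code):

```python
import numpy as np

def cosine_alpha_bar(T, s=0.01):
    """Cosine noise schedule: alpha_bar(t) = f(t) / f(0) with
    f(t) = cos^2(((t/T + s) / (1 + s)) * pi / 2),
    using the s = 0.01 offset mentioned above."""
    t = np.arange(T + 1) / T
    f = np.cos((t + s) / (1 + s) * np.pi / 2) ** 2
    return f / f[0]  # normalise so that alpha_bar(0) = 1

abar = cosine_alpha_bar(1000)  # starts at 1 and decays smoothly towards 0
```

The small offset s keeps the signal-to-noise ratio from changing too abruptly near t = 0, which is the motivation given in the cited paper.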
**Q7: “Is the ‘baselines’ comparison, where fragment poses are randomly corrupted, fair to DiffLinker etc.?”**
A7: In the constrained generation setting, we randomly sample fragment poses but also ensure that the candidate anchors of the two fragments face each other and that there is no clash between the fragments. This setting is close to the real scenario for generative models that cannot predict fragment poses; without it, most of the other generative models would fail.
**Q8: “Are there always 2 fragments? Do you sample position and rotation for both? You could fix one without loss of generality.”**
A8: Great question! Yes, PROTACs always involve two proteins and thus two bound molecular fragments. Although fixing one fragment and sampling the other is mathematically equivalent to sampling both poses, it would require cumbersome coordinate-system transformations for the linker atoms and may make the learning process more difficult. For instance, assume the current ligand pose is [-M1+] [L] [-M2+] and the true pose is [+M2-] [L] [-M1+], where M1 and M2 are the two fragments, L is the linker, and +/- denotes the two sides of a fragment. If we fixed M1, the model would have to move all linker atoms in L and M2 to the other side; but if we sample poses for both fragments, it can simply mirror-rotate M1 to reach [+M1-] [L] [-M2+] (same as [-M1+] [L] [-M2+]). In preliminary experiments, we tested sampling one relative pose, as well as sampling the fragment distance plus two rotations, but neither was as effective as sampling two poses.
**Q9: Line 258 and table 1: the definition of ‘Recovery rate’**
A9: We sample 250 or 100 linkers for each fragment pair in the test set. If any sampled linker is identical to the reference linker, we count that fragment pair as recovered. The recovery rate is averaged over the whole test set.
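In code form, the metric reduces to the following (a sketch assuming linkers are compared as canonical strings, e.g. canonical SMILES; the function name is ours):

```python
def recovery_rate(samples_per_pair, references):
    """samples_per_pair[i] holds the linkers sampled for fragment pair i
    (e.g., 250 or 100 canonical SMILES strings); references[i] is the
    reference linker. A pair counts as recovered if any sample matches."""
    recovered = [ref in samples
                 for samples, ref in zip(samples_per_pair, references)]
    return sum(recovered) / len(recovered)

# One of two fragment pairs recovers its reference linker -> rate 0.5
rate = recovery_rate([["CCO", "COC"], ["CCC"]], ["COC", "CCN"])
```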
**Q10: Line 321 and Table 4: is row (a) of table 4 just the same as row ‘ours’ of table 1, except that one is for ZINC and the other for PROTAC-DB?**
A10: Yes. The molecules in PROTAC-DB are much larger than the ones in ZINC; without appropriate constrained guidance, the energy easily becomes very large. Another reason is that we apply a hard bond mask based on the candidate anchor sets, as described in Sec. 3.4. Without guided sampling, the model may generate strange bond angles, leading to high energy.
We thank the reviewer for pointing out the typos! We will fix them in the updated manuscript.
**References:**
* John Jumper et al. Highly accurate protein structure prediction with alphafold. Nature, 2021.
* Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. ICML, 2021.
---
Rebuttal Comment 1.1:
Title: Thank you for your clear response.
Comment: I have read the reply and looked at the PDF and my questions are resolved. | Summary: The authors describe a novel method for the computational design of linkers using equivariant diffusion models. This is coupled with a fragment pose prediction step that allows the design of linkers without first knowing the relative orientation of the fragments.
Strengths: - Comprehensive related works section
- Novel and sensible approach, allows for constrained generation
- Good empirical results support the paper's claims
- Good ablation experiments for the pose prediction module
Weaknesses: - The authors need to better introduce the problem for a more general machine learning conference. Though well-defined for biologists, concepts like ubiquitination need some explanation for the NeurIPS audience.
- The writing needs to be improved. The authors need to explain the equation forms of their method, to give an intuitive sense of what their method does. This is especially critical between lines 149-164.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - In line 92, please clarify why this is suboptimal compared to your approach. This should be changed in the text to also be used as a point of comparison with your method.
- Is there any data or evidence to back up the claim in line 280? Otherwise, this is a big limitation of the method.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have mentioned limitations in the conclusion, albeit with a short sentence. This should be expanded upon.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback and suggestions. Please see below for our responses to the comments.
**Q1: “The authors need to better introduce the problem for a more general machine learning conference… The writing needs to be improved … ”**
A1: We thank the reviewer for the suggestions on the writing part. We have updated our manuscript by adding more explanations of the biological terminologies for the more general ML audience and adding more descriptions between equations.
**Q2: “In line 92, please clarify why this is suboptimal compared to your approach.”**
A2: The SMILES representation is not an optimal choice, since it fails to capture molecular similarities and suffers from validity issues during the generation phase. A small modification of the molecular structure can significantly change the SMILES string. These drawbacks are discussed in detail in [Jin et al., 2018], and we have added more explanation in the Related Work.
**Q3: “Is there any data or evidence to back up the claim in line 280?”**
A3: In the original papers of 3DLinker and DiffLinker, where the fragment poses are fixed, the novelty and uniqueness of their approaches are 29% / 32% and 24% / 30% on ZINC, which are far lower than the results of 50%+ / 90%+ in our new setting. In addition, the energies of molecules generated by 3DLinker and DiffLinker in our new setting are high, indicating that their molecules are less stable. This evidence indicates that the actual chemical design space of linkers forming a low-energy molecule is small, supporting our claim in line 280 and the related results.
**Reference:**
Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Junction tree variational autoencoder for molecular graph generation. In International Conference on Machine Learning, pages 2323– 2332. PMLR, 2018.
---
Rebuttal Comment 1.1:
Title: Increased score
Comment: Thank you for your responses. Since most of my criticism was due to writing and this has been addressed, I have increased the score of my review. | Rebuttal 1:
Rebuttal: We thank all reviewers for their efforts and time in evaluating our submission and providing valuable suggestions and feedback. In the general response pdf, we
* Add one example to show that the fragment poses are **not** fixed in the PROTAC design (Figure 1). Both PROTACs bind with the same protein target and E3 ligase, but the fragment poses and linkers are different.
* Add error bars for the main results in ZINC and PROTAC-DB datasets (Table 1, Table 2). The results still support our claims in the main text.
Please let us know if there are any additional concerns we can address for you to consider raising your initial rating, thanks!
Pdf: /pdf/6e043fd598962086abf827b5badc2f838800f0f6.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
DDF-HO: Hand-Held Object Reconstruction via Conditional Directed Distance Field | Accept (poster) | Summary: The paper presents DDF-HO, a method for handheld object reconstruction based on Directed Distance Fields. Given a single RGB image containing a hand grasping an arbitrary object, DDF-HO reconstructs the object without requiring a template or depth priors. Previous methods addressing this problem have relied on the Signed Distance Field (SDF) for the same purpose. The paper includes experiments comparing DDF-HO with such methods on multiple datasets demonstrating the advantages of the proposed method in terms of accuracy.
**Final Rating**
After reading the numerous reviews and the responses of the authors, I believe that this is a paper that can/should be accepted. As I wrote in my previous post, thorough rewriting is needed in some parts to improve clarity.
Strengths: S1. DDF-HO obtains 3D reconstructions of much higher quality than recent methods [25, 32, 62]. The paper provides sufficient evidence that this is due to the use of DDF instead of SDF as the representation, the use of a ray-based feature aggregation scheme, and a 3D intersection-aware hand pose embedding. These contributions lead to significant increases in reconstruction accuracy.
S2. The method is described clearly and with sufficient detail to enable reasonable reproduction. The code is also included, but I did not try to work with it, or identify the key pieces. Handling of symmetry is the exception to this comment.
S3. Three large-scale, widely used datasets, one synthetic and two real, are used to generate the experimental results. The protocol follows that of IHOI [62], which is the most closely related prior work. Two additional recent baseline methods have been chosen for the experiments.
DDF-HO outperforms the baselines on all three datasets. It also shows good zero-shot generalization properties, suggesting that it does not overfit. The experimental results section includes some analysis of potential reasons for the differences in performance across algorithms. The ablation studies are also informative, especially the comparisons to IHOI.
Weaknesses: W1. My primary concern about the paper is the number and complexity of steps. DDF-HO requires sampling rays, computing and aggregating 2D and 3D features, measuring geodesic distances etc. The authors acknowledge that their method has higher complexity than the baselines in the limitations section, but I would like to see more analysis and data on the tradeoffs between speed and accuracy. How long do training and inference take for IHOI and DDF-HO on similar hardware and data? Is interactive deployment feasible?
W2. Handling of symmetric objects is not presented clearly overall. How is the reflective plane of symmetric objects discovered?
Minor Comments (not affecting my recommendation)
I find the use of “undirected” as a property of SDF confusing. The sign and the distance value of SDF direct us to the nearest surface. I do not have a good suggestion of an alternative.
Small language errors can be found throughout the paper. Examples include missing articles and minor inconsistencies.
98: “Arbitrarily” is more appropriate than “randomly.”
Figure 4 and Tables 1 and 2 can be placed closer to the text that refers to them.
Refereed, rather than arXiv, versions of papers should be cited whenever possible.
The first paragraph of Section 3 of the supplement is very important for understanding the algorithm, in my opinion. I suggest finding some space for it in the main paper. One of the weaknesses I had noted was lack of clarity on the ray sampling process.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Answers to the first two weaknesses listed above could go a long way toward improving my rating. (I went with a conservative rating at this stage.)
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: More information on complexity and run times would have been useful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive review. Below we try our best to address your concerns and questions. We have already revised the paper according to the reviews.
## Q1. Efficiency and Model Complexity of DDF-HO
IHOI is a typical SDF-based hand-held object reconstruction method. We list detailed comparisons with IHOI in the following table.
| | DDF-HO | IHOI |
|------|------|------|
| Training time | 2d 8h | 2d |
| Parameters | 24.5M | 24.0M |
| Running speed | 23 ms/img (44 FPS) | 5.8 ms/img (172 FPS) |
As the table shows, our method runs slower than IHOI but still achieves real-time performance. All experiments in the table were conducted on a single NVIDIA A100 GPU. The slower inference comes from the attention computation in the 2D ray-based feature aggregation and the more elaborate 3D feature generation. Ray sampling at test time takes less than 1 ms, since it only requires randomly sampling ray origins and directions on the unit sphere. In terms of model size, the two methods are of similar scale; the extra parameters mainly come from the cross-attention mechanism in the 2D ray-based feature aggregation (Sec. 3.3). The other modules only collect features to model hand-object interactions and do not significantly increase the network size. Moreover, since the network needs to predict both the distance and the visibility signal, DDF-HO also converges more slowly than IHOI during training.
## Q2. Symmetry Loss
Since other baseline methods don’t leverage this prior knowledge, we do not use the symmetry loss except in the ablation studies in Tab. 4, as explained in L.243 in the manuscript. Therefore, our results on different datasets and visualization results are all obtained without using the symmetry loss. From Tab. 4, we demonstrate that this symmetry loss can improve the results slightly.
Before our experiments, all symmetric objects in the datasets are preprocessed to be symmetric with respect to the XY plane $\{z = 0\}$. To determine whether an object is symmetric, we first flip the sampled points P on the object surface w.r.t. the XY plane, yielding P'. We then compute the Chamfer distance between the object surface and P'; if it lies below a threshold (1e-3), the object is considered symmetric. As for building the two bijection sets B1 $\{P_1, \theta_1\}$ and B2 $\{P_2, \theta_2\}$, we first randomly sample origins $\{P_1: (x_1, y_1, z_1)\}$ and directions $\{\theta_1: (\alpha_1, \beta_1, \gamma_1)\}$ to construct B1. We then flip B1 with respect to the reflective plane to generate B2 via $\{P_2: (x_1, y_1, -z_1)\}$ and $\{\theta_2: (\alpha_1, \beta_1, -\gamma_1)\}$. Since the object is symmetric, the DDF values of corresponding rays in B1 and B2 should be the same, which establishes our symmetry loss term (L. 206 in the manuscript). In the final version, we refine our explanation in L. 207–L. 209 by adding mathematical symbols, and we also add a figure illustrating the construction of the symmetry loss to the Supplementary Material.
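The symmetry check and the construction of the mirrored ray set can be sketched as follows (an illustrative NumPy version under our own naming, not the released code):

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def is_symmetric_xy(surface_pts, thresh=1e-3):
    """Flip sampled surface points across the XY plane (z -> -z) and
    compare with the original set; a sub-threshold Chamfer distance
    means the object is treated as symmetric."""
    flipped = surface_pts * np.array([1.0, 1.0, -1.0])
    return chamfer(surface_pts, flipped) < thresh

def mirror_rays(origins, dirs):
    """Build the mirrored bijection set B2 from B1: flip the z-component
    of both the ray origins and the ray directions."""
    flip = np.array([1.0, 1.0, -1.0])
    return origins * flip, dirs * flip
```

The symmetry loss then penalises differences between the DDF values predicted for corresponding rays in B1 and B2.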
## Q3. Minor Comments
Thank you very much for the detailed suggestions.
(1) We'll consult with native speakers to identify a more fitting term to characterize the SDF's nature. We acknowledge that 'undirected' primarily pertains to graph descriptions, and while we are yet to discover a superior alternative, we remain open to suggestions.
(2) We checked the typos again and corrected them.
(3) We replaced “randomly” with “arbitrarily”, as suggested.
(4) The positions of the figures are adjusted in the final version.
(5) All citations are checked again.
(6) Thank you again for the suggestion. We agree that the information introduced in Sec. 3 of the supplement is very important for understanding our method; we have moved it into the main paper in the final version.
---
Rebuttal Comment 1.1:
Title: Comments on rebuttal
Comment: I appreciate the authors’ efforts in responding to all comments from a large number of reviewers. I have read all reviews and responses, but I will limit this post to my comments. I will only point out that the additional ablation studies are useful.
The response to W1 is informative. The proposed architecture is not much heavier than IHOI, but inference is about 4 times slower. A summary of the detailed response should be included in the paper.
The response to W2 reveals lack of clarity in the paper. In light of the response, lines 38-40 over-emphasize the shortcomings of previous work when handling symmetric objects. Later on the same page, the geometric loss that handles symmetry is presented as an important contribution of the paper. Lines 243-245 indeed state that the symmetry loss is not included in most experiments for fairness. In my opinion, some rewriting is warranted to present symmetry more clearly. This is not an argument for rejecting the paper by any stretch.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer hGDA
Comment: Thank you again for taking the time to thoroughly read our paper! We are very pleased that you can acknowledge our efforts. Thanks.
After carefully reading your comments, we have also realized that our description of symmetry loss was not clear enough. In the final version, we reorganized the logic concerning the introduction of symmetry loss and provided more details. We have employed the following intuitive explanation in the 'introduction' section to clarify why SDF cannot naturally model the symmetry of objects.
### SDF has inherent theoretical flaws in modeling object symmetry.
```
@@@@@ reconstructed surface @@@@@
        * P
         \
 ~~~~~ reflective plane ~~~~~
@@@@@ reconstructed surface @@@@@
   * Q
```
The diagram above can be viewed as a 2D extreme case illustrating why SDF fails to model symmetry. The reconstructed surface is marked with '@'. The ground-truth object surface should be symmetric with respect to the reflective plane; clearly, the current reconstruction is wrong and not symmetric. With SDF as the shape representation, one could likewise sample two bijection sets (sampling only points, not directions) to enforce a symmetry loss as in DDF-HO. Take the randomly sampled point pair P and Q as an example: although the reconstruction is wrong, P and Q can still have the same SDF value! If this happens during training, it will confuse the network and further impair performance. In summary, although such extreme cases are rare in real-world experiments, this still shows that SDF has an inherent theoretical flaw in modeling object symmetry: SDF cannot 'naturally' model object symmetry unless an additional check is adopted to remove such cases. DDF solves this problem by incorporating directions into the input.
We added this intuitive explanation in the final version and hope this can help readers quickly catch the superiority of using DDF in hand-held object reconstruction. Thanks again. | Summary: This work proposes a novel pipeline that uses DDF as the shape representation for hand-held object reconstruction from a single image. DDFs provide benefits over SDFs, eg. they are directed, provide intersection with object information, and can capture symmetry. Extensive experiments on ObMan, HO3D, and MOW datasets show the effectiveness of the proposed approach over existing methods.
Strengths: - This work proposes to use DDF, which is more expressive than SDF, for hand-held object reconstruction. While SDFs are undirected and cannot capture symmetry, DDFs provide a directed distance to the surface along with intersection with object information (visibility) and can capture symmetry.
- The proposed approach uses ray-based feature aggregation and intersection-aware hand features to better capture hand-object interaction compared to existing SDF-based methods.
- Extensive experiments on ObMan (Table 3), HO3D (Table 1) and MOW (Table 2) datasets show the effectiveness of the proposed approach over existing methods.
- Ablation studies on different components (Table 4) and robustness to noise in hand pose (Table 5) are helpful in understanding the capabilities of the proposed approach. Also, error bars over 5 training seeds are provided in the supplementary.
Weaknesses: - The HO3D splits used are different than IHOI[62]. Is there any reason for this?
- The IHOI[62] scores on MOW (Table 2) are different than those reported in IHOI paper, even though the same splits are used (L221-222). Why is this the case?
- It'd be useful to have ablations on the different ray sampling strategies used during training, as stated in Sec. 3 in the supplementary. Specifically, how well does uniform sampling perform by itself? This could be a limitation when scaling DDF-HO to cases where 3D ground truth object models are not available (eg. in-the-wild settings).
- How does the training & inference time for DDF-HO compare to IHOI? Since several rays need to be sampled, it seems that DDF-HO would be slower than IHOI.
- It'd be helpful to include more details about ray sampling during testing in the main paper, eg. how many rays, how are the origin & directions sampled.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Clarifications required are mentioned in the weaknesses above.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Limitations are discussed.
---
I appreciate the additional ablations and clarifications provided by the authors. After reading the rebuttal, other reviews, and discussion, I think that the authors have addressed the main concerns pointed out in the reviews. So, I am retaining my rating of `Weak Accept`.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for taking the time to thoroughly review our paper. Your insights are highly valuable and provide us with guidance to enhance the paper. Below we try our best to address your concerns and questions. We have revised the paper according to your suggestions.
## Q1. Different Splits with IHOI
The released code of IHOI is incomplete. We found that the preprocessed SDF data provided by IHOI does not match its split files on HO3D and MOW, so we contacted the first author of IHOI about this issue in early April 2023, during the development of DDF-HO. She replied that the released data is not the same as that used in the experiments in the paper. With her help, we re-trained and re-evaluated all baseline methods on the two datasets using the split files provided in the released code of IHOI. Therefore, the results on HO3D and MOW differ slightly from those reported in the IHOI paper.
## Q2. Different Results on MOW
As explained in L. 221–L. 222, we follow the splits of the released code of IHOI. The split files in the released code are a little different from the ones used in the original paper. We use the new split files to evaluate the performance, leading to slightly different results.
## Q3. Ablations on Different Ray Sampling Strategies
Thanks for the suggestion! We have conducted ablations over the different ray sampling methods on ObMan, shown in the following table (sampling methods 1–5 are introduced in Sec. 3 of the Supplementary Material). It can be seen that combining multiple sampling strategies contributes to improved performance, which is also verified in [DDF]. Please note that at test time, DDF-HO only uses uniform sampling, since no prior shape information is accessible (see the first paragraph of Sec. 3 in the Supplementary Material). Therefore, it can handle in-the-wild hand-held objects.
| Sampling strategies | F5 | F10 | CD |
|------|------|------|------|
| 1 | 0.53 | 0.66 | 0.17 |
| 1,2 | 0.53 | 0.66 | 0.16 |
| 1,2,3 | 0.54 | 0.66 | 0.15 |
| 1,2,3,4 | 0.55 | 0.66 | 0.15 |
| 1,2,3,4,5 | 0.55 | 0.67 | 0.14 |
## Q4. Efficiency of DDF-HO
IHOI is a typical SDF-based hand-held object reconstruction method. We list detailed comparisons with IHOI in the following table.
| | DDF-HO | IHOI |
|------|------|------|
| Training time | 2d 8h | 2d |
| Parameters | 24.5M | 24.0M |
| Running speed | 23 ms/img (44 FPS) | 5.8 ms/img (172 FPS) |
As the table shows, our method runs slower than IHOI but still achieves real-time performance. All experiments in the table were conducted on a single NVIDIA A100 GPU. The slower inference comes from the attention computation in the 2D ray-based feature aggregation and the more elaborate 3D feature generation. Ray sampling at test time takes less than 1 ms, since it only requires randomly sampling ray origins and directions on the unit sphere. In terms of model size, the two methods are of similar scale; the extra parameters mainly come from the cross-attention mechanism in the 2D ray-based feature aggregation (Sec. 3.3). The other modules only collect features to model hand-object interactions and do not significantly increase the network size. Moreover, since the network needs to predict both the distance and the visibility signal, DDF-HO also converges more slowly than IHOI during training.
## Q5. Ray Sampling During Inference
As introduced in L. 24–L. 25 of the supplementary material, during inference the ground-truth objects in the datasets are already canonicalized and resized to fit in a unit sphere. Thus, we simply use uniform sampling to generate 20K 3D rays: the origins are randomly sampled inside the sphere and the directions are sampled uniformly. Since some sampled rays may not intersect the object, the point cloud yielded by the DDF prediction contains fewer than 20K points; we therefore randomly sample 12K points for evaluation. For fair comparison, all other methods are trained with 20K points and tested with 12K points (this is also in accordance with the released code of IHOI). We have clarified this information in the final version.
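For concreteness, the uniform test-time sampling can be sketched as follows (our own illustrative version; the released code may differ in details):

```python
import numpy as np

def sample_rays(n, rng=None):
    """Sample n rays for inference: origins uniform inside the unit
    ball, directions uniform on the unit sphere."""
    rng = rng if rng is not None else np.random.default_rng()
    # Directions: normalised Gaussian samples are uniform on the sphere.
    dirs = rng.normal(size=(n, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    # Origins: a uniform direction scaled by a cube-root radius is
    # uniform inside the unit ball.
    origins = rng.normal(size=(n, 3))
    origins /= np.linalg.norm(origins, axis=1, keepdims=True)
    origins *= rng.uniform(size=(n, 1)) ** (1.0 / 3.0)
    return origins, dirs

origins, dirs = sample_rays(20000, np.random.default_rng(0))
```

The cube-root scaling of the radius compensates for the fact that the volume of a ball grows as $r^3$, so that origins are uniform in volume rather than clustered near the centre.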
[DDF] Aumentado-Armstrong, T., Tsogkas, S., Dickinson, S., & Jepson, A. D. (2022). Representing 3D shapes with probabilistic directed distance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 19343-19354).
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I have read the rebuttal and other views. I appreciate the additional ablations and clarifications provided by the authors.
I have a few more questions:
- Which evaluation setting is used for ablation on ray sampling strategies (is it ObMan)? The difference seems to be marginal between different strategies. It'd be helpful to provide some insights into why this is the case. For training SDF-based models, points need to be sampled near the surface otherwise the model does not work well. In case of DDF, it seems like sampling rays uniformly is nearly as good as sampling near the surface.
- At inference, the model does not require any prior shape information. However, 3D ground truth is required at training time. Any thoughts on how this can be extended to setting where 3D ground truth is not available during training?
---
Reply to Comment 1.1.1:
Title: Response to Reviewer dmTA's Questions
Comment: We sincerely appreciate your valuable and constructive comments. Thank you again for taking the time to thoroughly review our paper!
## Performance Using Different Ray Sampling Strategies
In Q3, we conduct experiments on ObMan. We keep the number of rays from each sampling strategy unchanged except the ablated one to demonstrate the overall effect of each sampling strategy in the reconstruction. As for the marginal performance gains when incrementally incorporating different sampling strategies, we identify the following possible reasons.
First, among all the sampled 3D rays, those from uniform sampling make up half (10K rays), while the remaining sampling strategies together make up the other half, as explained in L. 37 of Sec. 3 in the supplementary material. The dominance of uniform sampling is underscored by its use at inference, where only uniform sampling is available due to the lack of object priors. The ablation shows that rays from uniform sampling alone are sufficient to train a satisfactory model, while the other sampling strategies still contribute a small improvement.
Second, we follow the recommendation of DDF [2] to utilize all the 5 sampling strategies. "As the single field experiment shows (Sec. 4 in DDF [2] and Sec. C in the supplementary material of DDF [2]), each sampling method can collect information that the other types cannot completely make up for." In DDF [2], complex objects like sculptures of dragons and rabbits are used in inference, which highlights the role of different sampling strategies. In our paper, on ObMan, HO3D and MOW, the objects are hand-held and usually not very complex. Thus the effect of incorporating different sampling strategies is weakened. However, considering applying our method in real-world scenarios, complex hand-held objects are not rare. Using all 5 sampling methods can ensure stable and consistent performance.
Third, reconstructing hand-held object is inherently very challenging. As other reviewers mentioned, the visualization results of DDF-HO are still not satisfactory, although we already surpass all baselines. Currently, the recovery of basic object shape is still not very accurate due to heavy occlusions from hand and lack of priors. Therefore, representing fine-grained higher-order geometry with different sampling strategies is not the main problem.
Last and most importantly, our ray-based feature aggregation mitigates the need for samples to lie in close proximity to the object surface. By collecting features along rays, we effectively capture information from both the ray's starting and ending points, as comprehensively detailed in L.48-L.50 and Sec. 3.3 of the main paper. This allows our method to achieve good performance with only uniform sampling.
## Training Without 3D Ground Truth
Thanks for the constructive comments. DDF-HO does not require any prior shape information at inference, which enables it to handle in-the-wild hand-held objects. Currently, all baseline methods need 3D ground truth for training, under the single-image reconstruction setting.
Training without 3D ground truth is an interesting but challenging and open problem to be solved. We propose several tentative ideas here.
First, reconstruction with multi-view input. Given multiple views describing the same hand-held object, we can establish epipolar geometry for supervision (like in [R-1]) and leverage the MANO hand model to provide coarse prior knowledge of the object shape.
Second, reconstruction with a single RGB-D image. This can be achieved via the recently proposed SinNeRF [R-2]. However, how to deal with the occlusion caused by the hand and how to leverage the hand pose information are still open problems.
Third, reconstruction with a single RGB image. Recently, a single-image decomposition method [R-3] seems feasible in this case. Given a single image with a known object category, [R-3] can directly recover the geometry and texture information after unsupervised training. However, such methods usually perform poorly in real-world scenes. Moreover, the occlusion caused by the hand also significantly influences the image decomposition process.
[R-1] Truong, Prune, et al. Sparf: Neural radiance fields from sparse and noisy poses. CVPR 2023.
[R-2] Xu, Dejia, et al. SinNeRF: Training neural radiance fields on complex scenes from a single image. ECCV 2022.
[R-3] Monnier, Tom, et al. Share with thy neighbors: Single-view reconstruction by cross-instance consistency. ECCV 2022. | Summary: This paper proposes a directed distance field-based method for hand-held object reconstruction from a single RGB image. The paper aggregates 2D ray-based features to capture ray-object intersection and 3D geometric features of ray-hand intersection. In particular, it extracts local to global cues via the above features and introduces a symmetry loss term to handle symmetric objects. Experiments on three datasets and ablations on the introduced modules show the effectiveness of the proposed method.
Strengths: 1. The proposed idea of extracting local 2D image features along the ray and local 3D features from the ray-hand relationship to provide geometric cues is novel in hand-held object reconstruction.
2. Supporting experiments show that DDF is a more suitable representation than SDF for handheld object reconstruction.
Weaknesses: 1. Unclear descriptions. It's better to provide a clearer description of how to sample the bijection sets on the reflective plane for symmetry loss, preferably with an illustration.
2. Insufficient qualitative results. It's suggested to provide qualitative results of the ablation studies, especially the ablation on the symmetry loss term. Besides, it should be declared whether adding such a symmetry loss term leads to a wrong distance field for asymmetric objects.
3. Insufficient ablations on input hand poses. It would be better to add Gaussian noise at a larger scale (e.g., $\sigma=1.0, 1.5$) to the input hand poses.
4. Insufficient ablations on K_l and K_{3D}. Why is K_l=8 used to sample points along the projected 2D ray? And why is K_{3D}=8 (nearly half of the hand joints) used to select the neighboring hand joints, since considering fewer hand joints seems more efficient for capturing local ray-hand features?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. The baselines mentioned in the paper (HO, GF, AlignSDF, and IHOI) all consider the intersection between hand and object and report the corresponding metric, Intersection Volume [1], which is not reported in this paper. The authors should provide experimental results on Intersection Volume and state why the proposed method is able to outperform other methods on this metric (if applicable) without considering the contact/intersection between hands and objects.
2. In this paper, the canonical hand shape of MANO is used and only the articulation parameters of the hand are taken for modeling. However, hand shape is also a geometric cue, important for hand-held object reconstruction, but difficult to learn, which is considered in the baselines (HO and GF). The authors should make sure that they are comparing these baselines in a fair setting.
[1] Hasson, Y., Varol, G., Tzionas, D., Kalevatykh, I., Black, M.J., Laptev, I., Schmid, C.: Learning joint reconstruction of hands and manipulated objects. In: CVPR (2019)
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper has well clarified the limitations of the proposed methods, which are inherited from the shortcomings of DDF.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your valuable and constructive comments. Our detailed responses are listed below and we revised the manuscript accordingly.
## Q1. Unclear Descriptions of Symmetry Loss
Following other baseline methods, we exclude the symmetry loss except for ablation studies (L. 243 in the manuscript). Hence, our results across different datasets and visualizations are symmetry loss-free. Tab. 4 shows a slight improvement due to the symmetry loss. Our updated paper and the uploaded PDF attachment now include qualitative object comparisons with and without the symmetry loss.
Before our experiments, all symmetric objects in the datasets are preprocessed to be symmetric with respect to the XY plane $\{z=0\}$. To determine whether an object is symmetric, we first flip the sampled points P on the object surface w.r.t. the XY plane, yielding P'. We then compute the Chamfer Distance between the object surface and P'. If the distance lies below a threshold (1e-3), the object is considered symmetric. As for building the two bijection sets B1 $\{P_1, \theta_1\}$ and B2 $\{P_2, \theta_2\}$, we first randomly sample origins $\{P_1: (x_1, y_1, z_1)\}$ and directions $\{\theta_1: (\alpha_1, \beta_1, \gamma_1)\}$ to construct B1. Then, we flip B1 with respect to the reflective plane to generate B2 by $\{P_2: (x_1, y_1, -z_1)\}$, $\{\theta_2: (\alpha_1, \beta_1, -\gamma_1)\}$. Since the object is symmetric, the DDF values of corresponding rays in B1 and B2 should be the same, which establishes our symmetry loss term (L. 206 in the manuscript). In the final version, we refine our explanation in L.207–L.209 by adding mathematical symbols and also a figure (shown in the uploaded attachment) to illustrate the construction of the symmetry loss in the Supplementary Material.
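The symmetry test and bijection-set construction described above can be sketched as follows. This is a minimal numpy illustration under the stated convention that the reflective plane is the XY plane (z = 0); the brute-force `chamfer_distance` helper and the sampling ranges are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def chamfer_distance(a, b):
    # Brute-force symmetric Chamfer distance between point sets (N,3) and (M,3).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def is_symmetric(points, thresh=1e-3):
    # Flip sampled surface points w.r.t. the reflective (XY) plane z = 0
    # and compare the flipped set with the original surface samples.
    flipped = points * np.array([1.0, 1.0, -1.0])
    return chamfer_distance(points, flipped) < thresh

def build_bijection_sets(n_rays, rng):
    # B1: randomly sampled ray origins and unit directions.
    p1 = rng.uniform(-1.0, 1.0, size=(n_rays, 3))
    d1 = rng.normal(size=(n_rays, 3))
    d1 /= np.linalg.norm(d1, axis=1, keepdims=True)
    # B2: mirror image of B1 across the plane (negate the z components).
    flip = np.array([1.0, 1.0, -1.0])
    return (p1, d1), (p1 * flip, d1 * flip)
```

For a symmetric object, the DDF values of paired rays from the two returned sets should agree, which yields the symmetry loss as a difference between the two predictions.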
## Q2. Insufficient Ablations on Symmetry Loss
Please note that adding this symmetry loss is a natural choice when using the DDF representation; the symmetry loss is therefore not our main contribution. For fair comparisons, we do not use the symmetry loss except in the ablation studies in Tab. 4, as explained in L.243 in the manuscript. Moreover, as explained in L. 207, we apply this loss only to symmetric objects during training. Thus, it should have little influence on asymmetric objects. This observation is supported by the ablations shown in the table below.
| | F5 | F10 | CD |
|-|-|-|-|
| Symmetric objects (w/o symmetry loss) | 0.57 | 0.69 | 0.12 |
| Asymmetric objects (w/o symmetry loss) | 0.50 | 0.62 | 0.17 |
| Symmetric objects (w/ symmetry loss) | 0.59 | 0.70 | 0.11 |
| Asymmetric objects (w/ symmetry loss) | 0.51 | 0.62 | 0.16 |
## Q3. Additional Experiments on the Noise of Hand Pose
We also conduct hand pose ablations with larger Gaussian noise on ObMan (the upper block) and HO3D (the bottom block); the results are shown below. Adding overly large noise to the hand poses leads to a drop in reconstruction precision (F5). However, the CD metric remains 0.31 on ObMan even with sigma=1.5 Gaussian noise, which is still lower than that of SDF-based baselines. This demonstrates the robustness of DDF.
| | F5 | F10 | CD |
|-|-|-|-|
| Pred (no noise) | 0.55 | 0.67 | 0.14 |
| sigma=1.0 | 0.42 | 0.55 | 0.25 |
| sigma=1.5 | 0.38 | 0.52 | 0.31 |
| Pred (no noise) | 0.28 | 0.42 | 0.55 |
| sigma=1.0 | 0.17 | 0.27 | 0.98 |
| sigma=1.5 | 0.14 | 0.25 | 1.24 |
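The perturbation used in this ablation (zero-mean Gaussian noise added to the hand pose parameters) can be sketched as below; the flat axis-angle parameterization of the MANO pose is an assumption for illustration:

```python
import numpy as np

def perturb_hand_pose(pose, sigma, rng=None):
    """Add zero-mean Gaussian noise with standard deviation `sigma` to
    hand pose parameters.

    `pose` is assumed to be a flat array of MANO axis-angle parameters
    (e.g. 45 values for the 15 hand articulation joints); a larger `sigma`
    simulates a noisier upstream pose estimate.
    """
    rng = np.random.default_rng() if rng is None else rng
    return pose + rng.normal(scale=sigma, size=pose.shape)
```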
## Q4. Additional Ablations on $K_l$ and $K_{3D}$
We show the relevant ablations of $K_l$ and $K_{3D}$ on ObMan in the table below. Setting $K_l=12$ improves performance slightly, but at the cost of more network parameters and slower inference. Increasing $K_{3D}$ to $12$ does not provide any improvement in reconstruction since (1) DDF-HO already considers the global hand feature $F_{3D}^G$; (2) collecting the local intersection feature $\mathcal{F}_{3D}^L$ in a larger field does not provide more local information.
| keep $K_{3D}=8$ | F5 | F10 | CD |
|-|-|-|-|
| $K_l=4$ | 0.53 | 0.65 | 0.16 |
| $K_l=8$ | 0.55 | 0.67 | 0.14 |
| $K_l=12$ | 0.56 | 0.68 | 0.13 |
| keep $K_{l}=8$ | | | |
| $K_{3D}=4$ | 0.54 | 0.66 | 0.15 |
| $K_{3D}=8$ | 0.55 | 0.67 | 0.14 |
| $K_{3D}=12$ | 0.55 | 0.67 | 0.14 |
## Q5. Results w.r.t. Intersection Volume
Results are shown in the following table. It can be seen clearly that DDF-HO outperforms IHOI by 0.07 on ObMan and 0.57 on HO3D. Our method explicitly models the hand-object interactions, or in other words, 'considers the contact/intersection between hands and objects', in two aspects. First, on the 2D image, hand information along the projected 2D ray is encoded into the corresponding 2D features $F_{2D}$ (Sec. 3.3). Second, our 3D embeddings $F_{3D}$ encapsulate both the global hand shape feature and local hand-object geometric features (Sec. 3.4).
| | ObMan | HO3D |
|-|-|-|
| HO | 8.64 | 10.03 |
| GF | 1.84 | 7.16 |
| IHOI | 1.74 | 4.92 |
| Ours | 1.67 | 4.35 |
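For reference, the Intersection Volume metric can be computed from voxelized hand and object occupancy grids; the following is an illustrative sketch (the grid construction and voxel size are assumptions here, following the voxelization idea in [1], which voxelizes the two meshes on a shared grid):

```python
import numpy as np

def intersection_volume(occ_hand, occ_obj, voxel_size):
    """Intersection volume of two boolean occupancy grids.

    `occ_hand` and `occ_obj` are boolean arrays over the same voxel grid
    (e.g. obtained by voxelizing the hand and object meshes); `voxel_size`
    is the voxel edge length, so the result is in cubic units of that length.
    """
    overlap = np.logical_and(occ_hand, occ_obj)
    return overlap.sum() * voxel_size ** 3
```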
## Q6. Fair Comparison with Baselines
We strictly follow the setting of the SDF-based baseline IHOI: given an RGB image and a predicted hand pose as input, we reconstruct the hand-held object. Therefore, our comparisons with IHOI are strictly fair. The other methods also predict the hand shape, while we directly obtain the hand mesh from the predicted hand pose and the MANO model, so we mainly compare object reconstruction quality with them, as IHOI does in its paper. Since we share exactly the same setting as IHOI, we can ensure fairness when comparing with such methods. At an overall level, our method and all other methods start from an RGB image and ultimately obtain the reconstructed hand-held object, so the comparisons are fair.
Strengths: 1. Proposed a new data structure DDF for hand-held object reconstruction.
2. Ray-Based Feature Aggregation technique has better representations of local geometric features.
3. The interaction modeling reflects the interaction between the hand and object, rather than just the hand as in prior work.
4. Significant improvements over prior work.
5. Comprehensive evaluation and ablation studies.
Weaknesses: 1. Lack of efficiency comparison between SDF-based methods and proposed DDF. Such as the amount of resources required for reconstructing the same input.
2. No appearance on the reconstruction results
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How long does it take to reconstruct one image?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: 1. As the authors mentioned, it is hard to train DDF on higher-dimensional inputs.
2. As the authors mentioned, the reconstructions do not reflect the translucency, material, and appearance of the objects.
3. The reconstruction quality is not good, especially on real-world images.
Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate you for the precious review time and valuable comments. We revised the manuscript according to the review. Below we try our best to address your concerns and questions.
## Q1. Efficiency
IHOI is a typical SDF-based hand-held object reconstruction method. We list detailed comparisons with IHOI in the following table.
| | DDF | IHOI |
|------|------|------|
| Training time | 2d 8h | 2d |
| Parameters | 24.5M | 24.0M |
| Running Speed | 23ms/img (44FPS) | 5.8ms/img (172FPS) |
From the above table, it can be seen that our method runs slower than IHOI but still achieves real-time performance. All experiments in the above table are conducted on a single NVIDIA A100 GPU. In terms of model size, the two methods share a similar scale. The increased parameters mainly come from the cross-attention mechanism in the 2D ray-based feature aggregation (Sec. 3.3). Other modules are only adopted to collect features to model hand-object interactions and do not significantly increase the network size. Moreover, since the network needs to predict both the distance and the visibility signal, the convergence of DDF-HO during training is also slower than that of IHOI.
## Q2. No Appearance in the Reconstruction Results
In this paper, we focus on recovering the geometric shape of the hand-held object. As other reviewers point out, reconstructing the shape of a hand-held object without prior knowledge of the object is still a challenging and open problem, since no existing methods demonstrate satisfactory performance. Experiments showcase that our method achieves state-of-the-art performance on real-world and synthetic datasets. As for the appearance, in future work, a coloring network as in PC2 [PC2] can be adopted to predict the color for each pixel. Moreover, since DDF shares a similar input as NeRF, neural view synthesis may also be possible by slightly adjusting the DDF representation, which makes the problem very interesting.
[PC2] Melas-Kyriazi, L., Rupprecht, C., \& Vedaldi, A. (2023). PC2: Projection-Conditioned Point Cloud Diffusion for Single-Image 3D Reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 12923-12932).
---
Rebuttal Comment 1.1:
Comment: Thank you for providing the efficiency experiments and addressing my questions.
I think all my concerns have been satisfactorily addressed, and my rating continues as Strong Accept, as the proposed Ray-Based Feature Aggregation method opens new ways for hand-held object reconstruction, and compared to previous SDF-based methods, the new method has better reconstruction results and as stated still achieves real-time performance. | Rebuttal 1:
Rebuttal: We sincerely appreciate the valuable work by all ACs and reviewers. We are delighted that DDF-HO is considered to show "improvements over baselines on nearly all metrics" [9guR, e2Ui, dmTA, hGDA], "stronger modelling capability" [7sso], "novelty in hand-held object reconstruction" [1CC4]. We revised the manuscript according to the suggestions. Below we try our best to address the common concerns and questions.
## Q1. Efficiency [All Reviewers]
IHOI is a typical SDF-based hand-held object reconstruction method. We list detailed comparisons with IHOI in the following table.
| | DDF | IHOI |
|------|------|------|
| Training time | 2d 8h | 2d |
| Parameters | 24.5M | 24.0M |
| Running Speed | 23ms/img (44FPS) | 5.8ms/img (172FPS) |
From the above table, it can be seen that our method runs slower than IHOI but still achieves real-time performance. All experiments in the above table are conducted on a single NVIDIA A100 GPU. In terms of model size, the two methods share a similar scale. The increased parameters mainly come from the cross-attention mechanism in the 2D ray-based feature aggregation (Sec. 3.3). Other modules are only adopted to collect features to model hand-object interactions and do not significantly increase the network size. Moreover, since the network needs to predict both the distance and the visibility signal, the convergence of DDF-HO during training is also slower than that of IHOI.
## Q2. Occupancy as the Shape Representation [9guR, 7sso]
Since no off-the-shelf hand-held object reconstruction pipelines leverage occupancy as the shape representation, we design a baseline method ourselves following the widely-used 2D-3D lifting scheme in single-view reconstruction [Pix2Vox]. We first use the same backbone as our method (ResNet34) to extract per-pixel features. The extracted features are then back-projected to the volume (32x32x32, the same as [Pix2Vox]). For each voxel inside the volume, the predicted hand pose (the same as used in DDF-HO, parameterized as the MANO model parameters) is also concatenated to its feature vector. Finally, we predict the occupancy as in Pix2Vox [Pix2Vox]. The results on ObMan (the upper block) and HO3D(V2) (the bottom block) are shown as follows.
|Method|F5|F10|CD|
|------|------|------|------|
|IHOI| 0.42 | 0.63 | 1.02 |
|DDF-HO| 0.55 |0.67 |0.14 |
|Pix2Vox |0.24 |0.45 |1.81 |
|------|------|------|------|
|IHOI|0.21 |0.38 |1.99|
|DDF-HO|0.28 |0.42 |0.55|
|Pix2Vox |0.06 |0.17 |6.12|
We now explain why occupancy is also not a suitable choice for hand-held object reconstruction. First, like SDF, the occupancy representation is undirected: it can only model the local shape of either the object or the hand, and for voxels that are not near the hand it cannot naturally capture the relationship between the hand and the object. Second, the reconstruction quality is limited by the resolution of the volume. Since the entire volume is typically uniformly divided into voxels, for object regions with particularly complex geometric structures, large voxels can limit the network's expressive capacity, resulting in decreased accuracy.
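The 2D-3D lifting step in the occupancy baseline (back-projecting per-pixel features to a voxel grid) can be sketched roughly as follows; the pinhole camera model, grid extent, and nearest-pixel assignment are illustrative assumptions, not the exact Pix2Vox-style implementation:

```python
import numpy as np

def lift_features_to_volume(feat_map, K, res=32, extent=1.0):
    """Back-project a per-pixel feature map (H, W, C) into a res^3 volume.

    Each voxel center is projected with the pinhole intrinsics `K`; the
    feature of the pixel it lands on is copied into the voxel. Voxels that
    project outside the image keep zero features.
    """
    H, W, C = feat_map.shape
    vol = np.zeros((res, res, res, C), feat_map.dtype)
    # Voxel centers on a regular grid placed in front of the camera.
    lin = np.linspace(-extent, extent, res)
    xs, ys, zs = np.meshgrid(lin, lin, lin + 2.0 * extent, indexing="ij")
    pts = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)
    uvw = pts @ K.T                                  # (u*w, v*w, w)
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    vol.reshape(-1, C)[ok] = feat_map[uv[ok, 1], uv[ok, 0]]
    return vol
```

In the baseline above, the hand pose parameters would additionally be concatenated to each voxel's feature vector before predicting occupancy.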
## Q3. Symmetry [hGDA, 1CC4]
We provide a figure in the attachment to demonstrate the sampling method of building the bijection sets for the symmetry loss. We also provide qualitative comparisons of results with or without symmetry loss.
Pdf: /pdf/eb437c61e71adca32613da98ede311c367ae0700.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes an algorithm for reconstructing a hand-held object from a single RGB image. Instead of using the traditional Signed Distance Field (SDF), this paper proposes to leverage the Directed Distance Field (DDF) as the shape representation. Experiments show that the proposed algorithm outperforms SOTA.
Strengths: The proposed pipeline utilizes DDF as the shape representation to reconstruct hand-held objects, which has stronger modelling capability for this specific task, i.e., reconstruction of hand-held objects from a single RGB image. It introduces a 2D ray-based feature aggregation and a 3D intersection-aware hand pose embedding.
The experiments are conducted on both real and synthetic datasets.
Weaknesses: The paper does not mention the running speed or model complexity; I wonder if it is comparable with SDF-based models.
The paper mentioned that DDF is harder to train and requires more complex data, algorithms, and network structure. I wonder if the paper can give a more quantitative measurement, e.g., 1 or 2 orders of magnitude harder/longer/more parameters?
Another alternative to SDF would be occupancy; the paper does not mention occupancy at all. Won't occupancy be a strong baseline model? Or would replacing SDF with occupancy make the algorithms (proposed and SOTA) perform better?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see weaknesses
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: did not talk about potential negative societal impact
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive review. We revised the manuscript accordingly. Below we try our best to address your concerns and questions.
## Q1. Running Speed and Model Complexity
IHOI is a typical SDF-based hand-held object reconstruction method. We list detailed comparisons with IHOI in the following table.
| | DDF | IHOI |
|------|------|------|
| Training time | 2d 8h | 2d |
| Parameters | 24.5M | 24.0M |
| Running Speed | 23ms/img (44FPS) | 5.8ms/img (172FPS) |
From the above table, it can be seen that our method runs slower than IHOI but still achieves real-time performance. All experiments in the above table are conducted on a single NVIDIA A100 GPU. In terms of model size, the two methods share a similar scale. The increased parameters mainly come from the cross-attention mechanism in the 2D ray-based feature aggregation (Sec. 3.3). Other modules are only adopted to collect features to model hand-object interactions and do not significantly increase the network size. Moreover, since the network needs to predict both the distance and the visibility signal, the convergence of DDF-HO during training is also slower than that of IHOI.
## Q2. Occupancy as the Shape Representation
Since no off-the-shelf hand-held object reconstruction pipelines leverage occupancy as the shape representation, we design a baseline method ourselves following the widely-used 2D-3D lifting scheme in single-view reconstruction [Pix2Vox]. We first use the same backbone as our method (ResNet34) to extract per-pixel features. The extracted features are then back-projected to the volume (32x32x32, the same as [Pix2Vox]). For each voxel inside the volume, the hand pose is also concatenated to its feature vector. Finally, we predict the occupancy as in Pix2Vox [Pix2Vox]. The results on ObMan (the upper block) and HO3D(V2) (the bottom block) are shown as follows.
|Method|F5|F10|CD|
|------|------|------|------|
|IHOI| 0.42 | 0.63 | 1.02 |
|DDF-HO| 0.55 |0.67 |0.14 |
|Pix2Vox |0.24 |0.45 |1.81 |
|------|------|------|------|
|IHOI|0.21 |0.38 |1.99|
|DDF-HO|0.28 |0.42 |0.55|
|Pix2Vox |0.06 |0.17 |6.12|
We now explain why occupancy is also not a suitable choice for hand-held object reconstruction. First, like SDF, the occupancy representation is undirected: it can only model the local shape of either the object or the hand, and for voxels that are not near the hand it cannot naturally capture the relationship between the hand and the object. Second, the reconstruction quality is limited by the resolution of the volume. Since the entire volume is typically uniformly divided into voxels, for object regions with particularly complex geometric structures, large voxels can limit the network's expressive capacity, resulting in decreased accuracy.
[Pix2Vox] Xie, H., Yao, H., Sun, X., Zhou, S., \& Zhang, S. (2019). Pix2vox: Context-aware 3d reconstruction from single and multi-view images. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 2690-2698). | Summary: This paper presents a system for joint hand and hand-held object 3D reconstruction. The authors propose a pipeline to (1) predict the hand (MANO hand model) and camera poses with an off-the-shelf pose estimator; (2) extract image features with an off-the-shelf ResNet; (3) sample "3D ray representations", project them to the 2D image plane, extract the corresponding image features, and aggregate them with cross-attention along the ray directions; (4) for the same 3D points, extract 3D features from the global hand embedding from the geodesic nearest joint; (5) predict a directed distance function (DDF) with an MLP, using the above features as input. Experiments show that the proposed system predicts more accurate shapes.
Strengths: - The proposed method is evaluated on synthetic and real hand-object interaction datasets, showing improvements over baselines on nearly all metrics. Ablative analyses are also provided to better understand the behaviors of the proposed DDF-HO.
Weaknesses: - The presentation is poor. Specifically, it is unclear what the use of ray sampling is, and how the random ray directions transform to the directions pointing to the target hand (R-A to R-B in Fig 2). Also, it is unclear how one would know a sampled ray would (not) point to the direction closest to the hand shape, without the DDF even being optimized. In addition, it seems that the major contribution of the paper is a system proposal as a whole, rather than the choice of representation (DDF vs. SDF). I think the message of the paper is somewhat misleading in this sense.
- Why is the final output in the form of DDF, if the end goal is to extract the surface of the hand shape? Predicting an SDF or occupancy field would serve the same purpose as well. Also, it is not clear how DDF is being taken advantage of in the proposed system. The evaluation metric is based on surface point cloud representations as well.
- It is not clear to me how the proposed method is better in the real-world datasets. Visually, they seem to be very far from good predictions of the object shapes.
- It would be good to analyze failure cases to better understand the limitations of the proposed system.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: It would be great if the authors could address the questions raised in the weakness section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your valuable and constructive comments. We are delighted that you found our method “predicts more accurate shape” than competitors. Our detailed responses are listed below and we revise the manuscript accordingly.
## Q1. About Ray Sampling from (R-A) to (R-B)
We define DDF in L.106 — L.113 in Sec. 3.1. It maps a 3D ray (consisting of an origin and a view direction) to a non-negative scalar field (L.109) and a binary visibility field (L.111). The binary visibility field plays a crucial role in identifying which sampled rays do not intersect with the hand-held object. This identification is significant as these non-intersecting rays do not contribute to the overall reconstruction process. Therefore, in the pipeline Fig. 2, we only showcase the rays that are necessary in reconstruction. The ray sampling algorithm is introduced in detail in Sec. 3 in the Supplementary Material. To avoid misleading the readers, we have incorporated a distinct color to denote rays that do not intersect with the object in Fig. 2 in the final version. This visual distinction serves to enhance the accuracy of the demonstration.
## Q2. Our Contributions
We first explain why SDF is not a suitable choice for hand-held object reconstruction in our paper (L.27 – L.40 in the Introduction, L.102 – L.105 in Sec. 3.1, L.142 – L.159 in Sec. 3.3): SDF is undirected and too compact to naturally encapsulate local interactions between the hand and the object (L.27 – L.40). Then we point out that DDF is a better choice and design DDF-HO, which utilizes DDF as the shape representation for hand-held object reconstruction. We list our contributions in L.60 – L.66 in the main paper and briefly summarize them here for clarity. We present DDF-HO, a novel pipeline that utilizes DDF for hand-held object reconstruction, outperforming SDF-based competitors. Based on the DDF representation, we propose a novel ray-based feature aggregation scheme to model the hand-object relationship, which boosts the overall reconstruction quality. Extensive experiments and ablation studies demonstrate the effectiveness of DDF-HO.
## Q3. Occupancy as the Shape Representation
Since no off-the-shelf hand-held object reconstruction pipelines leverage occupancy as the shape representation, we design a baseline method ourselves following the widely-used 2D-3D lifting scheme in single-view reconstruction [Pix2Vox]. We first use the same backbone as our method (ResNet34) to extract per-pixel features. The extracted features are then back-projected to the volume (32x32x32, the same as [Pix2Vox]). For each voxel inside the volume, the predicted hand pose (the same as used in DDF-HO, parameterized as the MANO model parameters) is also concatenated to its feature vector. Finally, we predict the occupancy as in Pix2Vox [Pix2Vox]. The results on ObMan (the upper block) and HO3D(V2) (the bottom block) are shown as follows.
|Method|F5|F10|CD|
|------|------|------|------|
|IHOI| 0.42 | 0.63 | 1.02 |
|DDF-HO| 0.55 |0.67 |0.14 |
|Pix2Vox |0.24 |0.45 |1.81 |
|------|------|------|------|
|IHOI|0.21 |0.38 |1.99|
|DDF-HO|0.28 |0.42 |0.55|
|Pix2Vox |0.06 |0.17 |6.12|
We now explain why occupancy is also not a suitable choice for hand-held object reconstruction. First, like SDF, the occupancy representation is undirected: it can only model the local shape of either the object or the hand, and for voxels that are not near the hand it cannot naturally capture the relationship between the hand and the object. Second, the reconstruction quality is limited by the resolution of the volume. Since the entire volume is typically uniformly divided into voxels, for object regions with particularly complex geometric structures, large voxels can limit the network's expressive capacity, resulting in decreased accuracy.
## Q4. DDF in DDF-HO
We use DDF to represent the 3D shape in our pipeline DDF-HO. As introduced in Sec. 3.5 of the paper, after extracting the corresponding features of each ray (Sec. 3.3 and Sec. 3.4), we leverage an 8-layer MLP to map the features to the corresponding DDF values: a distance and a binary visibility signal (L.139 – L.140, L.200 – L.201). DDF can then be converted into other commonly used 3D representations including point cloud, mesh and vanilla SDF (L.112 – L.113), using the algorithms introduced in [2, 27]. The superiority of DDF over SDF is accentuated in our main paper (L.27 - L.46, and also in Fig. 1). We demonstrate that DDF is a more suitable representation for hand-held object reconstruction.
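The conversion from DDF outputs to a point cloud can be illustrated with a short sketch; the `ddf` callable returning (distance, visibility) arrays is a hypothetical interface for illustration, not the exact algorithms of [2, 27]:

```python
import numpy as np

def ddf_to_points(origins, dirs, ddf):
    # Query the DDF on a batch of rays; every visible ray contributes the
    # surface point origin + distance * direction to the point cloud.
    dist, vis = ddf(origins, dirs)
    mask = vis > 0.5
    return origins[mask] + dist[mask, None] * dirs[mask]
```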
## Q5. Unsatisfactory Visualization Results
Our goal is to reconstruct the hand-held object from a single RGB image without any prior knowledge of the object. This is a recently emerging and very challenging research topic. Although our method still does not completely solve the problem, we make a significant leap forward in accuracy compared to previous methods (Tab. 1, 2, 3). In the future, combining other techniques, like diffusion models, with our DDF based reconstruction framework may further boost the performance.
## Q6. Failure Cases
MOW pencil is a typical failure case (Row 3 in Fig. 3 of the Supplementary Material). Reconstructing very thin objects remains a significant challenge for all methods. In the final version and the submitted PDF attachment, we have added more visual results of failure cases to characterize our method's limitations more comprehensively.
[Pix2Vox] Xie, H., Yao, H., Sun, X., Zhou, S., \& Zhang, S. (2019). Pix2vox: Context-aware 3d reconstruction from single and multi-view images. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 2690-2698). | null | null | null | null |
In-Context Learning Unlocked for Diffusion Models | Accept (spotlight) | Summary: The authors propose an image generation framework. Given the textual description of a task and an input-output example pair, Prompt Diffusion can inpaint the missing output in a consistent way like in the input-output example and the textual guidance instruction. Prompt Diffusion is trained (in a supervised manner) over 6 different tasks and shows some generalization to new tasks and demonstrates emerging image editing capabilities.
Strengths: - Overall, I think the problem setting of visual prompting conditioned on both visual examples and text is novel and interesting.
- The paper is well written and the presentation is clear
- By starting from pretrained diffusion models, the authors bring the advantages of visual quality and text guidance to the prompt diffusion framework.
Weaknesses: My main concern is with the framing of the paper, claiming in-context learning and generalization to new tasks. I think this is a very broad claim which is not backed up by the results. Please see the questions box below for more elaborate feedback/questions.
1. The authors claim the model is "capable of in-context learning" and "generalization to new tasks". Given the empirical results presented here, I find this hardly convincing.
2. The model is trained on a discrete number of tasks, so it is unclear whether visual examples are actually needed or if text-only guidance is enough.
3. In general, I think more results are necessary to establish the motivation for the new proposed framework.
---
Post rebuttal: given that the authors decided to acknowledge the limitation we discussed, I've decided to update my score.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. The model is "capable of in-context learning" and "generalization to new tasks".
* Can the model extend to different input-output structures in the test time? For example, by utilizing more than one visual example?
* Given that the input-output examples are concatenated in channels dimension, can the model extend to input-outputs that are *not* aligned pixel-wise?
* Considering edge prediction as in-context learning after supervised training for depth/hed is a stretch. These tasks are super similar. Similarly, with normals given depth/segmentation supervision.
* Given the previous points, is it possible that the model performs in-context learning of style (Pix2Pix prompting)? I find this claim more accurate, and I think this is still novel.
2. The model is trained on a discrete number of tasks
* Are visual examples actually needed? How does the model perform with standard image input image output with text guidance? (similar to InstructPix2Pix).
3. In general, I think more results are necessary to establish the motivation for the new proposed framework. For example:
* Which modality's instructions are more important? Is it text? Image?
* What are the computation tradeoffs? I assume text is more efficient than images: textual prompts are faster, easier to obtain, and have less memory overhead.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: I think the authors should consider emphasizing the following limitations which were not mentioned:
* the current framework is not very flexible and it assumes a fixed structure of the input-output and query image.
* is the in-context learning limited to style interpolation? (e.g normals based on depth/segmentation, edges instead of depth/hed)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer Gb8x for acknowledging the novelty of our proposed framework, appreciating our presentation, and proposing constructive feedback. Below, we address the concerns raised in your review point by point.
> Claiming in-context learning and generalization to new tasks is very broad.
We acknowledge that in-context learning in vision is still a step behind the impressive advances in large language models, which benefit from extensive datasets and the capacity to tackle a wide range of tasks with remarkable flexibility. Considering that the datasets and task designs are still exploratory, we believe it is best for the community not to set the bar unrealistically high by comparing in-context learning progress in language models against that in vision models.
The pursuit of in-context visual learning with the ability to generalize to new tasks has attracted increasing interest within the vision research community. Our innovation, Prompt Diffusion, distinguishes itself from prior works [1,2] by incorporating this pivotal attribute into a state-of-the-art generative framework, diffusion models. This combination enables robust handling of diverse input samples and a powerful capacity for comprehending distinct modalities encompassing both images and text. Consequently, in contrast to relying solely on image-visual prompts, Prompt Diffusion accommodates multi-modal vision-language prompts, enabling the generation of high-resolution, photo-realistic images. Quoting Reviewer foSi, the extra dimension brings more versatility and applicability to our model.
Prompt-Diffusion starts from a diffusion-based approach to solve the in-context visual learning problem. We are optimistic that through meticulous architecture design and the utilization of a diverse and comprehensive dataset, encompassing a wide range of tasks, it could significantly enhance the performance of Prompt-Diffusion in the context of visual learning.
Following the reviewer's suggestion, we plan to add a limitation discussion in our revised manuscript, as shown in the following. “It is essential to acknowledge that the field of in-context learning in vision is currently in a developmental phase and encounters specific challenges in aligning with the accomplishments attained by large language models. Despite the progress achieved through Prompt-Diffusion, it is conceivable that constraints may exist concerning the extent and adaptability of tasks that can be proficiently accommodated. Going forward, further exploration and improvements in architecture and dataset diversity are needed to overcome these limitations and advance the progress of in-context visual learning.”
Question 1
- > Q1.a Can the model extend to different input-output structures in the test time? For example, by utilizing more than one visual example?
Our current design uses a fixed number of stacked convolutional layers to encode the example and query images, so the CNN-based encoder limits the number of visual examples that can be plugged in. However, modifying the encoder to a ViT-based model would potentially bring more flexibility at test time for encoding visual examples.
- > Q1.b Can the model extend to input-outputs that are not aligned pixel-wise?
For the current designed tasks, the input-outputs are aligned pixel-wise, which is determined by the dataset that we use. If we have a dataset that supports unaligned input-output pairs, then training on the data would make it work.
- > Q1.c The conducted tasks are super similar.
Note that the tasks look similar to humans, but that does not mean a model trained on one task can easily transfer to the others. We conducted an ablation study in Figure 10, showing that a ControlNet finetuned for an individual task on the dataset easily fails to generate high-quality images for another task.
- > Q1.d Is it possible that the model performs in-context learning of style (Pix2Pix prompting)?
We follow the convention in the literature in claiming in-context image generation.
Question 2
- > Q2.a The discrete number of tasks is limited by the dataset that we have.
We believe utilizing a larger and more diverse in-context learning dataset could significantly strengthen Prompt-Diffusion.
- > Q2.b Are visual examples actually needed?
Yes, see our ablation study in Figure 8, where we investigate the effects of our vision-language prompt, which consists of the example pair, text guidance, and image query.
- > Q2.c How does the model perform with standard image input image output with text guidance? (similar to InstructPix2Pix).
Our model can take standard images as inputs for forward tasks, such as depth/hed/segmentation map generation. For image editing (similar to Instruct Pix2Pix), we show examples in Figure 6, and the two-step strategy uses standard images as inputs.
Question 3
- > Q3.a Which modality's instructions are more important? Is it text? Image?
We provide ablation study in Figure 8, which validates that both texts and images are important during image generation.
- > Q3.b What are the computation tradeoffs?
Yes, text is more efficient than images and easier to obtain.
Limitations
- > The current framework is not very flexible
The current structure of the input-output and query image is fixed, but we think it could be extended to a more flexible version and we left that for future work.
- > Is the in-context learning limited to style interpolation?
We follow the convention in the literature in claiming in-context image generation.
References:
[1] Bar, Amir, et al. "Visual prompting via image inpainting." NeurIPS 2022.
[2] Wang, Xinlong, et al. "Images speak in images: A generalist painter for in-context visual learning." CVPR 2023.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: I'm very disappointed with the authors' rebuttal, which has not addressed my main criticism. I still believe the claim of in-context learning is unwarranted and not backed up by empirical evidence. In-context learning is an emergent capability of large neural networks, and I don't see anything emergent here.
Specifically:
1. In-context learning was tested on tasks that are very similar to the ones the model was trained on (in a completely supervised manner). Claiming that in-context learning emerges on "unseen" tasks (e.g., edges to image) when the model was trained on a very similar task (e.g., segmentation to image) is a huge overclaim. I proposed the authors take one of the pretrained models and share style transfer results, a different task that is not similar to the ones trained on. Unfortunately, the authors did not share these results. Therefore, I am still not convinced; what I do see here is multi-task learning that benefits from text conditioning.
2. The architecture structure assumes pixel-wise correspondences between input-output examples and new test input. This constraint stems from concatenating the images channel-wise in the architecture. The authors claim that using data when there is no pixel correspondences in the tasks will still work. I strongly disagree. Given there is no empirical evidence I will not change my stance here.
3. The architecture structure assumes 3 input images. The authors acknowledge this in the rebuttal but claim that changing to a ViT can solve it. However, the Stable Diffusion model is not a ViT, and in this case the authors will not be able to build on it. Therefore I don't see how this is currently practical.
Given that the authors have not addressed my concerns, my current recommendation is to reject this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for the response, which has helped us better understand your concerns. We have now followed your request to test Prompt Diffusion not only for style transfer but also under misaligned example input-output pairs. We believe these newly added results further validate the in-context learning ability of Prompt Diffusion. Due to the rebuttal policy, we have provided the sharing link to the AC and are awaiting further instructions on how it can be shared with reviewers.
> I still believe the claim for in-context learning is unwarranted…
Our experimental results show that the proposed Prompt Diffusion is able to incorporate the visual changes exhibited in the example image pair, aided by text guidance that may or may not include style information, to inform how the model performs image generation given the input query image. The newly added examples per your request, concerning style changes and misalignment, further show that Prompt Diffusion can contextualize its generation based on the visual changes in the example image pairs. In particular, the first row of the newly added examples shows that when given a very generic text condition such as “A high-quality image.”, Prompt Diffusion can perform style transfer following what is implied in the example image pair.
We also note that we follow the convention in the literature when claiming in-context visual learning. Painter [2], which follows the setting of [1], claims in-context visual learning on high-level understanding and low-level processing tasks, such as semantic segmentation, instance segmentation, depth estimation, keypoint detection, denoising, deraining, and image enhancement. Painter [2] is also trained on a discrete number of tasks. Our work follows their use of the term "in-context learning", but instead focuses on multiple image generation tasks with a diffusion-based backbone. A detailed comparison with Painter [2], in particular how we bring substantial novelty over its original framework, can be found in our response to Reviewer foSi. We are not overclaiming in using the term "in-context learning".
[1] Bar, Amir, et al. "Visual prompting via image inpainting." Advances in Neural Information Processing Systems 35 (2022): 25005-25017.
[2] Wang, Xinlong, et al. "Images speak in images: A generalist painter for in-context visual learning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
> Claiming that in-context learning emerges on "unseen" tasks (e.g., edges to image) when it was trained on a very similar task (e.g., segmentation to image) is a huge overclaiming.
We disagree we had overclaimed the ability of Prompt Diffusion:
- That tasks look similar to human perception does not necessarily mean a model trained on one task can easily transfer to the others. We conducted an ablation study in Figure 10 and show that a ControlNet finetuned on an individual task easily fails to generate high-quality images on other tasks. Therefore, it could be misleading to claim two tasks are “very similar” based on visual impression alone.
- We note that the newly added results show that without style-specific guidance from text, Prompt Diffusion successfully performs style transfer (following your recommendation), a task distinct from the six tasks used to train Prompt Diffusion. This result further supports our claim of generalization to unseen tasks, and we will happily include this new result in our final draft.
> Therefore, I am still not convinced, what I do see here is multi-task learning that benefits from text conditioning.
We do not think multi-task learning conflicts with in-context learning. As highlighted and validated in the T5 [3] paper, multi-task learning serves as a foundational step towards in-context learning in large language models, and training across multiple tasks with expansive data samples is crucial for in-context learning to emerge. We will clarify this interconnection in the revised paper, and we believe the additional clarifications and new results provided above justify the claim of “in-context” learning.
[3] Raffel, Colin, et al. "Exploring the limits of transfer learning with a unified text-to-text transformer." The Journal of Machine Learning Research 21.1 (2020): 5485-5551.
> I proposed the authors take one of the pretrained models and share style transfer results, which is a different task that is not similar to the ones trained on.
Sorry we did not fully understand your request in the last round - and we have conducted the required experiment now. As previously mentioned, we have kindly requested AC to help pass along the image style transfer results.
---
Reply to Comment 1.1.2:
Comment: > This constraint stems from concatenating the images channel-wise in the architecture. The authors claim that using data when there is no pixel correspondences in the tasks will still work. I strongly disagree.
Using pixel-aligned input-output pairs is a standard setting not only in previous in-context visual learning works[1,2], but also in traditional vision tasks (segmentation, keypoint detection, image processing, etc.). Note in our relevant prior arts, all the tasks in Painter [2] also only consider pixel aligned inputs, due to the positional encoding layer inside their transformer architecture. The practical significance of the misalignment case is unclear to us, i.e., why one would desire output images misaligned with their corresponding inputs.
Nevertheless, as per your request, we have provided image generation results with specifically misaligned input-output pairs. Please check our new results, which (although we find the setting artificial) indeed show our flexible “in-context” generalization to pixel-unaligned input-output tasks.
> The authors acknowledge this in the rebuttal but claim changing to ViT can solve this. However, the Stable Diffusion model is not a ViT, and in this case, the authors will not be able to build on it.
We believe there is a misunderstanding about how the ViT backbone would be applied to our framework, and we would like to clarify the details. Stable Diffusion does not need to be a ViT. Both ControlNet and Prompt Diffusion construct separate branches to capture conditions, where the extracted condition feature is aggregated with Stable Diffusion's features. Therefore, the design of this additional branch does not conflict with Stable Diffusion itself. Under this circumstance, we believe that adopting a ViT-based image encoder tailored for the newly constructed branch could enable flexible token lengths.
In fact, we are building this now and see no blockers. As it is ongoing work and beyond the scope of this paper, we refrain from discussing it further during the review.
Strengths: This paper pioneered a new and timely problem setting: extending in-context learning beyond LLMs, to text2image diffusion models. The proposed model, Prompt Diffusion, integrates the learning of multiple tasks into one vision-language foundation model, and it acquires in-context learning ability by learning across multiple tasks and generalizes effectively across new, unseen tasks.
The framework consists of two main components: a vision-language prompt and a diffusion model. The vision-language prompt replaces text examples with paired image examples and the text query with an image query, allowing for a new input-output pair format that could generalize the input-output configuration of most vision-language tasks. The implementation borrowed ideas from ControlNet. The diffusion model then takes the vision-language prompt as input and is trained jointly on six different tasks using these prompts. The resulting Prompt Diffusion model becomes the first diffusion-based foundation model capable of in-context learning.
The model demonstrates high-quality in-context generation for the trained tasks and effectively generalizes to new, unseen vision tasks using their respective prompts. It also demonstrates strong image editing ability. The proposed framework aims to facilitate research into in-context learning for computer vision.
Weaknesses: While this pilot study is highly interesting, it is hard to assess if the diffusion model has the potential to achieve truly generalizable “in-context learning” as LLMs do. Currently, the model is only trained jointly across six pre-defined tasks, and those tasks are similar dense prediction type. The limited diversity of tasks evaluated in this paper constitutes an important hurdle to assess the work’s true value.
It is also hard to predict whether expanding the range of joint tasks would scale up Prompt Diffusion’s in-context learning capability. Additionally, the model has poor robustness in handling misaligned query images (Figure 7). I am curious how the authors think this can be fixed: shall we expect to encompass more diverse training tasks, training data, etc?
The current generation results of high-fidelity, real-life images look underwhelming. It is noted that the authors used SD v1.5 backbone. I’ll be curious to see if in-context learning can work better if starting from SD v2.1.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have thoroughly discussed limitations in Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer vt4T for acknowledging the innovativeness of our approach to the problem and appreciating the results of our experiments. Below, we address your questions and concerns:
- We acknowledge the potential benefits of utilizing a larger and more diverse in-context learning dataset to strengthen Prompt-Diffusion. At present, our version is constrained by the available data, which incorporates as few as 6 tasks.
- In Figure 7, we demonstrated that the generation quality is limited when the task is misaligned with our training tasks and the input text lacks sufficient information. We believe that incorporating more tasks will render the model more adaptable to out-of-domain tasks.
- As stated in section 5, "As our base dataset is synthesized by Stable Diffusion with proper filtering, it is difficult for the model to generate high fidelity, real-life images." We attribute the underwhelming real-life image generation partly to the base dataset from Instruct Pix2Pix, which is a critical factor since the finetuning dataset is entirely generated by Stable Diffusion and then filtered by CLIP. We firmly believe that a more diverse and high-quality dataset containing real-life images would significantly address this issue. In our preliminary experiments, we did not observe substantial differences between using SD v1.5 and v2.0. | Summary: Prompt Diffusion is a novel framework that enables in-context learning in diffusion-based generative models. It consists of a vision-language prompt and a diffusion backbone, which is trained jointly on six different tasks using their respective prompts. The resulting Prompt Diffusion model becomes the first diffusion-based vision-language model capable of in-context learning, demonstrating high-quality in-context generation for the trained tasks while generalizing to unseen vision tasks.
Strengths: 1) The proposed vision-language prompt design is novel, which replaces texts with paired images and replaces the text query with an image query. This design allows for a new input-output pair format that is able to generalize the input-output configuration for most vision-language tasks.
2) The Prompt Diffusion model is the first diffusion-based versatile vision-language foundation model capable of in-context learning. The model acquires in-context learning ability through multi-task pre-training, so as to generalize “zero-shot” to additional new tasks with no further adaptation.
3) The authors conducted extensive experiments to illustrate the model's capability of in-context learning. An ablation study is conducted to investigate the effects of the vision-language prompt, which further supports the effectiveness of the proposed design.
4) The model also yields compelling text-guided image editing results, which demonstrate the controllability of the model in generating images based on text guidance.
Weaknesses: 1) I would be cautious to call the current model to have " strong in-context learning ability", since the unseen tasks tested in this paper are still similar to the known ones. For example, "HED2Image" task versus "Canny2Image" and "Scribble2Image" tasks. The paper would be strengthened greatly if more diverse unseen tasks could be learned in context.
2) For text encoding, the authors used a pre-trained CLIP. I wonder whether replacing CLIP with an LLM encoder module, some of which have their own in-context learning ability in the language domain, would enhance the in-context learning ability? This has been explored by a recent work: "LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models".
3) The current framework follows instructive pix2pix in most cases and the implementation leverages ControlNet straightforwardly. However, this is not my major concern due to the unconventionality and novelty of the proposed method.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please refer to the weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer 7sFz for evaluating the significance of our work and providing encouraging feedback. We provide more clarifications below.
- We commit to exercising greater prudence in claiming the ability of in-context learning in our revised manuscript.
- Appreciate your insight. Indeed, integrating stronger text encoding models, such as LLMs, might potentially enhance the in-context learning ability of our model. Currently, we are actively investigating the application of LLMs within our framework. Thanks for pointing out the recent work. We acknowledge this noteworthy work and intend to cite it while discussing its potential future implementation in our next revision.
- Our data originates from the IP2P dataset, and we acknowledge that employing a larger and more diverse in-context learning dataset could further augment the capabilities of Prompt-Diffusion.
---
Rebuttal Comment 1.1:
Title: About the rebuttal
Comment: I am happy with the rebuttal from the authors, which has addressed most of my concerns. Therefore, I have raised the score.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate your response and are thankful for your reconsideration in raising the score. | Summary: This work introduces Prompt Diffusion. It is a framework designed to facilitate in-context learning within diffusion-based generative models.
By providing a pair of task-specific example images, such as depth from/to image and scribble from/to image, along with textual guidance, the designed solution can automatically comprehend the underlying task and replicate the same task on a new query image following the provided text instructions.
Specifically, the authors propose a vision-language prompt that can effectively model various vision-language tasks. The diffusion model, which takes the prompt as input, is trained in a joint manner on six distinct tasks utilizing these prompts.
Strengths: --The paper is well-written and easy to follow.
--The authors have done an excellent job of clearly articulating the motivation behind their work.
--The problem statement is well-defined and the objectives are clearly outlined.
Weaknesses: -- The work appears to be an extension of a previous work titled "A Generalist Painter for In-Context Visual Learning". The design principles and motivations are similar, while there are notable differences that distinguish this work from the previous one.
-- Differences from Prior Work: The authors have introduced text commands into prompts, which adds a new dimension to the model and potentially increases its versatility and applicability. Additionally, the foundation model used in this work is different from the one used in the prior work. This change could potentially lead to different performance characteristics and should be explored in more detail.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N.A.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Overall, this paper presents an interesting extension to previous work in the field. The clarity of the writing and the strong motivation make it a valuable contribution. However, a more detailed comparison with the prior work could strengthen the paper and provide more insights into the novelty and significance of the proposed approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer foSi for providing positive feedback. We hereby provide a more detailed comparison between our work and Painter.
- **Task.** Painter follows [1] and focuses on solving high-level discriminative and low-level processing tasks, such as semantic segmentation, instance segmentation, depth estimation, keypoint detection, denoising, deraining, and light enhancement. Our Prompt Diffusion model, instead, further explores condition-guided image generations.
- **Prompting.** Painter follows [1] and stitches example images, query images, and output images into one large image as the input prompt, which will dramatically increase the computational cost, especially in high-resolution cases. However, we use concatenation in the channel dimension and split the first stage encoding of example images and query images via two independent stacked convolutional layers. Note our model could generate high-resolution, photorealistic images, such as the 512x512 images shown in the paper.
- **Model Architecture.** Painter follows [1] to formulate the problem as a demasking problem, and a Transformer-based image inpainting model (ViT) is trained to predict the masked output tokens. Our work formulates the problem as an image generation problem, and the state-of-the-art diffusion model, Stable Diffusion, serves as our backbone model for finetuning.
- **Modality.** Painter only allows image-visual prompts, while our work supports multi-modal vision-language prompts. Quoting your comment, this extra dimension brings more versatility and applicability to our model.
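To make the **Prompting** point above concrete, here is a hypothetical back-of-the-envelope sketch of why stitching inflates cost while channel concatenation does not. The shapes and the 16-pixel ViT patch size are illustrative assumptions, not details from either paper:

```python
import numpy as np

# Illustrative shapes only; patch size 16 is an assumption for this sketch.
H = W = 512
example, query, output = (np.zeros((3, H, W)) for _ in range(3))

# Stitching (Painter-style): images form one large 2x2 grid, so the ViT
# token count grows with the stitched area.
patch = 16
tokens_single = (H // patch) * (W // patch)
tokens_stitched = (2 * H // patch) * (2 * W // patch)

# Channel concatenation (sketch of our scheme): spatial size is unchanged,
# only the channel dimension grows.
concat = np.concatenate([example, query, output], axis=0)

assert tokens_stitched == 4 * tokens_single  # 4x the tokens of one image
assert concat.shape == (9, H, W)
```

Under these assumptions, stitching quadruples the token count at a fixed resolution, which is the computational-cost gap referred to above.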
In light of the points above, we argue that Prompt Diffusion explores a different direction of in-context learning in visual models, rather than being "an extension of a previous work," Painter. We believe that both works offer distinct and promising viewpoints on in-context visual learning.
References:
[1] Bar, Amir, et al. "Visual prompting via image inpainting." Advances in Neural Information Processing Systems 35 (2022): 25005-25017.
---
Rebuttal Comment 1.1:
Title: Response to authors' rebuttal
Comment: I am glad that my previous concerns have been addressed. I will change my rating. Thank you.
---
Reply to Comment 1.1.1:
Comment: We thank Reviewer foSi for taking the time to review our response and reconsider the rating. We greatly appreciate the constructive feedback and are pleased that we could address the concerns raised.
Causal Fairness for Outcome Control | Accept (poster) | Summary: The authors analyze a new dimension of decision making: outcome control and benefit from decisions. Specifically, this work uses causal analysis to introduce new definitions of fairness on benefits and outcome control. The authors also propose new algorithms to study fairness and design fair algorithms for outcome fair decisions.
Strengths: * The paper proposes new fairness notions for outcome control, that is an important aspect of many decision making settings
* The authors use a simple example to walk through all the different aspects and notions, that greatly helps the readability of the paper.
* Through the synthetic use case, the authors show when and how benefit fairness is important, and how causal path analysis could also become critical for decision making while accounting for fairer outcomes.
Weaknesses: * While the appendix provides additional results on a real-world COVID dataset, the main paper has limited results on various different settings.
* While the authors assume causal knowledge, the paper does not discuss limitations explicitly for when computing counterfactuals and especially path-specific counterfactuals might become challenging in practice.
* Section 3 should have better explained which counterfactuals we compute to look at the path-specific factors and how the 3 different components are found out by decomposition.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: * Two works [1, 2] seem somewhat related to the authors’ notions and framework. How does the framework of decision outcome control relate to the thinking of fairness of “treatments” in [1]? Similarly, how does the notion of “benefit” the authors introduce relate to the notion of “harm” in [2]?
[1] Madras, David, et al. "Fairness through causal awareness: Learning causal latent-variable models for biased data." Proceedings of the conference on fairness, accountability, and transparency. 2019.
[2] Richens, Jonathan, Rory Beard, and Daniel H. Thompson. "Counterfactual harm." Advances in Neural Information Processing Systems 35 (2022): 36350-36365.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: * While the paper’s primary focus is on introducing a new theoretical framework of fairness in decision making related to outcome control and benefit, more analysis and results on different experimental settings might have been an added benefit.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the provided comments, which were encouraging. We provide further clarifications regarding the points raised below.
---
[W1: Limited Evaluation] Please see global comment [G1].
---
[W2: Computing Counterfactuals] Please see global comment [G2]. Also, we have added the following:
- (i) We add a Proposition that states the identification property of $\Delta$. In particular, the statement is the following:
“Under the assumptions of the extended Standard Fairness Model from Fig. 1, the benefit $\Delta(x,z,w)$ can be identified from observational data.”
- (ii) Further, we also added a comment on the fact that the direct, indirect, and spurious effects of $X$ on $\Delta$ are identifiable in this setting.
- (iii) We also remark that the $\Delta(x, z, w)$ itself may be easily adjusted to remove the direct effect of $X$. However, indirect effect counterfactuals at a covariate-specific level $x, z, w$ are more difficult to compute and may require further assumptions such as additivity or monotonicity of the response.
We hope the above three changes adequately address the concern. Please let us know.
[W3: Terms in the decomposition] Thanks for noting this. We have implemented this suggestion. We now spell out the expression for the direct, indirect, and spurious effects, appearing in the decomposition, to increase transparency.
---
[Q1: Two related works] Thanks for pointing us to these references. You are indeed correct – the setting considered in [1] is related to ours. However, the approach taken therein differs from our approach. In particular, [1] computes three causal effects of interest: (i) total effect X -> Y; (ii) total effect D -> Y; and (iii) total effect X -> D. Then, the paper suggests that fairness should be assessed based on these three quantities. Our framework, however, integrates the above effects more coherently, and brings forward a new definition of fairness based on first principles. We now also added a citation for [1].
Regarding [2], we also found it quite interesting. However, the concept of harm / benefit defined in [2] is qualitatively different from our approach, due to the max operator used in Eq. (6) in [2]. The notion of harm / benefit in [2] considers what is in the literature known as canonical types [3] or principal strata [4]. However, our notion of benefit corresponds to the conditional average treatment effect (CATE), which measures the proportion who benefit minus the proportion of those harmed (in the language of [2]). We now clarify in the text that our notion of benefit differs from that in [2], to better contextualize our approach. Thanks for mentioning these works!
[3] A. Balke and J. Pearl. Counterfactual probabilities: Computational methods, bounds and applications. In Uncertainty Proceedings 1994, pages 46–54. Elsevier, 1994
[4] C. E. Frangakis and D. B. Rubin. Principal stratification in causal inference. Biometrics, 58(1): 21–29, 2002.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for addressing all my questions and doubts and adding 2 suggested fairness papers in their related work. I do think that the real-world empirical analysis is a strong case for your framework and can strengthen the theoretical work even more. This is especially true since it directly shows a tangible application to a causal method, which is generally hard to show! I do think the work is very interesting and adds a new view to fairness, so I maintain my acceptance suggestion.
---
Reply to Comment 1.1.1:
Comment: We again thank the reviewer for the suggestions during the review process, and also for acknowledging our answers. We are also encouraged by the fact that the reviewer sees the work as adding something novel and interesting to the literature, thank you! | Summary: The paper proposes a new fairness notion called benefit fairness that considers fairness in the outcomes of the decisions. In order to avoid the unidentifiability issue of the principle fairness, this paper proposes to condition on the conditional average treatment effect (CATE). The paper provides an algorithm that is proven to be optimal and satisfy benefit fairness. The paper in addition provides an algorithm to satisfy both benefit fairness and counterfactual benefit fairness.
Strengths: The proposed new fairness notion is quite interesting. It is different from traditional fairness notions, yet it is well-motivated and -explained using a real-world example.
The proposed algorithms come with theoretical analysis that shows their optimality and fairness satisfaction.
A budget is involved in the problem formulation that makes the algorithms more generally applicable.
Weaknesses: The connection between benefit fairness and traditional fairness notions other than principle fairness, such as demographic parity, equal opportunity, etc., is not clear. Are they consistent or do they conflict to some extent? A rigorous study may be out of the scope of this paper but some discussions will be helpful.
The measure of benefit fairness depends on the observation of the outcome Y. Can we generally assume that Y is available when we construct the decision policy? In the running example, Y is observed after a 2-year period. How can we estimate Y when we construct the decision policy before the 2-year period?
Some statements and notations are not very clear. For example, in Definition 3, $y_C$ is not defined and the definition of the pathway is not clear. My understanding is that the paper attempts to formulate path-specific counterfactual fairness, but the notations are quite confusing. In addition, $\Delta_{C}$ is not defined either. The two terms CBF and BFC may easily confuse the readers.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Although I didn’t check the correctness of the proof, Theorem 2 seems quite strong. Are there any assumptions about the policy or the causal model in Theorem 2?
My understanding of benefit fairness is that, it considers the causal effect of the link X->Y. If we remove this link, then $\Delta(x_0,z,w)$ is always equal to $\Delta(x_1,z,w)$, and benefit fairness degrades to the conditional demographic parity. Thus, the meaning of benefit fairness is to make the causal effect of the link X->D match the causal effect of the link X->Y? Is this understanding correct?
If my above understanding is correct, does that mean that CBF cannot be satisfied if the link X->Y exists, because CBF generally means that the decision should not depend on X?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the provided comments, it was nice to see that the main strengths were appreciated! We respond point-by-point in the sequel.
[W1: Connection with traditional notions] Great question, the manuscript indeed does not touch on this. We make the following observations and add the following paragraph to the text:
“If the benefit distributions between groups are substantially different, a decision satisfying BF will not achieve demographic parity. However, for cases in which all causal pathways (direct, indirect, and spurious) between the protected attribute and the benefit are either zero or removed by adjustment, the Causal BF criterion implies demographic parity (i.e., equal allocation of resources by $D$)”.
Furthermore, we also add a formal statement on this, which is added to the appendix due to considerations of space:
“Suppose that the distribution of the benefit $\Delta$ is equal between the groups $x_0, x_1$. Then, benefit fairness implies demographic parity."
The simple proof is also quite insightful for showing what is going on.
Proof:
\begin{align}
P(d \mid x_1) &= \sum_\delta P(d \mid x_1, \Delta = \delta)\, P(\Delta = \delta \mid x_1) \\
&\overset{\text{using BF}}{=} \sum_\delta P(d \mid x_0, \Delta = \delta)\, P(\Delta = \delta \mid x_1) \\
&\overset{\Delta \mid x_1 \overset{d}{=} \Delta \mid x_0}{=} \sum_\delta P(d \mid x_0, \Delta = \delta)\, P(\Delta = \delta \mid x_0) = P(d \mid x_0),
\end{align}
implying parity. We thank the reviewer for raising this point; it indeed adds an important connection to the existing literature.
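As a quick numerical sanity check of this identity, one can plug in made-up probability tables (all numbers below are hypothetical, chosen only for illustration) and verify that equal benefit distributions plus BF yield equal treatment rates:

```python
# Made-up support and probability tables for the benefit Delta.
deltas = [-0.2, 0.0, 0.3, 0.6]           # support of Delta
p_delta = [0.1, 0.3, 0.4, 0.2]           # P(Delta = delta | x), equal for x0 and x1
p_d_given_delta = [0.0, 0.1, 0.7, 0.95]  # P(d | x, Delta = delta), equal across x (BF)

# Both group marginals are the same mixture, so demographic parity holds.
p_d_x0 = sum(pd * pw for pd, pw in zip(p_d_given_delta, p_delta))
p_d_x1 = sum(pd * pw for pd, pw in zip(p_d_given_delta, p_delta))

assert p_d_x0 == p_d_x1  # P(d | x0) = P(d | x1)
```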
[W2: Dependence on outcome Y] This is a good point. The label may indeed not be available for the cohort at hand in some settings. However, the upside is that it may be possible to use retrospective data, under the assumptions encoded in Figure 1. Therefore, one may use past data to assess the $\Delta$ quantity, possibly adjust it, and design a new and fair policy. On the other hand, if no data at all is available on the label $Y$, then designing a policy with such fairness guarantees is not possible, at least, with the level of assumptions currently considered.
[W3: Notation Clarity] Thanks. We have added a clarification of what a clause $C$ is; further, we also define a pathway. The terms CBF and BFC are indeed too similar. For that reason, we now use CBF for Causal Benefit Fairness and BF for Benefit Fairness throughout. We hope this distinction between BF and Causal BF will be easier to follow.
[Q1: Theorem 2 proof] Good question; the main assumptions are the no-unobserved-confounding conditions required for identifying the effects needed to construct the policy. The second part of the answer is that the statement considers an infinite-sample, population-level case. We remark that providing finite-sample guarantees involves an additional level of difficulty, which is not covered by our theorem statement. We feel these are intriguing challenges for future work.
[Q2: X->D, X->Y effects] These are both great questions, thank you. There are two distinct parts of the CBF definition.
(i) The first part ascertains the fairness of the $\Delta$ quantity,
(ii) The second part ascertains that at equal levels of $\Delta$, the probability of treatment does not depend on the protected attribute.
Your question seems to relate to part (i), if we understood it correctly. If there is no link X->Y, then $\Delta(x_1, z, w) = \Delta(x_0, z, w)$, which is indeed accurate. Put differently, as you noted, the absence of a direct X -> Y link implies conditional demographic parity for the $\Delta$ quantity. Note, however, that this does not imply anything about parity for the decision D, which relates to part (ii).
Now, more generally, if the link X -> Y exists and it is considered to be discriminatory, our proposal is to first _adjust the_ $\Delta$ quantity accordingly. In other words, CBF cannot be satisfied unless we _remove the discriminatory effect from_ $\Delta$. After having this condition ascertained, benefit fairness from part (ii) can be applied to ensure a notion of equity. Thus, the fairness requirement can be thought of as a “two-step” process. We hope this makes sense but happy to provide further elaboration.
The note on balancing the effects X->Y and X->D is very interesting, but we haven't thought about doing that explicitly. Still, in order to achieve the described notion of fairness, both of these pathways are affected by our procedure. Please let us know if you have any further questions, or if you see the balancing of the effects from a different perspective.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses. Your explanations about the CBF definition are quite helpful. I have a separate question.
I consider the treatment benefit $\Delta$ as some sort of stratification of the population. From this perspective, how do you suggest we perform the conditioning on $\Delta=\delta$ in practice, especially when each individual has a different treatment effect? For example, do you suggest we bin the treatment effects and then condition on bins?
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for acknowledging our responses.
Regarding $\Delta$ in practical applications: exactly, our approach would then be to use a fixed number of bins, on which we condition. The bins can correspond to quantiles of the $\Delta$ distribution, for example. The larger the sample size, the more bins we could use in practice, of course, and in the infinite sample limit this would correspond to conditioning on fixed $\Delta = \delta$ values. Actually, the approach with bins was also used in the experiment in Appendix E, where we conditioned on the percentiles of the $\Delta$ distribution (which was possible due to quite large sample size).
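A minimal sketch of the binning approach just described, on synthetic data with a toy policy (all variable names and numbers below are hypothetical, not taken from the paper or the appendix experiment):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.integers(0, 2, size=n)            # protected attribute (0/1)
delta_hat = rng.normal(0.2, 0.1, size=n)  # estimated benefit per individual
d = (delta_hat > np.quantile(delta_hat, 0.7)).astype(int)  # toy policy: treat top 30%

# Bin the benefit into quantile-based strata, then compare treatment
# rates across groups within each bin.
n_bins = 5
edges = np.quantile(delta_hat, np.linspace(0, 1, n_bins + 1))
bins = np.digitize(delta_hat, edges[1:-1])  # bin index in 0..n_bins-1

gap_by_bin = []
for b in range(n_bins):
    in_bin = bins == b
    rate_x0 = d[in_bin & (x == 0)].mean()
    rate_x1 = d[in_bin & (x == 1)].mean()
    gap_by_bin.append(abs(rate_x0 - rate_x1))
# Benefit fairness asks that the gap vanish in every bin; here the toy policy
# depends on x only through delta_hat, so the empirical gaps are small.
```

With more data, more bins can be used, and in the infinite-sample limit this recovers conditioning on exact $\Delta = \delta$ values, as described above.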
Please let us know if there are any further questions! | Summary: This paper studies the problem where a decision maker must allocate a treatment $D$ to optimize an outcome variable $Y$ while ensuring that the decision is fair, formulating fairness as the protected attribute $X$ not having an influence on $D$. It uses a clinical decision-making process (a running example on Cancer Surgery, Fig. 2(b)) to showcase its proposed method.
The paper proposes the notion of benefit based on the potential outcomes (PO) framework. $Y_{d_0}$ denotes the outcome of a patient that didn’t undergo surgery while $Y_{d_1}$ denotes the outcome of a patient that did undergo surgery. Under PO, only one of the two potential outcomes is observed per patient. Benefit is defined as $Y_{d_1} - Y_{d_0}$. Hence, a decision maker will derive a $D^*$ that maximizes overall benefit: i.e., those patients for which $Y_{d_0}=0$ (died without surgery) and $Y_{d_1}=1$ (survived with surgery).
Through the running example, the paper shows how a decision maker (under imperfect information) can be unfair via $D$ despite optimizing for the benefit. It then introduces a set of algorithms to reach fair benefit.
Strengths: [S1] The topic is relevant and novel within (causal) fairness. Under a heterogenous population of individuals, it is reasonable to expect that not all individuals will benefit from a treatment (captured by $D$ and its causal effect on $Y$ in the paper). The notion of benefit becomes relevant as (i) it formalizes which individuals need $D$ and which do not; (ii) it raises the potential issue of what happens when those that benefit under $D$ don’t embody societal representational fairness goals.
[S2] The use of the running example was very useful for understanding the paper’s goal.
[S3] Fig. 3 illustrates well the problem of allocating a treatment without knowing the type of patient w.r.t. to $D$.
[S4] Def. 2 (Benefit Fairness), putting aside the estimation issues, is a nice extension to other group-level fairness definitions (like Equal Opportunity). It states that individuals across the protected attribute $X$ that have the same (potential) benefit $\delta$ to gain from treatment should have the same probability of getting treated.
Weaknesses: [W1] PO limitations should be discussed further: Given the use of the PO framework, there’s a lack of treatment on how the benefit ($\Delta = Y_{d_1} - Y_{d_0}$) can be estimated in a consistent and unbiased way, at least, at the individual level. For each individual, we can only observe $Y_{d_0}$ or $Y_{d_1}$: for each individual, the one observed is the factual and the other one the counterfactual. However, the paper lacks a meaningful discussion on how we can estimate the counterfactuals in the non-oracle setting, which is the one of practical interest. Even in the running example, isn’t there a risk that the clinicians are using a biased $\Delta$? If so, how would that affect the proposed algorithms? The limitations should be discussed explicitly.
In particular, the derivation and justification of Eq. (16), which is central to the algorithm(s) is not clear. Is it a statement on the same tuple $(x, z, w)$: if so, how can the same individual be benefited and harmed at the same time by the treatment? Or is it a “match” between two individuals? Also, what is $z$ in the running example (this random variable is only presented and used in Fig. 1)?
Counterfactual generation/estimation is not exploited in the example. For instance, if we have access to the SCM model, couldn’t we generate the counterfactual distribution via Pearl’s abduction, action, and prediction steps for $Y$? We could then identify the “doomed” and “safe”. This is important as Algorithm 1 can only operate meaningfully if it can identify the borders of the types of individuals (w.r.t. the notion of benefit). Line 1 in Algorithm 1 needs to be discussed further.
[W2] Stronger Section 2: This section could be tighter. For instance, what’s the role of the budget $b$ from Def. 1? Even in the oracle setting, it is possible that we can’t treat all 100 patients if the budget is low enough, no? Conversely, if the budget is high enough and $\Delta \geq 0$, we could treat all patients, no?
Further, as a follow up to W1, if there is the pair \{$Y_{d_0}$, $Y_{d_1}$\} for each individual then there is also a $\Delta$ parameter. Now, under the oracle, we can see through the future and allocate $D$ based on $\Delta$. In the non-oracle setting, there’s a brief discussion about using $W$ as a proxy that leads to unfairness: how does that translate into estimating $\Delta$? Otherwise, if we can estimate it, then why are we even considering the proxy $W$? What I’m hinting at here – do correct me if I’m wrong – is that Def. 2 needs to have some measure of uncertainty to highlight that the non-oracle allocation will have some error w.r.t. to the oracle allocation (for a fixed $b$). Otherwise, it seems like we are still in the oracle setting.
[W3] Limited evaluation: The running example is essentially the only use case. Although it shows the proposed methodology, it would’ve helped to, e.g., test the algorithms under different parameters for the same synthetic data. Similarly, could the algorithms handle other variables on top $W$ or under a second protected attribute like race? The evaluation, even for the single use case, could’ve been pushed further to show the robustness of the approach.
[W4] Some definitions are unclear or not fully explained. For instance,
In Def. 1, the role of the budget $b$ is not explained. It would’ve also helped to formulate it under the PO framework explicitly.
In Def. 2, please define $\Delta$ and $\delta$ within the definition itself. These are presented later.
In Def. 3, is it at the individual level? Under the SCM presented (Fig. 2, but also Fig. 2), how are the conditionals the same under the interventions captured by the causal pathways. Don’t we need to update for downstream effects under $\mathcal{C}$?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See my questions in the Weaknesses section. I’m willing to increase my score if these questions are addressed.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Further discussion on $\Delta$ is missing. It’s a considerable limitation for the application of this definition and it’s not evaluated here theoretically.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed questions in the weaknesses section, which we believe we have addressed in a factual and thorough way below. Please let us know if there are any further issues!
---
[W1a: PO Limitations] Thanks for pointing this out. See global comment [G1] for a detailed response.
[W1b: Biased $\Delta$] This is indeed possible and is likely to be the case in practice. In some sense, using $\Delta$ is “optimal”, while clinicians probably use a less-than-perfect version of $\Delta$. We’ve included a specific remark to acknowledge this. It’s worth noting that our goal is to understand the conditions under which decisions should be made, rather than describing how stakeholders currently make decisions – which, as you pointed out, may be suboptimal. We hope our work can serve as a normative guide to enhance decision-making in practice.
[W1c: Eq. (16) justification] Thanks for the opportunity to clarify this issue, which is central to our contributions. The key distinction here is between ‘covariate-specific’ and ‘unit-level’. By ‘covariate-specific’, we mean all units $u$ of the SCM that are compatible with the observed values $(x, z, w)$. The value of $\Delta$ is computed for a group of units with fixed covariates. However, for each of these units, the outcome can only be 0 or 1, as you rightly pointed out. $\Delta$ represents an average over these outcomes. Unfortunately, both ‘covariate-specific’ and ‘unit-level’ approaches are called “individual” in the literature, leading to much confusion. To address this, we have added the following for clarity:
“The values of $c, d$ are covariate-specific, and indicate the proportions of patients helped and harmed by the treatment, respectively, among all patients coinciding with the observed event $X=x, Z=z, W=w$ (i.e., all $u \in U$ s.t. $(X, Z, W)(u) = (x,z,w)$).”
[W1d: What is Z?] Generally, the $Z$ can be an arbitrary set of observed confounders, whereas in our running example $Z = \emptyset$. Thanks for noting this. We have added this explicitly in the description of Figure 2.
[W1e: $\Delta$ estimation] Agreed! Please see the suggested improvements (i)-(iii) we mention in global answer [G1]. We remark, however, that identifying the doomed and safe groups would not be possible in the absence of the SCM; nonetheless, identifying the difference of the proportions $c(x,z,w) - d(x,z,w)$ is possible from observational data.
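To illustrate the last point, a small simulation sketch (the potential outcomes below are hypothetical and, in practice, never jointly observed for one unit) showing that while $c$ and $d$ are not individually identifiable without further assumptions, their difference coincides with the covariate-specific CATE:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000  # units u compatible with a fixed event X=x, Z=z, W=w

# Hypothetical joint potential outcomes (unobservable jointly in practice).
y0 = rng.binomial(1, 0.4, size=n)               # Y_{d_0}(u)
y1 = np.where(rng.random(n) < 0.3, 1 - y0, y0)  # treatment flips ~30% of units

c = np.mean((y0 == 0) & (y1 == 1))  # proportion helped by treatment
d = np.mean((y0 == 1) & (y1 == 0))  # proportion harmed by treatment
cate = y1.mean() - y0.mean()        # identifiable from observational data

# c and d each require the joint distribution of (Y_{d_0}, Y_{d_1}),
# but their difference always equals the covariate-specific CATE.
assert abs((c - d) - cate) < 1e-12
```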
---
[W2: Section 2] Thanks – both assertions are indeed correct. Firstly, we make the following addition to the text, just below Definition 1:
“The budget b is relevant for scenarios when resources are scarce, in which not all patients possibly requiring treatment can be given treatment. In such a setting, the goal is to treat patients who are most likely to benefit from the treatment, as formalized next in the text.”
Further, note that if b is large enough, only the $\Delta > 0$ condition needs to be checked. This is reflected in Line 2 of Algorithm 1, and also Line 2 of Algorithm 3 (and corresponds exactly with the intuition you described). So, the importance of having a budget comes for cases when not all patients who benefit from treatment can be treated. In this case, the reasoning becomes more involved, and we need to estimate $\Delta(x,z,w)$ for all $x,z,w$ values, and treat patients with the largest benefit $\Delta$, as described in the Algorithms 1 and 3.
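The budgeted rule just described can be sketched in a few lines — treat the units with the largest positive benefit until the budget is exhausted. This is an illustrative simplification, not the authors' exact pseudocode from Algorithms 1 and 3:

```python
import numpy as np

def allocate(delta, budget):
    """Treat units with the largest positive benefit, up to `budget` treatments."""
    d = np.zeros(len(delta), dtype=int)
    order = np.argsort(-delta)                   # sort units by decreasing benefit
    eligible = order[delta[order] > 0][:budget]  # keep only units with Delta > 0
    d[eligible] = 1
    return d

delta = np.array([0.4, -0.1, 0.7, 0.2, 0.05])
d = allocate(delta, budget=2)
# Treats the two units with the largest positive benefit (indices 2 and 0).
```

When the budget exceeds the number of units with positive benefit, the rule reduces to the simple check $\Delta > 0$, matching the intuition in the paragraph above.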
---
[W3a: Limited Eval] Please see global response [G2].
[W3b: Other Ws and second attribute] We appreciate the question, thank you. Once again, we can provide some positive answers. Firstly, the set of variables $W$ can be multidimensional (see Appendix E for a specific real-world example). Furthermore, the example in Appendix E also shows how we can have a set of confounders $Z$. Therefore, the framework is compatible with datasets of a larger dimension.
Regarding the extension of the protected attribute – this is indeed possible. The simplest way of seeing this is to define the protected groups as all elements of the Cartesian product of the domains of the two attributes (e.g., multiple (sex, race) combinations). Then, for example, Eq. (15) could be an equality relating more than two different groups, but would in principle remain the same.
---
[W4a: Def. Explanations] We hope the discussion on the budget provided above helped to clarify this issue. Still, let us know if you think this is sufficient. Definitions of $\Delta$ and $\delta$ are now put in Definition 2.
[W4b: Individual Level Def. 3?] Thanks for asking; Yes, the first part of the definition is covariate-specific (Eq. (25)), and the second part is specific to a fixed $\delta$ level (Eq. (26)), but across covariate tuples. We now updated the text to make this transparent:
“CBF requires that treatment benefit $\Delta$ should not depend on the effect of $X$ on $Y$ along the causal pathway $\mathcal{C}$, and this condition is covariate-specific, i.e., holds for any choice of covariates $x, z, w$. Additionally, the decision policy $D$ should satisfy BF, meaning that at each degree of benefit $\Delta = \delta$, the protected attribute plays no role in deciding whether the individual is treated or not. This statement is for a fixed value of $\Delta = \delta$, and possibly considers individuals with different values of $x, z, w$.“
[W4c: Same conditionals?] The question pertains to Figures 4(a,b), right? (there may be a typo in the question). But you are correct – in Figure 4(a), we plot the densities of $\Delta \mid x_0$, $\Delta \mid x_1$ which correspond to Males and Females, respectively. Then, after adjusting for the indirect effect, the two densities become the same, as noted in your comment. We have now updated the Figure 4(a) label to have labels $\Delta \mid x_0$, $\Delta \mid x_1$ instead of Male, Female, to make things clearer. Furthermore, we increase the label size for Figure 4(b), so it is more clearly legible that the indirect effect is removed, i.e., the subscript is $W_{x_0}, x_1$.
---
Rebuttal Comment 1.1:
Comment: Dear authors, thank you for the very detailed answers. You really did go over all of my concerns. Under these proposed changes/clarifications, on top of the general comments (G1 and G2), I see a stronger paper. I will update my score accordingly.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for the constructive review process, in which we were able to improve our paper (specifically regarding identifiability results for counterfactuals). We also thank the reviewer for acknowledging our response, and for adjusting the grade as well. | Summary: The paper focuses on outcome control from a causal perspective of the decision-maker. The authors introduce benefit fairness taking the perspective of the decision maker and provide theoretical guarantee that the algorithmic result is optimal and satisfies benefit fairness. To support the decision maker in indicating potential discrimination early on, algorithm 2 evaluates the difference in benefit to demographic groups. And the paper defines causal benefit fairness with an accompanying algorithm and a guarantee of optimality.
Strengths: S1 - In fairness research, the need for more causal perspectives has been a growing discussion. This research is significant in contributing to an underdeveloped area.
S2 - The research problem is well developed and simple guarantees provided with proofs in the appendix.
Weaknesses: Exposition clarity needs minor improvement. For instance, what is the relationship between the first and second paragraph of the introduction? What do the authors see as the connection between outcome control and historical biases?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The authors do not actually provide a clear definition of “outcome control” as used in this paper. Based on the content, I assumed they are using the high level definition put forth by [Procedural Justice in Algorithmic Fairness, CSCW 2019], which defines outcome control as enabling the correctability and possible recourse against an individual decision. But this paper is not cited in the present paper. The authors reference [Causal Fairness Analysis, 2022], which is an ICML tutorial where section 5.3.3 contains exact wording and sentences from the present paper lines 35-38. The tutorial includes more insight clarifying that outcome control setting requires that the institution and the individual have the same utility function. It would be helpful to clarify this in the main text of the present paper. What is the definition of “outcome control” being used in this paper?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: It is not clear how generalizable this framework is given the demonstration on clinical examples.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort in reviewing the paper. We are encouraged by the fact that the reviewer appreciated the main strengths of the paper. We provide further clarifications to the questions raised in the sequel.
[W1: Historical biases] We appreciate this insightful and fundamental question, thank you. Owing to historical biases, certain demographic groups differ in their distribution of covariates (either the confounder Z or the mediators W). The overall benefit from treatment may then _be lower in the protected group_, due to a difference in the covariates that originates historically. Examples of this are numerous (we mention in passing the example of kidney malfunction, more common in some minority groups, which in turn reduces the possible benefit of heart surgery – this example falls under outcome control). We have now added this remark to the Intro, to make the transition smoother. We thank the reviewer for this suggestion!
[Q1: Outcome Control definition] Thank you for pointing this out. We have now made clear in the Introduction that outcome control in the context of the paper refers to the specific setting described by the causal diagram in Figure 1, in which the standard fairness model is extended to include a treatment $D$ that precedes the outcome $Y$. More formally, in outcome control, the goal is to maximize a specific outcome using a known control (e.g., maximizing survival using surgery). Furthermore, it is true that this setting is often characterized by the fact that individuals and institutions have the same utility function. This is now clarified in the text. Furthermore, we also add the following explanation and citation relating to the work that you mentioned:
“Therefore, in the setting of outcome control, an institution may attempt to maximize an outcome (such as survival or well-being) using a known control (such as surgery or a welfare intervention). This is different from the concept of outcome control in the procedural justice literature [1], which refers to settings in which individuals have some influence over assigning their own outcomes.”
[1] Lee, Min Kyung, et al. "Procedural justice in algorithmic fairness: Leveraging transparency and outcome control for fair algorithmic mediation." Proceedings of the ACM on Human-Computer Interaction 3.CSCW (2019): 1-26.
[L1] Here, we remark that the examples of outcome control are numerous, and go beyond just the clinical setting. For instance, consider a setting of outcome control in the context of criminal justice. A judge may wish to minimize the number of individuals who recidivate, that is, minimize the outcome $Y$. The judge can make the decision $D$ to detain the individual. This example, therefore, falls under outcome control, since the judge would attempt to use the decision $D$ to minimize the outcome $Y$. The COMPAS dataset is collected from a relatively similar setting to this one. In a further example, (Coston et al., 2020) analyzed a specific setting of a child hotline. There, the intervention is to send a social welfare team (decision $D$), while the intent is to maximize welfare and minimize harm (the outcome $Y$). The main considerations of the approach, for these settings, would remain largely the same as for the clinical examples in the paper. That is, the judge would attempt to detain those individuals for whom the detention decision reduces the probability of re-offending the most; a hotline responder would respond to the calls that are most urgent, and for which there is the greatest reduction in harm; both of these are analogous to treating patients who benefit the most from treatment in a clinical setting. We have reflected this discussion in the manuscript; please let us know if this answer is satisfactory or whether further elaboration would be desirable. Thank you.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. | Rebuttal 1:
Rebuttal: The authors would like to sincerely thank all the reviewers for their reviews of this paper. The main strengths and novelty were clearly appreciated, and the questions raised were quite useful for us in revising and improving the paper.
In our response, we index all weaknesses with W, questions with Q, and limitations with L. We do not cite reviewers' questions in full (due to the character limit), but try our best to provide a caption for each W/Q/L.
Here, we would like to provide two global responses, which are then cited in the individual responses as well.
---
[G1: Limited Evaluation] Unfortunately, during the submission process, the external links of the document stopped working. In particular, the submission included the following links (the anonymized link has now been shared with the Area Chair):
- (i) In a vignette that accompanies the paper, we performed inference for the running example on finite samples, but this has not been highlighted due to the missing link. Please have a look at the Supplementary Material, source-code/vignette.html. Therein, we perform the inferences described in the paper based on the data itself (and not the SCM).
- (ii) Furthermore, in Appendix E, we performed a real-world experiment based on the MIMIC-IV dataset. This application, as the citations in the Appendix also show, comes from the literature in intensive care medicine, and is a well-known issue (sex-specific bias in allocating respirators). We hope the reviewers will find this real application compelling.
In the current version of the paper, we decided to prioritize theoretical underpinnings over practical demonstrations. Of course, this is a matter of taste. We thought this type of exposition would keep the concepts as clear as possible, while we deferred the empirical studies to the supplementary materials. However, we now suggest moving the real-world experimental results from Supplement E to the main text. We are happy to hear further suggestions from the reviewers on how to improve the presentation.
---
[G2, Identification of Counterfactuals] The paper leaves the issue of identification of $\Delta$ implicit but we agree that making it explicit could add better transparency and improve the flow. We changed the manuscript to add the following points more explicitly.
- (i) We added a Proposition that states the identification property of $\Delta$. In particular, the statement is the following: “Under the assumptions of the extended Standard Fairness Model (Fig. 1), the benefit $\Delta(x,z,w)$ is identifiable from observational data.”
- (ii) We further added an Appendix with the proof of identification using the counterfactual axioms (Ch. 7, Pearl, 2000). In the Appendix, we also provide the identification expression for $\Delta(x,z, w)$.
- (iii) Below the proposition, we pointed to more general identification strategies. Interestingly, these are also applicable to cases with both observational and experimental data.
“More generally, the approach of [1] can be used for testing identifiability of $\Delta(x,z,w)$, and this approach also handles cases when both experimental and observational data are available.”
- (iv) Furthermore, we also added a comment on the fact that the direct, indirect, and spurious effects of $X$ on $\Delta$ are identifiable in this setting. A proof of this claim, together with identification expressions, is now added to the appendix.
- (v) We also remark that the $\Delta(x, z, w)$ itself may be easily adjusted to remove the direct effect of $X$. However, indirect effect counterfactuals at a covariate-specific level $x, z, w$ are more difficult to compute and may require further assumptions such as additivity or monotonicity of the response.
Thus, we hope this clearly demonstrates that our approach is applicable to real data; that is, our approach does not assume the knowledge of the SCM itself. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Logarithmic-Regret Quantum Learning Algorithms for Zero-Sum Games | Accept (poster) | Summary: The paper presents an improved quantum online learning algorithm for approximating the Nash equilibrium of a zero-sum game. Notably, this is the first algorithm to achieve $\tilde{O}(1)$ regret with quantum speedup. The presented algorithm can then be applied to linear programming problems using the primal-dual approach. Additional contribution of the paper is a sampling method for Gibbs distribution for obtaining multiple samples with quantum speedups on both the number of samples and the number of possible outcomes.
Strengths: - Multiple novel contributions on quantum algorithms: speedups for multi-Gibbs sampling and logarithmic regret Nash equilibrium approximation
- The paper is well-written and easy to follow
Weaknesses: While the presented theoretical results are interesting, I consider the necessity of QRAM as a minor weakness since (to my knowledge) it is still a highly theoretical model whose physical realizability is unclear.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: The presented algorithm requires QRAM to work. How does this compare to the quantum algorithms of regret $\tilde{O}(\sqrt{T})$, i.e., do they too require QRAM?
Typos:
- Line 285 should probably say "Line 7"
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: I do not consider the work to have any serious limitations other than the ones that follow from the limited practicality of quantum computing (for now).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer taking the time to thoroughly review our paper and provide helpful feedback.
Regarding the use of QRAM, we would like to clarify that prior works (Refs. [50] and [7]) also have exactly the same requirement of QRAM (though under slightly different names). Specifically, Ref. [50] utilizes a "tree-like data structure in QCRAM" as described on Page 7, Line 4. Furthermore, Theorem 9 of Ref. [50] emphasizes the use of "a quantum computer with QCRAM." Ref. [7] also states they employ the same classical variable data structure as Ref. [50] per Lemma 1. In summary, QRAM was commonly assumed in these prior works.
Thank you for catching the typo on Line 285. We will correct this in the revised version.
---
Rebuttal Comment 1.1:
Title: Reply to Authors
Comment: Thank you for addressing my concerns in a satisfying way. While I still consider the necessity of the QRAM as a minor weakness, it then seems like a necessary evil that is hard to get rid of in this case (as is the case with many quantum algorithms).
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thanks for your comments!
We agree that QRAM seems inevitable in designing efficient quantum algorithms for zero-sum games and linear programming solvers. | Summary: The paper studies the online learning quantum algorithms for zero-sum games. It proposes the online quantum algorithm for zero-sum games with near-optimal regret, which makes progress to online learning algorithms in the quantum setting. The algorithm quantizes classical algorithms based on the MMW method and incorporates a fast quantum multi-sampling procedure for the Gibbs sampling problem. By developing a sample-based stochastic version of the optimistic multiplicative weight update method, this method achieves a quadratic improvement over classical algorithms and presents a fast quantum linear programming solver as an application.
Strengths: 1. The paper addresses a timely and relevant topic in quantum learning theory, proposing the first online quantum algorithm for zero-sum games with near-optimal regret.
2. The algorithm achieves a quadratic improvement over classical algorithms for computing the approximate Nash equilibrium.
3. The paper presents a fast quantum linear programming solver as an application.
Weaknesses: 1. The main algorithm is derived from classical methods, and the explanation of the features/contributions of quantum computing in achieving the speedup or advantage could be further elaborated.
2. As a theory paper, it could benefit from a clear and well-motivated example demonstrating the application of the proposed algorithm. Including such an example would help readers better understand the potential practical implications and usefulness of the work.
3. Could the authors discuss whether this online quantum algorithm could be dequantized?
4. In the abstract, the authors claim "a fast quantum linear programming solver" but have not identified the setting under which this holds.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Refer to the part about weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: NA.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed comments.
Regarding the contributions of quantum computing to the speedup, we would like to explain as follows: to obtain the speedups over prior work, we proposed the Sample-based Optimistic Multiplicative Weight Update (SOMWU for short) as the framework for solving zero-sum games. By integrating SOMWU with quantum techniques, we achieve near-optimal regret, surpassing prior quantum approaches at a comparable cost. However, efficiently implementing SOMWU requires a tailored quantum sampling algorithm. Existing methods are insufficient, motivating our novel multi-Gibbs sampler design. We will highlight these ideas in a future version of our paper.
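To make the optimistic-update skeleton concrete, below is a minimal classical sketch (no sampling and no quantum subroutines — the full-information special case that SOMWU generalizes) of optimistic multiplicative weights on a small zero-sum game. The payoff matrix, step size, and iteration count are illustrative assumptions, not values from the paper.

```python
import math

# Matching pennies: row player x minimizes x^T A y, column player y maximizes.
A = [[0.0, 1.0],
     [1.0, 0.0]]  # game value is 0.5 at the uniform equilibrium

def normalize(v):
    s = sum(v)
    return [vi / s for vi in v]

def omwu(A, T=2000, eta=0.1):
    m, n = len(A), len(A[0])
    x, y = [0.9, 0.1], [1.0 / n] * n  # start x away from equilibrium
    gx_prev, gy_prev = [0.0] * m, [0.0] * n
    avg_x, avg_y = [0.0] * m, [0.0] * n
    for _ in range(T):
        gx = [sum(A[i][j] * y[j] for j in range(n)) for i in range(m)]  # x's losses
        gy = [sum(A[i][j] * x[i] for i in range(m)) for j in range(n)]  # y's gains
        # Optimistic step: use 2 * current - previous gradient as a prediction
        # of the next round's gradient.
        x = normalize([x[i] * math.exp(-eta * (2 * gx[i] - gx_prev[i]))
                       for i in range(m)])
        y = normalize([y[j] * math.exp(eta * (2 * gy[j] - gy_prev[j]))
                       for j in range(n)])
        gx_prev, gy_prev = gx, gy
        avg_x = [a + xi / T for a, xi in zip(avg_x, x)]
        avg_y = [a + yj / T for a, yj in zip(avg_y, y)]
    return avg_x, avg_y

x_bar, y_bar = omwu(A)
# Nash gap of the averaged iterates: max_y x_bar^T A y - min_x x^T A y_bar.
xA = [sum(x_bar[i] * A[i][j] for i in range(len(A))) for j in range(len(A[0]))]
Ay = [sum(A[i][j] * y_bar[j] for j in range(len(A[0]))) for i in range(len(A))]
gap = max(xA) - min(Ay)
```

The point of SOMWU is that the gradients `gx`, `gy` are not computed exactly but estimated from samples, which is where the quantum multi-Gibbs sampler enters.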
To demonstrate an application of our zero-sum game algorithm, we discuss its use for solving linear programming problems, as noted in the Introduction (Section 1, Page 3). Specifically, we show how our approach can be leveraged as a quantum linear program solver in Section 5 (Page 9), with further details provided in Appendix E (Page 30). We will expand this example in the revised paper to further illustrate the practical utility.
Regarding dequantization, quantum-inspired algorithms usually assume certain conditions of the matrices, such as the low-rank condition (e.g., Ewin Tang. “A quantum-inspired classical algorithm for recommendation systems” and Nai-Hui Chia et al. “Sampling-based sublinear low-rank matrix arithmetic framework for dequantizing quantum machine learning”). It is unclear if our algorithm can be efficiently dequantized, as our problem setting does not assume that the input matrix is sparse or low-rank. Therefore, while dequantization may be possible in certain constrained cases, it is unlikely a classical version could match our performance for general inputs. Further investigation of the dequantization question remains an interesting area for future work.
As for the setting of our quantum linear programming solver, the solver requires quantum oracles to the input matrix and vectors of the linear program, and it outputs a classical value as the approximate optimal value of the program. This setting is standard in quantum algorithms for linear programming. We will expand Section 5 to include the detailed setup of the quantum linear programming solver.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply that helps me better understand. I have no further questions. | Summary: This paper considers the problem of developing quantum, low-regret, algorithms for solving zero-sum games. This paper provides a quantum algorithm which matches the state-of-the-art for solving zero-sum games while improving upon the regret of the associated algorithm. The paper achieves this result by providing a stochastic variant of optimistic mirror descent which they then implement by a quantum algorithm for developing multiple samples from a suitable Gibbs distribution. Additionally, the paper discusses implications and lower bounds for linear programming more broadly.
Strengths: Zero-sum games are an important optimization problem in many disciplines. The mere fact that the paper shows how to depart from previous quantum algorithms for solving zero-sum games (in using optimistic mirror descent versus mirror descent) while retaining state-of-the-art bounds is interesting and potentially useful for further study of the problem. Additionally, by following this alternative approach the paper obtains improved regret and a type of parallelism in solving the problem. This is an interesting advancement for the study of the problem. Moreover, this alternative optimization method gives rise to a different Gibbs-distribution sampling problem than the one considered in prior work, and the paper gives an interesting algorithm for solving it, which could motivate further study. Finally, the additional discussion of linear programming provided in the paper (though seemingly straightforward) is a nice addition as well.
Weaknesses: It is not clear how substantial this paper's departure from prior work is, how novel the advancements are, and how complete the comparison to prior work is. While I think the advancements of the paper are interesting and potentially exciting (as discussed in the strengths section), I fear that the paper does not articulate or situate them well. For example:
* A comparison of the techniques in the Gibbs sampling algorithm of this paper and those in prior quantum zero-sum-game algorithms does not seem to be provided. This seems particularly important, as these techniques were key to recent advances in quantum algorithms for solving zero-sum games.
* It is not clear what exactly the benefits of low regret are over prior work, and why previous results couldn't be used to obtain low regret (e.g. by only considering a subsequence of the iterates taken). I think the low-regret aspect is interesting, this algorithm is potentially more parallel than prior algorithms with state-of-the-art complexity, and the sampling problems considered may be simpler than in prior work; however, I am not sure that the paper argues this clearly.
* It is not clear what obstacles were present to obtaining the result. It would help if the paper clarified whether moving from mirror-descent to optimistic mirror-descent was the main insight and the rest was straightforward (though technical and important) or if further insights were needed.
Some additional comments along this line are given “Questions.”
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: My main questions / comments are the items in the previous “Weaknesses” section. Beyond what was written there: How does the Gibbs sampling compare to prior work? Is it simpler in any way? The same? Is a completely different technique used? Is there a major insight of the paper beyond the use of stochastic optimistic mirror descent?
Beyond this, below are some more detailed suggestions, questions, and comments:
* Line 1: “for zero-sum games” --> “for solving zero-sum games”
* Line 4: “yielding a quadratic improvement” as written, I am concerned this is suggesting that it is the first such improvement.
* Line 27: “For, this approximation task, online learning becomes significant” – I am not exactly sure what was meant by this.
* Line 33: It might be beneficial to define regret or explain the model more clearly earlier so the improvement over prior work is clearer. (Regret is well-known in general, but when stated for a static optimization problem, the definition is less clear.)
* Line 39: I believe zero-sum games were shown to be solvable in $\tilde{O}(1/\epsilon)$ iterations earlier, e.g. “Prox-method with rate of convergence O (1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems” by Nemirovski in 2004. I’m less sure about whether this works in regret context but in any event I believe should be mentioned when discussing the history of solving zero-sum games.
* Line 81: it might be good to clarify whether this is a new feature of this algorithm or not
* Line 130 – 135: it might be good to clarify why prior work could or could not be used for this problem
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations:
This is primarily a theory paper, and it is unclear how applicable this is. However, the weaknesses and questions discussed reflect limitations for which further discussion might be beneficial.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's feedback highlighting opportunities to better articulate the novelty and situate our techniques relative to prior work.
Specifically, we will include a detailed comparison of our Gibbs sampling approach to existing methods from quantum zero-sum game literature. Our quantum multi-Gibbs sampler is different from the samplers appearing in prior works (Ref. [50] and Ref. [7]).
The sampler in Ref. [50] is a special case of ours when the sample count $k = 1$; also, the learning algorithm in Ref. [50] only requires the case when $k = 1$.
In the following we compare the sampler from Ref. [7] with ours:
- Firstly, the two samplers are designed for different goals. We focus on the optimistic multiplicative weight update, while Ref. [7] adopts the non-optimistic version. As a result, we need a good procedure to obtain many samples at a time to reduce the variance, whereas they use a dynamic Gibbs sampler to reduce the amortized time complexity of obtaining a single sample from a distribution that changes at every step.
- Secondly, the implementation details are very different. In our quantum multi-Gibbs sampling procedure, we develop a proper version of consistent amplitude estimation to guarantee the correctness of the quantum k-maximum finding. In the dynamic Gibbs sampler, they maintain certain invariants and a hint vector to achieve a good amortized time complexity.
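For intuition, here is the classical version of the task the multi-Gibbs sampler accelerates (a hypothetical sketch for illustration, not the paper's algorithm or its quantum implementation): drawing $k$ i.i.d. samples from the Gibbs distribution $p_i \propto \exp(\beta u_i)$. Classically this costs roughly $O(n + k)$ after reading all $n$ entries of $u$; the quantum sampler's goal is to produce all $k$ samples with only about $\sqrt{nk}$ queries to the entries.

```python
import math
import random

def multi_gibbs_sample(u, k, beta=1.0, seed=0):
    # Draw k i.i.d. samples from p_i proportional to exp(beta * u[i]).
    # Subtracting the max exponent keeps exp() numerically stable.
    rng = random.Random(seed)
    m = max(u)
    weights = [math.exp(beta * (ui - m)) for ui in u]
    return rng.choices(range(len(u)), weights=weights, k=k)

samples = multi_gibbs_sample([0.0, 1.0, 2.0], k=1000)
counts = [samples.count(i) for i in range(3)]  # entry with the largest u dominates
```

The quantum speedup claimed in the paper concerns the query complexity of producing this output distribution, not the sampling logic itself.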
Regarding regret bounds, we will emphasize the concrete advantages of our approach over prior quantum algorithms. Prior works (Refs. [50], [7]) employed the Sample-based Multiplicative Weight Update (SMWU) framework (by Ref. [25]), while we proposed the Sample-based Optimistic Multiplicative Weight Update (SOMWU for short), which is a generalization of the Optimistic Multiplicative Weight Update (OMWU) proposed in Ref. [48]. The regret bounds of these quantum algorithms are inherited from their classical frameworks. However, whether MWU (or SMWU in Ref. [25]) can achieve $\tilde{O}(1)$ regret in the zero-sum game setting has been open for more than a decade (see, for example, Ref. [19, Future work in Page 3]). To the best of our knowledge, it is not known whether previous results (Refs. [50], [7]) can be used to directly obtain low regret in the game setting (this is not even known to be achievable in the classical setting).
The reviewer rightly notes that a core insight is moving from mirror descent to optimistic mirror descent. In addition, the following insights are crucial in designing our algorithm:
- Our first insight is the sample-based generalization of OMWU, which, to the best of our knowledge, is a new framework for solving zero-sum games.
- Our second insight is to point out that SOMWU, equipped with quantum computing, is able to achieve near-optimal regret, beating prior quantum approaches while matching their computational cost. To implement SOMWU efficiently, we need a specifically designed sampling algorithm; known quantum samplers do not apply here, as they are not efficient enough.
- Our third insight is the quantum multi-Gibbs sampler, which, we believe, can serve as a basic subroutine in future quantum machine learning algorithms.
Regarding the detailed questions for lines, we will address them as follows:
- We will revise the wording on lines 1, 4, 27, and 81 to improve clarity as suggested.
- We will define regret earlier and explain the model more explicitly to make our paper more understandable (Line 33).
- Thanks for pointing out the work of Nemirovski in 2004, which investigates how to find the saddle point in convex-concave minimax optimization problems. This work is highly related to our work, though it has a non-online problem setting. We will cite the paper and discuss the proximal method and the related history of solving minimax optimization problems connected to zero-sum games, to properly credit prior work (Line 39).
- We will explain why existing techniques could not achieve our efficiency gains, to motivate the need for new methods (Line 130-135), as noted in the second insight.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you for the reply. I believe adding the information in your rebuttal (and better defining / articulating what exactly the low regret setting / problem means for this paper) would all improve it.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thanks for your further comments.
We will definitely incorporate our discussion in the rebuttal into future versions of this paper.
Regarding defining/articulating what exactly the low regret setting/problem means for this paper, we want to explain more as follows:
- Lower regret means fewer interactions between players when they reach an approximate Nash equilibrium in the game setting and fewer rounds for the algorithm to find a good approximation of the optimal value for the linear programming solver. In previous quantum approaches (Refs. [50, 7]), they all have $\tilde O (\sqrt{T})$ regret; that is, the regret would be bounded by the square root of the number of rounds. By contrast, our logarithmic-regret algorithm has a near-constant bound (up to logarithmic factors) of regret. Thus, our algorithm has a good theoretical dependence on the number of rounds for finding the Nash equilibrium of zero-sum games and for finding an approximate optimal value for linear programming problems.
We will add these explanations in the revised version of our paper. | Summary: In this paper the authors consider the task of computing the eps-approximate Nash equilibrium of zero-sum games. In particular, for a matrix A in R^{m x n}, the goal is to find the minimum distributions x,y s.t., max_y x^t A y - min_x x^t A y is at most epsilon, given quantum query access to A. In this paper the authors are further concerned with the online learning/regret model, i.e., in each round the players need to choose probability distributions (u,v) and their goal is to pick these distributions (u,v) such that it is as close as possible to the optimal choice of eps-Nash equilibrium.
Prior known quantum algorithms for computing the Nash equilibrium ran in time sqrt{m+n}/eps^2.5 but achieved a regret of sqrt{T} for a T round algorithm. In this paper, the authors show how to obtain a regret of O(1) with the same complexity as sqrt{m+n}/eps^2.5.
Strengths: The main strength of this paper is in a way it solves an important question of computing Nash equilibrium with a O(1)-based regret algorithm. Using this, they algorithm obtain a state of the art quantum algorithm for linear programming solvers. I think both of these contributions are interesting on their own right!
Weaknesses: As for weaknesses, I feel there are a couple:
(i) The main techniques used in this paper, aren't very novel. Indeed, they do improve upon prior works and need new tools, but the novelty of the main contributions towards obtaining their speedups doesn't make it a strong contribution. For example, obtaining k copies of a Gibbs state in time sqrt{nk} (compared to the trivial k sqrt{n} algorithm) isn't too surprising (indeed new, but not surprising given Hamoudi's algorithm and also complexity of Grover's search with k marked items).
(ii) The speedup to obtain the regret bound comes from quantizing the optimistic online algorithm. I think there are some subtleties when quantizing this algorithm, but the main ideas involved aren't novel.
Having said the above, I think that overall, for the ML community, the main results and the interesting quantum subroutines above are nice.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I think the definitions of online learning and regret can be defined much better. In particular, the online learning/regret model for Nash equilibrium is very messily written and I'd have appreciated a better explanation.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer identifying potential areas for improvement. However, we believe the core techniques demonstrate non-trivial novelty, for the following two reasons.
First, our framework of Sample-based Optimistic Multiplicative Weight Update is considered for the first time in both classical learning theory and quantum algorithm research. The framework of our Sample-based Optimistic Learning algorithm contains a new and detailed analysis of the regret (see Appendix B, Page 15). The regret analysis differs from the classical counterpart that does not involve sampling; moreover, it also differs from all previous quantum approaches that do not consider the technique of Regret bound by Variance in Utilities (RVU, Definition A.2, Page 15).
Second, while the multi-Gibbs sampler adapts existing ideas of designing quantum samplers like Hamoudi's algorithm (Ref. [29]), creatively extending these methods to efficiently generate independent samples from the Gibbs distribution when only implicitly knowing the parameters represents an innovative application. We propose our novel multi-Gibbs sampler to satisfy the requirement of the sample-based optimistic multiplicative weight update framework, as existing quantum samplers do not. We agree the building blocks of quantum singular value transformation, amplitude estimation, etc. are known. However, we tailored these techniques in novel ways for our application.
In summary, we appreciate the reviewer highlighting opportunities to further demonstrate novelty. In the revision, we will expand the discussion part to better highlight our contribution.
As for the question raised by the reviewer regarding the definitions of online learning and regret for finding Nash equilibria, we agree that concisely explaining these concepts would make our paper more understandable. In short, online (machine) learning studies the situation when data arrives in a sequential order and each time the (meta-)algorithm is required to output a predictor for future data. Regret is a commonly-used performance measure in online learning that compares the losses of predictors produced by the algorithm and the optimal fixed predictor. As a concrete example, the well-known multiplicative weight update algorithm is an online learning algorithm with regret $\tilde{O} (\sqrt{T})$. We will carefully revise Section 2 to provide an easy-to-follow introduction to online learning and regret for finding Nash equilibria.
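The multiplicative weight update example mentioned above can be sketched concretely (a toy illustration on a synthetic experts problem with made-up losses, not the paper's algorithm; it shows that the regret stays far below T, on the order of sqrt(T log n)):

```python
# Toy sketch of the multiplicative weight update (MWU) algorithm on a
# synthetic experts problem, illustrating sublinear regret. All numbers
# here are made up for illustration only.
import math
import random

random.seed(0)
n, T = 4, 2000
eta = math.sqrt(8 * math.log(n) / T)  # standard tuned learning rate

weights = [1.0] * n
alg_loss = 0.0
cum_loss = [0.0] * n  # cumulative loss of each fixed expert

for t in range(T):
    total = sum(weights)
    probs = [w / total for w in weights]
    # Expert 0 is deliberately better (losses in [0, 0.2] vs [0, 1]).
    losses = [random.random() * (0.2 if i == 0 else 1.0) for i in range(n)]
    alg_loss += sum(p * l for p, l in zip(probs, losses))
    for i in range(n):
        cum_loss[i] += losses[i]
        weights[i] *= math.exp(-eta * losses[i])

# Regret vs. the best fixed expert in hindsight: grows like
# sqrt(T log n), not linearly in T.
regret = alg_loss - min(cum_loss)
print(regret)
```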
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their reply. For the time being, I'll keep my rating and keep in mind your comments for future discussions.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you for reading our response! Please do not hesitate to reach out if you have further questions. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Counterfactual Memorization in Neural Language Models | Accept (spotlight) | Summary: This paper defines counterfactual memorization as the difference between a model's expected performance on a sample when it is included in the training data and that when it is not. The performance is measured by per-token prediction accuracy. The paper studies common patterns across counterfactually memorized samples. It further defines counterfactual influence similarly to counterfactual memorization, except that the performance is measured on any sample, and studies the correlation between counterfactual memorization of a sample and its counterfactual influence maximized over some set of data.
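The difference-of-expectations estimator described above can be sketched in a few lines (a hypothetical minimal illustration with made-up accuracy numbers, not the paper's actual code; `accuracies` and `in_subset` are names introduced here for clarity):

```python
# Minimal sketch of the counterfactual memorization estimator:
#   mem(x) = E[perf(x) | x in training subset] - E[perf(x) | x not in subset],
# with the expectations approximated over many models trained on
# random data subsets.
def counterfactual_memorization(accuracies, in_subset):
    """accuracies[m]: model m's per-token accuracy on example x;
    in_subset[m]: whether x was in model m's training subset."""
    seen = [a for a, inc in zip(accuracies, in_subset) if inc]
    unseen = [a for a, inc in zip(accuracies, in_subset) if not inc]
    return sum(seen) / len(seen) - sum(unseen) / len(unseen)

# Example: 8 models; x was in the training subset of the first two,
# which predict it much better -> high counterfactual memorization.
acc = [0.95, 0.93, 0.40, 0.42, 0.38, 0.41, 0.39, 0.43]
inc = [True, True, False, False, False, False, False, False]
print(counterfactual_memorization(acc, inc))  # 0.94 - 0.405 = 0.535
```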
Strengths: - It is interesting to study whether the existence of one training document will affect model predictions.
- The mathematical definition in Eq. (1) - (4) are reasonable.
- Ablation studies including 3.3 and 3.4 are well-explained.
Weaknesses: Overall:
- The research problem can be more crisply defined. It is unclear what type of information the paper aims to study. Overall per-token top-1 accuracy is not sophisticated enough to be reflective of memorization of "sensitive information" (L2) or "details of specific events" (L111).
- There are technically mistaken statements. E.g. L32 says counterfactual memorization is a measure of an expected change. But it is indeed the difference between two expected values where the expectations are over different distributions.
- The quantitative definition of which information is considered "common" or "rare" is unclear (L48). While the paper uses 2^21 documents (L95), only 38000 are identified as being a near-duplicate of at least one other example (L199). Are two near-duplicated examples considered common, and one considered rare? The 38k seems to be a small portion, and it's unclear if the notion of "common" vs. "rare" is significant enough to study.
- It is unclear what the common patterns found across counterfactually memorized text are (L50).
- Please use "counterfactual memorization" instead of simply "memorization", whenever proper, to avoid confusion.
Analyzing Counterfactual Memorization:
- There are contradicting claims. E.g. L107 says high (counterfactual) memorization happens for unconventional texts and multilingual texts, while L126 says ill-formatted text or foreign languages are hard and L125 says hard examples have low counterfactual memorization scores.
- There are non-refutable claims. E.g. L123 says easy examples tend to have low scores. "Simple samples" are defined as those with high accuracy, and two high accuracy scores should apparently have a relatively small difference.
From Memorization to Influence:
- The authors claim that small memorization scores imply small max-influence scores and that larger influence scores require larger memorization scores (L253-254), but there doesn't seem to be a concentration of densities around the x=y line in Figure 4(b).
Some typos:
- L21: revious $\rightarrow$ Previous
- L38: considerded $\rightarrow$ considered
- L109: most often $\rightarrow$ are most often
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - L22-25: Are sensitive data, such as, phone numbers and usernames, "common" and frequently occurring?
- L30: How do you quantitatively define "common" and "rare"?
- L38-39: Why is the memorization of $x$ $\textit{counterfactual}$?
- L38-39: Could you clarify what "predicts $x$ accurately" means? It is implausible that given an initial token, an autoregressive language model will be able to generate an entire document $x$ perfectly via greedy decoding.
- L75: Does |D| refer to the number of documents?
- L93: But T5-base has ~220M parameters, instead of ~112M.
- There seems to be no control over the number of subsets S and the number of subsets S' being similar and both big. It is possible that a training sample appears in all 400 samples. Then the expected measure M is a bad approximation. What's a theoretical error bound?
Please also see Weaknesses. If the review misses important aspects, please feel free to address them.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors discussed limitations but not ethical concerns. Both are reasonable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank the reviewer for the helpful suggestions and detailed questions. It seems there is some confusion regarding the definition and analysis of counterfactual memorization. Due to the 6000-character limit, we focus on the main questions below. We hope the reviewer could increase the rating if the answers clarify those confusions. We are happy to answer any further questions during the discussion period if anything is still unclear.
---
**Q1**: per-token top-1 accuracy is not sophisticated enough
We used teacher-forcing when evaluating under per-token accuracy, so the evaluation is similar to perplexity in the sense that we are measuring the likelihood of predicting the next token accurately given the *true* context, instead of asking the model to construct the entire document accurately. We have evaluated the per-document perplexity to measure memorization, and the ranking of memorized examples is largely consistent with per-token accuracy. We ended up using the latter because it has a numerical range of [0, 1], which leads to scores that are more easily interpretable. We note that the intermediate-to-high memorized examples do end up containing details from news reports of specific events (Table 1 and Appendix L), and such memorized documents are found to have strong influence on the validation set documents describing related events (Table 2 and Appendix M).
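The teacher-forcing evaluation described above can be sketched as follows (a toy illustration with a made-up token sequence, not the paper's code: each position's top-1 prediction is conditioned on the true prefix, so the model never has to regenerate the whole document):

```python
# Minimal sketch of per-token top-1 accuracy under teacher forcing.
# The token lists are toy stand-ins for a real model's predictions.
def per_token_accuracy(predicted_tokens, true_tokens):
    """predicted_tokens[i]: the model's top-1 prediction for position i,
    given the *true* tokens at positions < i (teacher forcing)."""
    correct = sum(p == t for p, t in zip(predicted_tokens, true_tokens))
    return correct / len(true_tokens)

true_doc = ["the", "cat", "sat", "on", "the", "mat"]
preds = ["the", "cat", "ran", "on", "the", "mat"]  # one wrong token
print(per_token_accuracy(preds, true_doc))  # 5/6; always in [0, 1]
```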
**Q2**: “expected” change (L32) inaccurate
By “expected” change (L32) we meant “anticipated” change, instead of the formal “expectation” as in probability. We will rephrase it to avoid ambiguity.
**Q3**: quantitative definition of common / rare is unclear; is 38k near-duplicates worth studying?
The fundamental question is how to compare whether two documents are "identical" or "encode the same information". We can define common and rare by counting occurrences via approximate textual matching, as previous work has done. In this paper, we try to quantify it by whether or not the document can be easily predicted by a language model that has not seen it during training.
Section 4 is a study of how our metric relates to the number of near duplicates. Though there are only 38k examples that have near-duplicates, the number of duplicates goes up to hundreds for many examples. Moreover, the text-matching based near duplication detection does not capture all related documents. In Appendix D, we further studied the impact of data deduplication on memorization analysis, and found that related documents that are missed by text matching deduplication can still be identified by counterfactual memorization. Finally, we are more interested in rare examples than common ones due to the large impact on model predictions.
**Q4**: common patterns (L50)?
We found examples with the highest counterfactual memorization are generally unconventional text such as all-capital letters. After those artificial examples, we see news reports of specific events. Examples with low counterfactual memorization encode common information, many are templated documents.
**Q5**: contradiction on difficulty and counterfactual memorization
There is actually no contradiction here. The counterfactually memorized examples are mostly difficult, but the converse is not true. In fact, it is shown (L125, and Fig.1) that the **hardest** examples generally have low counterfactual memorization, because even when included in the training set, the models are having a hard time learning them. We will revise the text to make it more clear.
**Q6**: no concentration of densities around x=y in Figure 4(b).
Note the concentration won’t necessarily be on x=y, because the max-influence on a validation example does not necessarily equal the memorization (self-influence). Moreover, the main probability mass is on low memorization / low influence because the counterfactual memorization mostly captures examples in the tail. For those examples with large max-influence, we observe that memorization is also large. But the converse is not true, because the finite validation set might not contain a document that encodes matching information for each counterfactually memorized training document.
**Q7**: Are sensitive data, such as phone numbers and usernames, "common" and frequently occurring?
It depends on the context. The phone number of a customer service line would be considered "common", as it is likely to be found in many places, whereas personal phone numbers are generally not "common" unless leaked or intentionally posted online in many places.
**Q8**: Why “counterfactual” memorization?
Counterfactual here refers to the procedure to make such measurement: we estimate the model’s prediction change if a given training document is withheld from its training corpus.
**Q9**: what "predicts x accurately" means? It is implausible for LM to generate an entire document perfectly.
We measure the next-token prediction accuracy. Following the convention in language model *evaluation*, we use teacher forcing, meaning that the prediction of the next token is always conditioned on the *true* context. So the model only needs to predict each next token accurately given the true context, instead of generating an entire document perfectly via decoding (although language models are capable of doing that for *some* training examples, as shown in previous studies of text-matching based memorization).
**Q10**: T5-base has ~220M parameters
We use the decoder-only architecture for the auto-regressive language modeling task, which has around half of the parameters.
**Q11**: is it possible that a training sample appears in all 400 samples?
When generating the subsets, each example is included with probability 0.25. The probability that none of the 400 samples contain a given example is therefore 0.75^400 which is in the order of 10^-50. So it is possible but extremely unlikely. More generally, this is a binomial distribution, and so has very small tails.
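This binomial-tail argument is easy to check numerically (the values r = 0.25 and 400 subsets are taken from the setup described above):

```python
# Sanity check: probability that a given training example is excluded
# from all 400 random subsets, each including it independently with
# probability r = 0.25 (the zero-inclusions binomial tail).
import math

r, m = 0.25, 400
p_never_included = (1 - r) ** m
print(p_never_included)            # on the order of 1e-50
print(math.log10(p_never_included))
```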
---
Rebuttal Comment 1.1:
Comment: Thank you for the reply, which clarified many of my doubts. I look forward to the progress made during the rebuttal being added to the final paper. | Summary: This paper introduces the concept of "counterfactual memorization," defined as the anticipated change in a model's output when a specific training example is omitted. Experimental analysis was conducted on three widely employed text corpora in the domain of language modeling, and the phenomenon of memorization was identified in all instances. Furthermore, the authors assess the impact of each memorized training example on the validation set and generated texts. This novel approach offers direct evidence pinpointing the origin of memorization during testing. Despite these intriguing insights, the paper has notable limitations: the experiments performed lack robustness, and some of the assertions made lack clarity.
Strengths: 1.The paper is well-written and easy to follow.
2.It is of great significance to explore memorization in neural language models.
3.The proposed new definitions can serve as references for future works, and the experimental analysis is detailed and interesting.
Weaknesses: 1.The paper lacks a section on related work, which should summarize and analyze the current state and challenges of relevant research.
2.This paper only uses the Transformer-based language backbone equivalent to T5-base (encoder-decoder framework). Can the experimental results be applied to other architectures and models of different scales?
3.Some typos need to be fixed. For example, "revious" (line 21) should be "Previous"; the word at the beginning of a paragraph should be capitalized.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1.Does the result of the model depend on the size of the subset? Is the result sensitive to the value of r? How did the authors determine that r=0.25?
2.Some domains are represented much more frequently than others in the datasets we studied (Line 136). What are these domains, and what are their characteristics?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank the reviewer for the positive review and helpful comments. Please see below for detailed clarifications.
---
**Q1**: lacks a section on related work
The “related work” section is put in Appendix A due to space constraint. We will rearrange the contents in the updated manuscript to make it more prominent.
**Q2**: this paper only uses T5 (encoder-decoder framework) backbone, how about other architectures and models of different scales?
We actually used the decoder-only version of T5, which better matches the architecture used for training auto-regressive language models. We will emphasize this in the manuscript. We are very interested in exploring the scaling law of counterfactual memorization as we increase the model sizes exponentially. We leave it as future work as it is currently computationally very challenging to evaluate much larger models without non-trivial improvements to the estimation algorithms.
**Q3**: Does the result of the model depend on the size of the subset ? Is the result sensitive to the value of r? How did the author determine that r=0.25?
When r is too large (close to 1) or too small (close to 0), we would end up with a very unbalanced number of samples for the two terms of Eq (1), and therefore inaccurate results. We chose r=0.25 by considering this balance factor, computational efficiency, and the utility of models trained on the subset data. However, we did not "tune" this parameter very carefully because, apart from very extreme values, the results are consistent within a large range of r values.
**Q4**: Some domains are represented much more frequently than others in the datasets we studied (Line 136). What are these domains, and what are their characteristics?
In Fig 2(a)(b), we plot the number of documents on the y-axis for each domain. We annotated a few representative domain names. For example, on RealNews, reuters.com has orders of magnitudes more documents than hotair.com, therefore is represented more frequently. Especially on the c4 dataset, we observe that domains with more documents tend to concentrate on intermediate memorization values, while smaller domains have a wide spread along the memorization spectrum.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer PCjJ
Comment: Thank you for the clarification, and it has addressed many of my concerns. I would like to raise my score to 7. I eagerly anticipate seeing the updates from the rebuttal incorporated into the final paper. | Summary: This paper proposes a metric for whether a sequence is memorized based on the degree to which exposure during training increases the probability of producing the sample.
Strengths: The proposal here is strong, and remedies a real problem in the memorization literature: a failure to consider the *inherent* predictability of any given sequence (a property which this paper calls "simplicity"). This is a significant gap that I'm happy to see work on.
The results cover basic questions such as the impact of the number of epochs. They also explain the reasoning behind choosing the number of model runs they apply their method on, and this analysis is sound.
I also like that they measured not only the impact of a given sample on the model's tendency to emit itself but also its influence on the model's tendency to emit a different sample.
Weaknesses: As far as I can tell, it is hard to actually get a counterfactual metric for memorization of highly duplicated sequences because they are more likely to appear across different training subsamples. This metric inherently favors rare samples, because prior exposure to the duplicated samples will decrease the impact of each additional exposure.
This paper doesn't distinguish between "simplicity" due to duplication and simplicity due to inherently predictable sequences, like repetitions of a single character.
"the choice of data has a relatively minor effect on memorization" but the corpora used are all ones that avoid code and multilinguality, favoring natural English language text. It is not clear that this property would hold for significantly different corpora. The authors acknowledge this issue to some degree, pointing out that one of the largest differences between the datasets is likely due to the higher-quality texts in Wiki40B.
The authors only qualitatively check their conjecture that "if a validation set example x0 contains similar information, infl(x → x0) could be large", but it should be fairly easy to quantify whether the similarity between any pair of samples increases the influence.
Minor:
- fair number of typos, please proofread before publication
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I'm curious about counterfactual memorization with respect to the training timeline. This paper uses training subsampling as a way of controlling whether a particular sequence has occurred, but I think it is worth asking whether that sequence was counterfactually memorized when the model actually encountered it for the first time, which may be at different points of training. Given a model trained on a known ordering of its corpus, we could consider the impact of a sample in the context of training by comparing checkpoints before and after exposure. Have you considered this question, explored to some degree by Biderman et al. (https://arxiv.org/pdf/2304.11158.pdf)? I would love to see results if so.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 4 excellent
Limitations: The corpora studied are all fairly similar (focusing on natural language English text), so the results are limited to those settings. The authors don't acknowledge this as completely as I'd like, although they do mention that higher quality text has different behavior than a corpus with malformatted text or multilingual contamination.
The authors also acknowledge that counterfactual memorization is inherently going to assign less memorization value to sequences with near-duplicates throughout the corpus, and describe this as a decision about what type of memorization they consider.
The authors also acknowledge that the models they are using are much smaller than the large language models of interest today.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank the reviewer for the strong support of our paper and helpful suggestions!
---
**Q1**: This paper doesn't distinguish between "simplicity" due to duplication and simplicity due to inherently predictable sequences, like repetitions of a single character.
Thanks for the comments. We agree those are important questions, which we think can be formulated as two main questions: 1) how does the frequency of a phrase correlate with its simplicity; 2) how do the (distributions of) frequencies in internet-based training corpora correlate with those of "real" natural language. For 1), we agree that there are some simple patterns, such as "repeated same characters" or "short sequence lengths", that tend to make a sequence easier, but that is not always the case. We did some preliminary studies of how simple metrics such as token diversity correlate with memorization, but we did not see significant patterns. In the end, we believe frequency is probably a more general indicator of simplicity than intrinsic structure. For 2), we studied how deduplication impacts the measurement of memorization in Appendix D. We found that it is very hard to choose an appropriate threshold for the text-matching based deduplication method, and the counterfactual memorization metric actually helps to identify near-duplicate documents that lie exactly at the cutting threshold of edit distances. Furthermore, the ranking of high-memorization examples remains consistent after deduplication.
**Q2**: quantify whether the similarity between any pair of samples increases the influence.
One key difference between counterfactual influence and similarity is that the former tries to capture rare memorization. For example, a sequence with a common fact could have high similarity with a validation set example. But because it is a common fact, the validation set example receives influence from many training examples containing this fact, thus the influence from one particular training example could be very diluted and end up being small.
**Q3**: counterfactual memorization with respect to the training timeline?
Thanks for the pointer to Biderman et al. We will include a discussion of it in the updated manuscript. One main challenge we run into here is that neural network training has large intrinsic stochasticity. Even if we retrain the network on exactly the same training set, the predictions could end up being quite different. To increase the signal-to-noise ratio for a stable measurement of counterfactual memorization, we train hundreds of models to average out the stochasticity (we included a study of how many models are needed in Fig. 3a). For this reason, it is very challenging to control the noise in the measurement using the training trajectory of a single model. Maybe there is a way to perform a similar averaging procedure if we consider multiple model training trajectories with different randomly shuffled data orders. In the early stages of our experiments, for implementation efficiency, instead of sampling data uniformly at random, we tried partitioning the data into chunks and sampling random chunks instead. The measurement ended up capturing the artifacts of co-occurrence of documents in the same chunk. So we suspect that to successfully make such measurements along the training timeline, we would need very precise control over the data order in the data loading pipeline.
**Q4**: The corpora studied are all fairly similar (focusing on natural language English text), so the results are limited to those settings.
Thanks for pointing this out! We will revise the limitation accordingly.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: I’ve engaged with another reviewer elsewhere here, but I think you’ve addressed my own comments. | Summary: This work studies memorization in neural language models. The major scientific question in this work is how to filter out common memorization in language models. To this end, this work first formulates a notion of counterfactual memorization which
characterizes how a model’s predictions change if a particular document is omitted during training. Then, this work estimates counterfactual memorization on three standard text datasets, and confirms that rare memorized examples exist in all of them. In addition, this work also identifies an inverse correlation between number of duplicates and counterfactual memorization as compared with previous definitions of memorization. Finally, this work extends the definition of counterfactual memorization to counterfactual influence, and studies the impact of memorized examples on the test-time prediction of the validation set examples and generated examples.
Overall, this paper is well-written and flows smoothly. This work begins with a detailed definition of counterfactual memorization and then estimates and analyzes the counterfactual memorization of training examples in standard datasets. The experiments identify examples sampled at high, intermediate, and low memorization. The analysis covers the impact of the number of models and the number of training epochs. This work also shows the potential for inference attacks on large language models from a learning-theoretic perspective.
Strengths: 1. This is an interesting paper that analyzes the counterfactual memorization in neural language models. The paper is written in a clear and concise manner. The problem is well defined, and the analysis is technically sound with insightful findings.
2. This work extends the definition of counterfactual memorization to counterfactual influence, and studies the impact of memorized examples on the test-time prediction of the validation set examples and generated examples --- thus has the potential to provide new perspectives of studying the inference attack problem in large language models.
Weaknesses: 1. The efficiency of the proposed approach can be improved for practical use. Currently, identifying the memorization levels of the training examples, as well as the counterfactual influence, requires extensive computational overhead from training multiple models, which may become a major concern for its application to large language models and larger-scale training sets.
2. The experiments could also be broadened to leverage different backbone language models, providing more useful insights into how existing language models vary in the effects of counterfactual memorization. This may lead to intuition about how sensitive those language models would be to inference attacks and stimulate the development of defense approaches.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Line 21: "revious" -> "previous"
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have provided detailed and useful descriptions about the limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank the reviewer for the positive review and useful feedback!
---
**Q1**: computation cost may become a major concern for its application in large language models and with larger scale training sets
We acknowledge this limitation of our work. Our main focus for future work along this line is to reduce the computational cost of estimation.
---
**Q2**: The experiments could also be broadened to leverage different backbone language models, providing more insight into how existing language models vary in the effects of counterfactual memorization. This could build intuition about how sensitive those language models are to inference attacks and stimulate the development of defense approaches.
Thanks for the suggestion! We used decoder-only transformer-based models, which are commonly adopted for language models. It would indeed be quite interesting to try alternative architectures, though we believe the results would not differ much among the transformer-based variants currently used in practice. However, we are very interested in studying the “scaling law” behaviors as we increase model and data sizes; that depends on future work on more efficient estimation methods.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response! | Rebuttal 1:
Rebuttal: Thank all the reviewers for their useful comments and suggestions. We appreciate that the reviewers found our paper clearly written and that it provides a “strong proposal”, “sound analysis”, “useful metrics”, “novel perspective and tools”, and “insightful findings” to the study of language model memorization.
We will fix the typos and minor issues pointed out by the reviewers in the updated manuscript. The main questions from each reviewer are clarified in the individual replies below. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The study proposes two novel metrics, “counterfactual memorization” and “counterfactual influence”: the former measures how a model’s predictions change if a sample is omitted from training, and the latter measures how a sample in the training set influences the prediction on a validation sample. The authors also analyze the two metrics on 3 standard datasets and discover some interesting results, e.g., “different data sources display different memorization profiles”, the “number of models needed”, “memorization does quantitatively decrease when data is repeated more often”, and that the strong influencers ”also only receive tiny influence from all the rest of the training examples”.
Strengths: 1. Great motivation and important research question, how to measure a sample’s influence on the trained model’s prediction and the sample’s influence on a validation sample.
2. The counterfactual viewpoint is interesting and easy to understand. “counterfactual memorization” and “counterfactual influence” are useful metrics.
3. The analysis and some findings are insightful for inspiring future study.
4. The presentation is clear and easy to follow.
Weaknesses: 1. I am surprised that the authors do not refer to causal inference studies, since “counterfactual” is an important concept in statistical causality. Judea Pearl, Donald Rubin, and many other scholars did great work on counterfactuals and causal effects, such as the structural causal model and the potential outcome model [2]. From the perspective of causality, Equations 1 and 3 could be reformulated via causal effects and counterfactual inference, which are important research topics that have been studied well in the statistical causality community. So I suggest the authors study some classical causality works [3][4], which can help make the research better.
2. I am curious about the proposed two metrics on simple models, such as logistic regression and SVMs. Large language models (LLMs) are popular and important, but simple models are also useful and efficient for many applications. In the meantime, I think simple models, like logistic regression, make it much easier to measure the two metrics, since LLMs are complex and have more unobserved confounders. So my second suggestion is that the authors could try some simple models.
3. It is much clearer to summarize your analysis and findings in one table.
4. Typos: e.g. revious in line 21.
[1] Causal inference in statistics: An overview, J Pearl
[2] Causal Inference Using Potential Outcomes, DB Rubin,2005
[3] Causality: Models, Reasoning and Inference, J Pearl 2009
[4] Causal Inference for Statistics, Social, and Biomedical Sciences, G.W. Imbens, 2015
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The authors could study some classical causality works [3][4], which could help make the research better;
2. The authors could try some simple models and do similar analysis.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Besides the limitations the authors list in Section 7, the authors also mention that it is hard to evaluate the impact of a single training example due to the costly computation. I hope the authors could compare their method with the influence function in [5], which can identify the training samples most responsible for a given prediction, and discuss their advantages and disadvantages.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank the reviewer for the positive review and useful suggestions!
---
**Q1**: pointers to classical causality studies
Thanks for the pointer! We will revise the manuscript to add missing references to the classical causality studies and discuss the connections. Our formulation does not involve complex graphical models and intervention techniques from statistical causality. We are interested in how removal of a training document impacts the model’s prediction, and formulated it as a *direct measurement*. It would be interesting future work to see if any advanced techniques in causality studies could be used to improve our measurement.
---
**Q2**: counterfactual memorization analysis on simple models
Since we focused on language tasks in this paper, we did not try simpler models as they generally do not fit well to the sequence-prediction task. However, we agree that simpler models in general could be quite interesting, and there may be specialized algorithms that can estimate memorization efficiently for specific simple models. For example, for SVMs, if a training example is not a support vector, then removing it from the training set will have zero impact on the model’s decision boundary. Similarly, for k-nearest-neighbor classifiers, the removal of a training example will have very localized impact on the model’s prediction. For linear regression models, since the optimal weights have a closed form solution, we can also calculate the memorization more easily.
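For illustration only, here is a minimal sketch of the closed-form case mentioned above: for ordinary least squares, we can measure the leave-one-out effect of a single training example on a test prediction directly, since refitting is cheap. The data and function names are hypothetical, not code from the paper.

```python
import numpy as np

def ols_weights(X, y):
    # Closed-form least-squares fit: argmin_w ||Xw - y||^2
    return np.linalg.lstsq(X, y, rcond=None)[0]

def loo_prediction_shift(X, y, i, x_test):
    """Change in the prediction on x_test when training example i is removed."""
    w_full = ols_weights(X, y)
    mask = np.arange(len(y)) != i
    w_loo = ols_weights(X[mask], y[mask])
    return float(x_test @ (w_full - w_loo))

# Noiseless synthetic data: removing one point should barely move the fit,
# analogous to removing a non-support-vector example in an SVM.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5])
shift = loo_prediction_shift(X, y, i=0, x_test=np.ones(3))
```

On noisy data, an outlier example would produce a much larger shift, which is the intuition behind high counterfactual memorization scores.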
---
**Q3**: It is much clearer to summarize your analysis and findings in one table.
Thanks for the suggestion! We will update the manuscript and try to format the summary of findings into a table.
---
**Q4**: the advantage and disadvantage comparing to influence function [5]
It seems that the reviewer forgot to list the citation [5]. We assume the reviewer meant the following [5]. We note that the estimation in [5] relies on an approximation using the inverse of Hessian, which is computationally prohibitive to calculate for large models. As a result, both methods are computationally expensive, but the cost of [5] scales with the model size, and our method scales with the cost of training multiple models, which can be easily parallelized. Moreover, while the approximation used in [5] works well for simple linear models, it is shown to be very fragile in deep neural networks (e.g.[6]). We will include discussions on this in the updated manuscript.
- [5] Koh, P. W., & Liang, P. (2017, July). Understanding black-box predictions via influence functions. In ICML.
- [6] Basu, S., Pope, P., & Feizi, S. (2021). Influence functions in deep learning are fragile. ICLR.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarifications, especially for the analysis and discussion about the influence function. | Summary: This paper formulates a notion of counterfactual memorization to measure the "one-point" generalization performance of the model (though the authors do not present the definition this way). Equipped with this definition, the authors conduct plenty of experiments to explore the factors related to the counterfactual memorization of large language models.
Strengths: This paper is well-written, and the analysis of what kind of samples tend to be memorized is interesting. The experiments in this paper are systematic as many of the related components are considered.
Weaknesses: Despite the strengths mentioned above, I think this paper has the following weaknesses.
1) The definition of counterfactual memorization is nothing but the generalization performance of the model on one single point. In fact, taking the expectation over $x$, the counterfactual memorization becomes the expected generalization. Thus, in my opinion, studying it from the perspective of the distribution is exactly studying the generalization of LLMs, which obviously is not a new topic.
2) The definition of counterfactual memorization is quite similar to some classical notions in generalization theory, e.g., algorithmic stability and $\epsilon$-differential privacy.
3) Here is my major concern. The performance of the model is evaluated by the 0-1 prediction loss on the task of next-word prediction (NWP). However, I think this is not a reasonable metric to measure the performance of NWP. The metric is reasonable for the classification task, but for the NWP task, I do not think there exists a "ground-truth" label as in classification. For example, for the sequence "I like eating _", the words "apple" and "banana" are both valid predictions.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: As the authors clarified, "Both the neural language models and training sets used in this work are orders of magnitude smaller than modern standards such as GPT-3". Though conducting experiments on these LLM is impossible. My question is, does the samples that are proven to be easily memorized can not have diverse patterns during inference, even for larger models?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: As the authors clarified in their paper, they have no experiments on larger-scale pre-trained models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank the reviewer for the comments and suggestions! It seems there are some misunderstandings of how our metrics relate to existing notions of generalization and how the 0-1 prediction loss is chosen. We answer those questions below and hope the reviewer could raise the rating if the answers clarify the confusion. We are happy to answer any further questions during the discussion period.
---
**Q1**: relationship between counterfactual memorization and generalization.
We appreciate the reviewer for observing the connection with the generalization gap in learning theory. This connection actually provides another strong motivation to the definition of counterfactual memorization. As noted by the reviewer, the standard generalization gap measures the *expected* difference of model performance on unseen and seen examples. When this gap is large, the model is said to overfit, or memorize the training set without generalization capability. By making the measurement at the example level (and averaging over multiple models, instead of averaging over test examples), we measure the same “generalization gap”, but now we can characterize whether a particular example x is memorized or not. This key difference allows us to do all the analysis in this paper, and enables the extension to measure counterfactual influence. Therefore, we respectfully disagree with the reviewer that this is identical to measuring the generalization.
---
**Q2**: relationship between counterfactual memorization and algorithmic stability / $\varepsilon$-differential privacy
We agree that counterfactual memorization is related to many important notions, which actually supports that our metrics are grounded on the same fundamental notion of generalization-memorization trade-off widely studied in the field.
We also clarify the key differences here: the relation to algorithmic stability is similar to the relation to the generalization gap explained in the previous question. We focus on characterizing the behavior of each single example, while stability (e.g., leave-one-out stability) is formulated as a property of a learning algorithm. Moreover, while our analysis is heavily dependent on the underlying data, the study of stability generally considers *uniform* stability, a property of the learning algorithm that is supposed to hold for *arbitrary* data (i.e., data independent).
The relation to differential privacy (DP) is similar: DP quantifies the *worst-case* changes under arbitrary replacement of an example, which is a property of the learning algorithm and independent of the data (the DP guarantee should hold for arbitrary data). We, in contrast, focus on a characterization of each example (as opposed to the algorithm) and measure how each training example influences the trained model in relation to the other training examples, which is heavily dependent on the underlying data distribution. On the empirical side, it is known that outlier examples (which can be identified via high counterfactual memorization scores) tend to have a high membership inference attack success rate, and membership inference attacks can be used to provide an empirical lower bound on the DP privacy parameter.
We also note that DP and stability are closely related to each other in the formulation, but it does not decrease the importance of either notion because they have very different perspectives.
---
**Q3**: justification of 0-1 loss in next word prediction task
It is true that for short contexts, many possible “next words” exist in the training data. But the distribution of possible next words becomes sharp as the context length increases. Moreover, this kind of ambiguity also exists in image classification. For example, many ImageNet images are real-world scenes containing multiple kinds of objects, yet the model is trained to maximize the likelihood of a single labeled class and evaluated with the 0-1 loss because it is simple and effective. In language modeling, such ambiguity is likewise not *explicitly* addressed: the training objective is exactly the same (cross-entropy loss) at each word (token), and the model implicitly learns the relations between similar words. When evaluating a language model, it is common to use perplexity, which boils down to measuring the per-token likelihood of the exact ground-truth token (again, without explicitly considering other possible words that are equally valid). We have evaluated measuring memorization using perplexity, and the ranking of memorized examples is largely consistent with the per-token accuracy measurements. However, we ended up using the latter because the per-token accuracy has a well-defined numerical range of [0, 1], which leads to score values that are easier to interpret.
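For concreteness, the two candidate metrics can be sketched as follows. This is a simplified illustration assuming the model’s next-token distributions are available as an array, not our actual evaluation code:

```python
import numpy as np

def per_token_accuracy(probs, targets):
    """Fraction of positions where the argmax token matches the ground truth.
    probs: (T, V) next-token distributions; targets: (T,) ground-truth ids."""
    return float(np.mean(np.argmax(probs, axis=-1) == targets))

def perplexity(probs, targets):
    """exp of the mean negative log-likelihood of the ground-truth tokens."""
    log_lik = np.log(probs[np.arange(len(targets)), targets])
    return float(np.exp(-log_lik.mean()))

# Toy sequence of two steps over a vocabulary of size 2.
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
targets = np.array([0, 1])
acc = per_token_accuracy(probs, targets)  # bounded in [0, 1]
ppl = perplexity(probs, targets)          # >= 1, unbounded above
```

The bounded range of the accuracy metric is what makes the resulting memorization scores easy to interpret.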
---
**Q4**: "does the samples that are proven to be easily memorized can not have diverse patterns during inference, even for larger models?"
Could the reviewer clarify the question? What do you mean by having “diverse patterns during inference”?
While we cannot make formal guarantees that all the results transfer to LLMs at GPT-3 or larger models, we believe most of the observations would still hold. In particular, manual inspection of the memorized examples (see the Appendix for a sample of them) matches the intuition, and we believe larger models will behave similarly. Our future work focuses on finding approximate solutions to the algorithm so that it can be applied to the larger models.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks for the authors' responses. The response addresses some of my concerns, but my major concern, "The performance of the model is evaluated by the 0-1 prediction loss on the task of next-word prediction (NWP)", has not been addressed.
The authors admit that many possible next words exist in the NWP task, and they said that ambiguity exists in the image classification task as well. However, the main difference between image classification and the NWP task is that in classification, the alternative "underlying ground truth" labels do not exist in the training stage. For example, if an image classification model is trained to recognize the species of an animal, we will not expect it to recognize the background of images, as background labels are not involved in the training stage. However, for the NWP task, all possible words exist in the training stage (in the form of a vocabulary list), so it is improper to compare image classification and the NWP task.
Back to the possible-words situation: I insist on my opinion that the NWP task should be recognized as a generative task instead of a classification task. For example, if the sentences "I like eating apples" and "I like eating bananas" both exist in the training set, then the optimal solution of the ERM objective predicts "apples" and "bananas" each with a probability of $0.5$, conditioned on "I like eating _" (this minimizes the training loss). Since this paper explores memorization in a theoretical manner, I highly suggest the authors consider this situation (which I think is common).
By the way, I note a highly related paper, https://arxiv.org/pdf/2308.03296.pdf, recently published, which studies memorization as this paper does, but at the distribution level. The opinion conveyed in that reference seems to be the opposite of this paper's. I directly copy it from page 15 of the reference as follows.
"Second, while the true training tokens are used as the inputs to the network, the “labels” for the pseudo-gradient calculation are sampled from $P(yˆ|x)$. While it may appear odd for the labels not to match the inputs in an autoregressive setting, "
---
Reply to Comment 1.1.1:
Title: Further clarification on 0-1 prediction loss
Comment: Thanks very much for the prompt response!
**Image classification**: when we say the same ambiguity exists in image classification, we mean that there can be multiple **foreground objects** in the same image, so the same image could be predicted as multiple labels "correctly". Many such images exist in standard image classification training sets. While we cannot post a link directly, one example we can point the reviewer to is Fig. 5 of [1]. For example, under the "Mountain Bike" category, there is an image of a cat behind the bike wheel, and another image with a truck loading lots of bikes. Under such inputs / contexts, both labels are equally likely or correct. In the terminology of NLP, all possible classes (bike, cat, truck) exist in the same vocabulary and are involved in the training stage.
**Common practice in the field**: We acknowledge that such ambiguity exists (in both image classification and language modeling), and we do not claim that comparing with the ground-truth label is the best way to handle it. However, it is a simple and effective metric that is widely adopted in both image classification (top-1 accuracy) and language modeling (validation perplexity computed from the likelihood of the ground-truth next token), and we follow this convention in our study. Moreover, as reviewer 9fjM (thank you!) mentioned, we focus on the study of memorization in this paper; previous studies of language model memorization mostly rely on text matching, and our study is an important step towards distinguishing memorization from generalization.
**Opposite opinions**?: Thank the reviewer for a pointer to the interesting paper studying LLM generalization with influence function. Regarding the quote provided by the reviewer (emphasis added by us):
> Second, while the true training tokens are used as the inputs to the network, the “labels” for the pseudo-gradient calculation are sampled from $P(\hat{y}|x)$. While it may appear odd for the labels not to match the inputs in an autoregressive setting, this is indeed the correct **sampling procedure when the goal is to approximate** $\mathbf{G}$.
We note that this paper suggested such an operation specifically for *the purpose of approximating* $\mathbf{G}$, which is defined as an expected value and approximated via such a *sampling procedure*. This is not directly arguing against using the ground-truth next token as a *performance evaluation criterion*. Therefore, we do not see a contradiction in the messages conveyed by the two papers.
**Study on generated texts**: Finally, while not a main focus of this paper, we did study the counterfactual influence on model generated text in the last paragraph of section 5 (L279). We hope this study and the clarification above address the reviewer's concern.
[1] Siddiqui, S. A., Rajkumar, N., Maharaj, T., Krueger, D., & Hooker, S. (2022). Metadata archaeology: Unearthing data subsets by leveraging training dynamics. arXiv preprint arXiv:2209.10015. | Summary: The paper studies an important problem of the "memorizing" effect in neural language models, particularly focusing on rare or isolated pieces of information in the training data that get memorized, as opposed to widely duplicated or common information.
It formulates a notion of "counterfactual memorization" that measures how a model's predictions change when a particular training example is excluded during training. It analyzes counterfactual memorization on several standard text datasets and studies patterns in what gets memorized. Examples with unconventional text formats tend to have high memorization.
It extends the definition to "counterfactual influence" which quantifies the impact of a memorized training example on the inference phase, which helps trace the source of predictions.
Overall, the paper provides a novel perspective and tools to systematically study the memorization of information in neural language models.
Strengths: This paper presents a novel and significant perspective on studying information flow in neural language models.
The notion of counterfactual memorization is an original contribution to quantifying the memorization of isolated pieces of information. This moves beyond prior work that focused more on duplicated content. Extending this to counterfactual influence to trace predictions back to training examples provides a new capability.
The mathematical formulation of counterfactual memorization is clear and well-motivated.
The empirical methodology is sound, with an adequate number of models trained to convergence and a sensitivity analysis on the number of models needed.
The results support the claims and provide new insights into memorization.
Overall, I find this a very solid, insightful paper with multiple novel contributions to an important problem.
Weaknesses: One potential weakness of this paper is the application of such analysis remains elusive. I can imagine many interesting questions could be answered with the provided concept, e.g., what kind of stereotypes/social biases in the model are influenced by what data in the training samples?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: The core idea of counterfactual memorization is intuitive, but can you provide some theoretical justification for why the formulation in Equation 1 captures the memorization of information? Is there any formal argument that you can make?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank the reviewer for the strong support of our paper and useful suggestions!
> **Q1**: One potential weakness of this paper is the application of such analysis remains elusive. I can imagine many interesting questions could be answered with the provided concept, e.g., what kind of stereotypes/social biases in the model are influenced by what data in the training samples?
Thanks for the suggestion. We are mostly thinking about prediction attribution (finding which training examples contribute to the current prediction) as a general direction which may have many downstream applications. We agree that understanding the source of stereotypes / social biases seems to be a very interesting concrete question to consider.
> **Q2**: The core idea of counterfactual memorization is intuitive, but can you provide some theoretical justification for why the formulation in Equation 1 captures the memorization of information? Is there any formal argument that you can make?
One potential connection to learning theory is that Eq. 1 essentially measures the "generalization gap", except that it is measured on a specific given data point, as opposed to the expectation over the (unknown) underlying data distribution in standard learning theory. The latter being large means that a model has overfitted, thus memorizing the training data without generalization capability. In our case, we modified the notion to measure the "generalization gap" of a specific example x, in order to quantify whether that particular example is memorized. This formulation has been theoretically studied in a synthetic model [1], where it is proven that memorization is closely related to optimal generalization if the underlying data distribution is long-tailed.
[1] Feldman, V. Does learning require memorization? A short tale about a long tail. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing (2020), pp. 954-959.
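To make the per-example "generalization gap" reading of Eq. 1 concrete, here is a minimal Monte Carlo sketch (not our actual pipeline; the array names are illustrative): train M models on random training subsets, then score an example x by the mean performance on x of models whose subset contained x minus that of models whose subset did not.

```python
import numpy as np

def counterfactual_memorization(perf_on_x, in_mask):
    """Monte Carlo estimate of the per-example generalization gap for one x.

    perf_on_x: (M,) performance (e.g. per-token accuracy) of M models on x.
    in_mask:   (M,) booleans, True if model m's random training subset
               contained x ("IN" models), False otherwise ("OUT" models).
    """
    perf_on_x = np.asarray(perf_on_x, dtype=float)
    in_mask = np.asarray(in_mask, dtype=bool)
    return float(perf_on_x[in_mask].mean() - perf_on_x[~in_mask].mean())

# A strongly memorized example: IN models predict it well, OUT models do not.
score = counterfactual_memorization([0.9, 0.8, 0.3, 0.2],
                                    [True, True, False, False])
```

A score near zero indicates an example the model can predict equally well without having seen it, i.e., one covered by generalization rather than memorization.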
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification! | null | null |