Dataset schema:
paper_id: string (length 19–21)
paper_title: string (length 8–170)
paper_abstract: string (length 8–5.01k)
paper_acceptance: string (18 classes)
meta_review: string (length 29–10k)
label: string (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
nips_2021_xQGYquca0gB
Neural Production Systems
Visual environments are structured, consisting of distinct objects or entities. These entities have properties---visible or latent---that determine the manner in which they interact with one another. To partition images into entities, deep-learning researchers have proposed structural inductive biases such as slot-based architectures. To model interactions among entities, equivariant graph neural nets (GNNs) are used, but these are not particularly well suited to the task for two reasons. First, GNNs do not predispose interactions to be sparse, as relationships among independent entities are likely to be. Second, GNNs do not factorize knowledge about interactions in an entity-conditional manner. As an alternative, we take inspiration from cognitive science and resurrect a classic approach, production systems, which consist of a set of rule templates that are applied by binding placeholder variables in the rules to specific entities. Rules are scored on their match to entities, and the best fitting rules are applied to update entity properties. In a series of experiments, we demonstrate that this architecture achieves a flexible, dynamic flow of control and serves to factorize entity-specific and rule-based information. This disentangling of knowledge achieves robust future-state prediction in rich visual environments, outperforming state-of-the-art methods using GNNs, and allows for the extrapolation from simple (few object) environments to more complex environments.
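A minimal NumPy sketch of the rule-selection loop described above: each rule template is scored against each entity, the best-fitting rule is selected, and that rule's body updates the entity state. The dimensions, dot-product scoring, and tanh update are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
n_entities, n_rules, d = 4, 3, 8
entities = rng.normal(size=(n_entities, d))        # entity state vectors
rule_keys = rng.normal(size=(n_rules, d))          # learned rule-template embeddings
rule_bodies = [0.1 * rng.normal(size=(d, d)) for _ in range(n_rules)]

for i in range(n_entities):
    e = entities[i]
    scores = softmax(rule_keys @ e)                # score each rule's match to the entity
    r = int(np.argmax(scores))                     # bind the best-fitting rule (hard selection)
    entities[i] = e + np.tanh(e @ rule_bodies[r])  # apply the rule to update entity properties
```

In the actual model the selection is trained end to end (e.g. with attention) and rules can bind multiple entities; the point of the sketch is the sparse, entity-conditional flow of control.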
accept
This paper introduces a differentiable and learnable version of the classic production system architecture. This is a worthwhile and interesting attempt at enabling more flexible reasoning in deep learning systems. The reviewers found the initial evaluations of the system uncompelling; however, the substantial additions provided during the discussion phase were deemed sufficient by all reviewers. This makes the paper an unusual case in which substantial revision to the results will be needed, but the reviewers believe this is feasible, so I too will recommend acceptance. I urge the authors to also attempt to clarify the model presentation, as indicated by the reviewers.
train
[ "PIbliYb9wy5", "wdLYAQnPL6d", "ubmurL82qW", "RvBpJYH7MSj", "-HVzRyERFMu", "pgRGti26Z0E", "5qeJ6tVk4Oo", "46uI6Srw7D", "EszEuLOjWc_", "o4Z5pSrEUV0", "jOO1-BA1_O9", "Aa5FG_os5Ym", "txCc7MB_bqg", "9de7M_PdP5L", "AgFOpWAbP9y", "uc0WXAwAar", "-9N1BDBdsKw", "Ll01ufZNGUK", "-MDTpSsO8Gi"...
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", ...
[ " Dear. Reviewer, \n\nWe again thank the reviewer for their valuable feedback.\n\nWe did more experiments as asked by the reviewer to show the difference between the proposed method and the SCOFF baseline. \nWe also did more experiments to visualize the rules learned by the proposed method. \n\nWe think that the re...
[ -1, -1, 6, 6, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, -1, 3, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1 ]
[ "RvBpJYH7MSj", "uc0WXAwAar", "nips_2021_xQGYquca0gB", "nips_2021_xQGYquca0gB", "ubmurL82qW", "9de7M_PdP5L", "nips_2021_xQGYquca0gB", "jOO1-BA1_O9", "Aa5FG_os5Ym", "wdLYAQnPL6d", "Aa5FG_os5Ym", "Ll01ufZNGUK", "pgRGti26Z0E", "-9N1BDBdsKw", "Ll01ufZNGUK", "ubmurL82qW", "RvBpJYH7MSj", ...
nips_2021_MGX69TBAi07
Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization
Large scale distributed optimization has become the default tool for the training of supervised machine learning models with a large number of parameters and training data. Recent advancements in the field provide several mechanisms for speeding up the training, including {\em compressed communication}, {\em variance reduction} and {\em acceleration}. However, none of these methods is capable of exploiting the inherently rich data-dependent smoothness structure of the local losses beyond standard smoothness constants. In this paper, we argue that when training supervised models, {\em smoothness matrices}---information-rich generalizations of the ubiquitous smoothness constants---can and should be exploited for further dramatic gains, both in theory and practice. In order to further alleviate the communication burden inherent in distributed optimization, we propose a novel communication sparsification strategy that can take full advantage of the smoothness matrices associated with local losses. To showcase the power of this tool, we describe how our sparsification technique can be adapted to three distributed optimization algorithms---DCGD, DIANA and ADIANA---yielding significant savings in terms of communication complexity. The new methods always outperform the baselines, often dramatically so.
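For reference, one standard way to formalize a smoothness matrix (the paper's precise assumption may differ in details) is a quadratic upper bound in which a symmetric positive semidefinite matrix $\mathbf{L}$ replaces the scalar constant:

```latex
f(x + h) \;\le\; f(x) + \langle \nabla f(x),\, h \rangle + \tfrac{1}{2}\, h^{\top} \mathbf{L}\, h
\qquad \text{for all } x, h .
```

Classical $L$-smoothness is the special case $\mathbf{L} = L\,\mathbf{I}$; a non-scalar $\mathbf{L}$ encodes per-coordinate and cross-coordinate curvature, which is the data-dependent structure the proposed sparsification exploits.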
accept
The authors are in general positive agreement as to both the theoretical and empirical contributions of this paper. I would recommend this be accepted.
train
[ "m-q_FX7Dt0I", "2vaD0MCZrCu", "m5lfDYGRYAM", "odKC0NDh4w7", "wVz_fMEDUZX", "503ICd1AG2Z", "YNVA1Nar2EY", "FqqkqjR7Ay", "rimy5CuE60U", "_7C4txtQlOH", "0Sx0zHBCYh_", "BCf3zG-74A_", "e2v3jUFmIf4", "N1lxCGvocnh", "7Jz_E7hYGt0", "Qq79X22V51H", "DxjFJLMeK6I", "dIV5KDR1HfW", "Xy4a-NESDv...
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author",...
[ " I thank the authors for responding to my review. After reading the response and other reviews, I am happy with my previous assessment.\n\nYes, I meant loss functions that cannot be described by the structural assumption in Lemma 1.", " First, notice that both diagonal and rank-1 smoothness matrices are just a s...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "BCf3zG-74A_", "FqqkqjR7Ay", "YNVA1Nar2EY", "rimy5CuE60U", "503ICd1AG2Z", "Qq79X22V51H", "N1lxCGvocnh", "dIV5KDR1HfW", "DxjFJLMeK6I", "yJcHORu6xOhH", "nips_2021_MGX69TBAi07", "-Qry7NyjwH9", "-Qry7NyjwH9", "SO6Mn2X99mf", "yJcHORu6xOhH", "SO6Mn2X99mf", "SO6Mn2X99mf", "SO6Mn2X99mf", ...
nips_2021_0isj8oxdQys
Increasing Liquid State Machine Performance with Edge-of-Chaos Dynamics Organized by Astrocyte-modulated Plasticity
Vladimir Ivanov, Konstantinos Michmizos
accept
The paper proposes a novel homeostatic mechanism to drive liquid state machines (LSMs) towards the Edge-of-Chaos in order to obtain a good dynamical regime of the network for training the readout. In this LSM variant, a homeostatic factor is computed (in a simple manner) that modulates local plasticity updates (STDP). The method is motivated from a previously proposed role of astrocytes in modulation of plasticity and homeostasis. All reviewers agree that the approach is interesting and novel. There have been a number of other approaches proposed previously, but none of them was convincing. This one seems to work very well on the performed experimental evaluations. The manuscript is well-written and the results are sound. There are a few weaknesses: - the astrocyte function seems to be tailored to the model needs rather than informed by biology - the model shows minor performance improvements compared to previous LSMs - the model was only tested on MNIST and N-MNIST. A larger variety and more challenging datasets would strengthen the paper.
train
[ "JGRy6Q7Lag", "pCrAy2HYQWW", "msg89GjdY5", "FTvASPEOFQ", "ffPzxp8C941", "c1n6Lmw1zmD", "lOekAJBl91X" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper presents a novel type of liquid state machine - neuron-astrocyte liquid state machine (NALSM), which introduces feedback from the network activity to the synaptic plasticity. The proposed mechanism helps to position the liquid (reservoir) layer in the edge-of-chaos dynamics regime, a known key factor to...
[ 7, 5, -1, -1, -1, -1, 7 ]
[ 4, 3, -1, -1, -1, -1, 4 ]
[ "nips_2021_0isj8oxdQys", "nips_2021_0isj8oxdQys", "JGRy6Q7Lag", "pCrAy2HYQWW", "lOekAJBl91X", "nips_2021_0isj8oxdQys", "nips_2021_0isj8oxdQys" ]
nips_2021_kwU8HhoUi4W
Fair Sortition Made Transparent
Bailey Flanigan, Gregory Kehne, Ariel D. Procaccia
accept
Thanks for the strong submission; the reviewers unanimously enjoyed it.
train
[ "nIDQ1R_dll", "JdvHyOoqRW", "wbZwHgzdRlJ", "eEYYPL99c8", "gmJedfr9m6", "IDWNCHanQG", "-uBjqKZI9uC", "65aNbVaehd" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Additional to the maximally-fair consideration, this paper concerns the transparency in assembly selection problem. Specifically, the paper introduces the notion of transparency in panel selection as people can easy understand the probabilities with which each individual will be chosen for the panel and verify tha...
[ 7, -1, -1, -1, -1, 6, 7, 7 ]
[ 2, -1, -1, -1, -1, 4, 3, 2 ]
[ "nips_2021_kwU8HhoUi4W", "nIDQ1R_dll", "IDWNCHanQG", "-uBjqKZI9uC", "65aNbVaehd", "nips_2021_kwU8HhoUi4W", "nips_2021_kwU8HhoUi4W", "nips_2021_kwU8HhoUi4W" ]
nips_2021_Hj_PxeC8CiV
A Max-Min Entropy Framework for Reinforcement Learning
In this paper, we propose a max-min entropy framework for reinforcement learning (RL) to overcome a limitation of the soft actor-critic (SAC) algorithm, which implements maximum entropy RL in model-free sample-based learning. Whereas maximum entropy RL guides policies to reach states with high entropy in the future, the proposed max-min entropy framework aims to learn to visit states with low entropy and to maximize the entropy of these low-entropy states to promote better exploration. For general Markov decision processes (MDPs), an efficient algorithm is constructed under the proposed max-min entropy framework based on disentanglement of exploration and exploitation. Numerical results show that the proposed algorithm yields drastic performance improvements over current state-of-the-art RL algorithms.
accept
This paper proposes an interesting and novel view on MaxEnt RL: it observes that SAC may suffer from a feedback loop, since the Q values of already visited states tend to be greater than those of other states because the policy update increases the entropy. This in turn encourages the policy to visit these states even more. This defect is confirmed in a no-reward environment, and a method to address it is presented with encouraging experimental results. While there could have been more rigorous investigation into the claim that rarely visited states have low entropy, the overall contribution of the paper is appreciated by most of the reviewers, and the interesting take on MaxEnt RL warrants acceptance.
test
[ "jzBOrRfDgub", "OP-sMilREj", "peil8TJMgY", "B2d44EAfRTF", "Mfo5LYcxxko", "CZyDwBd3pg9", "t16w5r6-6cR", "dz8H1j18A1z", "PQdJjqACC1f", "n8qMmG4gIBE", "-qN3IhedQVC", "YCSpX8CG9hj" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for the clarification. I lean towards acceptance.", " One more indicative example of the effectiveness of the proposed approach is the result of the swimmer envirionment. As seen in the paper, the proposed algorithm outperforms other baselines in all the considered Mujoco tasks in the paper....
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2, 3 ]
[ "Mfo5LYcxxko", "CZyDwBd3pg9", "B2d44EAfRTF", "YCSpX8CG9hj", "-qN3IhedQVC", "n8qMmG4gIBE", "PQdJjqACC1f", "nips_2021_Hj_PxeC8CiV", "nips_2021_Hj_PxeC8CiV", "nips_2021_Hj_PxeC8CiV", "nips_2021_Hj_PxeC8CiV", "nips_2021_Hj_PxeC8CiV" ]
nips_2021_ELndVeVA-TR
Reward is enough for convex MDPs
Maximising a cumulative reward function that is Markov and stationary, i.e., defined over state-action pairs and independent of time, is sufficient to capture many kinds of goals in a Markov decision process (MDP). However, not all goals can be captured in this manner. In this paper we study convex MDPs in which goals are expressed as convex functions of the stationary distribution and show that they cannot be formulated using stationary reward functions. Convex MDPs generalize the standard reinforcement learning (RL) problem formulation to a larger framework that includes many supervised and unsupervised RL problems, such as apprenticeship learning, constrained MDPs, and so-called 'pure exploration'. Our approach is to reformulate the convex MDP problem as a min-max game involving policy and cost (negative reward) 'players', using Fenchel duality. We propose a meta-algorithm for solving this problem and show that it unifies many existing algorithms in the literature.
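The min-max reformulation rests on the Fenchel-conjugate identity for convex $f$, applied to the stationary distribution $d_\pi$ induced by policy $\pi$ (a sketch; notation is assumed rather than taken from the paper):

```latex
\min_{\pi} f(d_{\pi})
\;=\;
\min_{\pi} \, \max_{\lambda} \; \langle \lambda,\, d_{\pi} \rangle - f^{*}(\lambda) ,
```

where $f^*$ is the convex conjugate of $f$. For any fixed cost player $\lambda$, the inner objective $\langle \lambda, d_\pi \rangle$ is an ordinary expected-cost MDP objective with stationary cost $\lambda$, which is what makes a standard reward signal sufficient once the game is solved.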
accept
This paper is very clearly written; it brings together a large literature and uses that unification to provide concrete conclusions about a few open problems on convex MDPs. The reviewers all agreed that the unification provided in this work is both very well done and useful to the community. It was also identified that the paper establishes an important sample complexity result, a rate of O(epsilon^{-2}) for convex MDPs, and that this contribution should be better highlighted. The author response helped allay reviewer concerns. More than one reviewer suggested moving the experiments to the appendix, and you could consider doing this to spend more space on the theoretical contributions. The reviewers each also gave useful suggestions for the work, and I highly encourage the authors to consider incorporating them.
train
[ "-4xwY23qToM", "mrxYAwiBEX7", "nJHjMZct1np", "yMx3gY1iW9K", "Qs28iQqwj85", "hFWwuKXxbd", "93XiTcj8h1v", "Jh_KXjAkIE", "eMte-C-pGxS", "LncQKq065CL", "zJLRNJfgyH2", "mXv83x7sbEL", "o6w-Afvnsav" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We would like to thank the reviewers for their thorough feedback, for engaging in the rebuttal, and for reconsidering their original evaluation of the paper. This is much appreciated and we believe that the paper gained a lot from their feedback. We will make sure to reflect all of the comments in the final versi...
[ -1, -1, 7, -1, -1, 6, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, -1, 3, -1, -1, 4, -1, -1, -1, -1, -1, 5, 2 ]
[ "zJLRNJfgyH2", "eMte-C-pGxS", "nips_2021_ELndVeVA-TR", "93XiTcj8h1v", "LncQKq065CL", "nips_2021_ELndVeVA-TR", "nJHjMZct1np", "o6w-Afvnsav", "mXv83x7sbEL", "hFWwuKXxbd", "nips_2021_ELndVeVA-TR", "nips_2021_ELndVeVA-TR", "nips_2021_ELndVeVA-TR" ]
nips_2021_x8k1nAoGu1U
Fast Doubly-Adaptive MCMC to Estimate the Gibbs Partition Function with Weak Mixing Time Bounds
We present a novel method for reducing the computational complexity of rigorously estimating the partition functions of Gibbs (or Boltzmann) distributions, which arise ubiquitously in probabilistic graphical models. A major obstacle to applying the Gibbs distribution in practice is the need to estimate its partition function (normalizing constant). The state of the art in addressing this problem is multi-stage algorithms, which consist of a cooling schedule and a mean estimator at each step of the schedule. While the cooling schedule in these algorithms is adaptive, the mean estimate computations use MCMC as a black box to draw approximately-independent samples. Here we develop a doubly adaptive approach, combining the adaptive cooling schedule with an adaptive MCMC mean estimator, whose number of Markov chain steps adapts dynamically to the underlying chain. Through rigorous theoretical analysis, we prove that our method outperforms the state of the art algorithms in several respects: (1) the computational complexity of our method is smaller; (2) our method is less sensitive to loose bounds on mixing times, an inherent component of these algorithms; and (3) the improvement obtained by our method is particularly significant in the most challenging regime of high-precision estimates. We demonstrate the advantage of our method in experiments run on classic factor graphs, such as voting models and Ising models.
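For context, multi-stage estimators of the kind discussed above exploit a telescoping identity over the cooling schedule $\beta_0 < \beta_1 < \dots < \beta_K$ (a standard sketch; the proposed method adapts both the schedule and the inner MCMC mean estimator):

```latex
Z(\beta) = \sum_{x} e^{-\beta H(x)}, \qquad
Z(\beta_K) = Z(\beta_0) \prod_{i=0}^{K-1} \frac{Z(\beta_{i+1})}{Z(\beta_i)}, \qquad
\frac{Z(\beta_{i+1})}{Z(\beta_i)} = \mathbb{E}_{x \sim \mu_{\beta_i}}\!\left[ e^{-(\beta_{i+1}-\beta_i)\, H(x)} \right],
```

where $\mu_\beta$ is the Gibbs distribution at inverse temperature $\beta$; each ratio is a mean under $\mu_{\beta_i}$ and is estimated with samples from an MCMC chain targeting that distribution.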
accept
The goal of this paper is a better algorithm to estimate the partition function / normalization constant in a Gibbs distribution. These are often of interest in many different forms in the NeurIPS community. The best current methods are based on annealing, essentially starting at a normalized distribution, moving through a series of distributions of different temperatures, and estimating the ratio of each using MCMC. This paper proposes a new algorithm in this line which is intended to be less sensitive to tightness in the mixing times of MCMC. This is a highly mathematical paper, where the goal is improved theoretical guarantees (rather than, say, practical performance improvement). Unfortunately, all reviewers signaled that they were not confidently able to verify the correctness of the core results. In the end, the SAC stepped in and did a careful reading of the paper and recommends acceptance. The SAC's reasoning is as follows: The presentation is indeed terse and dense. But that is par for the course for theoretical papers at NeurIPS given the page limit. Everything was well defined (as far as I saw), and all the arguments seemed credible. The authors outlined the strategy of their arguments, hiding the considerable complexity of all the details in the supplementary material. I actually found the writing pretty good compared to the average published NeurIPS paper. Regarding the concern by one reviewer about the assumption on the spectral gap: I do not see this as a downside. It seems inevitable that one will need some assumptions on the dynamics of these chains in order to say anything theoretical. The goal (in my mind) is not to attempt the impossible task of theoretically explaining what "really" happens in practice, because one never knows. Assuming something about a spectral gap in the transition matrix of a Markov chain seems to me to be less fraught epistemologically than (say) assuming one's data is drawn iid from some distribution ... an assumption that I doubt is true of almost all practical applications of machine learning, but upon which almost all ML theory is built... (Thus my view is aligned with that of the authors in their "Note to chair".) So it looks like this is a real advance on a problem of interest. The analysis looks credible (all one can ever really tell without spending a week working through every single step). It is easily at least as clear, correct, and significant as the majority of published NeurIPS papers, so I do not have any qualms recommending acceptance.
train
[ "oovyl4TpYOY", "43M6_a9xw0", "2DTRrt___S5", "svxkT3bti91", "UTGbcR3XlWU", "-6JOEuJxXP", "y_WNda9_FcT", "eW1g6jZ-14", "zc85nLVmZMC", "AyD4Veae_G1", "MbBi90BGAgo" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors addressed most of my major concerns. I understand that the authors' method has better theoretical guarantees compared to AIS. \nHowever I would highly recommend an empirical comparison which the above discussion does not preclude.\nI totally agree that mathematical rigor is important. However fair com...
[ -1, -1, -1, -1, -1, -1, -1, 5, 6, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 3 ]
[ "UTGbcR3XlWU", "y_WNda9_FcT", "-6JOEuJxXP", "zc85nLVmZMC", "eW1g6jZ-14", "MbBi90BGAgo", "AyD4Veae_G1", "nips_2021_x8k1nAoGu1U", "nips_2021_x8k1nAoGu1U", "nips_2021_x8k1nAoGu1U", "nips_2021_x8k1nAoGu1U" ]
nips_2021_6mUrD5rg-UU
Does enforcing fairness mitigate biases caused by subpopulation shift?
Many instances of algorithmic bias are caused by subpopulation shifts. For example, ML models often perform worse on demographic groups that are underrepresented in the training data. In this paper, we study whether enforcing algorithmic fairness during training improves the performance of the trained model in the \emph{target domain}. On one hand, we conceive scenarios in which enforcing fairness does not improve performance in the target domain. In fact, it may even harm performance. On the other hand, we derive necessary and sufficient conditions under which enforcing algorithmic fairness leads to the Bayes model in the target domain. We also illustrate the practical implications of our theoretical results in simulations and on real data.
accept
This paper examines the question of whether enforcing algorithmic fairness in the training domain will improve performance of the model in a target domain in which subpopulations (such as those defined by the sensitive attribute) occur in different proportions from the training domain. The key assumption behind the results is that the risk profiles (expected loss conditional on group, or on group and other discriminative attributes) are the same between the training and test domains. The authors' main result crisply characterizes conditions under which target domain performance improves. Overall this is an interesting paper that studies an important problem, but it is not without its weaknesses. Given the authors' responses to concerns raised by reviewers, I believe the major weaknesses can be addressed through moderate revision and am thus recommending this paper for acceptance. In revising the submission, the authors should focus on the following dimensions. 1. Clarity in exposition. Reviewers jgBW and VZHD enumerated a lengthy list of typos and confusions that I myself also share. (Additionally, the difference between distributions, $P^* - \tilde P$, isn't actually defined where it is first used in line 53.) While the authors note in their response that several reviewers described the work as well-written and clear, I want to emphasize that reviewers who went into depth in their comments noted typos, undefined and ill-defined notation, and confusion about theorem statements. These issues are minor, but abundant. The manuscript will be much improved once they are weeded out in revision. 2. Displaying figure 2 sideways is confusing. If it is desired to keep the figure in this orientation to conserve space, at least rotate the text. 3. The work would benefit from further discussion of the settings in which the authors believe the results would likely apply. That is, in what practical settings might the subpopulation shift assumption hold? Would it hold in something like the motivating Gender Shades study example? 4. The COMPAS example is currently highly underdeveloped. I would either expand on the discussion to explain what this example is illustrating and its implications for training risk assessment tools, or omit it altogether. 5. In the author response the authors have committed to making a number of revisions to address concerns raised by reviewers. They are all worthwhile changes that will improve the clarity of the manuscript.
train
[ "Mb0OklzoHzQ", "5gVqeS8dU12", "IcOks-7Ct3Z", "7oCMulhKi93", "ZGUWZrBex4", "6hi1t7_ZxGj", "WtIDXK97nu7", "e5ZkzK3A5KW", "6mCGIdRjaKR", "pUIV3nJSo4J", "3cWE52RPXhl", "ktLrXu2T6s", "jD3RpSMFIBd" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for helping us improve the paper. We will extend the experiments section explaining how the test data satisfy the conditions in Theorem 3.2 and add additional discussions based on your questions and suggestions in the revised version.", "The work examines when does a fairness constrained m...
[ -1, 6, -1, -1, -1, -1, 5, -1, -1, -1, -1, 7, 7 ]
[ -1, 4, -1, -1, -1, -1, 3, -1, -1, -1, -1, 2, 4 ]
[ "IcOks-7Ct3Z", "nips_2021_6mUrD5rg-UU", "6mCGIdRjaKR", "5gVqeS8dU12", "6hi1t7_ZxGj", "e5ZkzK3A5KW", "nips_2021_6mUrD5rg-UU", "WtIDXK97nu7", "5gVqeS8dU12", "jD3RpSMFIBd", "ktLrXu2T6s", "nips_2021_6mUrD5rg-UU", "nips_2021_6mUrD5rg-UU" ]
nips_2021_on2DNSz2Qg
Implicit Deep Adaptive Design: Policy-Based Experimental Design without Likelihoods
We introduce implicit Deep Adaptive Design (iDAD), a new method for performing adaptive experiments in real-time with implicit models. iDAD amortizes the cost of Bayesian optimal experimental design (BOED) by learning a design policy network upfront, which can then be deployed quickly at the time of the experiment. The iDAD network can be trained on any model which simulates differentiable samples, unlike previous design policy work that requires a closed form likelihood and conditionally independent experiments. At deployment, iDAD allows design decisions to be made in milliseconds, in contrast to traditional BOED approaches that require heavy computation during the experiment itself. We illustrate the applicability of iDAD on a number of experiments, and show that it provides a fast and effective mechanism for performing adaptive design with implicit models.
accept
The paper is on Bayesian experimental design, building on the recently proposed Deep Adaptive Design (DAD). The paper's contribution is an extension, iDAD ('i' for implicit), that does not require explicit likelihoods, does not require that experiments be conditionally independent, and is fast enough for real-time applications. The paper is interesting and thorough.
train
[ "cidH7IUGuAt", "HBX5JcU3Q8i", "2qqD8z4TwxM", "gdsHANFHRzJ", "yzv2F7V7mus", "UNJnqtx0AhX", "cOXTx-G8cL", "nHHG8I45Gv", "6leqJrVMXAv" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes iDAD, an amortized learning framework for implicit (likelihood-free) experimental design. The iDAD framework extends the previously proposed deep adaptive design (DAD) method (Foster et al., 2013), which is an amortized learning framework for experiment design in likelihood-based (i.e. non-impl...
[ 7, 7, 7, -1, -1, -1, -1, -1, 7 ]
[ 4, 3, 3, -1, -1, -1, -1, -1, 1 ]
[ "nips_2021_on2DNSz2Qg", "nips_2021_on2DNSz2Qg", "nips_2021_on2DNSz2Qg", "2qqD8z4TwxM", "nips_2021_on2DNSz2Qg", "HBX5JcU3Q8i", "cidH7IUGuAt", "6leqJrVMXAv", "nips_2021_on2DNSz2Qg" ]
nips_2021_LZOG2YgDiRn
Sample-Efficient Learning of Stackelberg Equilibria in General-Sum Games
Real world applications such as economics and policy making often involve solving multi-agent games with two unique features: (1) The agents are inherently asymmetric and partitioned into leaders and followers; (2) The agents have different reward functions, thus the game is general-sum. The majority of existing results in this field focuses on either symmetric solution concepts (e.g. Nash equilibrium) or zero-sum games. It remains open how to learn the Stackelberg equilibrium---an asymmetric analog of the Nash equilibrium---in general-sum games efficiently from noisy samples. This paper initiates the theoretical study of sample-efficient learning of the Stackelberg equilibrium, in the bandit feedback setting where we only observe noisy samples of the reward. We consider three representative two-player general-sum games: bandit games, bandit-reinforcement learning (bandit-RL) games, and linear bandit games. In all these games, we identify a fundamental gap between the exact value of the Stackelberg equilibrium and its estimated version using finitely many noisy samples, which can not be closed information-theoretically regardless of the algorithm. We then establish sharp positive results on sample-efficient learning of Stackelberg equilibrium with value optimal up to the gap identified above, with matching lower bounds in the dependency on the gap, error tolerance, and the size of the action spaces. Overall, our results unveil unique challenges in learning Stackelberg equilibria under noisy bandit feedback, which we hope could shed light on future research on this topic.
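Concretely, in a two-player general-sum game with leader reward $\mu_\ell$ and follower reward $\mu_f$, the Stackelberg value is the leader's best payoff when the follower best-responds (a sketch of the standard definition; tie-breaking conventions vary):

```latex
\max_{a \in \mathcal{A}} \; \mu_{\ell}\bigl(a,\, b^{*}(a)\bigr),
\qquad
b^{*}(a) \in \operatorname*{arg\,max}_{b \in \mathcal{B}} \; \mu_{f}(a, b) .
```

Intuitively, the fundamental gap identified in the paper arises because the follower's best response $b^*(a)$ can flip under arbitrarily small errors in the estimated $\mu_f$, so finitely many noisy samples can pin down the value only up to that gap.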
accept
The discussion mainly centered on the authors' claims of novelty. While (to our knowledge) the technical contributions are indeed novel, there was consensus that the informal claims need to be significantly toned down -- there are too many variants of "initiates the theoretical study of sample-efficient learning of the Stackelberg equilibrium" and everything being "vastly open" which is simply not true given the references cited by the authors themselves. (There is sometimes a qualifier about samples / a bandit model, but prior work could also be considered to involve samples and a bandit model -- this isn't exactly what sets this paper apart, though maybe the noise does, which is sometimes, but not always, mentioned in the qualifier. As pointed out by one of the reviewers, there are also places where there isn't even any qualifier at all.) The fact that this is claimed again and again, and the fact that this issue was apparently already raised in the ICML submission of an earlier version of this paper and apparently wasn't fully addressed (I wasn't involved in reviewing that version so I do not know the details), do honestly leave us a bit worried about just accepting the paper. That seems unfortunate because it really wasn't necessary. Still I think acceptance is the right decision, but the authors should appreciate that this really involves trusting them that they will truly fix this this time. The same goes for any talk/poster about the material of course. I wonder if the line of work on empirical game theory by Wellman and others should be cited here somewhere.
train
[ "j_5TjA01n65", "NuWICyaoyZj", "HTTDFRH2PB", "9oMX_vWRzR_", "aXwZxkphGxt", "hVb144U4E0L", "6-e2GEBm5Yp", "KgjqxdBALy", "n1XN8OjViPh" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the post-response edit and your positive feedback on our central contributions! We respond to your additional comments as follows.\n\nRe the 2nd point (concerns about our claims on Page 6 and Page 9): We acknowledge that our current claim on Page 9 is a bit imprecise. For both claims, we meant to cl...
[ -1, 6, -1, -1, -1, -1, 6, 8, 6 ]
[ -1, 3, -1, -1, -1, -1, 4, 3, 3 ]
[ "NuWICyaoyZj", "nips_2021_LZOG2YgDiRn", "NuWICyaoyZj", "n1XN8OjViPh", "KgjqxdBALy", "6-e2GEBm5Yp", "nips_2021_LZOG2YgDiRn", "nips_2021_LZOG2YgDiRn", "nips_2021_LZOG2YgDiRn" ]
nips_2021_QUaKP7557s
Non-approximate Inference for Collective Graphical Models on Path Graphs via Discrete Difference of Convex Algorithm
The importance of aggregated count data, which is calculated from the data of multiple individuals, continues to increase. Collective Graphical Model (CGM) is a probabilistic approach to the analysis of aggregated data. One of the most important operations in CGM is maximum a posteriori (MAP) inference of unobserved variables under given observations. Because the MAP inference problem for general CGMs has been shown to be NP-hard, an approach that solves an approximate problem has been proposed. However, this approach has two major drawbacks. First, the quality of the solution deteriorates when the values in the count tables are small, because the approximation becomes inaccurate. Second, since continuous relaxation is applied, the integrality constraints of the output are violated. To resolve these problems, this paper proposes a new method for MAP inference for CGMs on path graphs. Our method is based on the Difference of Convex Algorithm (DCA), which is a general methodology to minimize a function represented as the sum of a convex function and a concave function. In our algorithm, important subroutines in DCA can be efficiently calculated by minimum convex cost flow algorithms. Experiments show that the proposed method outputs higher quality solutions than the conventional approach.
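As background, the continuous DCA template for minimizing $F = g + h$ with $g$ convex and $h$ concave replaces $h$ by its linearization at the current iterate and minimizes the resulting convex surrogate (the paper works with a discrete analogue in which this subproblem becomes a minimum convex cost flow):

```latex
x^{(k+1)} \;\in\; \operatorname*{arg\,min}_{x} \; g(x) + \bigl\langle \nabla h\bigl(x^{(k)}\bigr),\, x \bigr\rangle .
```

Since $h(x) \le h(x^{(k)}) + \langle \nabla h(x^{(k)}), x - x^{(k)} \rangle$ for concave $h$, each step minimizes an upper bound on $F$ that is tight at $x^{(k)}$, so the objective decreases monotonically.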
accept
The authors present a new algorithm for MAP inference in chain-structured Collective Graphical Models that uses flow techniques to solve the problem in discrete space, as opposed to previous works which use continuous relaxations. Three reviewers found the techniques to be correct and represent an advance over prior methods, with sufficient evidence presented to show the advantages of working in discrete space. The scores were 6, 6, 7. One reviewer raised their score after the author response. The meta-reviewer recommends accept.
train
[ "Lre_pLXBG_", "7WjzfEPiKej", "6fYQ15Ws_3", "tpo7NhKgq5O", "OUm050Yo1mp", "s1OwQak1_gH", "GHxrHQdkcdH", "wQQO6e_Drf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The authors consider the problem of MAP estimate of count data given noisy count data. For example $X_1(t), X_2(t), \\ldots, X_M(t)$ describe the location of an agent at time $t$, $N_i(t)$ is the number of agents at location $i$ at time $t$, and $Y_i(t)$ is a noisy estimate of $N_i(t)$. The goal is to estimate $N$...
[ 6, -1, -1, 7, -1, -1, -1, 6 ]
[ 4, -1, -1, 4, -1, -1, -1, 4 ]
[ "nips_2021_QUaKP7557s", "OUm050Yo1mp", "GHxrHQdkcdH", "nips_2021_QUaKP7557s", "Lre_pLXBG_", "tpo7NhKgq5O", "wQQO6e_Drf", "nips_2021_QUaKP7557s" ]
nips_2021_DvxH_RCnSj3
Implicit Task-Driven Probability Discrepancy Measure for Unsupervised Domain Adaptation
The probability discrepancy measure is a fundamental construct for numerous machine learning models, such as weakly supervised learning and generative modeling. However, most measures overlook the fact that the distributions are not the end-product of learning, but the basis of a downstream predictor. It is therefore important to warp the probability discrepancy measure towards the end task, and we hence propose a new bi-level optimization based approach so that the two distributions are compared not uniformly against the entire hypothesis space, but only with respect to the optimal predictor for the downstream end task. When applied to margin disparity discrepancy and contrastive domain discrepancy, our method significantly improves performance in unsupervised domain adaptation and enjoys a much more principled training process.
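One way to read the proposal schematically (notation assumed, not taken from the paper): rather than comparing the distributions uniformly over the hypothesis space, $\sup_{h \in \mathcal{H}} \mathrm{disc}(P, Q; h)$, the discrepancy is evaluated at the predictor that is optimal for the end task, which couples the two levels of the optimization:

```latex
D_{\text{task}}(P, Q) \;=\; \mathrm{disc}\bigl(P, Q;\, h^{*}\bigr),
\qquad
h^{*} \in \operatorname*{arg\,min}_{h \in \mathcal{H}} \; R_{\text{task}}(h) .
```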
accept
This paper originally had some borderline negative reviews in which the reviewers acknowledged a novel DA approach and theoretical results but noted some lack of positioning with respect to other DA methods such as DANN, as well as some missing comparisons and justifications for the method. The authors did a very good job of answering those concerns in short and efficient replies that clarified all the major concerns from the reviewers. The final consensus is that the paper should be accepted. Note that the authors have to take into account the comments from the reviewers and integrate their very pertinent answers to those comments in the final version of the paper.
train
[ "BDTKsMOxwv", "migJuOZngyR", "FATqPhlprFx", "hs373Ls1pgj", "UtlMZUEI1bq", "EsB9g12L3Y", "_jtsaXx8F8J", "8b-nGM1S5SY", "z9kd0jrjrzW", "04CQF0uHw7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "In this paper, the authors propose a new bi-level optimization-based approach to handle the issue of [23], i.e., the proposed margin disparity discrepancy (MDD) measure conflicts with the H\\DeltaH-divergence. The theoretical results are then applied to MDD and CDD, which gives the implicit task-driven discrepancy...
[ 6, 6, -1, -1, 6, -1, -1, -1, -1, 6 ]
[ 3, 4, -1, -1, 3, -1, -1, -1, -1, 4 ]
[ "nips_2021_DvxH_RCnSj3", "nips_2021_DvxH_RCnSj3", "8b-nGM1S5SY", "z9kd0jrjrzW", "nips_2021_DvxH_RCnSj3", "BDTKsMOxwv", "04CQF0uHw7", "migJuOZngyR", "UtlMZUEI1bq", "nips_2021_DvxH_RCnSj3" ]
nips_2021_r1pprsDm185
SBO-RNN: Reformulating Recurrent Neural Networks via Stochastic Bilevel Optimization
In this paper we consider the training stability of recurrent neural networks (RNNs) and propose a family of RNNs, namely SBO-RNN, that can be formulated using stochastic bilevel optimization (SBO). With the help of stochastic gradient descent (SGD), we manage to convert the SBO problem into an RNN where the feedforward and backpropagation solve the lower and upper-level optimization for learning hidden states and their hyperparameters, respectively. We prove that under mild conditions there is no vanishing or exploding gradient in training SBO-RNN. Empirically we demonstrate our approach with superior performance on several benchmark datasets, with fewer parameters, less training data, and much faster convergence. Code is available at https://zhang-vislab.github.io.
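A minimal NumPy sketch of the forward dynamics this describes: the next hidden state is one SGD step on a lower-level objective $F(h, x; \theta)$, with a $1/T$ step size (see also the meta-review below). The quadratic toy objective and all dimensions are illustrative assumptions.

```python
import numpy as np

def sbo_rnn_forward(xs, h0, grad_F):
    """Hidden-state recursion: one lower-level SGD step per time step.

    grad_F(h, x) returns the gradient of F(h, x; theta) w.r.t. h;
    the parameters theta are folded into grad_F for simplicity.
    """
    T, h, states = len(xs), h0, []
    eta = 1.0 / T                        # 1/T step size, key to the stability result
    for x in xs:
        h = h - eta * grad_F(h, x)       # feedforward pass = lower-level optimization
        states.append(h)
    return states

# Toy lower-level objective F(h, x) = 0.5 * ||h - W x||^2, so grad_h F = h - W x.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))
states = sbo_rnn_forward([rng.normal(size=3) for _ in range(10)],
                         np.zeros(8), lambda h, x: h - W @ x)
```

Backpropagation through this recursion then serves as the upper-level update for $\theta$, matching the bilevel reading in the abstract.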
accept
This paper proposes a new type of RNN where the hidden state at the current time step is taken to be the result of a single optimizer step applied to a function involving the previous hidden state and the current observation. It is argued that if the optimizer step defining the dynamics uses a learning rate that scales as 1/T, where T is the number of time steps, then the parameter gradient will be well behaved (i.e. won't vanish or explode). The reviewers liked the paper and its core idea, and viewed it as a solid contribution to the field of RNN design. Their main complaint was about the small-scale nature of the experiments and the lack of an LSTM comparison, which the paper's authors have promised to address in a future revision. There are two possible problems that stick out to me as the meta-reviewer. First, the 1/T learning rate for the hidden state dynamics seems like it would be too small to allow the hidden state to adapt quickly to new inputs. And second, "meta learning" is probably not the right way to frame what this method is doing, since the hidden state dynamics aren't really "learning" anything. (Using an optimizer doesn't equate to learning.) Despite these concerns, I will recommend this paper for acceptance.
train
[ "3wJb9vFgynT", "c2bXE6k3RyW", "WEjBCxQwzY6", "dBHmiO9fB64", "pBpYw5Swef", "rvx3BtBAt0d", "fPo1IPElSMy", "bNLfIS2qDOD", "seju3_8LowL", "9oheCbIzbjp", "TJxZ5puXB9B", "LoEpdHLEC3W", "zMEtN3ZvUXb", "UKHyYKJbBOh", "W3BJ8YgwjRg", "-UJ5rXZ_SWe", "Sske7nd-kNJ", "rx_dpGOPUE0", "OE6VE_ZVsW...
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_rev...
[ "The paper proposes to view RNN training as a bi-level optimisation process:\n* Inner loop: Updating states $h_t$ by following the the gradient of the transition function $h_t \\leftarrow h_{t-1} - \\eta \\nabla F(h_{t-1}, x_t; \\theta)$\n* Outer loop: Updating the transition function parameters $\\theta$ and hyper...
[ 6, -1, -1, 7, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ 4, -1, -1, 3, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "nips_2021_r1pprsDm185", "WEjBCxQwzY6", "TJxZ5puXB9B", "nips_2021_r1pprsDm185", "zMEtN3ZvUXb", "pBLI3kBA2oU", "qvE2-TvCzme", "9oheCbIzbjp", "nips_2021_r1pprsDm185", "rBigNIaSa-1", "LoEpdHLEC3W", "UKHyYKJbBOh", "-UJ5rXZ_SWe", "W3BJ8YgwjRg", "Sske7nd-kNJ", "OE6VE_ZVsWt", "rx_dpGOPUE0",...
nips_2021_hY4rUScQOe
Navigating to the Best Policy in Markov Decision Processes
Aymen Al Marjani, Aurélien Garivier, Alexandre Proutiere
accept
Although the scores for this paper make it a borderline case, after the discussion it seems that the only remaining issue is that the paper does not highlight its contributions, technical novelties, and results well enough. Further, the description of the algorithm was considered improvable. I think these points should be manageable when preparing the final version, so I recommend accepting the paper. The authors are expected to revise the paper taking into account the reviewers' comments, in particular paying attention to the points mentioned above.
train
[ "eswqlUBoHPA", "btuy2yy5la8", "QHfc2_f9Ne6", "A9GKb4d9eUk", "-zdYumJ3DAL", "R8xg3WnjOMs", "Z21co3Jqz5p", "wf5zuZTvXx", "Dl5aiTfso2l", "pfMptpGU4F7", "LdPXji-8meM", "9Ai2FhXpO3P", "Q5PPaxOQc7J", "wwAFTUzcoWD" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer" ]
[ "This paper deals with online learning of policies for MDPs. Its main novelty is considering the \"navigation constraints\" related to this online setting where, in contrast with an episodic setting, the system must learn the policy in a single episode without being able to reset to an initial state. Bounds on samp...
[ 4, 6, -1, -1, -1, -1, 5, -1, -1, -1, 6, -1, -1, 6 ]
[ 3, 3, -1, -1, -1, -1, 4, -1, -1, -1, 3, -1, -1, 1 ]
[ "nips_2021_hY4rUScQOe", "nips_2021_hY4rUScQOe", "9Ai2FhXpO3P", "-zdYumJ3DAL", "Dl5aiTfso2l", "wf5zuZTvXx", "nips_2021_hY4rUScQOe", "pfMptpGU4F7", "LdPXji-8meM", "Z21co3Jqz5p", "nips_2021_hY4rUScQOe", "btuy2yy5la8", "eswqlUBoHPA", "nips_2021_hY4rUScQOe" ]
nips_2021_rjIjkiyAJao
A Faster Decentralized Algorithm for Nonconvex Minimax Problems
Wenhan Xian, Feihu Huang, Yanfu Zhang, Heng Huang
accept
All reviewers agree that this paper makes an important contribution by extending recent faster convergence results for nonconvex-strongly concave minimax problems to the decentralized setting and hence I recommend the paper for acceptance. However, I ask the authors to incorporate reviewer suggestions e.g., adding discussion around non-strongly concave problems.
train
[ "ZgkVYkHXkdH", "mdeiHyYS2OZ", "s5p6jha5DbJ", "JZH3yBOonMM", "0eZBNs1hbm", "1RIC529B15", "hgIpHMQDsq", "LSRwRF_Kh5V", "BXrzMZN0WXm" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank the authors for your reply, which well addresses my concern. \nYou could write down your answer of my question (11) in the experiment section. \n\nI also read the other reviewers' comments and the authors' response to them. The authors' response looks good to me. Again, this paper is novel not only in algor...
[ -1, 6, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, 3, -1, -1, -1, -1, 3, 4, 5 ]
[ "0eZBNs1hbm", "nips_2021_rjIjkiyAJao", "mdeiHyYS2OZ", "hgIpHMQDsq", "BXrzMZN0WXm", "LSRwRF_Kh5V", "nips_2021_rjIjkiyAJao", "nips_2021_rjIjkiyAJao", "nips_2021_rjIjkiyAJao" ]
nips_2021_9J2wV5E1Aq_
Generalization Bounds For Meta-Learning: An Information-Theoretic Analysis
We derive a novel information-theoretic analysis of the generalization properties of meta-learning algorithms. Concretely, our analysis provides a generic understanding of both the conventional learning-to-learn framework \citep{amit2018meta} and the modern model-agnostic meta-learning (MAML) algorithms \citep{finn2017model}. Moreover, we provide a data-dependent generalization bound for the stochastic variant of MAML, which is \emph{non-vacuous} for deep few-shot learning. Compared to previous bounds that depend on the square norms of gradients, empirical validations on both simulated data and a well-known few-shot benchmark show that our bound is orders of magnitude tighter in most conditions.
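For orientation, the single-task information-theoretic bound that analyses of this kind build on (Xu and Raginsky, 2017) controls the expected generalization gap of weights $W$ learned from $n$ samples $S$ under a $\sigma$-sub-Gaussian loss; the meta-learning bounds here add task-level and environment-level analogues of the mutual-information term (a background sketch, not the paper's statement):

```latex
\bigl| \, \mathbb{E}\left[ \mathrm{gen}(W, S) \right] \bigr| \;\le\; \sqrt{\frac{2 \sigma^{2} \, I(W; S)}{n}} .
```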
accept
All reviewers agree that this paper is a solid work that contributes substantially to the theory of Meta-Learning. After the rebuttal, most reviewers found the response by the authors satisfactory and raised their score. As the AC of the paper, I also read the paper and found it novel and very well-written. I recommend acceptance of this work.
train
[ "yzaH8djvqKF", "IQgjYiHFTE", "rX10l1HmfD", "CcVVzLIyucA", "K_Tjr5CbRF", "1n0SiMzVKTq", "QCHC4lZfV14", "dj2zDru7oc", "LGEnbJFoNyf", "ORmnJioZQIs", "xXlEAqf6DVj", "xOsahsTIj-d", "vUh6OLeN9fv", "EHukXM1Dmnj", "bJ4ZnD9coCz", "w1ohS2VlhGZ" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ " We appreciate the reviewer's suggestions and we will update our final version according to your feedback.\n", "The authors derive two bounds on the generalization property of meta-learning algorithms using an information-theoretic analysis. The first one considers algorithms based on a *joint-training*, i.e. w...
[ -1, 7, -1, -1, -1, -1, -1, 7, 8, -1, -1, 7, -1, -1, -1, -1 ]
[ -1, 4, -1, -1, -1, -1, -1, 2, 2, -1, -1, 4, -1, -1, -1, -1 ]
[ "rX10l1HmfD", "nips_2021_9J2wV5E1Aq_", "EHukXM1Dmnj", "QCHC4lZfV14", "ORmnJioZQIs", "xXlEAqf6DVj", "w1ohS2VlhGZ", "nips_2021_9J2wV5E1Aq_", "nips_2021_9J2wV5E1Aq_", "vUh6OLeN9fv", "bJ4ZnD9coCz", "nips_2021_9J2wV5E1Aq_", "LGEnbJFoNyf", "IQgjYiHFTE", "xOsahsTIj-d", "dj2zDru7oc" ]
nips_2021_7BlQMwp_44p
ReLU Regression with Massart Noise
Ilias Diakonikolas, Jong Ho Park, Christos Tzamos
accept
This paper considers the problem of ReLU regression under the Massart noise model that has recently been studied extensively for classification problems. The main result of the paper is an algorithm that does exact parameter learning under certain distributional assumptions. All the reviewers appreciated the results of the paper. While it builds on prior techniques in the area, the technical novelty in the work is high enough. Certain technical questions raised by the reviewers were subsequently resolved by the authors' response. Overall this is a solid theory paper and I recommend acceptance.
train
[ "PMXJ7iV3EES", "K2JFn5SmY_x", "h9s61mRAQzS", "ktGH2uNh1O8", "8eFXmROg3nT", "nIVVQxj8G1P", "ZoZ60fe1L4O", "RxciILAjUVl", "6KEmTiPc2J", "0odb8IOz-v", "VohWkYfl8yd", "1UewisEe8v7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the feedback. I keep my initial score and recommend this paper for acceptance. ", " Thanks for your response,\n\nMost of my concerns are obviated. \nHowever, I keep my score since (at least IMO) the writing of the paper can still be greatly improved and the contents could become more accessible by th...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "ZoZ60fe1L4O", "nIVVQxj8G1P", "ktGH2uNh1O8", "1UewisEe8v7", "VohWkYfl8yd", "6KEmTiPc2J", "0odb8IOz-v", "nips_2021_7BlQMwp_44p", "nips_2021_7BlQMwp_44p", "nips_2021_7BlQMwp_44p", "nips_2021_7BlQMwp_44p", "nips_2021_7BlQMwp_44p" ]
nips_2021_omDF-uQ_OZ
Identification of the Generalized Condorcet Winner in Multi-dueling Bandits
The reliable identification of the “best” arm while keeping the sample complexity as low as possible is a common task in the field of multi-armed bandits. In the multi-dueling variant of multi-armed bandits, where feedback is provided in the form of a winning arm among a set of k chosen ones, a reasonable notion of best arm is the generalized Condorcet winner (GCW). The latter is the arm that has the greatest probability of being the winner in each subset containing it. In this paper, we derive lower bounds on the sample complexity for the task of identifying the GCW under various assumptions. As a by-product, our lower bound results provide new insights for the special case of dueling bandits (k = 2). We propose the Dvoretzky–Kiefer–Wolfowitz tournament (DKWT) algorithm, which we prove to be nearly optimal. In a numerical study, we show that DKWT empirically outperforms current state-of-the-art algorithms, even in the special case of dueling bandits or under a Plackett-Luce assumption on the feedback mechanism.
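In symbols, writing $p_j(S)$ for the probability that arm $j$ wins when the subset $S$ is dueled (a sketch of the definition stated in the abstract; the paper restricts $S$ to the subset sizes the feedback mechanism allows):

```latex
i^{*} \text{ is a GCW}
\quad\Longleftrightarrow\quad
p_{i^{*}}(S) \;\ge\; p_{j}(S)
\quad \text{for all } S \ni i^{*} \text{ and all } j \in S .
```

For $k = 2$ this reduces to the classical Condorcet winner, since $p_{i^*}(\{i^*, j\}) \ge p_j(\{i^*, j\})$ is equivalent to $p_{i^*}(\{i^*, j\}) \ge 1/2$.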
accept
The reviewers came to a consensus that the theoretical strength outweighs concerns such as the assumption of the existence of a GCW and the heaviness of the technical material. I agree with these opinions; please sincerely address the concerns raised by the reviewers in the final version, such as the relation to the previous work pointed out by NPR2. In particular, though I agree that the dense notation of this paper might be somewhat unavoidable, I expect the authors to make their best effort to improve readability with more intuition, so that the paper becomes a good starting point for the multi-dueling setting.
val
[ "pBj4t0FR1GD", "fql-BowOtJf", "qTkCeYkkFpt", "OcWQ4gzkQ6", "I56Q0mrUT2Y", "JbUnvFpXTqv", "DaMWxPPdmE" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes the notion of a \"generalised Condorcet winner\" (GCW) among contestants in a tournament (or candidates in an election). Every subset of contestants is associated with a probability distribution over the contestants in the subset; a GCW is the most probable choice in every subset. The authors ...
[ 6, -1, 6, -1, -1, -1, 7 ]
[ 3, -1, 4, -1, -1, -1, 5 ]
[ "nips_2021_omDF-uQ_OZ", "OcWQ4gzkQ6", "nips_2021_omDF-uQ_OZ", "qTkCeYkkFpt", "DaMWxPPdmE", "pBj4t0FR1GD", "nips_2021_omDF-uQ_OZ" ]
nips_2021_t8HduwpoQQv
Robust Inverse Reinforcement Learning under Transition Dynamics Mismatch
Luca Viano, Yu-Ting Huang, Parameswaran Kamalaruban, Adrian Weller, Volkan Cevher
accept
This paper considers the inverse reinforcement learning setting with a mismatch between the estimated transition dynamics and the true dynamics (because the environment is known only from state-only demonstrations and cannot be queried directly). This is an important practical problem for IRL methods that the authors address rigorously by bounding the performance degradation created by the mismatch and constructing a game-theoretic algorithm for producing a robust policy in this setting. The paper's experiments show the improvement compared to obliviousness to the dynamics mismatch. The reviewers raised a number of clarification questions for the authors, which the authors did a great job of addressing in their rebuttal, and I expect those clarifications to improve the revision of the paper itself. Given that all the final ratings at a minimum lean towards acceptance, I recommend the paper be accepted to the conference.
train
[ "j_p9cD2M1aU", "A5NTrRnHj7e", "OROtIzxFMMA", "NjRIKAAUqu", "EeZqmJKtpm9", "oH1sdwuAufH", "TItxC9qRIvc", "2_9tNhkr8s", "KbPZSVvFl7W" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies the performance degradation of a Maximum Causal Entropy learner under a transition dynamics mismatch between the expert and the learner in the imitation learning setting. The authors show that the performance of the learner is bounded by the $L_1$ distance between the transition dynamics of the ...
[ 6, -1, 7, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, 2, -1, -1, -1, -1, 4, 3 ]
[ "nips_2021_t8HduwpoQQv", "j_p9cD2M1aU", "nips_2021_t8HduwpoQQv", "KbPZSVvFl7W", "2_9tNhkr8s", "OROtIzxFMMA", "nips_2021_t8HduwpoQQv", "nips_2021_t8HduwpoQQv", "nips_2021_t8HduwpoQQv" ]
nips_2021_sneJD9juaNl
Re-ranking for image retrieval and transductive few-shot classification
In the problems of image retrieval and few-shot classification, the mainstream approaches focus on learning a better feature representation. However, directly tackling the distance or similarity measure between images could also be efficient. To this end, we revisit the idea of re-ranking the top-k retrieved images in the context of image retrieval (e.g., the k-reciprocal nearest neighbors) and generalize this idea to transductive few-shot learning. We propose to meta-learn the re-ranking updates such that the similarity graph converges towards the target similarity graph induced by the image labels. Specifically, the re-ranking module takes as input an initial similarity graph between the query image and the contextual images using a pre-trained feature extractor, and predicts an improved similarity graph by leveraging the structure among the involved images. We show that our re-ranking approach can be applied to unseen images and can further boost existing approaches for both image retrieval and few-shot learning problems. Our approach operates either independently or in conjunction with classical re-ranking approaches, yielding clear and consistent improvements on image retrieval (CUB, Cars, SOP, rOxford5K and rParis6K) and transductive few-shot classification (Mini-ImageNet, tiered-ImageNet and CIFAR-FS) benchmarks. Our code is available at https://imagine.enpc.fr/~shenx/SSR/.
accept
The paper presents an interesting approach for re-ranking based on learning a similarity graph between the data points to be ranked. Reviewers agree that the paper presents interesting ideas. One concern remains: the comparison with other GNN-based ranking methods. It would be important for the authors to present strong GNN-based ranking baselines.
train
[ "rC0cIv44f7I", "fVHPLhkYCd-", "tZCBkt1rSB", "HGv4XyyEqt", "QDlW1HZKiAm", "N0xdLU3q1uV", "9asxKII1fr", "62PEcZSeLI1", "83Sqca7QvmU", "2UV_2Ap8gth", "YBmY8r57os", "P1KWMWewOk" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you so much for your comments and suggestions! They are the incentives to make our paper stronger. ", " I've read the fellow reviewers reviews, their feedback, and the answers to my own concerns. I still think this is a good paper in the sense it proposes and tests a dynamic gnn that generalizes across da...
[ -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 5, 7 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 1 ]
[ "fVHPLhkYCd-", "2UV_2Ap8gth", "QDlW1HZKiAm", "nips_2021_sneJD9juaNl", "83Sqca7QvmU", "HGv4XyyEqt", "nips_2021_sneJD9juaNl", "YBmY8r57os", "HGv4XyyEqt", "P1KWMWewOk", "nips_2021_sneJD9juaNl", "nips_2021_sneJD9juaNl" ]
nips_2021_qGeqg4_hA2
Post-processing for Individual Fairness
Post-processing in algorithmic fairness is a versatile approach for correcting bias in ML systems that are already used in production. The main appeal of post-processing is that it avoids expensive retraining. In this work, we propose general post-processing algorithms for individual fairness (IF). We consider a setting where the learner only has access to the predictions of the original model and a similarity graph between individuals, guiding the desired fairness constraints. We cast the IF post-processing problem as a graph smoothing problem corresponding to graph Laplacian regularization that preserves the desired "treat similar individuals similarly" interpretation. Our theoretical results demonstrate the connection of the new objective function to a local relaxation of the original individual fairness. Empirically, our post-processing algorithms correct individual biases in large-scale NLP models such as BERT, while preserving accuracy.
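A generic form of such a graph-smoothing objective (a sketch; the paper's exact formulation may differ) trades fidelity to the original model's predictions $\hat{y}$ against smoothness over the similarity graph with weights $W_{ij}$ and Laplacian $L = D - W$:

```latex
\min_{f} \; \lVert f - \hat{y} \rVert_2^2 \;+\; \lambda\, f^{\top} L f,
\qquad
f^{\top} L f \;=\; \tfrac{1}{2} \sum_{i,j} W_{ij} \, (f_i - f_j)^2 ,
```

so a large penalty is paid exactly when similar individuals ($W_{ij}$ large) receive dissimilar outputs, i.e. the "treat similar individuals similarly" reading above.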
accept
This paper proposes a post-processing method for enforcing individual fairness via graph-based smoothness. All four reviewers ultimately recommended acceptance. The main concern was novelty, which was partially (but not completely) addressed in the author response, but was not deemed significant enough to prevent acceptance. Please be sure to make any changes that have been promised to the reviewers, and seriously consider attempting to address any other issues that they might have raised, in particular YBcz's novelty concern (by being careful with citations and the relationship to prior work).
train
[ "Jm0veiSq-Q0", "qkmpGW52qtE", "z0urQbGVcyS", "xWYlhTRYvPY", "q7-_dUbf3Ej", "Srq_DXcgyNb", "ZnstGlvKcsk", "L5IqEAeU206", "lo6-PLtmsSR", "nLr9Ka0Whsy", "6Ua6Sqijj4c", "RAm8TyslApE", "0yhJQTak11P", "dFtu2XsbEs", "PZy79duYq3S", "bbwWL4E17Ht" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "In this paper, the authors consider the problem of enforcing individual fairness via post-processing. In particular, the authors characterize the post-processing of individual fairness into a graph smoothing problem and propose a novel individual fairness notion (Definition 4.1 Local Individual Fairness), as well ...
[ 6, 7, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ 4, 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "nips_2021_qGeqg4_hA2", "nips_2021_qGeqg4_hA2", "q7-_dUbf3Ej", "nips_2021_qGeqg4_hA2", "L5IqEAeU206", "ZnstGlvKcsk", "0yhJQTak11P", "lo6-PLtmsSR", "nLr9Ka0Whsy", "6Ua6Sqijj4c", "PZy79duYq3S", "qkmpGW52qtE", "bbwWL4E17Ht", "Jm0veiSq-Q0", "xWYlhTRYvPY", "nips_2021_qGeqg4_hA2" ]
nips_2021_77cNKCCjgw
OpenMatch: Open-Set Semi-supervised Learning with Open-set Consistency Regularization
Semi-supervised learning (SSL) is an effective means to leverage unlabeled data to improve a model’s performance. Typical SSL methods like FixMatch assume that labeled and unlabeled data share the same label space. However, in practice, unlabeled data can contain categories unseen in the labeled set, i.e., outliers, which can significantly harm the performance of SSL algorithms. To address this problem, we propose a novel Open-set Semi-Supervised Learning (OSSL) approach called OpenMatch. Learning representations of inliers while rejecting outliers is essential for the success of OSSL. To this end, OpenMatch unifies FixMatch with novelty detection based on one-vs-all (OVA) classifiers. The OVA classifier outputs the confidence score of a sample being an inlier, providing a threshold to detect outliers. Another key contribution is an open-set soft-consistency regularization loss, which enhances the smoothness of the OVA classifier with respect to input transformations and greatly improves outlier detection. OpenMatch achieves state-of-the-art performance on three datasets, and even outperforms a fully supervised model in detecting outliers unseen in unlabeled data on CIFAR10. The code is available at \url{https://github.com/VisionLearningGroup/OP_Match}.
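A minimal numpy sketch of the inlier-scoring step behind OVA classifiers follows; the shapes and names (`ova_logits`, `cls_probs`) are assumptions for illustration, not OpenMatch's actual interface.

```python
import numpy as np

def ova_inlier_score(ova_logits, cls_probs):
    """Score a batch of samples as inliers using one-vs-all heads.

    ova_logits: (B, C, 2) -- per-class (outlier, inlier) logits.
    cls_probs:  (B, C)    -- closed-set classifier probabilities.
    Returns the inlier probability of each sample's predicted class;
    samples below a chosen threshold are rejected as outliers.
    """
    # Softmax over the (outlier, inlier) pair of every class head.
    e = np.exp(ova_logits - ova_logits.max(axis=-1, keepdims=True))
    p_inlier = e[..., 1] / e.sum(axis=-1)          # (B, C)
    pred = cls_probs.argmax(axis=1)                # predicted class
    return p_inlier[np.arange(len(pred)), pred]

# Toy usage: one confident inlier, one likely outlier.
logits = np.array([[[-2.0, 2.0]], [[2.0, -2.0]]])  # (2, 1, 2)
probs = np.ones((2, 1))
print(ova_inlier_score(logits, probs))  # ~[0.98, 0.02]
```

The soft-consistency regularization mentioned in the abstract would additionally penalize disagreement of `p_inlier` across two augmentations of the same unlabeled image.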
accept
The paper proposes a model for open-set semi-supervised learning. It is well-written and well-motivated. As pointed out by almost all the reviewers, there is limited novelty, as the proposed model is a straightforward combination of existing components. However, extensive experimental evaluation shows a big performance gap with respect to very recent baselines, and demonstrates solid progress on the open-set semi-supervised learning setting studied. To make the experimental results more convincing, it would be useful to add a sensitivity analysis of the hyperparameters (as the model involves a number of them). Also, as the problem is related to OOD detection, the authors are encouraged to include OOD-related baselines.
train
[ "LOGG7qA1Pwr", "9d0QVlHqh_5", "WlspuyR7Pno", "MxQEz4LS30B", "eSe3em4fn19", "LWZp03IPkVi", "PsiJmkrM2I", "7IsyuF9muc3", "RShw0Od44fk", "bSAY0d3Q2JF", "RYFBRi8KGhj", "sID72cz-Lr", "yifKTW5DZoj", "beXQrxQa8nX", "PNMN30TC0MN", "43tc4_fHzv", "sRfoOgmsohh", "01MSOc_shlp" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_rev...
[ " Thanks for the question. \n\nIn the first column of Table 4, we are showing the AUROC to separate inliers and outliers within the dataset used for training. For example, inliers and outliers of CIFAR10 are employed to compute the AUROC in the first column. Please note that the values in the first column of Table ...
[ -1, -1, -1, -1, -1, 5, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ -1, -1, -1, -1, -1, 5, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "9d0QVlHqh_5", "RShw0Od44fk", "43tc4_fHzv", "PNMN30TC0MN", "PsiJmkrM2I", "nips_2021_77cNKCCjgw", "nips_2021_77cNKCCjgw", "nips_2021_77cNKCCjgw", "bSAY0d3Q2JF", "yifKTW5DZoj", "sID72cz-Lr", "beXQrxQa8nX", "LWZp03IPkVi", "7IsyuF9muc3", "01MSOc_shlp", "sRfoOgmsohh", "nips_2021_77cNKCCjg...
nips_2021_5KWmB6JePx
End-to-End Training of Multi-Document Reader and Retriever for Open-Domain Question Answering
We present an end-to-end differentiable training method for retrieval-augmented open-domain question answering systems that combine information from multiple retrieved documents when generating answers. We model retrieval decisions as latent variables over sets of relevant documents. Since marginalizing over sets of retrieved documents is computationally hard, we approximate this using an expectation-maximization algorithm. We iteratively estimate the value of our latent variable (the set of relevant documents for a given question) and then use this estimate to update the retriever and reader parameters. We hypothesize that such end-to-end training allows training signals to flow to the reader and then to the retriever better than staged-wise training. This results in a retriever that is able to select more relevant documents for a question and a reader that is trained on more accurate documents to generate an answer. Experiments on three benchmark datasets demonstrate that our proposed method outperforms all existing approaches of comparable size by 2-3% absolute exact match points, achieving new state-of-the-art results. Our results also demonstrate the feasibility of learning to retrieve to improve answer generation without explicit supervision of retrieval decisions.
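The following toy numpy sketch illustrates the flavor of the EM loop over latent relevant documents; it collapses the paper's marginalization over document sets to a hard top-k estimate, and all arrays are stand-ins rather than real retriever/reader models.

```python
import numpy as np

def em_step(retriever_logits, reader_loglik, k=2, lr=0.5):
    """One toy EM iteration over which documents are 'relevant'.

    E-step: posterior over documents proportional to the retriever
            prior times the reader likelihood of the gold answer.
    M-step: nudge retriever logits toward the top-k posterior docs
            (a hard-EM flavored update; the paper marginalizes over
            *sets* of documents, which this sketch collapses to top-k).
    """
    prior = np.exp(retriever_logits - retriever_logits.max())
    prior /= prior.sum()
    post = prior * np.exp(reader_loglik)
    post /= post.sum()
    target = np.zeros_like(post)
    target[np.argsort(-post)[:k]] = 1.0 / k       # estimated latent set
    return retriever_logits + lr * (target - prior)

logits = np.zeros(5)                               # uniform retriever
loglik = np.array([-3.0, -0.1, -2.5, -0.2, -4.0])  # reader signal
for _ in range(10):
    logits = em_step(logits, loglik)
print(logits.round(2))  # documents 1 and 3 get boosted
```

This mirrors the abstract's point that the reader's answer quality supplies the training signal for retrieval without explicit retrieval supervision.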
accept
This paper proposes a new algorithm for training a multi-document question answering model containing a reader and a retriever. The key idea is to introduce latent variables over the sets of retrieved documents, employ the EM algorithm to approximately estimate the variables, and update the parameters of the reader and the retriever. Experimental results show that the proposed approach outperforms the baselines by about 2-3% on three benchmark datasets. Pros * The problem studied is important. * The paper is generally clearly written. * The proposed approach appears to be reasonable and sound. * Experimental results show the efficacy of the proposed approach. The improvements are significant. Cons * There were related models proposed in the past, so the idea of the work is not very novel. The paper mainly demonstrates that a new way of combining the existing ideas does work better. * There are some details that are not explained very clearly. The authors have promised to make them clearer in the revised version. * The authors have also successfully addressed many of the issues pointed out by the reviewers in the rebuttal. Overall this is a solid work. We think that the contribution to ML is mainly from the empirical results.
train
[ "nKgg6FywhJL", "JB9sSiAblc8", "X2sFslnKP4", "L4ORAcaaoyZ", "cKTqBWmYak", "eaeVhSoBlSk", "zO2zKTPoL7Y", "ky5h7BUsaMD" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank the reviewer for providing the review and their thoughtful comments.\n\n### Comparison of $\\mathcal{L}_{alt2}$, Hard EM, and REINFORCE\nRegarding your first question about the connections of $\\mathcal{L_{alt2}}$ to Hard EM and REINFORCE, there are some similarities at the conceptual level e...
[ -1, -1, -1, -1, 6, 6, 6, 8 ]
[ -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "cKTqBWmYak", "eaeVhSoBlSk", "ky5h7BUsaMD", "zO2zKTPoL7Y", "nips_2021_5KWmB6JePx", "nips_2021_5KWmB6JePx", "nips_2021_5KWmB6JePx", "nips_2021_5KWmB6JePx" ]
nips_2021_IdqbmJx-urQ
Fast Algorithms for $L_\infty$-constrained S-rectangular Robust MDPs
Bahram Behzadian, Marek Petrik, Chin Pang Ho
accept
The reviewers and AC discussed the paper. The author response was helpful in clarifying some confusion. There was agreement that the paper is interesting. There are still some concerns about novelty and missing technical details, so this should be further clarified in the final version (in particular, the differences with Ho, Petrik and Wiesemann, 2018).
train
[ "KWwKSN_6r8", "TrMRnf2Rprk", "bXM6I_9-r_8", "uw8hKcXJe-", "eInQh627DGq", "6u9et4sPeC-", "mf_VevZ43vs", "n949zB0VSd", "TiAXc7cXw9", "EgO-MCPd5Ua" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors in the paper present a new, fast algorithm for solving RMDPs with L∞-constrained ambiguity sets. In particular, they propose a new homotopy method for solving SA-rectangular ambiguity sets and a bisection method that can solve, in combination with the homotopy method, RMDPs with S-rectangular ambiguity...
[ 7, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 2, 4 ]
[ "nips_2021_IdqbmJx-urQ", "EgO-MCPd5Ua", "KWwKSN_6r8", "EgO-MCPd5Ua", "TiAXc7cXw9", "KWwKSN_6r8", "n949zB0VSd", "nips_2021_IdqbmJx-urQ", "nips_2021_IdqbmJx-urQ", "nips_2021_IdqbmJx-urQ" ]
nips_2021_AjgFqUoD4U
Instance-optimal Mean Estimation Under Differential Privacy
Mean estimation under differential privacy is a fundamental problem, but worst-case optimal mechanisms do not offer meaningful utility guarantees in practice when the global sensitivity is very large. Instead, various heuristics have been proposed to reduce the error on real-world data that do not resemble the worst-case instance. This paper takes a principled approach, yielding a mechanism that is instance-optimal in a strong sense. In addition to its theoretical optimality, the mechanism is also simple and practical, and adapts to a variety of data characteristics without the need for parameter tuning. It easily extends to the local and shuffle models as well.
accept
This paper studies a fundamental problem in the differential privacy literature, namely mean estimation. It presents a novel algorithm and accompanies this with a theoretical and experimental evaluation. The reviewers all appreciate these contributions as valuable and worth accepting. There are some reservations about the clarity of the paper. In particular, the algorithm is not clearly explained and the instance optimality claim hides a significant gap between the upper and lower bounds. Hopefully, these presentation issues can be resolved in the camera ready revision.
train
[ "DM9wS9Feq1_", "cfwREV1b4hy", "Lw_rQqA2mz", "WDxTUpLKkQx", "3BVCn_mEoqZ", "kO9QpKAC6Tk", "b2LotEX_9f", "GF0agMsN5z_", "Cyk81RlVk70", "5EIe5FExeLu", "eiD0lPpoDAe", "Rbkeq1Gxqs", "LxMLu2vR85", "eZic8rj7d42", "_bkmbjLkcBc", "dlmLurcHAD0", "uamOMhAqmNo", "BMaIOVzL8nl" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_rev...
[ " We have to use $\\mathcal{R}_{\\mathrm{in\\mbox{-}nbr}}(\\cdot)$ in order to establish instance optimality. We actually follow the notation in [4], where they also use $\\mathcal{R}(\\cdot)$ to denote a neighborhood lower bound, but their neighborhood includes all neighbors. We consider this as a major differenc...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "cfwREV1b4hy", "WDxTUpLKkQx", "nips_2021_AjgFqUoD4U", "3BVCn_mEoqZ", "_bkmbjLkcBc", "b2LotEX_9f", "dlmLurcHAD0", "eZic8rj7d42", "5EIe5FExeLu", "Rbkeq1Gxqs", "nips_2021_AjgFqUoD4U", "LxMLu2vR85", "eiD0lPpoDAe", "BMaIOVzL8nl", "Lw_rQqA2mz", "uamOMhAqmNo", "nips_2021_AjgFqUoD4U", "nip...
nips_2021_hA-PHQGOjqQ
Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis
We describe a novel attribution method which is grounded in Sensitivity Analysis and uses Sobol indices. Beyond modeling the individual contributions of image regions, Sobol indices provide an efficient way to capture higher-order interactions between image regions and their contributions to a neural network's prediction through the lens of variance. We describe an approach that makes the computation of these indices efficient for high-dimensional problems by using perturbation masks coupled with efficient estimators to handle the high dimensionality of images. Importantly, we show that the proposed method leads to favorable scores on standard benchmarks for vision (and language models) while drastically reducing the computing time compared to other black-box methods -- even surpassing the accuracy of state-of-the-art white-box methods which require access to internal representations. Our code is freely available: github.com/fel-thomas/Sobol-Attribution-Method.
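As a sketch of how such indices can be estimated with perturbation masks, the snippet below implements a Jansen-style total Sobol estimator over a generic black box `f`; the masking scheme, sample counts, and estimator details in the paper differ.

```python
import numpy as np

def total_sobol_indices(f, d, N=256, rng=np.random.default_rng(0)):
    """Jansen-style estimate of total Sobol indices of d mask entries.

    f maps a batch of masks in [0,1]^d to scalar model outputs.
    T_i measures how much of the output variance involves region i,
    including its interactions with other regions.
    """
    A = rng.random((N, d))
    B = rng.random((N, d))
    fA = f(A)
    var = fA.var() + 1e-12
    T = np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]               # resample only region i
        T[i] = 0.5 * np.mean((fA - f(AB)) ** 2) / var
    return T

# Toy black box: region 0 matters, region 1 is ignored.
f = lambda m: 3.0 * m[:, 0] + 0.0 * m[:, 1]
print(total_sobol_indices(f, d=2).round(2))  # ~[1.0, 0.0]
```

In the attribution setting, each mask entry would gate one image region before a forward pass through the model.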
accept
A method for assessing the influence of image regions on neural network predictions is proposed. Reviewers n1GW and Szkm are positive about the approach and its reported experimental performance. In discussion following the author response, Reviewer Xxm2 stated the response's ablation study sufficiently addressed concerns.
train
[ "QDz3KgPGHGL", "MC6fyWs-LLN", "cuZOUk8dISt", "muipcfxiN1K", "CYPE4XG5j01", "L1SSlt0tPDC", "hIJp_tOrTu", "M175RfVZfin" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer,\n\nAs promised, we updated our table of results to address your remark on the effectiveness of the higher-order information.\n\nBest regards", " Thank you for your review. We are happy to address your comments. The paper will improve as a result.\n\nQ: “I do not see why black-box attribution mode...
[ -1, -1, -1, -1, -1, 4, 7, 6 ]
[ -1, -1, -1, -1, -1, 4, 4, 2 ]
[ "MC6fyWs-LLN", "L1SSlt0tPDC", "muipcfxiN1K", "hIJp_tOrTu", "M175RfVZfin", "nips_2021_hA-PHQGOjqQ", "nips_2021_hA-PHQGOjqQ", "nips_2021_hA-PHQGOjqQ" ]
nips_2021_DZKsFQyDB9
PatchGame: Learning to Signal Mid-level Patches in Referential Games
We study a referential game (a type of signaling game) where two agents communicate with each other via a discrete bottleneck to achieve a common goal. In our referential game, the goal of the speaker is to compose a message or a symbolic representation of "important" image patches, while the task for the listener is to match the speaker's message to a different view of the same image. We show that it is indeed possible for the two agents to develop a communication protocol without explicit or implicit supervision. We further investigate the developed protocol and show the applications in speeding up recent Vision Transformers by using only important patches, and as pre-training for downstream recognition tasks (e.g., classification).
accept
The author response addressed most of the concerns of the reviewers and all reviewers recommend to accept the paper after the author response and discussion. I recommend accept with the expectation that the authors will revise the paper for the camera ready, addressing the reviewer's concerns according to the author response, including, but not limited to the following aspects: 1. Add the mean and standard error to Table 1, 2 and Figure 4, 6 2. Add results on Pascal VOC 3. Add an extra ablation study on augmentations and batch sizes 4. Add ablation study for Mihai et al. and discuss it in the related work 5. Move the discussion on sentence length from the appendix to the main paper 6. Fix typos and make other minor writing improvements
test
[ "y5zQTAJMdS8", "3CoUv_ZlJf", "wKuGlzBCYbU", "QgBzvKJj1sD", "ISfQksKCJI2", "ANqO7DoSGWq", "RQ-OYlBJhQA", "aW5sCEyWwtc", "t_8pD0QHJLb", "gPKomXdj0n-", "NI95TkDgII1", "bS_qCAQ2o80", "BrmzAyTMwKd", "i4JBiWQfnwS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents a visual referential game played by a pair of agents. Compared to existing work in this space, the game is a variant of those studied previously as the communication is based directly on encoded patches (rather than a latent vector describing the whole image), and the agent models diverge from ...
[ 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ 5, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_DZKsFQyDB9", "bS_qCAQ2o80", "ISfQksKCJI2", "RQ-OYlBJhQA", "gPKomXdj0n-", "nips_2021_DZKsFQyDB9", "t_8pD0QHJLb", "nips_2021_DZKsFQyDB9", "ANqO7DoSGWq", "y5zQTAJMdS8", "i4JBiWQfnwS", "BrmzAyTMwKd", "nips_2021_DZKsFQyDB9", "nips_2021_DZKsFQyDB9" ]
nips_2021_h1bPe7spQkr
Implicit Generative Copulas
Copulas are a powerful tool for modeling multivariate distributions, as they allow one to separately estimate the univariate marginal distributions and the joint dependency structure. However, known parametric copulas offer limited flexibility, especially in high dimensions, while commonly used non-parametric methods suffer from the curse of dimensionality. A popular remedy is to construct a tree-based hierarchy of conditional bivariate copulas. In this paper, we propose a flexible, yet conceptually simple alternative based on implicit generative neural networks. The key challenge is to ensure marginal uniformity of the estimated copula distribution. We achieve this by learning a multivariate latent distribution with unspecified marginals but the desired dependency structure. By applying the probability integral transform, we can then obtain samples from the high-dimensional copula distribution without relying on parametric assumptions or the need to find a suitable tree structure. Experiments on synthetic and real data from finance, physics, and image generation demonstrate the performance of this approach.
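A minimal sketch of the probability integral transform step is shown below, with a fixed correlated Gaussian standing in for the trained latent generator; ranking each coordinate yields approximately uniform marginals while preserving the dependence structure.

```python
import numpy as np
from scipy.stats import rankdata

def to_copula_samples(Z):
    """Probability integral transform via empirical CDFs.

    Each column of Z is ranked and rescaled to (0, 1), so the output
    has (approximately) uniform marginals while keeping the joint
    dependence of Z -- i.e., samples from the implied copula.
    """
    n = Z.shape[0]
    return rankdata(Z, axis=0) / (n + 1)

# Stand-in for a trained latent generator: correlated Gaussian draws.
rng = np.random.default_rng(0)
L = np.array([[1.0, 0.0], [0.8, 0.6]])
Z = rng.standard_normal((1000, 2)) @ L.T
U = to_copula_samples(Z)
print(U.min().round(3), U.max().round(3))      # inside (0, 1)
print(np.corrcoef(U.T)[0, 1].round(2))         # dependence preserved
```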
accept
Reviewers had divergent opinions on this paper: behind the scenes, in the discussion phase, one was arguing strongly for acceptance, and one for rejection. The primary complaints were: (i) the model may have limited applicability, due to the fact that it is purely "implicit" and does not provide a density estimate, precluding many of the common uses of copula models; (ii) the empirical evaluation could have been stronger, as there is only moderate improvement relative to vine copula baselines, and for image data such as Fashion-MNIST, other metrics could be more appropriate. However, all reviewers agreed that the authors responded well to concerns during the discussion period. I would strongly suggest including the suggested additional results on generation for standard datasets in the appendix, as mentioned, as well as including more on motivating examples in which an implicit copula model is helpful despite the lack of an explicit density.
train
[ "ct3bxBv7nvT", "pDMXEQo9kfj", "fncO8DmRj4", "Un3mdpH1U6", "DmxoeXcZssT", "BWuMfcsGyT0", "C6yAkxqsXg", "S9fwIyAGIpy" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes an implicit method (IGC) of estimating a copula. The suggested approach is elegant and from the presented results the method seems to work on par with existing non-parametric vine copula approaches. Previous works have made an attempt to use GANs for implicit copula estimation with difficulties,...
[ 7, -1, -1, -1, -1, 6, 7, 4 ]
[ 5, -1, -1, -1, -1, 2, 5, 4 ]
[ "nips_2021_h1bPe7spQkr", "S9fwIyAGIpy", "ct3bxBv7nvT", "BWuMfcsGyT0", "C6yAkxqsXg", "nips_2021_h1bPe7spQkr", "nips_2021_h1bPe7spQkr", "nips_2021_h1bPe7spQkr" ]
nips_2021_-t9LPHRYKmi
Tensor Normal Training for Deep Learning Models
Despite the predominant use of first-order methods for training deep learning models, second-order methods, and in particular, natural gradient methods, remain of interest because of their potential for accelerating training through the use of curvature information. Several methods with non-diagonal preconditioning matrices, including KFAC, Shampoo, and K-BFGS, have been proposed and shown to be effective. Based on the so-called tensor normal (TN) distribution, we propose and analyze a new approximate natural gradient method, Tensor Normal Training (TNT), which, like Shampoo, only requires knowledge of the shape of the training parameters. By approximating the probabilistically based Fisher matrix, as opposed to the empirical Fisher matrix, our method uses the block-wise covariance of the sampling-based gradient as the preconditioning matrix. Moreover, the assumption that the sampling-based (tensor) gradient follows a TN distribution ensures that its covariance has a Kronecker-separable structure, which leads to a tractable approximation to the Fisher matrix. Consequently, TNT's memory requirements and per-iteration computational costs are only slightly higher than those for first-order methods. In our experiments, TNT exhibited superior optimization performance to state-of-the-art first-order methods, and comparable optimization performance to the state-of-the-art second-order methods KFAC and Shampoo. Moreover, TNT demonstrated its ability to generalize as well as first-order methods, while using fewer epochs.
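To illustrate the structural idea (not TNT's exact scaling or damping choices), the following numpy sketch preconditions a matrix-shaped gradient with Kronecker factors built from sampled gradients, exploiting the identity (A kron B)^{-1} vec(G) = vec(B^{-1} G A^{-1}) for symmetric factors so the full curvature matrix is never formed.

```python
import numpy as np

def kron_precondition(G_samples, G, damping=1e-3):
    """Precondition a matrix gradient G with Kronecker factors.

    A ~ column covariance and B ~ row covariance of sampled gradients.
    Using (A kron B)^{-1} vec(G) = vec(B^{-1} G A^{-1}), the update
    never materializes the full (mn x mn) curvature matrix.
    Exact scaling/damping choices in TNT differ from this sketch.
    """
    m, n = G.shape
    A = sum(Gi.T @ Gi for Gi in G_samples) / (len(G_samples) * m)
    B = sum(Gi @ Gi.T for Gi in G_samples) / (len(G_samples) * n)
    A += damping * np.eye(n)
    B += damping * np.eye(m)
    return np.linalg.solve(B, G) @ np.linalg.inv(A)

rng = np.random.default_rng(0)
samples = [rng.standard_normal((4, 3)) for _ in range(32)]
print(kron_precondition(samples, samples[0]).shape)  # (4, 3)
```

This is why memory and per-iteration cost stay close to first-order methods: only the small n x n and m x m factors are stored and inverted.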
accept
This paper develops a new 2nd-order optimization method targeted at neural networks that combines aspects of K-FAC and Shampoo. In particular, it uses a Shampoo-style approximation to estimate the Fisher information matrix instead of the Empirical Fisher (as in Shampoo), and like K-FAC raises the curvature matrix approximation to the power -1 to compute the preconditioner. The reviewers agree that this is a well-written paper that makes a solid contribution to the area. The main concerns of the reviewers are related to the experiments and the tuning of different optimizers. I share these concerns, but am pleased to see that the authors are quite willing to rerun their experiments based on the reviewers' guidance and update their conclusions accordingly. One thing I would like to bring up again is the use of SVD (which is much slower than inverses) and constant damping in K-FAC. Especially for those autoencoder problems, a constant damping is highly suboptimal. The Tensorflow release of K-FAC has a MNIST autoencoder experiment implementation (under tensorflow_kfac/examples/autoencoder_mnist.py) which reproduces the results from the original paper (which gets a loss of 50.25 after 1.5k steps using an increasing batch size schedule). It would be quite interesting to see how well the proposed preconditioner would perform as a drop-in replacement for K-FAC in that setting, or even one with a fixed batch size of 60k. Another thing probably worth pointing out is that this method, like Shampoo, uses the tensor shape of the network's parameters to construct its approximations. That this works well for convnets is entirely due to the specific role these parameters play in the network, and how this just happens to be related to their shape in typical implementations. In general, shape can be totally arbitrary, and there is nothing preventing me from parameterizing all of the convolutional layers in my network with flat vectors, or with tensors of the same shape but with randomly permuted entries.
train
[ "40Yhxryc-SG", "Px3e8tPR2Fo", "5ibHJsMlcM1", "22FUPlX26Dp", "igCflFYsVyN", "peV51MVacsb", "YLegJL2sb_v", "XgGIPPafvOn", "70zqo4luZd", "7Q6YPXqqcmS", "r3txLXFlTpd" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Paper proposes a tractable natural gradient approximation. It uses the shampoo approximation algebra combined with exponents of -`1 as its approximating the Fisher. Empirical experiments were carried out in two tasks: 1) autoencoder as well 2) convolutional nets. The approach is clever, and is also a straightfor...
[ 6, -1, -1, -1, -1, 7, -1, -1, -1, -1, 7 ]
[ 5, -1, -1, -1, -1, 5, -1, -1, -1, -1, 4 ]
[ "nips_2021_-t9LPHRYKmi", "5ibHJsMlcM1", "22FUPlX26Dp", "igCflFYsVyN", "XgGIPPafvOn", "nips_2021_-t9LPHRYKmi", "peV51MVacsb", "40Yhxryc-SG", "r3txLXFlTpd", "nips_2021_-t9LPHRYKmi", "nips_2021_-t9LPHRYKmi" ]
nips_2021_LBhruMnhgIB
Unintended Selection: Persistent Qualification Rate Disparities and Interventions
Reilly Raab, Yang Liu
accept
(The meta-review was written jointly by AC and SAC after discussion with the program chairs.) This paper studies how qualification disparities between groups can arise without structural differences between different groups. That is: when (optimal) machine learning systems make decisions that impact people, how can these systems cause inequality to arise and maintain inequality when it is present? Furthermore, when the system is modified to enforce standard fairness constraints, how does this impact long term inequality? Overall, the topic is of extreme interest to the ML community (long-term fairness of systems). The reviewers found the paper to be technically clear and impactful, and all recommend acceptance, two strongly (scores = 8,9). The one reviewer only weakly recommending acceptance (score = 6) did not engage in the discussion. However, while the paper's technical contributions are crisp, several sociotechnical issues were identified by the SAC (in collaboration with ethics chairs and an additional AC). The decision is to conditionally accept the paper. The sociotechnical issues, raised in a separate comment that serves as a sociotechnical review, must be addressed. In particular, the following changes need to be implemented for the paper to be accepted (these are copy-pasted from the sociotechnical review): 1. Review and update analogies drawn from evolutionary biology, so as to minimize the risk of essentialist misinterpretation. 2. Connect technical definitions to some real-world examples, and discuss how the assumptions made in the paper align or not with those real-world examples. Some of the key terms that require connecting to examples are "qualification" and "subpopulation". The best approach would be to introduce a plausible running example that readers can use to understand the implications of the results. Such an example would need to pin down what precisely is meant by "qualification", how the groups (subpopulations) are defined, and what the classifier is doing. If no plausible example can be produced, then the authors need to walk back their statements about the applicability of the results to real-world sociotechnical systems. 3. Be explicit about the operationalization of fairness and explain what the various assumptions and criteria around that operationalization mean in the context of real-world dynamics. 4. Discuss how the replicator dynamics might be realized in real-world examples. 5. Connect the proposed intervention to a clear motivation of mitigating harms to marginalized groups, and more explicitly clarify differences between this intervention and other approaches (such as equalized odds and demographic parity) in service of that overall goal. UPDATE: The revisions submitted by the authors have been reviewed and the paper has been officially accepted.
test
[ "v5wHaWt1pkT", "lsbt-_nCRGR", "-3c--47EXkC", "mzZYfzLUhf", "H-7e8kikIK", "egsGrAcmWU2", "7igbn1s5Yx4", "e164sSGcsxB", "NFDpf9-HsvJ", "og6vDv0Cs4S" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The new title sounds fine to me!", " Thank you again for your comments. We greatly appreciate them.\n\nWith respect to revisions:\n\n* We concur: addressing the differences in trajectories predicted by our model\nand the Coate and Loury model would provide an additional means of\ndistinguishing the theories whi...
[ -1, -1, -1, -1, -1, -1, -1, 8, 9, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 5, 3 ]
[ "lsbt-_nCRGR", "-3c--47EXkC", "7igbn1s5Yx4", "nips_2021_LBhruMnhgIB", "e164sSGcsxB", "og6vDv0Cs4S", "NFDpf9-HsvJ", "nips_2021_LBhruMnhgIB", "nips_2021_LBhruMnhgIB", "nips_2021_LBhruMnhgIB" ]
nips_2021_OMNRFw1fX3a
Revisiting 3D Object Detection From an Egocentric Perspective
3D object detection is a key module for safety-critical robotics applications such as autonomous driving. For these applications, we care most about how the detections affect the ego-agent’s behavior and safety (the egocentric perspective). Intuitively, we seek more accurate descriptions of object geometry when it’s more likely to interfere with the ego-agent’s motion trajectory. However, current detection metrics, based on box Intersection-over-Union (IoU), are object-centric and aren’t designed to capture the spatio-temporal relationship between objects and the ego-agent. To address this issue, we propose a new egocentric measure to evaluate 3D object detection, namely Support Distance Error (SDE). Our analysis based on SDE reveals that the egocentric detection quality is bounded by the coarse geometry of the bounding boxes. Given the insight that SDE would benefit from more accurate geometry descriptions, we propose to represent objects as amodal contours, specifically amodal star-shaped polygons, and devise a simple model, StarPoly, to predict such contours. Our experiments on the large-scale Waymo Open Dataset show that SDE better reflects the impact of detection quality on the ego-agent’s safety compared to IoU; and the estimated contours from StarPoly consistently improve the egocentric detection quality over recent 3D object detectors.
accept
After the rebuttal, most reviewers agreed to accept this paper. The remaining concerns centered around connections of the proposed evaluation method and algorithm to downstream planning and driving performance. The AC agrees that the paper would be stronger if it made these connections, but sees enough merit in the proposed work to accept the paper without them.
test
[ "cA0A-KQa4aD", "dwDbs5cJrLN", "YsUL2baI-UM", "eBZ3VrTjch", "NNnteZ75Pf6", "wQH1TdFseh7", "OZGNd2CaTY", "XRsLcDYCbyd", "jJIe3OrpJPx", "SYtppvcD4OB", "k6FklBfmI-n" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the suggestions. We will add the recommended sign and threshold analysis and add further discussion of when StarPoly is most valuable in comparison to bounding boxes.", " Thanks for addressing my questions.\n1. Indeed sign and use different thresholds can be easily added. Please add the analysis in t...
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 4, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "dwDbs5cJrLN", "NNnteZ75Pf6", "jJIe3OrpJPx", "k6FklBfmI-n", "SYtppvcD4OB", "XRsLcDYCbyd", "nips_2021_OMNRFw1fX3a", "nips_2021_OMNRFw1fX3a", "nips_2021_OMNRFw1fX3a", "nips_2021_OMNRFw1fX3a", "nips_2021_OMNRFw1fX3a" ]
nips_2021_lN2Uqm-ScC
Optimizing Information-theoretical Generalization Bound via Anisotropic Noise of SGLD
Recently, the information-theoretical framework has been shown to yield non-vacuous generalization bounds for large models trained by Stochastic Gradient Langevin Dynamics (SGLD) with isotropic noise. In this paper, we optimize the information-theoretical generalization bound by manipulating the noise structure in SGLD. We prove that, under a constraint guaranteeing low empirical risk, the optimal noise covariance is the square root of the expected gradient covariance if both the prior and the posterior are jointly optimized. This validates that the optimal noise is quite close to the empirical gradient covariance. Technically, we develop a new information-theoretical bound that enables such an optimization analysis. We then apply matrix analysis to derive the form of the optimal noise covariance. The presented constraint and results are validated by empirical observations.
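A toy numpy sketch of an SGLD step with such anisotropic noise follows; it estimates the gradient covariance from a minibatch of gradient samples and injects noise whose covariance is that matrix's square root, as the abstract's optimality result suggests. All names and constants are illustrative.

```python
import numpy as np

def anisotropic_sgld_step(theta, grads, lr=0.05,
                          rng=np.random.default_rng(0)):
    """One SGLD step with noise covariance = sqrtm(gradient covariance).

    grads: (k, d) minibatch gradient samples at theta, used both for
    the descent direction (their mean) and to estimate the covariance
    whose matrix square root shapes the injected noise.
    """
    g = grads.mean(axis=0)
    C = np.cov(grads, rowvar=False) + 1e-6 * np.eye(len(theta))
    # Matrix square root of C via its eigendecomposition.
    w, V = np.linalg.eigh(C)
    sqrtC = V @ np.diag(np.sqrt(np.maximum(w, 0))) @ V.T
    # Square root of the *noise covariance* sqrtC, for sampling.
    w2, V2 = np.linalg.eigh(sqrtC)
    noise_half = V2 @ np.diag(np.sqrt(np.maximum(w2, 0))) @ V2.T
    xi = noise_half @ rng.standard_normal(len(theta))
    return theta - lr * g + np.sqrt(2 * lr) * xi

theta = np.array([1.0, -1.0])
grads = np.stack([theta + 0.1 * np.random.randn(2) for _ in range(16)])
theta = anisotropic_sgld_step(theta, grads)
```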
accept
The paper studies the connection between the generalization ability of SGLD and the covariance structure of its noise term. The reviewers initially had many concerns about the lack of clarity of the paper and some details in the derivation of the proofs. Most of these concerns were addressed by the authors in the rebuttal, and some reviewers raised their scores as a result. Overall, the contribution made in the paper is seen as interesting and worthy of acceptance. However, many reviewers, myself included, think that the writing in some sections of the paper and the clarity are poor. I strongly advise the authors to follow the suggestions made by the reviewers to improve the manuscript for the camera-ready version.
val
[ "xWuy5TPBe_8", "4FapqiGWyiq", "VqwTci7ibDE", "FsZC55P2zme", "B0VWpeq0RGN", "oqczypvn6Hb", "ym1899ReNqv", "DaAaAUnTqe5", "N9iXAE6Gu28", "Ugn8fJtT2LC", "rXoFnVEd9Mi", "8ItT-WCgds", "vdc9SM5YpR9", "ZL0At6OoA90", "dEuxmsD9gEp", "mvSIsche265", "MDQZdEL0qG", "nVgyggVP60", "cg7VkjgvXb_"...
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "...
[ " **Q1**: Usefulness of the information-theoretical bounds. It may not reasonable that the generalization error increasing with time.\n\n**A1**: Thank you for the comment. There might be a misunderstanding about the ``generalization error``. The reviewer may refer to the **test error** (error on the test data) as ...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, 2, 3 ]
[ "sTRlQKGG9b4", "VqwTci7ibDE", "B0VWpeq0RGN", "nips_2021_lN2Uqm-ScC", "oqczypvn6Hb", "ym1899ReNqv", "DaAaAUnTqe5", "Ugn8fJtT2LC", "8ItT-WCgds", "dEuxmsD9gEp", "nips_2021_lN2Uqm-ScC", "rXoFnVEd9Mi", "ZL0At6OoA90", "MDQZdEL0qG", "mvSIsche265", "Rruq0cJHhe", "nVgyggVP60", "cg7VkjgvXb_"...
nips_2021_WwqOoNnA8f
Addressing Algorithmic Disparity and Performance Inconsistency in Federated Learning
Federated learning (FL) has gained growing interest for its capability of learning from distributed data sources collectively, without the need to access the raw data samples across different sources. So far, FL research has mostly focused on improving performance; how algorithmic disparity is affected for the model learned from FL, and the impact of algorithmic disparity on utility inconsistency, remain largely unexplored. In this paper, we propose an FL framework to jointly consider performance consistency and algorithmic fairness across different local clients (data sources). We derive our framework from a constrained multi-objective optimization perspective, in which we learn a model satisfying fairness constraints on all clients with consistent performance. Specifically, we treat the algorithm's prediction loss at each local client as an objective and maximize the performance of the worst-performing client under fairness constraints, by optimizing a surrogate maximum function involving all objectives. A gradient-based procedure is employed to achieve the Pareto optimality of this optimization problem. Theoretical analysis is provided to prove that our method can converge to a Pareto solution that achieves the min-max performance with fairness constraints on all clients. Comprehensive experiments on synthetic and real-world datasets demonstrate the superiority of our approach over baselines and its effectiveness in achieving both fairness and consistency across all local clients.
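As a sketch of the min-max surrogate (with the fairness constraints omitted for brevity), the snippet below smooths the worst-client objective with a log-sum-exp and returns the resulting softmax-weighted gradient direction; the temperature `tau` and all names are assumptions, not the paper's exact surrogate.

```python
import numpy as np

def minimax_direction(client_losses, client_grads, tau=0.1):
    """Descent direction for a smoothed max over client losses.

    max_i L_i is replaced by tau * logsumexp(L / tau); its gradient is
    a softmax-weighted combination of per-client gradients, so the
    worst-performing clients dominate the update. The paper adds
    per-client fairness constraints on top of such a surrogate.
    """
    L = np.asarray(client_losses)
    w = np.exp((L - L.max()) / tau)
    w /= w.sum()
    return sum(wi * gi for wi, gi in zip(w, client_grads))

losses = [0.2, 0.9, 0.3]                    # client 1 is worst
grads = [np.ones(2), -np.ones(2), np.ones(2)]
print(minimax_direction(losses, grads))     # ~ gradient of client 1
```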
accept
This paper studies fairness concerns in federated learning settings, where a vanilla global model may be particularly bad on some clients versus others. The authors solve this issue by maximizing the performance of the model on the worst client, leading to minimax-style fairness guarantees. The authors complement this analysis with an empirical evaluation of their method.
train
[ "rVAM56Mstve", "PHNoG3gvfxa", "URdBtiSzDs6", "WfFDtl1oV6G", "dH_lG0pSWOk", "aAZ_0FqxxLb", "inhl4jb3aqi", "UJdqrAszmyP", "_YyHf64IoLF" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a framework for finding solutions that achieve both demographic parity and performance consistency across local clients in FL. They propose first achieving min-max performance with multi-client fairness and then finding a solution that is Pareto optimal and demonstrate their method on experimen...
[ 7, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ 3, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "nips_2021_WwqOoNnA8f", "rVAM56Mstve", "UJdqrAszmyP", "_YyHf64IoLF", "rVAM56Mstve", "inhl4jb3aqi", "nips_2021_WwqOoNnA8f", "nips_2021_WwqOoNnA8f", "nips_2021_WwqOoNnA8f" ]
nips_2021_wQZWg82TWx
A Mathematical Framework for Quantifying Transferability in Multi-source Transfer Learning
Current transfer learning algorithm designs mainly focus on the similarities between source and target tasks, while the impacts of the sample sizes of these tasks are often not sufficiently addressed. This paper proposes a mathematical framework for quantifying the transferability in multi-source transfer learning problems, with both the task similarities and the sample complexity of learning models taken into account. In particular, we consider the setup where the models learned from different tasks are linearly combined for learning the target task, and use the optimal combining coefficients to measure the transferability. Then, we derive the analytical expression of this transferability measure, characterized by the sample sizes, model complexity, and the similarities between source and target tasks, which provides fundamental insights into the knowledge transfer mechanism and guidance for algorithm design. Furthermore, we apply our analyses to practical learning tasks, and establish a quantifiable transferability measure by exploiting a parameterized model. In addition, we develop an alternating iterative algorithm to implement our theoretical results for training deep neural networks in multi-source transfer learning tasks. Finally, experiments on image classification tasks show that our approach outperforms existing transfer learning algorithms in multi-source and few-shot scenarios.
accept
This paper proposes a new measure to quantify transferability in multi-source transfer learning problems, which takes into account the similarity between domains, the sample size of each domain, and the model complexity. The authors have not been successful in responding to all the concerns raised by the reviewers. Mainly, two important concerns remain. On the theoretical side, it would surely enhance the paper to have justifications/proofs/lower-bound results on the measure. If not, then on the empirical side, the authors need to provide more evidence of the success of the proposed measure.
train
[ "N5YWmC-gr8S", "5X6bthScND2", "21-W1c5ZFrI", "eOzZOU1uVe", "4RVm38k3kZC", "C2dEWuJfI3", "7Xq4P5m-QqA", "-MIiBt9nKU0", "hW9G5KXZBv", "G0r5IrL0CiB" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer so much for the comments.\n\nRegarding the model complexity, we would like to clarify that the quantity $V^{(0)}$ is contained in the mathematical expression of the testing loss without forethoughts, where we interpret this quantity as the model complexity. We find that it corresponds to the...
[ -1, 5, 6, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, 3, 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "eOzZOU1uVe", "nips_2021_wQZWg82TWx", "nips_2021_wQZWg82TWx", "-MIiBt9nKU0", "G0r5IrL0CiB", "21-W1c5ZFrI", "5X6bthScND2", "hW9G5KXZBv", "nips_2021_wQZWg82TWx", "nips_2021_wQZWg82TWx" ]
nips_2021_zdNEp82a-_q
Moiré Attack (MA): A New Potential Risk of Screen Photos
Dantong Niu, Ruohao Guo, Yisen Wang
accept
The paper presents a novel kind of attack: the Moiré attack. It is inspired by the Moiré effect that arises when photographing images displayed on LCD monitors. Although the Moiré effect is perceptible to human eyes, it is very hard to distinguish between images with different Moiré effects. The Moiré attack is therefore hard to recognize and can be harmful. Current algorithms are not robust to the Moiré attack, and the authors are calling for our attention to it. We suggest the authors carefully merge the rebuttals into the final version.
train
[ "Pwrob7HhyNb", "iBQL8xkDpy", "_7PHIYYv_8E", "fZjFQfXKN4d", "A6O6ssu6W45", "psIIWHh0_J", "Q7AwlnOpfPz", "K385Gl21OHu", "mu7v9YqVeuG", "qo2xOb_UXh7", "rdflB8zwd8p", "W38ScPGRcI2" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "A new white-box attack to deep neural network is proposed. The authors observe that a large number of digital images are affected by the so called moiré effect or moirè pattern. When images are displayed on a digital monitor they are obviously sampled, then they are sampled again when acquired by a new device. The...
[ 5, -1, -1, -1, 8, -1, -1, -1, -1, -1, 7, 6 ]
[ 3, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4, 5 ]
[ "nips_2021_zdNEp82a-_q", "_7PHIYYv_8E", "K385Gl21OHu", "mu7v9YqVeuG", "nips_2021_zdNEp82a-_q", "Q7AwlnOpfPz", "A6O6ssu6W45", "Pwrob7HhyNb", "W38ScPGRcI2", "rdflB8zwd8p", "nips_2021_zdNEp82a-_q", "nips_2021_zdNEp82a-_q" ]
nips_2021_NvXnBQQw0Jb
Fast Bayesian Inference for Gaussian Cox Processes via Path Integral Formulation
Gaussian Cox processes are widely-used point process models that use a Gaussian process to describe the Bayesian a priori uncertainty present in latent intensity functions. In this paper, we propose a novel Bayesian inference scheme for Gaussian Cox processes by exploiting a conceptually intuitive {\it path integral} formulation. The proposed scheme does not rely on domain discretization, scales linearly with the number of observed events, has a lower complexity than the state-of-the-art variational Bayesian schemes with respect to the number of inducing points, and is applicable to a wide range of Gaussian Cox processes with various types of link functions. Our scheme is especially beneficial under the multi-dimensional input setting, where the number of inducing points tends to be large. We evaluate our scheme on synthetic and real-world data, and show that it achieves comparable predictive accuracy while being tens of times faster than reference methods.
accept
This paper presents a path integral formulation for inference in Gaussian Cox process models, which is general for any prior kernel and a variety of link functions. Three out of four reviewers recommend acceptance, with one of the reviewers assessing the paper as being in the top 15% of accepted papers at NeurIPS. The main criticism of the paper (raised by Reviewer SxYf and shared by other reviewers and myself) is that the lack of commitment from the authors to make the code publicly available will impede reproducibility. I agree with this point but I understand there are cases where this is not possible. I believe the technical contribution of the paper is strong enough to be beneficial to the NeurIPS community, but encourage the authors to improve the reproducibility of their approach. For example, they can include an algorithm (in pseudocode) in the main paper detailing the exact computations required by their method. Besides addressing the reviewers' concerns, I also encourage the authors to discuss the limitations of their approach a bit further, for example by comparing hyperparameter estimation in their approach versus other methods such as those based on variational inference.
train
[ "GRlFB0TlI6q", "RVrK6O-9rV", "YJatyGWELMV", "AiqoD7w793s", "r5COpvEdiTE", "3v_4oiNdwhW", "GeeuTpBcNAH", "uWopaHgtNlJ", "T1vvOsq2i5", "BAcrUjnIVK-", "NKCmXzfe1p0" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for taking the time and effort to address my concerns through additional experimental results. Based on this, I am happy that my original score of 7 is an accurate reflection of this paper. \n\nAs noted by reviewer SxYf, I agree that it is a real shame that code cannot be shared....
[ -1, -1, 5, -1, 7, -1, -1, -1, -1, 9, 7 ]
[ -1, -1, 3, -1, 4, -1, -1, -1, -1, 4, 2 ]
[ "T1vvOsq2i5", "3v_4oiNdwhW", "nips_2021_NvXnBQQw0Jb", "uWopaHgtNlJ", "nips_2021_NvXnBQQw0Jb", "BAcrUjnIVK-", "r5COpvEdiTE", "YJatyGWELMV", "NKCmXzfe1p0", "nips_2021_NvXnBQQw0Jb", "nips_2021_NvXnBQQw0Jb" ]
nips_2021_yITJ6t31eAE
Lattice partition recovery with dyadic CART
Oscar Hernan Madrid Padilla, Yi Yu, Alessandro Rinaldo
accept
This paper studies the problem of recovering the partition of a piecewise constant signal supported on a multi-dimensional lattice. Specifically, the true signal is assumed to be piecewise constant over an unknown rectangular partition of the lattice. Given access to noisy measurements, the goal is to recover the underlying partition. This work analyzes the performance of the Dyadic CART algorithm for this task. Overall, modulo some issues with the presentation, the reviewers agreed that this work is slightly above the acceptance threshold.
train
[ "VUC6iGTRx-L", "qNK5Ed6Q4eM", "aeNtCPKp8Zl", "poYgxgVpo0i", "DTi_lc0Fctk", "AZC_SMC5Xdw", "BsitiWmgwvL", "ZaHS6edW-a", "qIelyXzXn0B", "Vh7rFZyqC2V", "-NnHPs00kCJ" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for the clarification on point 2!", " Thank you very much for your feedback.\n\n1. We agree that some more discussions are necessary and we will try to fit some in the revision if space permits.\n\n2. Thank you very much for this, which is indeed a typo from us. In Proposition 3, in the definition of ...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, 2, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "qNK5Ed6Q4eM", "poYgxgVpo0i", "nips_2021_yITJ6t31eAE", "ZaHS6edW-a", "AZC_SMC5Xdw", "qIelyXzXn0B", "-NnHPs00kCJ", "aeNtCPKp8Zl", "Vh7rFZyqC2V", "nips_2021_yITJ6t31eAE", "nips_2021_yITJ6t31eAE" ]
nips_2021_eaAM_bdW0Q
Robust Deep Reinforcement Learning through Adversarial Loss
Tuomas Oikarinen, Wang Zhang, Alexandre Megretski, Luca Daniel, Tsui-Wei Weng
accept
I thank the authors for their submission and active participation in the discussions. The paper tackles an important problem, as robustifying deep RL agents is crucial for making progress towards real-world applications. Reviewers appreciate its significance [Hgqp,naEG,hFQ2], novelty [naEG], and interesting discussion [Hgqp], as well as its thorough experiments [hFQ2]. During rebuttal and discussion, reviewer naEG was compelled by the author response and clarifications, as well as reviewer hFQ2's arguments. I am discounting 8ojS's clear stance against the paper, as I believe the author response sufficiently addresses their concerns and they haven't indicated otherwise during the discussion. I also appreciate the authors' openness about potential limitations of their work. Overall, I recommend acceptance and encourage the authors to further improve their paper based on the reviewer feedback.
train
[ "Gc-BtCS5gSC", "LUxTj9IGnBN", "aF5PRpk8Ck", "e3r7mXEA6nA", "1CwpZOmrs0p", "dKQZZZyjUFd", "RzMz0_hAD9", "xmBv5RmfZUK", "zk1d5QfjAaZ", "PIna9_ZQD_", "ndz7wTJD84i", "QB32Fp365Bp" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer 8ojS, thank you again for the initial review. As the discussion period is close to the end and we have not yet heard back from you, we wanted to reach out to see if you have any additional questions or concerns. We believe there may be some misunderstandings based on the review comments, and we hope...
[ -1, -1, 7, -1, -1, -1, -1, -1, -1, 7, 9, 3 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, 2, 5, 3 ]
[ "QB32Fp365Bp", "aF5PRpk8Ck", "nips_2021_eaAM_bdW0Q", "zk1d5QfjAaZ", "QB32Fp365Bp", "ndz7wTJD84i", "xmBv5RmfZUK", "PIna9_ZQD_", "aF5PRpk8Ck", "nips_2021_eaAM_bdW0Q", "nips_2021_eaAM_bdW0Q", "nips_2021_eaAM_bdW0Q" ]
nips_2021_eOEs9Wa91qH
Provable Model-based Nonlinear Bandit and Reinforcement Learning: Shelve Optimism, Embrace Virtual Curvature
This paper studies model-based bandit and reinforcement learning (RL) with nonlinear function approximations. We propose to study convergence to approximate local maxima because we show that global convergence is statistically intractable even for one-layer neural net bandit with a deterministic reward. For both nonlinear bandit and RL, the paper presents a model-based algorithm, Virtual Ascent with Online Model Learner (ViOlin), which provably converges to a local maximum with sample complexity that only depends on the sequential Rademacher complexity of the model class. Our results imply novel global or local regret bounds on several concrete settings such as linear bandit with finite or sparse model class, and two-layer neural net bandit. A key algorithmic insight is that optimism may lead to over-exploration even for two-layer neural net model class. On the other hand, for convergence to local maxima, it suffices to maximize the virtual return if the model can also reasonably predict the gradient and Hessian of the real return.
accept
This paper takes a new approach to the theory of exploration in reinforcement learning with function approximation, with the main insight being to aim for local optimality rather than global optimality. The reviewers agree that the paper is well-motivated and technically interesting, and presents a fairly creative result that will likely inspire further research along this direction in RL. While there are many directions in which the technical results themselves can likely be improved (e.g., moving beyond deterministic dynamics), it seems appropriate to leave this for future work. The authors are encouraged to incorporate the clarifications suggested by the reviewers.
train
[ "a2ekU0QLKMt", "Cu9rp-qpnAx", "CmJv3wYdmr2", "7qva9C-cM9", "cfrShnXXlW4", "u-WWRBNGLf_", "zJZPUCgXHj", "n8HYwJKUBp" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper proposes new nonlinear bandit and reinforcement learning algorithms whose sample complexity toward local maxima are bounded by the sequential Rademacher complexity of particular loss functions (and do not depend on the action dimension). An interesting insight obtained in this paper is that when dealing ...
[ 7, 7, 6, 7, -1, -1, -1, -1 ]
[ 4, 4, 2, 4, -1, -1, -1, -1 ]
[ "nips_2021_eOEs9Wa91qH", "nips_2021_eOEs9Wa91qH", "nips_2021_eOEs9Wa91qH", "nips_2021_eOEs9Wa91qH", "CmJv3wYdmr2", "7qva9C-cM9", "Cu9rp-qpnAx", "a2ekU0QLKMt" ]
nips_2021_nVofoXjTmA_
You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection
Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu
accept
The authors present a simple algorithm for converting a Vision Transformer (ViT) trained for classification into a detection model. The approach is based on dropping the CLS token in ViT and appending learnable DET tokens, followed by matching these to the ground truth during training. Empirical results are not SOTA, but are competitive. The reviewers appreciate the simplicity and the relatively strong performance of the resulting model. During the discussion the reviewers agreed that the rebuttal clarified the remaining points. I will hence recommend acceptance. I urge the authors to include the suggested improvements in the revised version.
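A minimal numpy sketch of the described token surgery, with zero arrays standing in for real patch embeddings and learned detection queries; shapes follow the common ViT-Base convention and are assumptions here.

```python
import numpy as np

def build_yolos_sequence(patch_tokens, det_tokens):
    """Replace ViT's [CLS] token with learnable [DET] tokens.

    patch_tokens: (num_patches + 1, dim) -- [CLS] first, as in ViT.
    det_tokens:   (num_det, dim)         -- learned detection queries.
    The encoder then runs on [patches; DET], and each DET token's
    output is matched to a ground-truth object during training.
    """
    patches = patch_tokens[1:]                 # drop [CLS]
    return np.concatenate([patches, det_tokens], axis=0)

seq = build_yolos_sequence(np.zeros((197, 768)), np.zeros((100, 768)))
print(seq.shape)  # (296, 768)
```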
val
[ "W_RkIQmm-w2", "f1x4juVjq4E", "eIGMMhH8iX", "MWT4G5A5VfW", "X8oS2H6vKHn", "Me_yJOCBatv", "wPedd-u35UV", "2m3rrgrnkyv", "tfRKMDbdFZk", "TuizcU7Tpr", "VohzYdw8qEH", "jmnrN6Vs2gn", "_7cNsvoOIDL", "RnVmvqtmbtv" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents the design and evaluation of a Transformer-based object detector which works in a pure sequence-to-sequence manner. The proposed YOLOS detector is built from ViT by borrowing the [DET] token from detection transformer (DETR). A rather interesting result is that such a pure seq-to-seq architectu...
[ 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_nVofoXjTmA_", "eIGMMhH8iX", "jmnrN6Vs2gn", "X8oS2H6vKHn", "2m3rrgrnkyv", "wPedd-u35UV", "VohzYdw8qEH", "TuizcU7Tpr", "RnVmvqtmbtv", "RnVmvqtmbtv", "W_RkIQmm-w2", "_7cNsvoOIDL", "nips_2021_nVofoXjTmA_", "nips_2021_nVofoXjTmA_" ]
nips_2021_rm0I5y2zkG8
Learning to delegate for large-scale vehicle routing
Sirui Li, Zhongxia Yan, Cathy Wu
accept
This paper proposes and analyzes a learning-augmented local search algorithm to solve large-scale VRPs. Overall, all reviewers found novelty and merit in the work that was presented, but a number of questions and potential concerns about the work were posed. In their rebuttal, the author(s) were very thorough and convincing in answering these questions/concerns, and in many cases additional experiments were performed to provide additional insights in response to specific reviewer questions. As a result of these interactions, the unanimous decision of the reviewers was to accept the paper (with multiple reviewers raising their final scores), contingent on the author(s)' commitment to revise the paper to incorporate the following: (1) the revised statement of contributions they provided in the rebuttal, (2) an expanded discussion of other related learning-based approaches to this problem, (3) the additional comparative results produced relative to HGS, which was suggested as a better SOTA approach to compare against than LKH-3, and (4) the results of new ablation experiments that were performed in response to specific reviewer queries and that further enhance the paper's conclusions. In response to a direct question regarding how the authors would accomplish all revisions during the rebuttal phase, the author(s) explicitly agreed to this set of changes, and we are taking them at their word.
train
[ "3mGASIOxWC", "9R4OW9xJKMd", "b3wXk9ixw0", "ifTLf5xEI5F", "m8p1ALihJPV", "ft4bVa_qYg2", "oksWLiUab0h", "OtFzjMbft4U", "tbLdgPBjCB", "FaFmp1eZuK0", "ZNhMz8PB7jU", "LJNfKX4YZCi", "8d5mcm8S2jD", "hprvszRDM_", "Hpw-gi690Id", "qUOkmJPjpx6", "vO7QJ2mT1tU", "VvwOpexGtd5", "PdevtkJ0zxq",...
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_r...
[ " Thank you for your suggestions! We will make sure to update the final version with the clarification on OR-Tools, as well as responses to other reviews including additional ablations, results, and discussions. Thank you again for your feedback during the rebuttal period. It truly helps us strengthen our work!", ...
[ -1, 7, -1, -1, 7, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, 3, -1, -1, 4, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "b3wXk9ixw0", "nips_2021_rm0I5y2zkG8", "8d5mcm8S2jD", "ft4bVa_qYg2", "nips_2021_rm0I5y2zkG8", "m8p1ALihJPV", "tbLdgPBjCB", "nips_2021_rm0I5y2zkG8", "hprvszRDM_", "ZNhMz8PB7jU", "vO7QJ2mT1tU", "nips_2021_rm0I5y2zkG8", "9R4OW9xJKMd", "OtFzjMbft4U", "m8p1ALihJPV", "nips_2021_rm0I5y2zkG8",...
nips_2021_mekyxmlLJNd
Effective Meta-Regularization by Kernelized Proximal Regularization
We study the problem of meta-learning, which has proved advantageous for accelerating the learning of new tasks from a few samples. Recent approaches based on deep kernels achieve state-of-the-art performance. However, the regularizers in their base learners are not learnable. In this paper, we propose an algorithm called MetaProx to learn a proximal regularizer for the base learner. We theoretically establish the convergence of MetaProx. Experimental results confirm the advantage of the proposed algorithm.
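A hedged LaTeX sketch of a proximal meta-regularized base learner of the kind the abstract describes; the paper's exact objective and kernelization may differ.

```latex
% Generic proximal meta-regularization of the base learner (sketch):
\hat{f}_\tau \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H}_{k_\theta}}
\;\frac{1}{n}\sum_{i=1}^{n} \ell\big(f(x_i^\tau),\, y_i^\tau\big)
\;+\; \frac{\lambda}{2}\,\big\| f - g_\theta \big\|_{\mathcal{H}_{k_\theta}}^{2}
```

Here the hypothetical symbols are: $\mathcal{H}_{k_\theta}$, the RKHS of a learnable deep kernel; $g_\theta$, a meta-learned reference function; and $\lambda$, the proximal weight. The outer loop updates $\theta$ (and possibly $\lambda$) across tasks, which is what makes the proximal regularizer itself learnable rather than fixed.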
accept
The paper initially received mixed ratings. Most concerns from the reviewers have been well addressed by the rebuttal and after the discussion period, all reviewers are now on the accept side. In particular, the reviewers have been convinced by the experimental results provided in the rebuttal. The area chair agrees with their assessment and follows their recommendation. The final version of the paper should include these results and take into account the discussions that took place during the rebuttal period.
train
[ "mbMC6qZUN-P", "qIfhxmcRNbT", "VyiIBrUJ0Yg", "5y0QSdOQ6tu", "huA8ivPteV5", "1Vz3ip6HVnx", "TU2KX85jdmn", "LF5U7F-0zi", "63-agVEd0wT", "XjfkoOQ9Umi", "K4ld_kWTYp", "IElHsE8f8Dy", "k570KHwYEyH", "acvbv1SthaJ" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I have read all reviewers' comments and the authors' rebuttal, I still vote for acceptance and keep my score unchanged.", "The authors proposed a kernelized proximal regularization method (MetaProx) for meta learning. MetaProx combines deep kernel and proximal meta regularization, allowing learnable regularizat...
[ -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "k570KHwYEyH", "nips_2021_mekyxmlLJNd", "5y0QSdOQ6tu", "K4ld_kWTYp", "nips_2021_mekyxmlLJNd", "LF5U7F-0zi", "XjfkoOQ9Umi", "huA8ivPteV5", "huA8ivPteV5", "acvbv1SthaJ", "qIfhxmcRNbT", "k570KHwYEyH", "nips_2021_mekyxmlLJNd", "nips_2021_mekyxmlLJNd" ]
nips_2021_KCsNBfdYI7E
Towards Context-Agnostic Learning Using Synthetic Data
We propose a novel setting for learning, where the input domain is the image of a map defined on the product of two sets, one of which completely determines the labels. We derive a new risk bound for this setting that decomposes into a bias and an error term, and exhibits a surprisingly weak dependence on the true labels. Inspired by these results, we present an algorithm aimed at minimizing the bias term by exploiting the ability to sample from each set independently. We apply our setting to visual classification tasks, where our approach enables us to train classifiers on datasets that consist entirely of a single synthetic example of each class. On several standard benchmarks for real-world image classification, we achieve robust performance in the context-agnostic setting, with good generalization to real world domains, whereas training directly on real world data without our techniques yields classifiers that are brittle to perturbations of the background.
accept
This paper proposes a data augmentation scheme that synthesizes image backgrounds so that models can achieve better generalization, learning from one synthetic image per class and generalizing to real natural images. All reviewers recommended acceptance. Accept.
val
[ "ZwlLDylKoyR", "X7_DUkq0NM", "l077TzY_ENl", "-5uWSYv9ko", "8vpF1Estzkv", "Fulv15E_Orw" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a data augmentation scheme that synthesizes image background so that models can achieve better generalization by learning from one synthetic image and generalizing to real natural images. The process involves using previous images as background and adding adversarial noises. Experimental result...
[ 6, 7, -1, -1, -1, 5 ]
[ 4, 3, -1, -1, -1, 4 ]
[ "nips_2021_KCsNBfdYI7E", "nips_2021_KCsNBfdYI7E", "X7_DUkq0NM", "Fulv15E_Orw", "ZwlLDylKoyR", "nips_2021_KCsNBfdYI7E" ]
nips_2021_8SEJ8AT_6Dl
Minimax Optimal Quantile and Semi-Adversarial Regret via Root-Logarithmic Regularizers
Quantile (and, more generally, KL) regret bounds, such as those achieved by NormalHedge (Chaudhuri, Freund, and Hsu 2009) and its variants, relax the goal of competing against the best individual expert to only competing against a majority of experts on adversarial data. More recently, the semi-adversarial paradigm (Bilodeau, Negrea, and Roy 2020) provides an alternative relaxation of adversarial online learning by considering data that may be neither fully adversarial nor stochastic (I.I.D.). We achieve the minimax optimal regret in both paradigms using FTRL with separate, novel, root-logarithmic regularizers, both of which can be interpreted as yielding variants of NormalHedge. We extend existing KL regret upper bounds, which hold uniformly over target distributions, to possibly uncountable expert classes with arbitrary priors; provide the first full-information lower bounds for quantile regret on finite expert classes (which are tight); and provide an adaptively minimax optimal algorithm for the semi-adversarial paradigm that adapts to the true, unknown constraint faster, leading to uniformly improved regret bounds over existing methods.
accept
The reviewers unanimously support acceptance of the paper. The writing of the paper requires improvement and I hope that the authors will implement changes promised to the reviewers.
train
[ "Yc_aeMxflsV", "1C8bBi4HEnp", "SX3zrahD35", "k1VOGqtFQi", "IoMoX0OhqqR", "1e4XPLLRvU2", "AG11fAX26B", "FwCQgKtZOev", "9BSCORXD8Rh", "eNuhORootf", "OkVEYqAEf_", "6V9ELPuo7CP", "WAi0uJXIwji", "JGqFy8bND9z", "sK5CaOSt3Lf", "Dyq3aROQEFu" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper considers the problem of prediction with expert advice,\nand focuses on quantile regret bounds that depend on the comparator distribution\nand semi-adversarial paradigm,\nwhich is an intermediate setting between stochastic one and adversarial one.\nIn the paper,\nminimax optimal regret bounds for both p...
[ 6, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ 2, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4 ]
[ "nips_2021_8SEJ8AT_6Dl", "k1VOGqtFQi", "nips_2021_8SEJ8AT_6Dl", "IoMoX0OhqqR", "OkVEYqAEf_", "AG11fAX26B", "FwCQgKtZOev", "9BSCORXD8Rh", "Dyq3aROQEFu", "SX3zrahD35", "SX3zrahD35", "Yc_aeMxflsV", "Yc_aeMxflsV", "sK5CaOSt3Lf", "nips_2021_8SEJ8AT_6Dl", "nips_2021_8SEJ8AT_6Dl" ]
nips_2021_FCrLunb8-G
Gradient-Free Adversarial Training Against Image Corruption for Learning-based Steering
We introduce a simple yet effective framework for improving the robustness of learning algorithms against image corruptions for autonomous driving. These corruptions can occur due to both internal (e.g., sensor noises and hardware abnormalities) and external factors (e.g., lighting, weather, visibility, and other environmental effects). Using sensitivity analysis with FID-based parameterization, we propose a novel algorithm exploiting basis perturbations to improve the overall performance of autonomous steering and other image processing tasks, such as classification and detection, for self-driving cars. Our model not only improves the performance on the original dataset, but also achieves significant performance improvement on datasets with multiple and unseen perturbations, up to 87% and 77%, respectively. A comparison between our approach and other SOTA techniques confirms the effectiveness of our technique in improving the robustness of neural network training for learning-based steering and other image processing tasks.
accept
This paper proposes a new approach for adversarial training. The idea seems novel and interesting, and the results seem promising. The paper's presentation and organization should be further improved. The meta-reviewer recommends acceptance of this paper.
train
[ "kJ4Vlz4sMsj", "_7rht_HPKWZ", "1BS8J6uXnLA", "DEebQbyAtqo", "KLYg6J9N3wb", "a5XgRi8I3HJ", "Q8ekQB7lwK", "5pxkwezpb5C", "YJZpy5ZYpnC", "Fay2ijq3g3U", "V_AQTgB9Tt", "zNW8VwB2Vg9", "VD1WhaFC3k" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We would like to thank the reviewers for all the excellent suggestions and comments that help further improve the exposition of this paper.\n\nbest,\n Authors", " I would like to thank the authors for their effort to make the Method section more clear and sound. I changed my rating to Accept.", " In this me...
[ -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "_7rht_HPKWZ", "DEebQbyAtqo", "nips_2021_FCrLunb8-G", "Q8ekQB7lwK", "a5XgRi8I3HJ", "5pxkwezpb5C", "YJZpy5ZYpnC", "zNW8VwB2Vg9", "1BS8J6uXnLA", "VD1WhaFC3k", "nips_2021_FCrLunb8-G", "nips_2021_FCrLunb8-G", "nips_2021_FCrLunb8-G" ]
nips_2021_0FDxsIEv9G
Deep Proxy Causal Learning and its Application to Confounded Bandit Policy Evaluation
Proxy causal learning (PCL) is a method for estimating the causal effect of treatments on outcomes in the presence of unobserved confounding, using proxies (structured side information) for the confounder. This is achieved via two-stage regression: in the first stage, we model relations among the treatment and proxies; in the second stage, we use this model to learn the effect of treatment on the outcome, given the context provided by the proxies. PCL guarantees recovery of the true causal effect, subject to identifiability conditions. We propose a novel method for PCL, the deep feature proxy variable method (DFPV), to address the case where the proxies, treatments, and outcomes are high-dimensional and have nonlinear complex relationships, as represented by deep neural network features. We show that DFPV outperforms recent state-of-the-art PCL methods on challenging synthetic benchmarks, including settings involving high dimensional image data. Furthermore, we show that PCL can be applied to off-policy evaluation for the confounded bandit problem, in which DFPV also exhibits competitive performance.
accept
The expert reviewers all appreciated the paper and agree it provides a useful new method and that the paper should be accepted. The reviewers appreciated the authors' response which better highlighted the significance of the contribution, including the relevant applications and what is lacking in previous work. The authors are expected to address the points raised by reviewers in a final version, including incorporating the additional clarifying discussion in their responses.
train
[ "eKPm8D9C2MV", "LcoWuPkElhz", "eIupxEu-hfE", "QYCYI0rSW8a", "DaQ-890uyzP", "lv0FWT_eb07", "Vl8qHUNAzl", "P1IGSEooJLW", "ATRRfTBHCMZ", "lUelkgF8MBT", "JNp7S9SFFLS", "cRdSlaY0dCu" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " esp. on the difference with ignorability settings. \nI've updated my score accordingly.", "This work extends known Proxy Causal Learning methods that aim at learning the causal effect of treatment in cases where unobserved confounders are assumed yet there is observable information in form of proxy variables to...
[ -1, 7, -1, -1, 7, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, 3, -1, -1, 3, -1, -1, -1, -1, -1, 3, 4 ]
[ "Vl8qHUNAzl", "nips_2021_0FDxsIEv9G", "lv0FWT_eb07", "lUelkgF8MBT", "nips_2021_0FDxsIEv9G", "cRdSlaY0dCu", "LcoWuPkElhz", "nips_2021_0FDxsIEv9G", "JNp7S9SFFLS", "DaQ-890uyzP", "nips_2021_0FDxsIEv9G", "nips_2021_0FDxsIEv9G" ]
nips_2021_du_Rss0tW8
Certifying Robustness to Programmable Data Bias in Decision Trees
Datasets can be biased due to societal inequities, human biases, under-representation of minorities, etc. Our goal is to certify that models produced by a learning algorithm are pointwise-robust to dataset biases. This is a challenging problem: it entails learning models for a large, or even infinite, number of datasets, ensuring that they all produce the same prediction. We focus on decision-tree learning due to the interpretable nature of the models. Our approach allows programmatically specifying \emph{bias models} across a variety of dimensions (e.g., label-flipping or missing data), composing types of bias, and targeting bias towards a specific group. To certify robustness, we use a novel symbolic technique to evaluate a decision-tree learner on a large, or infinite, number of datasets, certifying that each and every dataset produces the same prediction for a specific test point. We evaluate our approach on datasets that are commonly used in the fairness literature, and demonstrate our approach's viability on a range of bias models.
accept
The paper proposes to certify that machine learning models (in particular decision trees learned by the CART algorithm) are robust to potential dataset biases. This is a timely and important topic, given the widespread deployment of ML models in high stakes situations, and the difficulty of collecting data that are fully representative and free from biases. Reviewers were in consensus that the paper makes an interesting contribution to the literature in this important problem space. Only minor concerns were raised in the initial reviews, and adequately addressed by the authors during the rebuttal. I recommend acceptance and congratulate the authors on their timely and important research work.
train
[ "NWGnbis9NBA", "suEK3eGyblZ", "3eCNRGZQNY2", "xWHlyq2TGiH", "sXsN4dsTkDf", "3SKVLm2C9Xi", "YBQE-J7XSvq", "-TsEDr_7bpi", "OOZaJIIGs_1", "704m_wt8H_y" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks you for your response. My interpretation was the 2nd which you have adequately answered.\n\nI have changed my score appropriately.", "The paper considers a method of certifying decision trees for pointwise robustness to data bias. More specifically, the paper introduces a language to encapsulates differe...
[ -1, 7, -1, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, 4, -1, -1, -1, -1, -1, 4, 2, 4 ]
[ "xWHlyq2TGiH", "nips_2021_du_Rss0tW8", "sXsN4dsTkDf", "suEK3eGyblZ", "OOZaJIIGs_1", "-TsEDr_7bpi", "704m_wt8H_y", "nips_2021_du_Rss0tW8", "nips_2021_du_Rss0tW8", "nips_2021_du_Rss0tW8" ]
nips_2021_CaKvIT5UMfd
TöRF: Time-of-Flight Radiance Fields for Dynamic Scene View Synthesis
Neural networks can represent and accurately reconstruct radiance fields for static 3D scenes (e.g., NeRF). Several works extend these to dynamic scenes captured with monocular video, with promising performance. However, the monocular setting is known to be an under-constrained problem, and so methods rely on data-driven priors for reconstructing dynamic content. We replace these priors with measurements from a time-of-flight (ToF) camera, and introduce a neural representation based on an image formation model for continuous-wave ToF cameras. Instead of working with processed depth maps, we model the raw ToF sensor measurements to improve reconstruction quality and avoid issues with low reflectance regions, multi-path interference, and a sensor's limited unambiguous depth range. We show that this approach improves robustness of dynamic scene reconstruction to erroneous calibration and large motions, and discuss the benefits and limitations of integrating RGB+ToF sensors now available on modern smartphones.
accept
The paper proposed an approach to model geometry and appearance in a dynamic scene based on monocular RGB and ToF measurements. The key novelty is a principled extension of NeRF based on the image formation model of ToF measurements. The paper initially received a mixed rating, with two reviewers rating it below the bar and two above. Two reviewers were satisfied with the rebuttal and upgraded their ratings. While the meta-reviewer agrees that the quality of the results was not impressive, the ToF extension of NeRF for dynamic scenes is a solid technical contribution. The authors are encouraged to incorporate the review feedback in the final manuscript.
train
[ "R_CegQiRH4t", "nNSTt5W2Mdj", "vmEKKvccHj3", "Z9OFYwOIeAP", "MwimgcwTpdm", "oATT5zUrLyk", "k0oQfBheCXA", "cjUe3N_-eg_", "4nMg7edkhK", "Q8HsdidSDOi" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The authors introduce a novel neural representation based on an image formation model for continuous-wave ToF cameras, thereby replacing the data-driven priors used by other approaches on the reconstruction of dynamic scene parts with measurements from a time-of-flight (ToF) camera. Using the modeling of raw ToF m...
[ 7, 7, -1, 6, -1, -1, -1, -1, -1, 7 ]
[ 3, 4, -1, 4, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_CaKvIT5UMfd", "nips_2021_CaKvIT5UMfd", "oATT5zUrLyk", "nips_2021_CaKvIT5UMfd", "Z9OFYwOIeAP", "nNSTt5W2Mdj", "Q8HsdidSDOi", "nips_2021_CaKvIT5UMfd", "R_CegQiRH4t", "nips_2021_CaKvIT5UMfd" ]
nips_2021_0vaPiltED1N
Sequence-to-Sequence Learning with Latent Neural Grammars
Sequence-to-sequence learning with neural networks has become the de facto standard for sequence modeling. This approach typically models the local distribution over the next element with a powerful neural network that can condition on arbitrary context. While flexible and performant, these models often require large datasets for training and can fail spectacularly on benchmarks designed to test for compositional generalization. This work explores an alternative, hierarchical approach to sequence-to-sequence learning with synchronous grammars, where each node in the target tree is transduced by a subset of nodes in the source tree. The source and target trees are treated as fully latent and marginalized out during training. We develop a neural parameterization of the grammar which enables parameter sharing over combinatorial structures without the need for manual feature engineering. We apply this latent neural grammar to various domains---a diagnostic language navigation task designed to test for compositional generalization (SCAN), style transfer, and small-scale machine translation---and find that it performs respectably compared to standard baselines.
accept
The paper proposes a neural approach to probabilistic quasi-synchronous context-free grammars (QCFGs) and evaluates its application to various sequence-to-sequence tasks (natural language to command, style transfer, and machine translation). While tree-based, the approach doesn’t require any treebank or other linguistic annotation, and training is end-to-end without any manual feature engineering. Overall, the reviews find the paper to be quite strong. While there is prior work on quasi-synchronous grammars, the neural parameterization of the paper, the non-reliance on feature engineering, the application to recent benchmarks, and the strong results are seen as significant contributions. The overall direction of this work is important compared to current fully data-driven models, which typically do not incorporate any inductive bias and can suffer from low sample efficiency and over-reliance on surface-form information. The only main weaknesses of the paper are its high time complexity and experiments that are overall small-scale. But I think it is reasonable to downplay these limitations, as one of the stated goals of the paper is sample efficiency rather than the ability to efficiently train on large datasets.
train
[ "yNbMQiP1L55", "CA45gHph2Mx", "zpZTQdd6jLz", "1ybrcE4otvN", "8dv3eSqp4TH", "hNh5UGiWKD_", "Rdy6PIqlvI", "6M73M06zmCC", "Rfk6jT6mHms", "X0pEbs3RfAc" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper investigates an neural approach of probabilistic quasi-synchronous context-free grammars (QCFG) and its application in various sequence-to-sequence applications. QCFG is a type of very general tree-to-tree synchronous grammar where any subset of nodes in the source tree can be aligned to a node in the ...
[ 7, -1, -1, -1, -1, -1, -1, 9, 7, 7 ]
[ 5, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "nips_2021_0vaPiltED1N", "1ybrcE4otvN", "X0pEbs3RfAc", "yNbMQiP1L55", "hNh5UGiWKD_", "Rfk6jT6mHms", "6M73M06zmCC", "nips_2021_0vaPiltED1N", "nips_2021_0vaPiltED1N", "nips_2021_0vaPiltED1N" ]
nips_2021_OSLVL-tIBei
Exploration-Exploitation in Multi-Agent Competition: Convergence with Bounded Rationality
The interplay between exploration and exploitation in competitive multi-agent learning is still far from being well understood. Motivated by this, we study smooth Q-learning, a prototypical learning model that explicitly captures the balance between game rewards and exploration costs. We show that Q-learning always converges to the unique quantal-response equilibrium (QRE), the standard solution concept for games under bounded rationality, in weighted zero-sum polymatrix games with heterogeneous learning agents using positive exploration rates. Complementing recent results about convergence in weighted potential games [16,34], we show that fast convergence of Q-learning in competitive settings obtains regardless of the number of agents and without any need for parameter fine-tuning. As showcased by our experiments in network zero-sum games, these theoretical results provide the necessary guarantees for an algorithmic approach to the currently open problem of equilibrium selection in competitive multi-agent settings.
accept
This paper shows that for a class of multi-agent competitive games -- weighted zero-sum polymatrix games, a variant of Q-learning can provably converge to the unique quantal response equilibrium. (+) The paper presents a set of new theoretical results regarding convergence guarantees of learning rules in multi-agent games, an area that is understudied so far despite the popularity of learning algorithms used for multi-agent settings. The theoretical analysis is solid. (+) The numerical experiments show the learning dynamics in some example games, which confirms the theoretical results. (-) There are some minor presentation issues, for example, the term “network competition” is used without explanation in the introduction, and the use of “.” instead of “,” in line 67. (-) The reviewers also made a number of suggestions to further strengthen the paper, including adding additional motivations, connections between QRE and NE, additional experiments regarding the scalability. Overall, the reviewer team has a positive view of the paper. We suggest the authors make changes they promised in the responses to improve the paper.
train
[ "Q7auanQbYXq", "oFM26neYq87", "5MXX4UVYdox", "FTjmpeKt9uJ", "u0zHYrjS6m", "tw8EFOjRcEO", "vZnVOJQzABt", "DN71ydhBhux" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper investigates the convergence of a smooth variant of Q-learning dynamics in multiplayer network games called weighted zero-sum polymatrix games. The authors suggest interpreting the exploration rate of Q-learning as the inverse of the lambda parameter of the quantal response model, determining the degree...
[ 8, -1, 6, -1, -1, -1, -1, 6 ]
[ 3, -1, 2, -1, -1, -1, -1, 3 ]
[ "nips_2021_OSLVL-tIBei", "u0zHYrjS6m", "nips_2021_OSLVL-tIBei", "vZnVOJQzABt", "DN71ydhBhux", "Q7auanQbYXq", "5MXX4UVYdox", "nips_2021_OSLVL-tIBei" ]
nips_2021_90c-FVYJ5rL
Low-Rank Extragradient Method for Nonsmooth and Low-Rank Matrix Optimization Problems
Atara Kaplan, Dan Garber
accept
This paper studies an important class of nonsmooth low-rank matrix optimization problems. Recent work established that, for the corresponding smooth problem, high-rank SVD computations could be replaced with low-rank SVDs in the proximity of an optimal solution. This work establishes a corresponding result in the challenging nonsmooth case, under appropriate assumptions. The authors use empirical observations to justify the reasonableness of these assumptions. This work provides a significant theoretical extension and fills an important hole in the literature by providing theoretical justification for techniques that are used to efficiently solve a broad class of nonsmooth low-rank matrix optimization problems.
train
[ "-_mODkGkF1D", "BHV0Lr0-Zt", "igMJsDbybaE", "Ibw5FSt7_O7", "tHR7k2QL_4e", "hNBOU2n9gSe" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this paper, motivated by a new efficient method in solving smooth low-rank matrix optimization problems, the authors propose to extend the method to nonsmooth problems. However, direct generalization is not possible, as is shown for a specific failure case. Thus, the authors propose to introduce some auxiliary ...
[ 6, -1, -1, -1, 9, 6 ]
[ 4, -1, -1, -1, 3, 3 ]
[ "nips_2021_90c-FVYJ5rL", "-_mODkGkF1D", "hNBOU2n9gSe", "tHR7k2QL_4e", "nips_2021_90c-FVYJ5rL", "nips_2021_90c-FVYJ5rL" ]
nips_2021_haSQRA5RnuM
Which Mutual-Information Representation Learning Objectives are Sufficient for Control?
Mutual information (MI) maximization provides an appealing formalism for learning representations of data. In the context of reinforcement learning (RL), such representations can accelerate learning by discarding irrelevant and redundant information, while retaining the information necessary for control. Much prior work on these methods has addressed the practical difficulties of estimating MI from samples of high-dimensional observations, while comparatively less is understood about which MI objectives yield representations that are sufficient for RL from a theoretical perspective. In this paper, we formalize the sufficiency of a state representation for learning and representing the optimal policy, and study several popular MI based objectives through this lens. Surprisingly, we find that two of these objectives can yield insufficient representations given mild and common assumptions on the structure of the MDP. We corroborate our theoretical results with empirical experiments on a simulated game environment with visual observations.
accept
This paper is very nicely written and the reviewers all agreed that there is merit to the work. The primary concerns include the following. 1. There are limitations in the scope, particularly in that the sufficiency of the representation does not address other important properties like compactness. The theory (Proposition 1) seems to assume that the identity transformation is analyzed, and it is not too surprising that this representation is sufficient. The theory is clearly presented, but considering a more practical setting with some bottleneck that removes irrelevant information would be beneficial (even if it is assumed that only irrelevant information is removed). 2. The choice of these three MI objectives and then the clear superiority of the forward objective in the experiments speaks to (a) limitations in the objectives considered and (b) some insufficiency in the experimental setup for making these relatively strong claims that perfectly match the theory. The experiments were run with only 5 seeds, with default hyperparameters and some insufficiently explicit hyperparameter choices (one approach uses a bottleneck, another does not). The beauty of these simple environments is that you can more exhaustively run experiments (more runs, more hyperparameter investigation) to provide sufficient evidence for such strong claims. These default hyperparameters might be good for one representation and not for the other; results could be very different if these were tuned. As it stands, the experiments highlight results for SAC with default parameters plus these representations with somewhat inconsistent hyperparameter choices, which is relatively weak evidence. I do not particularly doubt the results, though, because for the three objectives investigated, it is intuitive that the forward MI objective is effective, and that the other two remove important information. This speaks to the first insufficiency about the limited scope in objectives. Despite these concerns, the clarity of writing and the importance of the topic make this a useful paper for the community. If there is room in the program, this paper could be accepted. This paper, though, could be improved to be a much stronger contribution; I would highly recommend that the authors seriously consider improving the strength of their empirical claims. The reviewers also had concerns about the problem setting, where the representations are learned beforehand from a batch of data. I do not share this concern, and believe that studying this problem setting is of interest for better understanding learned representations that are then used to accelerate learning downstream. The authors may likely continue to get pushback on this setting, but at least know you have one supporter here. Minor point: though not pointed out by the reviewers, the proof of Lemma 2 has a few steps that are needlessly complex. For example, introducing the tower property twice is not necessary; you can directly jump to p(Y|X) p(X, Z) = P(Y|X) P(Z|X) P(X) for Eq 16, rather than doing the slightly odd thing of writing P(X, Z|X) = P(Z|X). It also seems like this result is quite general: three RVs with pairwise equal MI and conditional independence imply that we have this equality in probability. Of course, as pointed out by a reviewer, this proof as written has several pretty critical typos.
train
[ "mSSg9CAD1p1", "OLeCje9wSE5", "RyJxDeXxfwU", "plmSy2ZF-Hr", "zfF5UgK_6WW", "tbNiERmf6Up", "aBMrIY20bjn", "bacOBD2c_Z", "Pq3OBnlUkKw", "HUfEysS5K87", "JkzwfKXKjVZ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper discusses the relation between the sufficiency of representation in RL and the associated info-theoretic quantities. The authors formalize the sufficiency of the representation. The authors further consider three commonly adopted mutual information objectives for obtaining representations, namely, (i) f...
[ 6, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "nips_2021_haSQRA5RnuM", "RyJxDeXxfwU", "zfF5UgK_6WW", "bacOBD2c_Z", "mSSg9CAD1p1", "JkzwfKXKjVZ", "HUfEysS5K87", "Pq3OBnlUkKw", "nips_2021_haSQRA5RnuM", "nips_2021_haSQRA5RnuM", "nips_2021_haSQRA5RnuM" ]
nips_2021_W2rRWbI4CTW
A Geometric Perspective towards Neural Calibration via Sensitivity Decomposition
It is well known that vision classification models suffer from poor calibration in the face of data distribution shifts. In this paper, we take a geometric approach to this problem. We propose Geometric Sensitivity Decomposition (GSD) which decomposes the norm of a sample feature embedding and the angular similarity to a target classifier into an instance-dependent and an instance-independent component. The instance-dependent component captures the sensitive information about changes in the input while the instance-independent component represents the insensitive information serving solely to minimize the loss on the training dataset. Inspired by the decomposition, we analytically derive a simple extension to current softmax-linear models, which learns to disentangle the two components during training. On several common vision models, the disentangled model outperforms other calibration methods on standard calibration metrics in the face of out-of-distribution (OOD) data and corruption with significantly less complexity. Specifically, we surpass the current state of the art by 30.8% relative improvement on corrupted CIFAR100 in Expected Calibration Error.
accept
The authors study the problem of model calibration in the presence of distribution shifts for computer vision. The authors propose a sensitivity decomposition on the last softmax layer of the model. This calibration approach demonstrates improved performance compared to existing ones. All reviewers recommended acceptance. Accept.
train
[ "NKD1BzOvaPm", "ZtFWzoehBA8", "XSp26im2DdG", "7gJ7Z1rU4EF", "Nzf_7m6e5E2", "_EaUSBqAd9r", "_oZQzBreQh", "1IpmowAeEF", "Twlj4479I4I", "DrMf9vNmYth", "SEWOxrnLGMS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper presents an approach for calibration of neural networks. The proposed approach is based on the analysis of the softmax logits and decomposing it into two components that depends on the norm of the feature vector (norm component), and the other that depends on the similarity (similarity component) with th...
[ 9, 7, -1, -1, 7, -1, -1, -1, -1, -1, 6 ]
[ 4, 3, -1, -1, 3, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_W2rRWbI4CTW", "nips_2021_W2rRWbI4CTW", "1IpmowAeEF", "Twlj4479I4I", "nips_2021_W2rRWbI4CTW", "nips_2021_W2rRWbI4CTW", "NKD1BzOvaPm", "SEWOxrnLGMS", "Nzf_7m6e5E2", "ZtFWzoehBA8", "nips_2021_W2rRWbI4CTW" ]
nips_2021_Dzy8YEm5dX
Towards a Unified Information-Theoretic Framework for Generalization
Mahdi Haghifam, Gintare Karolina Dziugaite, Shay Moran, Dan Roy
accept
The paper makes progress on a new approach to proving generalization bounds.
train
[ "XbDXktiiN1q", "pmdyqObTNEW", "s7kefoTUgTE", "CCn_fMYsaS5", "-fge91fpE3p", "om6MitmJnY2", "9ZrVvdIWY9-" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer, \n\nWe thank the reviewer for reading our response. Are there any outstanding issues that our response has not addressed or only partially addressed? We'd welcome guidance on how we might improve the paper to the point where it would earn an unqualified \"Accept\" from you.\n\nThanks.", "The pape...
[ -1, 6, -1, -1, -1, 9, 8 ]
[ -1, 4, -1, -1, -1, 4, 3 ]
[ "s7kefoTUgTE", "nips_2021_Dzy8YEm5dX", "pmdyqObTNEW", "om6MitmJnY2", "9ZrVvdIWY9-", "nips_2021_Dzy8YEm5dX", "nips_2021_Dzy8YEm5dX" ]
nips_2021_5Re03X8Iigi
Bayesian decision-making under misspecified priors with applications to meta-learning
Max Simchowitz, Christopher Tosh, Akshay Krishnamurthy, Daniel J. Hsu, Thodoris Lykouris, Miro Dudik, Robert E. Schapire
accept
Thank you to the authors and the reviewers for their contributions to the conference! This paper introduces a novel sensitivity analysis of Bayesian bandit algorithms such as Thompson Sampling under misspecified priors. Misspecification is clearly an important problem in practice, and the review team unanimously appreciated the paper. I am happy to recommend acceptance.
train
[ "bq2cXVfy-U", "QNZxFG0TC4E", "2oWfQI0Eu5R", "qYJcXQJTslA", "tUmQ_ff8vQU", "aE1phGRxR3r", "7EgjgWP-VV", "bsOVoAjB2yq", "X9LUh38mJi0", "qVUUHc-Zx7m", "ioT6dIE60Ft" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper focus on the performance of the Thompson sampling algorithm under the misspecification of the prior distribution. The authors propose a general family of n-Monte Carlo algorithm, which contains the canonical TS algorithm as a special case. The authors then proceed to show that the regret difference of a...
[ 7, -1, -1, -1, -1, -1, -1, 7, 8, 7, 5 ]
[ 1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "nips_2021_5Re03X8Iigi", "tUmQ_ff8vQU", "ioT6dIE60Ft", "qVUUHc-Zx7m", "bq2cXVfy-U", "X9LUh38mJi0", "bsOVoAjB2yq", "nips_2021_5Re03X8Iigi", "nips_2021_5Re03X8Iigi", "nips_2021_5Re03X8Iigi", "nips_2021_5Re03X8Iigi" ]
nips_2021_UwSwML5iJkp
Neural Trees for Learning on Graphs
Rajat Talak, Siyi Hu, Lisa Peng, Luca Carlone
accept
The paper proposes a novel GNN architecture that is based on theory from probabilistic graphical models. The literature on GNNs is vast, yet this approach seems to be unique and, furthermore, well grounded in the theory of PGMs. Currently the manuscript focuses on undirected graphs, but an extension to directed graphs should be straightforward. A very well written paper applying the rich theory of PGMs in an elegant, practical way.
train
[ "TfPcMgERnZ", "Sp1n5Kdvgg", "2IKeDHQAuQo", "MooSI7sbxm7", "ZClmvCMFA74", "5wL74PBihY8", "Xby3hr2A6LZ", "p2QOLDiNH", "tvUVyr7hpa6", "P_9MKZrcp2q", "xOZ0ND257c", "mqLC0LC87Ut", "tXnzDyjF_pD", "fTI8R3tMxBp", "tHlL77CKOQH", "2XqKOvGgIwb", "kyRlZI-AZWL", "gWjbozZVuMa", "J9W0wyHx09t", ...
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "...
[ " Thanks for your followup. In fact, what I am really looking for now is just one example, not the high level summary of contributions (as I believe I understand those contributions -- no need to re-iterate). As long as I see one such example, I will be convinced and this example will resolve many of my remaining c...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, 4, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "Sp1n5Kdvgg", "2IKeDHQAuQo", "MooSI7sbxm7", "ZClmvCMFA74", "5wL74PBihY8", "Xby3hr2A6LZ", "p2QOLDiNH", "tvUVyr7hpa6", "kyRlZI-AZWL", "tXnzDyjF_pD", "tHlL77CKOQH", "nips_2021_UwSwML5iJkp", "2XqKOvGgIwb", "J9W0wyHx09t", "kVvhkhwW803", "mqLC0LC87Ut", "gWjbozZVuMa", "nips_2021_UwSwML5iJ...
nips_2021_hw2VfWAr6t6
Enabling Fast Differentially Private SGD via Just-in-Time Compilation and Vectorization
A common pain point in differentially private machine learning is the significant runtime overhead incurred when executing Differentially Private Stochastic Gradient Descent (DPSGD), which may be as large as two orders of magnitude. We thoroughly demonstrate that by exploiting powerful language primitives, including vectorization, just-in-time compilation, and static graph optimization, one can dramatically reduce these overheads, in many cases nearly matching the best non-private running times. These gains are realized in two frameworks: one is JAX, which provides rich support for these primitives through the XLA compiler. We also rebuild core parts of TensorFlow Privacy, integrating more effective vectorization as well as XLA compilation, granting significant memory and runtime improvements over previous release versions. Our proposed approaches allow us to achieve up to 50x speedups compared to the best alternatives. Our code is available at https://github.com/TheSalon/fast-dpsgd.
accept
This paper proposes an approach to speed-up practical implementations of DP-SGD, a popular approach to privacy-preserving machine learning. The reviewers recognized the significance of the work and its impact for practitioners. The author response helped to further clarify some points. One reviewer (who did not participate in the discussion) was not convinced by the novelty, as the contribution could be seen as mainly engineering. But in the end the previous arguments prevailed. Therefore, the paper is accepted. The authors are encouraged to incorporate the additional details and discussion from the response.
train
[ "AmmzM1BZstX", "LBJbFfuuGES", "HAJtibNpPdh", "_iDMzbyXWn-", "KE3Zre2aIyD", "OzwnaSJNeGF", "4lyGdLql0lo", "OdmtBzSzU9", "GBM5GWsVj3", "qHFhs04RoRo", "5GqqCFsLhWI", "tYPaDmNtGIq" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your time, attention, and comments on our paper, we appreciate it.", "This paper investigates three language primitives, namely vectorization, just-in-time (JIT) compilation and static graph optimization, that can be leveraged to speed up the process of stochastic gradient descent (DPSGD) in train...
[ -1, 7, -1, -1, -1, -1, -1, -1, -1, 3, 7, 7 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "HAJtibNpPdh", "nips_2021_hw2VfWAr6t6", "_iDMzbyXWn-", "OzwnaSJNeGF", "4lyGdLql0lo", "LBJbFfuuGES", "tYPaDmNtGIq", "qHFhs04RoRo", "5GqqCFsLhWI", "nips_2021_hw2VfWAr6t6", "nips_2021_hw2VfWAr6t6", "nips_2021_hw2VfWAr6t6" ]
nips_2021_OKPS9YdZ8Va
The effectiveness of feature attribution methods and its correlation with automatic evaluation scores
Explaining the decisions of an Artificial Intelligence (AI) model is increasingly critical in many real-world, high-stakes applications. Hundreds of papers have proposed new feature attribution methods or discussed and harnessed these tools in their work. However, despite humans being the target end-users, most attribution methods were only evaluated on proxy automatic-evaluation metrics (Zhang et al. 2018; Zhou et al. 2016; Petsiuk et al. 2018). In this paper, we conduct the first user study to measure attribution map effectiveness in assisting humans in ImageNet classification and Stanford Dogs fine-grained classification, and when an image is natural or adversarial (i.e., contains adversarial perturbations). Overall, feature attribution is surprisingly not more effective than showing humans nearest training-set examples. On the harder task of fine-grained dog categorization, presenting attribution maps to humans does not help, but instead hurts the performance of human-AI teams compared to AI alone. Importantly, we found automatic attribution-map evaluation measures to correlate poorly with the actual human-AI team performance. Our findings encourage the community to rigorously test their methods on the downstream human-in-the-loop applications and to rethink the existing evaluation metrics.
accept
This paper presents a user study aimed at evaluating the effectiveness of feature attribution methods (e.g. GradCAM) in assisting human decision making. Interestingly, feature attribution methods are found to not be better than a comparatively simple 3-nearest-neighbor-based method; in some cases, a feature attribution map was even harmful to human performance. Reviewers found the paper to be compelling and well-written, though individual reviewers raised specific potential concerns, such as the size of the study (320 subjects) or the relative artificiality of the task (vs. a real-world application). However, in totality, reviews were positive and the authors’ rebuttals were helpful for clarifying reviewer questions.
train
[ "8fl_xpg2k_e", "i51Kzu9UU5", "ArNFbvP2xIJ", "CHwZhuisZjp", "9s9bvniM3P", "IS6OMlzwaZ6", "1eB2R1KBBw", "XMlRHCK4sjD", "o3Q85n_Rz-W", "gH66fEv-TRq", "bYVuH6IqWu", "c5msNZW3CnL" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer **Tf4g**,\n\nThank you for your inputs and encouragement again! We have happily updated our manuscript in light of your comments.\n\nWe are a bit anxious and new to this NeurIPS 2021 author-reviewer discussion format---and we wonder if you have any further thoughts/questions given our rebuttal that ...
[ -1, -1, -1, 6, 7, -1, -1, -1, -1, -1, 7, 6 ]
[ -1, -1, -1, 4, 4, -1, -1, -1, -1, -1, 4, 5 ]
[ "gH66fEv-TRq", "XMlRHCK4sjD", "1eB2R1KBBw", "nips_2021_OKPS9YdZ8Va", "nips_2021_OKPS9YdZ8Va", "o3Q85n_Rz-W", "CHwZhuisZjp", "c5msNZW3CnL", "9s9bvniM3P", "bYVuH6IqWu", "nips_2021_OKPS9YdZ8Va", "nips_2021_OKPS9YdZ8Va" ]
nips_2021_iCJFwoy1T-q
Coordinated Proximal Policy Optimization
We present Coordinated Proximal Policy Optimization (CoPPO), an algorithm that extends the original Proximal Policy Optimization (PPO) to the multi-agent setting. The key idea lies in the coordinated adaptation of step size during the policy update process among multiple agents. We prove the monotonicity of policy improvement when optimizing a theoretically-grounded joint objective, and derive a simplified optimization objective based on a set of approximations. We then interpret that such an objective in CoPPO can achieve dynamic credit assignment among agents, thereby alleviating the high variance issue during the concurrent update of agent policies. Finally, we demonstrate that CoPPO outperforms several strong baselines and is competitive with the latest multi-agent PPO method (i.e. MAPPO) under typical multi-agent settings, including cooperative matrix games and the StarCraft II micromanagement tasks.
accept
The reviewers and AC discussed the paper. There was agreement that the approach is interesting and the results are impressive. The author response was also helpful in addressing some of the concerns of the reviewers (e.g., additional experimental results). There are still some concerns about the novelty of the approach and the comparisons with MAPPO. These should be clarified.
train
[ "nOGRBy-hJbG", "vc2SKSC-oTb", "c86tmrIxG1", "5GSPh7z0GoZ", "ZUr_WxlKuA", "gTu-mP0Ic1i", "6Hm3ywN6km", "p4yjsL0hdOO", "zJeHnFXlTMU", "0ULihdI_-U_" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " 1. We carefully check the paper from the link you have provided and find that the authors of MAPPO have redone the experiments and updated the result of 5m\\_vs\\_6m. The new arxiv version was uploaded on July 5th, which is after the submission deadline of NeurlPS. However, when preparing our paper, we referred t...
[ -1, -1, -1, -1, -1, -1, 5, 6, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 5, 4, 3 ]
[ "vc2SKSC-oTb", "c86tmrIxG1", "6Hm3ywN6km", "0ULihdI_-U_", "zJeHnFXlTMU", "p4yjsL0hdOO", "nips_2021_iCJFwoy1T-q", "nips_2021_iCJFwoy1T-q", "nips_2021_iCJFwoy1T-q", "nips_2021_iCJFwoy1T-q" ]
nips_2021_2OqZZAqxnn
Unbiased Classification through Bias-Contrastive and Bias-Balanced Learning
Datasets for training machine learning models tend to be biased unless the data is collected with complete care. In such a biased dataset, models are susceptible to making predictions based on the biased features of the data. The biased model fails to generalize to the case where correlations between biases and targets are shifted. To mitigate this, we propose Bias-Contrastive (BiasCon) loss based on the contrastive learning framework, which effectively leverages the knowledge of bias labels. We further suggest Bias-Balanced (BiasBal) regression which trains the classification model toward the data distribution with balanced target-bias correlation. Furthermore, we propose Soft Bias-Contrastive (SoftCon) loss which handles the dataset without bias labels by softening the pair assignment of the BiasCon loss based on the distance in the feature space of the bias-capturing model. Our experiments show that our proposed methods significantly improve previous debiasing methods in various realistic datasets.
accept
The paper proposes new loss functions for training on datasets with feature bias. The reviewers find the proposals to be novel and the experiment results to be strong, and recommend acceptance. There were some concerns raised about the rationale behind some of the design choices, and missing comparisons to some baselines. The authors have satisfactorily addressed these concerns over multiple back-and-forth discussions with the reviewers, and have provided additional supporting experimental results. We trust that the authors will include the additional results in the final version of the paper.
train
[ "60biIwFNgsc", "Zs0avOIUr0", "-SGpsMy4ohh", "khHPBjCxik", "E6peMdx4dB", "ArP2Ss45pcd", "MOgvpja5s9v", "yoRGKabnWK", "l_uBuvgYIV", "4_DiY_0HL7D", "iPX0qX4KGh", "oGTLqiWApFQ", "tK6VHrWKYyJ", "3HsDL-gJCPq", "j5FNBJ-RTEY", "_P4Y71hsAy2", "3RUG_wJA8SS", "1uFgx2TBod", "PeV04evbipO" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "...
[ " Thank you for the detailed responses and the additional experiments. This satisfies my main concerns and I will recommend acceptance.", "The authors propose a Bias-Contrastive (BiasCon) loss, Soften Bias-Contrastive (SoftCon) loss and Bias-Balanced (BiasBal) regression to mitigate model biases. The motivation...
[ -1, 6, 6, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, 4, 4, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "E6peMdx4dB", "nips_2021_2OqZZAqxnn", "nips_2021_2OqZZAqxnn", "ArP2Ss45pcd", "4_DiY_0HL7D", "yoRGKabnWK", "nips_2021_2OqZZAqxnn", "4_DiY_0HL7D", "-SGpsMy4ohh", "3RUG_wJA8SS", "tK6VHrWKYyJ", "3HsDL-gJCPq", "j5FNBJ-RTEY", "_P4Y71hsAy2", "1uFgx2TBod", "PeV04evbipO", "Zs0avOIUr0", "MOg...
nips_2021_lDVeaQIScg
Learning from Inside: Self-driven Siamese Sampling and Reasoning for Video Question Answering
Recent advances in the video question answering (i.e., VideoQA) task have achieved strong success by following the paradigm of fine-tuning each clip-text pair independently on a pretrained transformer-based model via supervised learning. Intuitively, multiple samples (i.e., clips) should be interdependent to capture similar visual and key semantic information in the same video. To incorporate the interdependent knowledge between contextual clips into the network inference, we propose a Siamese Sampling and Reasoning (SiaSamRea) approach, which consists of a siamese sampling mechanism to generate sparse and similar clips (i.e., siamese clips) from the same video, and a novel reasoning strategy for integrating the interdependent knowledge between contextual clips into the network. The reasoning strategy contains two modules: (1) siamese knowledge generation to learn the inter-relationship among clips; (2) siamese knowledge reasoning to produce the refined soft label by propagating the weights of inter-relationship to the predicted candidates of all clips. Finally, our SiaSamRea can endow the current multimodal reasoning paradigm with the ability to learn from inside via the guidance of soft labels. Extensive experiments demonstrate our SiaSamRea achieves state-of-the-art performance on five VideoQA benchmarks, e.g., a significant +2.1% gain on MSRVTT-QA, +2.9% on MSVD-QA, +1.0% on ActivityNet-QA, +1.8% on How2QA and +4.3% (action) on TGIF-QA.
accept
The paper received mixed recommendations from the reviewers, with one reviewer increasing their score after the author response, leading to ratings of 2x5, 1x6, and 1x7. One concern remaining after the author response was whether the novel approach is specific enough to VideoQA or should be evaluated on other video recognition tasks. However, I think (a) answering this question is outside the scope of this paper and remains to be seen, as the question/task conditioning might be an important component of what makes the proposed approach so successful, and if future work shows it generalizes to other tasks, that will only increase the impact of this work further; and (b) the strong results (in combination with the novel approach in the VideoQA field) are of interest to the community. Given the interesting model and strong results (outperforming recent work, including CoMVT [34] from CVPR 21) on 5 datasets, and the solid ablation study, I recommend acceptance under the expectation that the authors revise the paper according to the author response and address the reviewers' concerns, including, but not limited to: 1) ablating W1, W2; 2) ablation with multiple copies; 3) releasing the code; 4) additional details about where the sampling of the anchor and Siamese clips occurs. Additionally, I would like the authors to add the missing comparison to [22] on TGIF, and recommend proofreading the paper for English fluency.
train
[ "rLtJNeRymiH", "sDt1ox6CtSV", "pdK9gzulR2D", "U19G12NHkJ", "uzaVYjoaBe6", "zA9XN4zWIC", "pkXZvWegPqI", "gQ4LAgecjwZ", "2ehZMx7Mfzs", "T5VObYRJ4L", "1Kk6KrXH2un", "y-O1_eBuEjx", "y4lVtUKj2a4", "fdyqU9CcXlg", "Q4oGYJ72p6", "uStXPqr5U3S", "x-CECf5kXMO" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " My question has been addressed. I will keep my original rating.\n", " I would like to keep the Rating as 5: Marginally below the acceptance threshold. The proposed method is more like a generic video understanding method. It can be applied to most video understanding tasks to learn from the inside by using soft...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 5 ]
[ "y4lVtUKj2a4", "y-O1_eBuEjx", "U19G12NHkJ", "nips_2021_lDVeaQIScg", "x-CECf5kXMO", "x-CECf5kXMO", "x-CECf5kXMO", "x-CECf5kXMO", "x-CECf5kXMO", "uStXPqr5U3S", "Q4oGYJ72p6", "Q4oGYJ72p6", "fdyqU9CcXlg", "nips_2021_lDVeaQIScg", "nips_2021_lDVeaQIScg", "nips_2021_lDVeaQIScg", "nips_2021_...
nips_2021_LJSnwCx7wzj
Identification and Estimation of Joint Probabilities of Potential Outcomes in Observational Studies with Covariate Information
The joint probabilities of potential outcomes are fundamental components of causal inference in the sense that (i) if they are identifiable, then the causal risk is also identifiable, but not vice versa (Pearl, 2009; Tian and Pearl, 2000) and (ii) they enable us to evaluate the probabilistic aspects of ``necessity'', ``sufficiency'', and ``necessity and sufficiency'', which are important concepts of successful explanation (Watson, et al., 2020). However, because they are not identifiable without any assumptions, various assumptions have been utilized to evaluate the joint probabilities of potential outcomes, e.g., the assumption of monotonicity (Pearl, 2009; Tian and Pearl, 2000), the independence between potential outcomes (Robins and Richardson, 2011), the condition of gain equality (Li and Pearl, 2019), and the specific functional relationships between cause and effect (Pearl, 2009). Unlike existing identification conditions, in order to evaluate the joint probabilities of potential outcomes without such assumptions, this paper proposes two types of novel identification conditions using covariate information. In addition, when the joint probabilities of potential outcomes are identifiable through the proposed conditions, the estimation problem of the joint probabilities of potential outcomes reduces to that of singular models and thus they cannot be evaluated by standard statistical estimation methods. To solve the problem, this paper proposes a new statistical estimation method based on the augmented Lagrangian method and shows the asymptotic normality of the proposed estimators. Given space constraints, the proofs, the details of the statistical estimation method, some numerical experiments, and the case study are provided in the supplementary material.
accept
All reviewers have a favorable opinion of this paper after discussion. Paraphrasing one of the reviewer comments, identification results for joint probabilities of potential outcomes are "very rare" and definitely crucial for many important quantities like the probability of sufficiency, necessity, etc., and the individual treatment effect. This paper provides novel conditions and an estimation procedure for solving this problem. So clearly the contributions are fundamental, original, and timely. There is a reviewer comment asking for a formal statement of the asymptotic normality, some empirical comparison, and further examples for the theorems. I really hope the authors can revise accordingly.
train
[ "edz6yxBhyCW", "egsXiOsojv0", "rgADSImTFd0", "7CUkGr4Uw3", "vGF8ydYQmSd", "YRolaZ2Ucm", "CY6J4ZRAdA7", "HQRh1mCSKEz", "V164QC9oEC2", "EfhfsJaFEiA", "kxGtX8NM17_", "Ke22ggmAbh7", "ii_lpXRzlgz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies the identification and estimation problem of the joint probabilities of potential outcomes. To solve this problem, this paper proposes two sets of identification conditions using covariate information. Furthermore, this paper proposes a novel statistical estimation method based on the augmented ...
[ 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ 2, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "nips_2021_LJSnwCx7wzj", "HQRh1mCSKEz", "YRolaZ2Ucm", "nips_2021_LJSnwCx7wzj", "kxGtX8NM17_", "CY6J4ZRAdA7", "EfhfsJaFEiA", "edz6yxBhyCW", "7CUkGr4Uw3", "Ke22ggmAbh7", "ii_lpXRzlgz", "nips_2021_LJSnwCx7wzj", "nips_2021_LJSnwCx7wzj" ]
nips_2021_NvN_B_ZEY5c
Online false discovery rate control for anomaly detection in time series
This article proposes novel rules for false discovery rate control (FDRC) geared towards online anomaly detection in time series. Online FDRC rules allow one to control the properties of a sequence of statistical tests. In the context of anomaly detection, the null hypothesis is that an observation is normal and the alternative is that it is anomalous. FDRC rules allow users to target a lower bound on precision in unsupervised settings. The methods proposed in this article overcome shortcomings of previous FDRC rules in the context of anomaly detection, in particular ensuring that power remains high even when the alternative is exceedingly rare (typical in anomaly detection) and the test statistics are serially dependent (typical in time series). We show the soundness of these rules in both theory and experiments.
accept
After the rebuttal phase, the paper now has only positive reviews, and as such could be accepted. The main issues found during the review phase are the somewhat unclear novelty and the insufficient experiments. After carefully reading the paper again, I need to add another concern: the independence assumption of the p-values. While the authors try to address this issue in Section 5, their approach is somewhat limited: - It is assumed that sufficiently far away p-values are independent; this is measured by the distance L in time, see (10). However, it remains unclear whether and when this assumption is satisfied, and the authors do not discuss this issue. - The only proposed method working for (10) needs to know L, which again sounds rather unrealistic. In summary, despite having a good average rating and a positive rebuttal phase, I personally see this as a true borderline case.
test
[ "YaX7c_KjtXA", "5QEKOKpPRoX", "slzn-sobFjd", "KYOOLrxt3Fc", "un0uyBvpDYu", "Nz-YGNnyNsH", "QVA7Ivl3oyp", "x-ECIEJcGp", "H5-5rNH-t-U", "qdhnNgOEe-6", "sPlt6DBsfzc", "I6tX_w7-rCl", "CUQtzLU3btW", "RLPReGtCKoc", "jWRKELddo8b", "_xpYwONHcVs", "HRCVYHoKeP", "hE1eKyOmPHD", "4ecTZgKxzS"...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_r...
[ " I would like to see the difference and advancement over the method in the paper [18], which used sFDR already. You mentioned that the proposed method is inspired by the paper [18], so it is better to clarify the difference.", "This paper address online false discovery rate (FDR $\\simeq 1-\\mathrm{Precision}$) ...
[ -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 9, 7 ]
[ -1, 3, -1, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "hE1eKyOmPHD", "nips_2021_NvN_B_ZEY5c", "KYOOLrxt3Fc", "CUQtzLU3btW", "nips_2021_NvN_B_ZEY5c", "QVA7Ivl3oyp", "x-ECIEJcGp", "I6tX_w7-rCl", "nips_2021_NvN_B_ZEY5c", "5QEKOKpPRoX", "IxU3Oi6OCtg", "un0uyBvpDYu", "jWRKELddo8b", "Xt3Cfrbfxa", "RLPReGtCKoc", "5QEKOKpPRoX", "un0uyBvpDYu", ...
nips_2021_ClwfZc4ooKM
Pragmatic Image Compression for Human-in-the-Loop Decision-Making
Standard lossy image compression algorithms aim to preserve an image's appearance, while minimizing the number of bits needed to transmit it. However, the amount of information actually needed by the user for downstream tasks -- e.g., deciding which product to click on in a shopping website -- is likely much lower. To achieve this lower bitrate, we would ideally only transmit the visual features that drive user behavior, while discarding details irrelevant to the user's decisions. We approach this problem by training a compression model through human-in-the-loop learning as the user performs tasks with the compressed images. The key insight is to train the model to produce a compressed image that induces the user to take the same action that they would have taken had they seen the original image. To approximate the loss function for this model, we train a discriminator that tries to distinguish whether a user's action was taken in response to the compressed image or the original. We evaluate our method through experiments with human participants on four tasks: reading handwritten digits, verifying photos of faces, browsing an online shopping catalogue, and playing a car racing video game. The results show that our method learns to match the user's actions with and without compression at lower bitrates than baseline methods, and adapts the compression model to the user's behavior: it preserves the digit number and randomizes handwriting style in the digit reading task, preserves hats and eyeglasses while randomizing faces in the photo verification task, preserves the perceived price of an item while randomizing its color and background in the online shopping task, and preserves upcoming bends in the road in the car racing game.
accept
Strengths: - Novel and elegant approach - Clever training procedure that incorporates humans in-the-loop - Can potentially give insight into the human decision-making process (from visual stimuli) - Well-executed experiments involving real human subject feedback Weaknesses: - No clear application or real-world use case - Requires a more careful discussion on algorithmic decision support and agency Summary: Reviewers were mostly unanimous in their opinion that this is a solid contribution to the growing literature on learning to support human decision-making. One reviewer initially raised several concerns regarding practicality, but the authors' response (which included additional experimentation) was helpful in increasing his score. Learning with humans in-the-loop is challenging, and this paper does a commendable job at executing experiments in this setting. As one reviewer comments, the authors should clarify the amount of human resources (time, queries, etc.) required by their approach. But more importantly, I would encourage the authors to discuss in detail the potential impacts of deploying their algorithm in the real world. Issues such as the role of algorithms for decision-support and their relation to human agency should be acknowledged, especially given that the paper does not highlight any specific use-case. The authors should also carefully consider and justify what they measure and why. All in all, the paper presents a fresh approach and is likely to be the focus of interesting follow-up work.
test
[ "4JUO43Yr3ZJ", "cXJ12pEyiH", "ATEmrcLdyqs", "1NRQTTW_FPL", "MOMRO8kgqOU", "15oHG-omFL", "kKiyEngMUpT", "UVxmlYC4tBz", "6_odr182H10", "CmRDFALseNJ", "OqBJk218HWu", "fWH6GDNb8mO", "ZfTHokScyWO", "N46gjNDLH9d", "ttWq0LM5zCU", "S27tEia8gVn" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a downstream task specific compression method by training a compression model through human-in-the-loop leaning, which adaptively maintain core information based on whether human can make the same decisions when they see compressed or original images. The user study show the proposed method outp...
[ 6, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ 4, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ "nips_2021_ClwfZc4ooKM", "1NRQTTW_FPL", "N46gjNDLH9d", "kKiyEngMUpT", "nips_2021_ClwfZc4ooKM", "OqBJk218HWu", "UVxmlYC4tBz", "6_odr182H10", "CmRDFALseNJ", "fWH6GDNb8mO", "MOMRO8kgqOU", "4JUO43Yr3ZJ", "S27tEia8gVn", "ttWq0LM5zCU", "nips_2021_ClwfZc4ooKM", "nips_2021_ClwfZc4ooKM" ]
nips_2021_BEVDmheFG0
Generalized Linear Bandits with Local Differential Privacy
Yuxuan Han, Zhipeng Liang, Yang Wang, Jiheng Zhang
accept
This paper develops LDP algorithms for contextual bandits. The algorithms are novel, the problem is timely, and the results are interesting. The paper should be of interest to both bandit/online-learning and privacy researchers.
train
[ "X43cOyNpvms", "H3uoFUCu79j", "TvvBSiTKnG", "tgMUHEtN4ZI", "FAkRFBxUah", "BQSXUD2q9O0", "a9w-EHpl1WR", "rBwRLxXti2n" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Review remain unaltered.", "This paper proposes to design and analyze algorithms for generalized linear bandits with local differential privacy (LDP). The main idea is to design a stochastic gradient based estimator to ensure LDP. It is show, by a comparison to the lower bound that the worst-case results contai...
[ -1, 6, -1, -1, -1, -1, 6, 7 ]
[ -1, 3, -1, -1, -1, -1, 2, 4 ]
[ "FAkRFBxUah", "nips_2021_BEVDmheFG0", "BQSXUD2q9O0", "a9w-EHpl1WR", "rBwRLxXti2n", "H3uoFUCu79j", "nips_2021_BEVDmheFG0", "nips_2021_BEVDmheFG0" ]
nips_2021_xz80iPFIjvG
On the Algorithmic Stability of Adversarial Training
Adversarial training is a popular tool to remedy the vulnerability of deep learning models to adversarial attacks, and there is a rich theoretical literature on the training loss of adversarial training algorithms. In contrast, this paper studies the algorithmic stability of a generic adversarial training algorithm, which can further help to establish an upper bound for the generalization error. By figuring out the stability upper and lower bounds, we argue that the non-differentiability issue of adversarial training causes worse algorithmic stability than that of its natural counterpart. To tackle this problem, we consider a noise injection method. While the non-differentiability problem seriously affects the stability of adversarial training, injecting noise enables the training trajectory to avoid the occurrence of non-differentiability with dominating probability, hence enhancing the stability performance of adversarial training. Our analysis also studies the relation between the algorithmic stability and the numerical approximation error of adversarial attacks.
accept
The large generalization gap of adversarial training is a central problem of adversarial robustness. This paper makes the first attempt to study this problem from an algorithmic-stability point of view. The authors establish upper and lower bounds for the robust accuracy of adversarial training and investigate the causes of the large generalization gap. In addition, the paper proposes a noise-injection method to improve the algorithmic stability and therefore reduce the generalization gap of adversarial training. Overall, this is a nice contribution to NeurIPS. It provides insightful understanding of a key problem of adversarial training. On the other side, the paper mainly contains theoretical analysis. More experimental study would benefit more readers. I recommend accept.
train
[ "2VZbAvSGKE", "a61kcx2hY1", "aIJ3Ju8zIQW", "iNRWh1UBAsT", "wTL9pHxQ9Lf", "X3tUQ6e8fZK", "owxjJnBSg6b", "qieVKZCJ9wr", "gRNM1v_eNuk", "F6tvrOuDZvm" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate your effort in reviewing this paper and following up with our feedbacks! We will add the additional related references and improve the clarity of our paper based on the comments from Reviewer 6caz. ", "This paper provides the argument stability bound for adversarial training for linear models and ...
[ -1, 7, -1, -1, -1, -1, -1, 7, 5, 5 ]
[ -1, 3, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "aIJ3Ju8zIQW", "nips_2021_xz80iPFIjvG", "X3tUQ6e8fZK", "F6tvrOuDZvm", "gRNM1v_eNuk", "a61kcx2hY1", "qieVKZCJ9wr", "nips_2021_xz80iPFIjvG", "nips_2021_xz80iPFIjvG", "nips_2021_xz80iPFIjvG" ]
nips_2021_U0k2DVAED5
Width-based Lookaheads with Learnt Base Policies and Heuristics Over the Atari-2600 Benchmark
Stefan O'Toole, Nir Lipovetzky, Miquel Ramirez, Adrian Pearce
accept
The reviewers have written careful reviews and are generally in agreement that the paper is making a nice contribution. The authors are strongly encouraged to revise the final paper in accordance with the substantial feedback from the reviewers.
train
[ "FASZyRrhZxp", "-WcMCCiXz5b", "uBodSYP3-0U", "Mnt4WatQc6R", "RL-XgwOH8H", "lRXz-NK0gPy", "mLK5N3Ui_ZG", "cTLSTcyiw5f", "KBI68plvpUv", "vGpIHpw6ke", "VyTpkHoLRYD", "bZCmEBcDuac", "UCjEEwQrmB4", "1QaBVDtbs0P", "GYkRaJZVWzJ" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks again for the great discussion points, they are spot on.\n\nLet us clarify that we were not trying to claim that skipping frames is an advantage, we were rather just pointing out the ambiguities and implications of using the number of frames including the skipped ones. We think the least ambiguous way of d...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, 4, 4 ]
[ "-WcMCCiXz5b", "Mnt4WatQc6R", "UCjEEwQrmB4", "RL-XgwOH8H", "lRXz-NK0gPy", "mLK5N3Ui_ZG", "bZCmEBcDuac", "nips_2021_U0k2DVAED5", "nips_2021_U0k2DVAED5", "VyTpkHoLRYD", "KBI68plvpUv", "1QaBVDtbs0P", "GYkRaJZVWzJ", "nips_2021_U0k2DVAED5", "nips_2021_U0k2DVAED5" ]
nips_2021_a2Gr9gNFD-J
Characterizing possible failure modes in physics-informed neural networks
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models. The typical approach is to incorporate physical domain knowledge as soft constraints on an empirical loss function and use existing machine learning methodologies to train the model. We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena for even slightly more complex problems. In particular, we analyze several distinct situations of widespread physical interest, including learning differential equations with convection, reaction, and diffusion operators. We provide evidence that the soft regularization in PINNs, which involves PDE-based differential operators, can introduce a number of subtle problems, including making the problem more ill-conditioned. Importantly, we show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize. We then describe two promising solutions to address these failure modes. The first approach is to use curriculum regularization, where the PINN's loss term starts from a simple PDE regularization, and becomes progressively more complex as the NN gets trained. The second approach is to pose the problem as a sequence-to-sequence learning task, rather than learning to predict the entire space-time at once. Extensive testing shows that we can achieve up to 1-2 orders of magnitude lower error with these methods as compared to regular PINN training.
accept
This paper addresses an interesting and important topic, and performs a careful and well-motivated analysis, producing very practical lessons. There was active discussion of this paper -- 3 out of 4 reviewers raised their scores as a result, and all 4 reviewers voted for acceptance. I am therefore recommending that this paper be accepted.
train
[ "pwhXPJtFu-J", "GzfqbfrJ-B-", "bApFeGg3Hma", "ouVNDtL2rCB", "qTlbiUogHm", "AbxBiQaeFT-", "Mqw07MvHprt", "880_ouZ1uZy", "9fQMDYhpciQ", "hRoyZ2URVdb", "a5nOWifymSp", "JQBPrvGEiGe", "MR9Qpue051", "WMlDwCHP5Qd" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer" ]
[ " To Reviewers 1, 2, 3, 4:\n\nWe would like to thank all the reviewers for taking the time and providing their valuable feedback. Several reviewers seemed to have similar questions about Section 5 (curriculum learning and Seq2Seq), \nand we address those shared questions here (individual responses to specific revie...
[ -1, 7, -1, -1, 6, -1, 7, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, 4, -1, -1, 4, -1, 3, -1, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_a2Gr9gNFD-J", "nips_2021_a2Gr9gNFD-J", "Mqw07MvHprt", "AbxBiQaeFT-", "nips_2021_a2Gr9gNFD-J", "9fQMDYhpciQ", "nips_2021_a2Gr9gNFD-J", "bApFeGg3Hma", "a5nOWifymSp", "WMlDwCHP5Qd", "MR9Qpue051", "GzfqbfrJ-B-", "qTlbiUogHm", "nips_2021_a2Gr9gNFD-J" ]
nips_2021_hm0i-cunzGW
Artistic Style Transfer with Internal-external Learning and Contrastive Learning
Although existing artistic style transfer methods have achieved significant improvement with deep neural networks, they still suffer from artifacts such as disharmonious colors and repetitive patterns. Motivated by this, we propose an internal-external style transfer method with two contrastive losses. Specifically, we utilize internal statistics of a single style image to determine the colors and texture patterns of the stylized image, and, at the same time, we leverage the external information of the large-scale style dataset to learn the human-aware style information, which makes the color distributions and texture patterns in the stylized image more reasonable and harmonious. In addition, we argue that existing style transfer methods only consider the content-to-stylization and style-to-stylization relations, neglecting the stylization-to-stylization relations. To address this issue, we introduce two contrastive losses, which pull the multiple stylization embeddings closer to each other when they share the same content or style, but push them far apart otherwise. We conduct extensive experiments, showing that our proposed method can not only produce visually more harmonious and satisfying artistic images, but also promote the stability and consistency of rendered video clips.
accept
The submission proposes an extension of SANet's artistic style transfer loss formulation which adds adversarial and contrastive terms to the loss. The adversarial loss term is computed using a discriminator trained to classify images as either belonging to the style dataset or to the distribution of stylized content images. The contrastive loss terms encourage the embeddings of stylized images which either share the same content or style image to be closer and vice versa. The proposed approach is compared against SANet and other baselines through qualitative (showing stylizations) and quantitative (user study, LPIPS) means. Stylizations are compared for a few content/style image pairs to qualitatively assess the effect of the adversarial and contrastive loss terms. Reviewers appreciate the writing quality and agree on the fact that the results presented in the submission represent an improvement over previous approaches, but there is disagreement on whether the analysis of the effect of individual loss components is sufficient, and on whether the limitations of the GAN loss are sufficiently discussed. On the limitations of the GAN loss, reviewer JMx5 objects that its benefit may not be realized for style images that are very different from the training styles. The authors respond that the style images they evaluate on have not been seen during training. Reviewer zxcr points out in the reviewer discussion that the extent to which the effect of the GAN loss generalizes to held-out style images likely depends on whether they are in-distribution or out-of-distribution (the "distribution" here being the distribution of style images seen during training). This is a good point, and the authors' response here isn't entirely satisfactory: even if the model generalizes to in-distribution held-out style images, there's no guarantee that it would generalize to out-of-distribution style images, and that limitation should be discussed. In the discussion, Reviewer vsGU mentions that discussing the limitations of the proposed approach belongs to the main paper and not the Appendix. I agree with that. There was also a debate on whether the distinction between style transfer and domain translation is artificial, which I think requires some nuance. Since domain translation approaches work by imitating the statistics of a population of domain-specific images, it does feel weird that the submission dismisses style transfer approaches as "neglecting the external style information reserved in the large-scale style dataset" without acknowledging domain translation approaches. There is a distinction between the two (the former targets a specific style image while the latter targets the common "look and feel" of a collection of style images), but I think the submission would benefit from being more generous in drawing connections between various families of approaches. On the influence of the added loss terms, I would break the issue down into two questions: - Are the current ablations convincing enough? - If they are not, how much does that weaken the submission? I agree with Reviewers JMx5 and zxcr that a qualitative assessment on three content/style image pairs is a very small number of data points to draw conclusions from. This issue could have been fixed by adding ablated models among the approaches evaluated in Section 4.3's user study. 
On whether this is a serious issue, the risk here is that the effect of one of the two loss terms could be marginal, which would nullify one of the submission's two major contributions. The principle of incorporating global information from a corpus of styles into the training signal remains an interesting contribution nevertheless, but obviously the paper's impact would be more substantial if it better characterized the contributions of the two loss terms. This submission is very much a borderline case. I read the paper attentively, and I agree that the idea of incorporating information from a corpus of styles rather than treating each style as separate is a new and interesting contribution which incorporates insights from domain translation. Even though the submission has its flaws, I recommend accepting it provided the following is addressed in the final manuscript: - A discussion on the approach's limitations, and in particular those of its GAN loss, is incorporated into the main text. - The relationship between the proposed approach and domain translation is better highlighted. - Additional examples are shown in the Appendix to accompany Figure 5.
train
[ "OSBx8Zsug7Y", "H-kUPFf356l", "HFc-nZlJRc-", "1l4wKipUAkt", "50GP0tHN5Q", "EBnBdoV5B0y", "ogWl-qNNETi", "uVtrykZ1gCA", "WCoS7Jra2Vs", "uchrUQWRG1m" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The contribution of the work aims to improve style transfer in imagery like artwork. The ethical ramifications of this work (e.g. injury, safety, security, human rights, surveillance) are not very salient. The authors have acknowledged both positive and negative prospective social impacts in their supplement. Th...
[ -1, -1, -1, -1, -1, -1, 6, 7, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 5, 3 ]
[ "nips_2021_hm0i-cunzGW", "ogWl-qNNETi", "uchrUQWRG1m", "uVtrykZ1gCA", "WCoS7Jra2Vs", "nips_2021_hm0i-cunzGW", "nips_2021_hm0i-cunzGW", "nips_2021_hm0i-cunzGW", "nips_2021_hm0i-cunzGW", "nips_2021_hm0i-cunzGW" ]
nips_2021_UMrf6F4Tg9c
Fast Abductive Learning by Similarity-based Consistency Optimization
To utilize the raw inputs and symbolic knowledge simultaneously, some recent neuro-symbolic learning methods use abduction, i.e., abductive reasoning, to integrate sub-symbolic perception and logical inference. When the perception model, e.g., a neural network, outputs some facts that are inconsistent with the symbolic background knowledge base, abduction can help revise the incorrectly perceived facts by minimizing the inconsistency between them and the background knowledge. However, to enable effective abduction, previous approaches need an initialized perception model that discriminates the raw input instances. This limits the application of these methods, as the discrimination ability is usually acquired from thorough pre-training when the raw inputs are difficult to classify. In this paper, we propose a novel abduction strategy, which leverages the similarity between samples, rather than the output information of the perceptual neural network, to guide the search in abduction. Based on this principle, we further present ABductive Learning with Similarity (ABLSim) and apply it to some difficult neuro-symbolic learning tasks. Experiments show that the efficiency of ABLSim is significantly higher than that of state-of-the-art neuro-symbolic methods, allowing it to achieve better performance with less labeled data and weaker domain knowledge.
accept
There were two main concerns with this paper --- the algorithm's exponential space requirement and the fairness of the experimental comparisons. I am not worried about the worst-case exponential space requirement of the exact algorithm. Many algorithms have bad worst-case behavior while having fast and effective approximations (as several reviewers note). The one low score on this paper is due to the exponential space issue. Also, the discussion of the fairness of empirical comparisons convinces me that this is an issue in the clarity of the writing rather than a fundamental issue with the paper.
test
[ "hzISucvdOjH", "XIdfh6TQ5ay", "LMuo2xJzmR", "eKgmx27Swlw", "kswrwQBDBcs", "QKEqLmG_e8k", "fQ81RkqENBK", "0KNubvGhOx5", "rjfpr5faabW", "IdUrSQgN4b", "bDRixFt7THa", "qopmaeUfFlB", "o_NE3Sh8RwL", "0VqRd7ZtD5" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **[Q4+]** About exponential search space.\n\n**[A4+]** Indeed, this kind of neuro-symbolic learning methods where pseudo-labels exist (such as ABL and DeepProbLog), face the problem of exponential complexity. However, ABLSim tries to prune the search through abduction and similarity, so it has a higher efficiency...
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "XIdfh6TQ5ay", "IdUrSQgN4b", "kswrwQBDBcs", "nips_2021_UMrf6F4Tg9c", "0KNubvGhOx5", "fQ81RkqENBK", "bDRixFt7THa", "eKgmx27Swlw", "qopmaeUfFlB", "o_NE3Sh8RwL", "0VqRd7ZtD5", "nips_2021_UMrf6F4Tg9c", "nips_2021_UMrf6F4Tg9c", "nips_2021_UMrf6F4Tg9c" ]
nips_2021_jUL1lnsiU9
To Beam Or Not To Beam: That is a Question of Cooperation for Language GANs
Due to the discrete nature of words, language GANs must be optimized from rewards provided by discriminator networks, via reinforcement learning methods. This is a much harder setting than that of continuous tasks, which enjoy gradient flows from discriminators to generators, and it usually leads to dramatic learning instabilities. However, we claim that this can be solved by making the discriminator and generator networks cooperate to produce output sequences during training. These cooperative outputs, inherently built to obtain higher discrimination scores, not only provide denser rewards for training but also form a more compact artificial set for discriminator training, hence improving its accuracy and stability. In this paper, we show that our SelfGAN framework, built on this cooperative principle, outperforms Teacher Forcing and obtains state-of-the-art results on two challenging tasks, Summarization and Question Generation.
accept
This work demonstrates the effectiveness of a novel and interesting approach to utilising GANs with cooperative training and MCTS. The paper is well written and easy to follow. Though the initial results in the paper lacked evaluation on an important task (unconditional text generation) and didn't evaluate the diversity of generated text, the strong rebuttal resolves these issues and gives convincing evidence of the effectiveness of SelfGAN and MCTS. Hopefully this paper inspires more work in this interesting direction.
train
[ "ad4or7b-4GA", "AC1ZUJvMphv", "NMvW0e0yRst", "xNN6mqaIdqt", "Cu6-68kl815", "Hwm4xGFffUo", "NpACQ3ghBoy", "RqUZU33nFX", "PyNXTEFRhEN", "ZOTOAPCNDXu", "5T839DR2nKX", "LD1R0uSCDt3", "Dq-HgW5Kvng" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents a method of training (or rather fine-tuning) text generation models using a GAN-like setup. Instead of using the discriminator directly to update the generator as is usually the case, the authors propose using the discriminator as a way of cooperative decoding to generate high-quality samples w...
[ 7, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 7, 6 ]
[ 3, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4, 2 ]
[ "nips_2021_jUL1lnsiU9", "PyNXTEFRhEN", "Cu6-68kl815", "NpACQ3ghBoy", "RqUZU33nFX", "nips_2021_jUL1lnsiU9", "Dq-HgW5Kvng", "Hwm4xGFffUo", "ad4or7b-4GA", "nips_2021_jUL1lnsiU9", "LD1R0uSCDt3", "nips_2021_jUL1lnsiU9", "nips_2021_jUL1lnsiU9" ]
nips_2021_0XJDcC07tQs
Shapley Residuals: Quantifying the limits of the Shapley value for explanations
Popular feature importance techniques compute additive approximations to nonlinear models by first defining a cooperative game describing the value of different subsets of the model's features, then calculating the resulting game's Shapley values to attribute credit additively between the features. However, the specific modeling settings in which the Shapley values are a poor approximation for the true game have not been well-described. In this paper we utilize an interpretation of Shapley values as the result of an orthogonal projection between vector spaces to calculate a residual representing the kernel component of that projection. We provide an algorithm for computing these residuals, characterize different modeling settings based on the value of the residuals, and demonstrate that they capture information about model predictions that Shapley values cannot. Shapley residuals can thus act as a warning to practitioners against overestimating the degree to which Shapley-value-based explanations give them insight into a model.
accept
The paper presents a way to detect when local feature importance scores might be providing a misleading signal due to feature interaction. This is an interesting problem that was not addressed before. The techniques used are not surprising but are sound. The main limitations of this work are the missing discussion of computational complexity and the fact that an earlier version of this paper appeared in an ICML workshop.
train
[ "5Stm368NHbw", "Txju8yvcDsP", "ZUyhxAX8IOT", "719b7PsIRp9", "dC0mCxG79K", "6DMqHj12TM2", "L_3lNnUsi27", "Oa-yctkklAF", "pRfZsInoUg", "qqtux32Tiye" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I am not sure when you updated your review, because I just now noticed you changed your rating. Could you please specify which comments you thought were improperly addressed, and in particular why these comments caused you to now think this is below the acceptance threshold?", "This paper presents Shapley resid...
[ -1, 4, -1, -1, -1, -1, -1, 5, 7, 7 ]
[ -1, 4, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "Txju8yvcDsP", "nips_2021_0XJDcC07tQs", "dC0mCxG79K", "qqtux32Tiye", "pRfZsInoUg", "Txju8yvcDsP", "Oa-yctkklAF", "nips_2021_0XJDcC07tQs", "nips_2021_0XJDcC07tQs", "nips_2021_0XJDcC07tQs" ]
nips_2021_zL1szwVKdwc
The Elastic Lottery Ticket Hypothesis
The Lottery Ticket Hypothesis (LTH) has drawn keen attention to identifying sparse trainable subnetworks, or winning tickets, which can be trained in isolation to achieve similar or even better performance compared to the full models. Despite many efforts being made, the most effective method to identify such winning tickets is still Iterative Magnitude-based Pruning (IMP), which is computationally expensive and has to be run thoroughly for every different network. A natural question that arises is: can we “transform” the winning ticket found in one network to another with a different architecture, yielding a winning ticket for the latter at the beginning, without re-doing the expensive IMP? Answering this question is not only practically relevant for efficient “once-for-all” winning ticket finding, but also theoretically appealing for uncovering inherently scalable sparse patterns in networks. We conduct extensive experiments on CIFAR-10 and ImageNet, and propose a variety of strategies to tweak the winning tickets found from different networks of the same model family (e.g., ResNets). Based on these results, we articulate the Elastic Lottery Ticket Hypothesis (E-LTH): by mindfully replicating (or dropping) and re-ordering layers for one network, its corresponding winning ticket could be stretched (or squeezed) into a subnetwork for another deeper (or shallower) network from the same family, whose performance is nearly as competitive as that of the latter’s winning ticket found directly by IMP. We have also extensively compared E-LTH with pruning-at-initialization and dynamic sparse training methods, as well as discussed the generalizability of E-LTH to different model families, layer types, and across datasets. Code is available at https://github.com/VITA-Group/ElasticLTH.
accept
This paper investigates whether lottery tickets can transfer across architectures of different depths, following up on prior work which showed transfer across datasets for the same architecture. Reviewers all found the clarity and originality to be strong and the topic is of significant interest. There were some concerns regarding the soundness of the claims presented, but these were resolved through additional experiments performed during the rebuttal period. I would strongly encourage the authors to include these experiments in the final version of the paper. Overall, I think this paper presents a valuable contribution and I recommend acceptance.
val
[ "MRxSltREdv", "MFmRBvBcldW", "NnJjHaOga6", "RpyVs9IhDqO", "WbE0fK74jLd", "Kt1KZOmA0wJ", "It4xdCzgwCU", "Xw5_vAHMY2o", "sQtou3e2ohB", "AGdCgtOIZYo", "z-xAF6o__OV", "mPVJOxV880j", "LCt8LCUGYhw", "n6zbGb9kdkE", "PIlujFWZBto" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper transfer the winning ticket found for one model to other models, the author claims it can reduce the pruning cost for a large-scale model. The motivation is clear. \n\n\n However, I still have some concerns about this manuscript.\n\nThe readability of Fig3 4 5 6 is poor.\n\nThe author does not present...
[ 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1 ]
[ 5, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1 ]
[ "nips_2021_zL1szwVKdwc", "RpyVs9IhDqO", "nips_2021_zL1szwVKdwc", "It4xdCzgwCU", "Xw5_vAHMY2o", "Xw5_vAHMY2o", "LCt8LCUGYhw", "PIlujFWZBto", "MRxSltREdv", "NnJjHaOga6", "nips_2021_zL1szwVKdwc", "MRxSltREdv", "MRxSltREdv", "z-xAF6o__OV", "NnJjHaOga6" ]
nips_2021_S5LLQZ-yUP
Joint Inference for Neural Network Depth and Dropout Regularization
Dropout regularization methods prune a neural network's pre-determined backbone structure to avoid overfitting. However, a deep model still tends to be poorly calibrated, with high confidence on incorrect predictions. We propose a unified Bayesian model selection method to jointly infer the most plausible network depth warranted by data, and perform dropout regularization simultaneously. In particular, to infer the network depth, we define a beta process over the number of hidden layers, which allows the depth to go to infinity. Layer-wise activation probabilities induced by the beta process modulate neuron activation via binary vectors of a conjugate Bernoulli process. Experiments across domains show that by adapting network depth and dropout regularization to data, our method achieves superior performance compared to state-of-the-art methods, with well-calibrated uncertainty estimates. In continual learning, our method enables neural networks to dynamically evolve their depths to accommodate incrementally available data beyond their initial structures, and alleviate catastrophic forgetting.
accept
The submission proposes a model and variational inference scheme to learn the depth and width of neural networks simultaneously. All reviewers were impressed with the strong empirical performance of the proposed method compared to multiple baselines in a suite of tasks. One reviewer raised a concern that some text in the experiments was copied directly from Antoran et al. (2020), which the authors promised to rewrite. Overall, this is a good paper and should be of interest to the community.
train
[ "65ZeU7b8Dxt", "-DEP8z9-SpU", "E6eS5e3LF2e", "kqrq1NJIxH", "pkqAr4d7gP7", "-NJAWkcOE-D", "Y4O_5MaZoj", "bqsN1I2al4", "wInCWqaCjs_", "9eSUIK7KE89", "aHuvz6tNMmu", "qXMaC2eNgKY", "MScopVhK5mo", "_Szvh0TWDV6" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " As suggested by the reviewer, we have evaluated additional values of S over the range {1, 5, 10, 15, 20} on two image datasets: SVHN and CIFAR10. [Figure 7](https://www.dropbox.com/s/8kw7j7kgylygzn7/figure_7.pdf?dl=0) shows that the validation errors decrease faster for larger values of S, but reach a similar lev...
[ -1, -1, -1, -1, 6, 7, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ -1, -1, -1, -1, 4, 3, -1, -1, -1, -1, -1, -1, 4, 2 ]
[ "-DEP8z9-SpU", "E6eS5e3LF2e", "bqsN1I2al4", "Y4O_5MaZoj", "nips_2021_S5LLQZ-yUP", "nips_2021_S5LLQZ-yUP", "9eSUIK7KE89", "aHuvz6tNMmu", "pkqAr4d7gP7", "-NJAWkcOE-D", "_Szvh0TWDV6", "MScopVhK5mo", "nips_2021_S5LLQZ-yUP", "nips_2021_S5LLQZ-yUP" ]
nips_2021_DqU-rIHy4Eh
Tractable Density Estimation on Learned Manifolds with Conformal Embedding Flows
Normalizing flows are generative models that provide tractable density estimation via an invertible transformation from a simple base distribution to a complex target distribution. However, this technique cannot directly model data supported on an unknown low-dimensional manifold, a common occurrence in real-world domains such as image data. Recent attempts to remedy this limitation have introduced geometric complications that defeat a central benefit of normalizing flows: exact density estimation. We recover this benefit with Conformal Embedding Flows, a framework for designing flows that learn manifolds with tractable densities. We argue that composing a standard flow with a trainable conformal embedding is the most natural way to model manifold-supported data. To this end, we present a series of conformal building blocks and apply them in experiments with synthetic and real-world data to demonstrate that flows can model manifold-supported distributions without sacrificing tractable likelihoods.
accept
The submission proposes to use conformal mappings to implement an injective flow that maps into a lower-dimensional embedding space. The proposed idea is original, well explained, and well motivated. While the experiments were lacking in the initial submission, the authors have properly engaged with the reviewers: they offered the appropriate additional experiments to improve the paper and address reviewers' concerns. I recommend this paper for acceptance.
train
[ "tTMNIw0eKTx", "AVGpHturpzw", "m-tENARNMZA", "zkjHBttnGr6", "xBbrrvUxny3", "9Ut1aXqfgO_", "QQ9CDrft58a", "YIBKaxHuWW", "cY2lkIt2dOM", "8yehU-RIBp3", "IDtnpf_64L", "b1Wn2HZdkwR", "Bcq1aYTfgx", "lMlJn0G98MZ", "dMSy2LooXj7", "K3h_BPV7rTJ" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " I would like to thank the authors for their response to my question. I will keep my original scores", " We’re glad you agree that exact computation of the density on the learned manifold is the main strength of the work. Thank you for your continued correspondence.\n\n> Could the author further describe how Fig...
[ -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, 3 ]
[ "lMlJn0G98MZ", "zkjHBttnGr6", "nips_2021_DqU-rIHy4Eh", "xBbrrvUxny3", "9Ut1aXqfgO_", "QQ9CDrft58a", "YIBKaxHuWW", "cY2lkIt2dOM", "8yehU-RIBp3", "b1Wn2HZdkwR", "nips_2021_DqU-rIHy4Eh", "IDtnpf_64L", "m-tENARNMZA", "K3h_BPV7rTJ", "nips_2021_DqU-rIHy4Eh", "nips_2021_DqU-rIHy4Eh" ]
nips_2021_TqvwWkdlLIk
The Limits of Optimal Pricing in the Dark
A ubiquitous learning problem in today’s digital market is, during repeated interactions between a seller and a buyer, how a seller can gradually learn optimal pricing decisions based on the buyer’s past purchase responses. A fundamental challenge of learning in such a strategic setup is that the buyer will naturally have incentives to manipulate his responses in order to induce more favorable learning outcomes for him. To understand the limits of the seller’s learning when facing such a strategic and possibly manipulative buyer, we study a natural yet powerful buyer manipulation strategy. That is, before the pricing game starts, the buyer simply commits to “imitate” a different value function by pretending to always react optimally according to this imitative value function. We fully characterize the optimal imitative value function that the buyer should imitate as well as the resultant seller revenue and buyer surplus under this optimal buyer manipulation. Our characterizations reveal many useful insights about what happens at equilibrium. For example, a seller with concave production cost will obtain essentially 0 revenue at equilibrium whereas the revenue for a seller with convex production cost is the Bregman divergence of her cost function between no production and certain production. Finally, and importantly, we show that a more powerful class of pricing schemes does not necessarily increase the seller’s revenue and, in fact, may be harmful to it. Our results not only lead to an effective prescriptive way for buyers to manipulate learning algorithms but also shed light on the limits of what a seller can really achieve when pricing in the dark.
accept
This paper deals with a novel question, provides a non-trivial analysis, and arrives at a surprisingly clean result. There were some concerns regarding the relation to Tang and Zheng and whether the learning angle justifies publication at NeurIPS, but both points were adequately addressed by the authors in their response. The review team is confident that this is a strong contribution to NeurIPS.
train
[ "ZOA80J9Le6j", "U_LHB33DFco", "vDpp3fxCPP", "4hlssRWTLGq", "t2tChvYNExG", "hLGda_eDKn3", "AAo8GMLvQE", "TBdJtK_WXT", "HfUlDExlsW6" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer Abq4,\n\nThank you again for your time and for appreciating our results. We believe we have responded properly to your major concern, however please let us know if you might have any additional questions. We would be happy to have a discussion. \n\nSincerely, The authors", " We thank the review...
[ -1, -1, -1, -1, -1, 6, 8, 7, 5 ]
[ -1, -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "vDpp3fxCPP", "hLGda_eDKn3", "HfUlDExlsW6", "TBdJtK_WXT", "AAo8GMLvQE", "nips_2021_TqvwWkdlLIk", "nips_2021_TqvwWkdlLIk", "nips_2021_TqvwWkdlLIk", "nips_2021_TqvwWkdlLIk" ]
nips_2021_8vXYx6d8Wc
No RL, No Simulation: Learning to Navigate without Navigating
Most prior methods for learning navigation policies require access to simulation environments, as they need online policy interaction and rely on ground-truth maps for rewards. However, building simulators is expensive (requires manual effort for each and every scene) and creates challenges in transferring learned policies to robotic platforms in the real-world, due to the sim-to-real domain gap. In this paper, we pose a simple question: Do we really need active interaction, ground-truth maps or even reinforcement-learning (RL) in order to solve the image-goal navigation task? We propose a self-supervised approach to learn to navigate from only passive videos of roaming. Our approach, No RL, No Simulator (NRNS), is simple and scalable, yet highly effective. NRNS outperforms RL-based formulations by a significant margin. We present NRNS as a strong baseline for any future image-based navigation tasks that use RL or Simulation.
accept
This paper presents a method for learning robot navigation policies while avoiding simulation and reinforcement learning. The paper was initially borderline: it was generally appreciated by reviewers, but one of the reviewers pointed out several weaknesses, which were discussed with the authors and among the reviewers. The main problems were the complete omission of a discussion of offline RL and the fact that some of the general responses to the reviewers ("please forget about this") were quite hand-wavy. However, the authors did provide convincing answers to many issues, in particular explaining that their setting is sufficiently different from RL (no reward, most importantly). Of course, this still means that these topics should have been covered in the related-work section, and I strongly urge the authors to modify the paper. It was also discussed that several baselines are missing, in particular work on topological memory, and the standard ImageGoal environment was still considered a restriction. Issues with presentation and reproducibility remained. However, there was a near consensus that the work is sufficiently novel and nicely executed. I recommend acceptance.
train
[ "WIG6zZFjRYV", "1UYAbisaT84", "AH_TQP_EPee", "84LBGHcY3Y8", "RqGtZOIGqTq", "micfuFB2oM2", "lANP2v5D_EC", "tcmuf8Y_iBX", "i_adEcGyrn", "ZE_5awa-glY", "m8FfeA8R21", "SuRVU-jQf-h", "rmPJ1RMf-US", "rxnrEcH5wiP", "A4Pved6eQly", "iH9sTgyAZcy", "oItUj3kmLBs" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ " The reviewer deeply appreciates the active responses from the authors. The vigorous defense of their work in their rebuttal (including the direct prods to the reviewer) is received in a healthy manner and perfectly understandable. Instead of rebutting these prods, the reviewer will try to bring a conclusive end t...
[ -1, 5, -1, -1, -1, 7, -1, -1, 8, -1, -1, 7, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, -1, -1, 4, -1, -1, 4, -1, -1, 5, -1, -1, -1, -1, -1 ]
[ "AH_TQP_EPee", "nips_2021_8vXYx6d8Wc", "84LBGHcY3Y8", "oItUj3kmLBs", "lANP2v5D_EC", "nips_2021_8vXYx6d8Wc", "1UYAbisaT84", "A4Pved6eQly", "nips_2021_8vXYx6d8Wc", "A4Pved6eQly", "iH9sTgyAZcy", "nips_2021_8vXYx6d8Wc", "nips_2021_8vXYx6d8Wc", "micfuFB2oM2", "i_adEcGyrn", "SuRVU-jQf-h", ...
nips_2021_yn267zYn8Eg
Analogous to Evolutionary Algorithm: Designing a Unified Sequence Model
Inspired by biological evolution, we explain the rationality of the Vision Transformer by analogy with the proven, practical Evolutionary Algorithm (EA) and derive that both of them have a consistent mathematical representation. Analogous to the dynamic local population in EA, we improve the existing transformer structure and propose a more efficient EAT model, and design task-related heads to deal with different tasks more flexibly. Moreover, we introduce the space-filling curve into the current vision transformer to sequence image data into a uniform sequential format. Thus we can design a unified EAT framework to address multi-modal tasks, separating the network architecture from the data format adaptation. Our approach achieves state-of-the-art results on the ImageNet classification task compared with recent vision transformer works while having fewer parameters and greater throughput. We further conduct multi-modal tasks to demonstrate the superiority of the unified EAT, e.g., Text-Based Image Retrieval, and our approach improves the rank-1 by +3.7 points over the baseline on the CSS dataset.
accept
This paper provides a view of the Transformer architecture as being related to that of an Evolutionary Algorithm. Based on this, the authors propose some improvements to the architecture and show convincing empirical results to demonstrate their efficacy and efficiency. The reviews and the discussion subsequent to the rebuttal support this view, and I trust the authors will incorporate the reviewers' comments into the final manuscript.
train
[ "II5S9m7djtf", "sCDX4wmQ0ED", "pK5QhqntqsD", "eJBJUI7PG4R", "nDkkIDtJdJ", "kF9Z4WdSYDQ", "D4muHcQHT-g", "qdpmwjytagJ" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this article, the author\n1. explain the transformer from the EA algorithm perspective\n2. uses the dynamic local population strategy used in EA algorithm and transfer the strategy into the transformer\n3. introduces the spatial filling curve to transform the image data\n4. achieves SOTA results on a number of ...
[ 6, -1, -1, -1, -1, 5, 6, 7 ]
[ 4, -1, -1, -1, -1, 4, 4, 5 ]
[ "nips_2021_yn267zYn8Eg", "kF9Z4WdSYDQ", "qdpmwjytagJ", "D4muHcQHT-g", "II5S9m7djtf", "nips_2021_yn267zYn8Eg", "nips_2021_yn267zYn8Eg", "nips_2021_yn267zYn8Eg" ]
nips_2021_jfd_GB546GJ
Improving Compositionality of Neural Networks by Decoding Representations to Inputs
In traditional software programs, it is easy to trace program logic from variables back to input, apply assertion statements to block erroneous behavior, and compose programs together. Although deep learning programs have demonstrated strong performance on novel applications, they sacrifice many of the functionalities of traditional software programs. With this as motivation, we take a modest first step towards improving deep learning programs by jointly training a generative model to constrain neural network activations to "decode" back to inputs. We call this design a Decodable Neural Network, or DecNN. Doing so enables a form of compositionality in neural networks, where one can recursively compose DecNN with itself to create an ensemble-like model with uncertainty. In our experiments, we demonstrate applications of this uncertainty to out-of-distribution detection, adversarial example detection, and calibration -- while matching standard neural networks in accuracy. We further explore this compositionality by combining DecNN with pretrained models, where we show promising results indicating that neural networks can be regularized to avoid using protected features.
accept
UPDATE: The revision has been reviewed and the paper is officially accepted. ---- After extensive discussions among SACs, ACs, and the program chairs, we have decided to conditionally accept this paper. The primary concerns were around over-claiming, the interpretation of results, and the analogy to programming (which was viewed as a distraction from the main contributions of the paper). There was some debate around whether these issues constitute a "fatal flaw" that should prevent the paper from being published at NeurIPS, but all agreed that these issues need to be addressed. We hope that the authors will take into account feedback from the reviewers and make the changes that were promised as part of the rebuttal. Additionally, in order to be accepted, the following changes must be made in the revision: Introduction: L52: "This joint training optimizes the classifier not only to be highly performant, but constraints its activations to a manifold that the generative model is able to decode. Consequently, the networks entire computation is constrained to a 'decodable' vector space... " This is misleading and should be rephrased or removed; just because the activations at a given layer can decode the input doesn't mean that that information is being used by the downstream layers to actually make decisions. L59: This paragraph should focus on what the paper actually demonstrates empirically; essentially remove the first sentence. Section 3: Highlight that equation (4) doesn't require that the reconstructed images $g_\phi(h_l)$ actually produce the same output y, so the reconstruction may not actually be highlighting the features that were important for f to make its decision in the first place. Section 4: This section should be significantly rewritten to avoid making claims unsupported by the evidence. Specifically: L116: "In practice, this theory leads us to expect 'useless' information..." This would be true if Equation (4) wasn't rewarding the network for keeping the useless information to help support the decoding task. L119: This paragraph is misleading and should be removed. While it is true that you can visualize the information at each layer, you cannot actually tell which of that information is actually being relied upon to make the decision. L130: This is the only paragraph in this section that states an empirical fact; the whole section should focus more on what this experiment objectively shows, but avoid making claims not supported by the observations made in this experiment. L138: This paragraph again misrepresents what can really be inferred from the generated images. Section 6. The analogy with assertions is misleading as it implies a level of guarantees that is not supported by the evidence. The section should focus more closely on what the experiments actually demonstrate for this application. The attractiveness task should also be replaced with the "bald vs not bald" and "bearded vs not bearded" experiments presented in the rebuttal. Abstract: The abstract should be consistent with all the changes made to the rest of the paper. ---- The original meta-review for this copy of the paper follows: I am really torn about this paper. On the one hand, I think this paper presents an intriguing and original idea: train a generative model to reconstruct the input from the intermediate layers of the network, and then feed those reconstructed inputs either to the same network or to other networks that perform related tasks, possibly even recursively.
The paper then presents empirical evidence that these compositions can be engineered to have some useful properties, such as identifying when their input is out of distribution, or avoiding the use of certain types of information in a classification. On the other hand, the paper as written suffers from extreme over-claiming and interprets many of its results in a way that I don't think is warranted by the data. The gap between the clear contributions of the paper and the narrative that the authors built around them was pointed out by multiple reviewers. While the authors did significant work through the rebuttal period to address many of the reviewer concerns and made some significant promises about how they were going to rewrite the paper, I think the issues are significant enough that a fresh version of this paper that incorporates all the proposed changes needs to be reviewed by a fresh panel. The issues with this paper are too big to accept it without further review. I now describe some of the issues that I found most questionable about this paper. The paper is organized around an analogy to traditional programming and program debugging. The claim is that the generative models allow us to interpret what each of the layers in the network means so that they can be debugged, probed, asserted, etc. However, the analogy doesn't really stand up to scrutiny, and in arguing for this interpretation of its results, the paper fails to explore alternative explanations for the observed behaviors. For example, consider section 4. The objective empirical observation is that for this set of images and image classification tasks, when an image is misclassified, the reconstructed images progressively look more and more like the incorrect label. This is interesting, but the authors try to claim something much bigger; they claim that these images can allow one to understand "what features a neural network is attending to in making a prediction, or to understand what features make it susceptible to making an error". I find this highly unlikely. In fact, the loss function in Equation 4 almost guarantees that this would not be the case. For example, suppose that in the case of the handbag, the neural network actually learns that a handful of pixels making up the handle of a handbag are what characterizes it as a handbag. An image that allows one to understand what features a neural network is attending to would show just the handle of the handbag and discard everything else if that's what the network is using for its classification. However, equation 4 rewards the network for maintaining enough information at each layer to reconstruct the whole handbag, even if most of that information is not actually being used to make the prediction, only for the reconstruction. In fact, the handle of the handbag has very few pixels, so the reconstruction network would not be significantly penalized if it failed to display the handle; there is nothing in equation 4 that actually requires the reconstruction to focus on the features that are important for classification. There are similar issues with each of the subsequent sections. They are motivated and explained in terms of this analogy to programming, but the analogy just doesn't hold up to scrutiny. I think section 6 is particularly problematic because the analogy with assertions implies a level of guarantees that is simply not supported by the evidence.
Now, I actually think this section is the most interesting one in the paper, and I think it is unfortunate that the authors chose such a bad example to make their point. However, I think a lot more evidence is needed to show that this approach can be used to reliably prevent the network from using a protected characteristic when making a decision (that would be a great paper by itself). For example, Equation 8 forces the generative model to optimize three criteria: faithfulness to the input image, randomness with respect to the protected characteristic, and ability to recreate the correct classification output. How these competing goals are balanced out will depend on some hyperparameter tuning that may determine the extent to which the resulting network actually satisfies the desired constraint. For example, a protected characteristic that affects a lot of pixels in the image will increase the cost of adding diversity relative to the goal of being faithful to the input image and may require different hyperparameters from one that only occupies a few pixels. Overall I think there is an important and novel contribution in this paper, and I do believe the authors understand some of the issues with the paper as submitted and have demonstrated a willingness to rewrite the paper based on the criticisms in the reviews. This is reflected in the high scores given to the paper by all the reviewers. However, multiple reviewers raised the same concern highlighted above, and the Area Chair feels that this concern is quite substantial and requires a major revision to address.
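For readers parsing the critique above, the core construction being discussed — a classifier whose intermediate activations are additionally trained to be decodable back into the input — can be rendered as a minimal sketch. Everything below (the module names, the choice of a single tapped layer, the loss weighting `lam`) is an illustrative assumption inferred from the meta-review's description, not the paper's actual model or its Equation (4):

```python
import torch
import torch.nn as nn

class DecodableClassifier(nn.Module):
    """Classifier with a decoder head attached to an intermediate layer."""
    def __init__(self, encoder, head, decoder):
        super().__init__()
        self.encoder, self.head, self.decoder = encoder, head, decoder

    def forward(self, x):
        h = self.encoder(x)                  # intermediate activations h_l
        return self.head(h), self.decoder(h)

def joint_loss(model, x, y, lam=1.0):
    logits, x_hat = model(x)
    ce = nn.functional.cross_entropy(logits, y)   # task loss
    rec = nn.functional.mse_loss(x_hat, x)        # reconstruct the input from h_l
    # Note: nothing forces x_hat to contain only the features the classifier
    # relies on -- precisely the objection raised in the meta-review.
    return ce + lam * rec
```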
test
[ "mQ2NXOWp5Wu", "FniH6NO_hOi", "hD7JPGaPPQA", "VASWYi-c_Fi", "LxgfW1A08_", "rQ5krbEdTwG", "_BJKGqGcTVj", "N2gLV5U8Ed9", "CT3fKaGlXn7", "qd1T0KZ2C9b", "Bg3inCf8ga4", "1ndiurbzuaH", "1i8xjWExhpU", "9kNojSOBHLF", "Eww1Iac1rx", "_FpDPHqKDZy", "LCqR7sxefn", "Kenfkjf3YEm" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ " Thank you for the positive notes! \n\nWe appreciate the time you and the other reviewers took to provide helpful comments and suggestions. We think the new experiments and findings have made the paper better and we are excited to incorporate them in the final version.", "This paper argues for inverting intermed...
[ -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ "hD7JPGaPPQA", "nips_2021_jfd_GB546GJ", "_FpDPHqKDZy", "LCqR7sxefn", "rQ5krbEdTwG", "Eww1Iac1rx", "CT3fKaGlXn7", "1ndiurbzuaH", "1i8xjWExhpU", "9kNojSOBHLF", "nips_2021_jfd_GB546GJ", "LCqR7sxefn", "nips_2021_jfd_GB546GJ", "nips_2021_jfd_GB546GJ", "Kenfkjf3YEm", "FniH6NO_hOi", "Bg3inC...
nips_2021_N6ubGJ2lQf
The Hardness Analysis of Thompson Sampling for Combinatorial Semi-bandits with Greedy Oracle
Fang Kong, Yueran Yang, Wei Chen, Shuai Li
accept
While the review team recognizes the importance of the problem and likes the direction that the paper took, they also felt that the paper is not yet ready for publication. The main concerns are: * model is not well motivated. In some problems (like OIM) the feedback assumed here is inconsistent with what is observed in practice (nodes vs edges). * the regret bounds don't directly reflect the performance of the algorithm * lower bound is too stylized and may not say much about the actual hardness of the problem While the reviewers appreciate the author responses on those issues, they were not sufficient to significantly change the sentiment.
val
[ "yZ0VmDwLZWN", "yrsPIIRKp6", "nuLG6aS7SPA", "QI1v4r2smj", "vEcbc967puf", "INngza3TOfu", "8LAuhB8kUdQ", "1WpsWDOyfTn", "uvmLHfVrf2" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper studies the combinatorial semi-bandit problem. It provides the first analysis of Thompson sampling (TS) with the greedy oracle. It shows instances that the TS with Gaussian priors suffers $\\Omega(\\log T / \\Delta^2)$ regret from. It proposes a modified TS with Beta priors and shows that the proposed al...
[ 5, -1, -1, -1, -1, -1, 4, 4, 7 ]
[ 3, -1, -1, -1, -1, -1, 3, 5, 3 ]
[ "nips_2021_N6ubGJ2lQf", "1WpsWDOyfTn", "1WpsWDOyfTn", "8LAuhB8kUdQ", "yZ0VmDwLZWN", "uvmLHfVrf2", "nips_2021_N6ubGJ2lQf", "nips_2021_N6ubGJ2lQf", "nips_2021_N6ubGJ2lQf" ]
nips_2021_zmVumB1Flg
Universal Semi-Supervised Learning
Universal Semi-Supervised Learning (UniSSL) aims to solve the open-set problem where both the class distribution (i.e., class set) and feature distribution (i.e., feature domain) are different between the labeled dataset and the unlabeled dataset. Such a problem seriously hinders the real-world deployment of classical SSL. Different from existing SSL methods targeting the open-set problem, which study only one particular scenario of class distribution mismatch and ignore the feature distribution mismatch, we consider a more general case where a mismatch exists in both class and feature distribution. In this case, we propose a ''Class-shAring data detection and Feature Adaptation'' (CAFA) framework which requires no prior knowledge of the class relationship between the labeled dataset and unlabeled dataset. Particularly, CAFA utilizes a novel scoring strategy to detect the data in the shared class set. Then, it conducts domain adaptation to fully exploit the value of the detected class-sharing data for better semi-supervised consistency training. Extensive experiments on several benchmark datasets show the effectiveness of our method in tackling open-set problems.
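The abstract does not spell out CAFA's scoring rule, so the following is only a generic sketch of the loop it describes: score each unlabeled sample for membership in the shared class set, then weight the semi-supervised consistency loss by that score. The particular score used here (softmax confidence times a domain-similarity term) and all names are illustrative assumptions, not CAFA's actual design:

```python
import torch
import torch.nn.functional as F

def class_sharing_scores(model, domain_disc, x_unlabeled):
    """Score how likely each unlabeled sample is to belong to the shared
    class set (illustrative heuristic, not CAFA's actual strategy)."""
    with torch.no_grad():
        probs = F.softmax(model(x_unlabeled), dim=1)
        confidence = probs.max(dim=1).values                       # class-set match
        similarity = torch.sigmoid(domain_disc(x_unlabeled)).squeeze(1)  # domain match
    return confidence * similarity

def weighted_consistency_loss(model, x_weak, x_strong, scores, tau=0.95):
    """Consistency training that down-weights likely out-of-class data."""
    with torch.no_grad():
        pseudo = F.softmax(model(x_weak), dim=1)
        conf, targets = pseudo.max(dim=1)
    loss = F.cross_entropy(model(x_strong), targets, reduction="none")
    mask = (conf >= tau).float()
    return (scores * mask * loss).mean()
```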
accept
This paper studies the universal semi-supervised learning problem. In this setting, both the class distribution and the feature distribution mismatch between labeled and unlabeled data. The authors propose an algorithm, CAFA, and demonstrate its effectiveness on several benchmark data sets. The reviewers agree that the paper provides new insight on this problem and that the setting is of practical importance. Good paper. I recommend accept.
train
[ "Y-voBCyaduS", "gKb7T6FRnDn", "ArmI5LJJBSP", "qTrhhmOQGDx", "3b81OWhkv9j", "R5CScZWGJyj", "J3cXIO7gF0-", "QE81khj0AiW", "dpthTcWzTsC", "KrSlzZcDl1" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes an approach for universal semi-supervised setting that handles both class distribution and feature distribution mismatch. The key idea of the CAFA approach lies in introducing a scoring mechanism that identifies shared classes in the labeled and unlabeled data. The objective function in CAFA con...
[ 6, 7, -1, -1, -1, -1, -1, -1, 7, 9 ]
[ 3, 5, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_zmVumB1Flg", "nips_2021_zmVumB1Flg", "Y-voBCyaduS", "J3cXIO7gF0-", "Y-voBCyaduS", "KrSlzZcDl1", "dpthTcWzTsC", "gKb7T6FRnDn", "nips_2021_zmVumB1Flg", "nips_2021_zmVumB1Flg" ]
nips_2021_x4zs7eC-BsI
Improving Deep Learning Interpretability by Saliency Guided Training
Saliency methods have been widely used to highlight important input features in model predictions. Most existing methods use backpropagation on a modified gradient function to generate saliency maps. Thus, noisy gradients can result in unfaithful feature attributions. In this paper, we tackle this issue and introduce a {\it saliency guided training} procedure for neural networks to reduce noisy gradients used in predictions while retaining the predictive performance of the model. Our saliency guided training procedure iteratively masks features with small and potentially noisy gradients while maximizing the similarity of model outputs for both masked and unmasked inputs. We apply the saliency guided training procedure to various synthetic and real data sets from computer vision, natural language processing, and time series across diverse neural architectures, including Recurrent Neural Networks, Convolutional Networks, and Transformers. Through qualitative and quantitative evaluations, we show that the saliency guided training procedure significantly improves model interpretability across various domains while preserving its predictive performance.
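The training step described above maps closely onto code: compute input gradients, mask the k lowest-saliency features, and penalize divergence between the model's outputs on masked and unmasked inputs. In the sketch below, the zero-masking value, the direction of the KL term, and the weighting `lam` are assumptions; the paper may use different choices:

```python
import torch
import torch.nn.functional as F

def saliency_guided_step(model, x, y, k, lam=1.0):
    # 1) Saliency: gradient of the loss w.r.t. the input features.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]

    # 2) Mask the k features with the smallest |gradient| per sample.
    idx = grad.abs().flatten(1).topk(k, dim=1, largest=False).indices
    x_masked = x.detach().clone().flatten(1)
    x_masked.scatter_(1, idx, 0.0)           # zero out low-saliency features
    x_masked = x_masked.view_as(x)

    # 3) Task loss + similarity of masked/unmasked output distributions.
    logits = model(x.detach())
    logits_masked = model(x_masked)
    kl = F.kl_div(F.log_softmax(logits_masked, dim=1),
                  F.softmax(logits, dim=1), reduction="batchmean")
    return F.cross_entropy(logits, y) + lam * kl
```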
accept
The paper proposes a new procedure for training neural networks that achieves improved interpretability by masking the bottom k input gradients, helping address the noise in saliency map techniques. The reviewers found the paper well motivated, liked the experiments (especially the fact that they’re conducted on many domains), and found the method novel. The reviewers appreciated the thoroughness of the rebuttal, both in terms of experiments and responses, clarifying the points around related works, writing and additional discussions. I therefore recommend acceptance. I encourage the authors to integrate all the suggestions from the reviewers into the camera ready.
train
[ "5JTYr5mrQCj", "BZy95sSDhJy", "hsG2oSfBDmA", "AGSDYGuSeTf", "Eltb837xkvM", "XOueSQJFPa", "0vF8Gd2ZWXY", "y3beMOJO9o3", "jtLxHry1us1", "U2NQ4eNo8B6", "Z40kpbGTcXt", "Ema6ht2uogm" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. Using the gradient of the entire input as a sensitivity map has been done in [1] (section 3.1 Class Saliency Extraction). Then, it was adopted by other works including [2-4]. It has been proven by Ancona et al. [5] that complex gradient-based attribution methods [2-4] can be reformul...
[ -1, -1, 6, -1, -1, 6, -1, -1, -1, -1, 5, 8 ]
[ -1, -1, 4, -1, -1, 4, -1, -1, -1, -1, 3, 4 ]
[ "BZy95sSDhJy", "0vF8Gd2ZWXY", "nips_2021_x4zs7eC-BsI", "y3beMOJO9o3", "U2NQ4eNo8B6", "nips_2021_x4zs7eC-BsI", "Z40kpbGTcXt", "hsG2oSfBDmA", "XOueSQJFPa", "Ema6ht2uogm", "nips_2021_x4zs7eC-BsI", "nips_2021_x4zs7eC-BsI" ]
nips_2021_f0_tkoEJV88
SurvITE: Learning Heterogeneous Treatment Effects from Time-to-Event Data
We study the problem of inferring heterogeneous treatment effects from time-to-event data. While both the related problems of (i) estimating treatment effects for binary or continuous outcomes and (ii) predicting survival outcomes have been well studied in the recent machine learning literature, their combination -- albeit of high practical relevance -- has received considerably less attention. With the ultimate goal of reliably estimating the effects of treatments on instantaneous risk and survival probabilities, we focus on the problem of learning (discrete-time) treatment-specific conditional hazard functions. We find that unique challenges arise in this context due to a variety of covariate shift issues that go beyond a mere combination of well-studied confounding and censoring biases. We theoretically analyse their effects by adapting recent generalization bounds from domain adaptation and treatment effect estimation to our setting and discuss implications for model design. We use the resulting insights to propose a novel deep learning method for treatment-specific hazard estimation based on balancing representations. We investigate performance across a range of experimental settings and empirically confirm that our method outperforms baselines by addressing covariate shifts from various sources.
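As a reference point for the hazard-based setup described above, the discrete-time quantities relate in the standard way; the notation below is assumed for illustration and need not match the paper's:

```latex
% Treatment-specific discrete-time hazard, given covariates x and treatment a:
h_a(t \mid x) = \Pr(T = t \mid T \ge t, X = x, A = a),
% the survival function it induces:
S_a(t \mid x) = \Pr(T > t \mid X = x, A = a) = \prod_{k \le t} \bigl(1 - h_a(k \mid x)\bigr),
% and a heterogeneous treatment effect on survival at horizon t:
\tau(t, x) = S_1(t \mid x) - S_0(t \mid x).
```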
accept
The authors formalize heterogeneous treatment effect estimation from time-to-event data under empirical risk minimization. This is a natural extension of the prior work done for continuous and binary outcomes, with added challenges due to the time-to-event data such as censoring bias. The paper being a straightforward follow-the-recipe type of paper doesn't detract from its quality. The main thing I'd suggest would be to complete the references in the related work for ML methods to include things like: @inproceedings{avati2020countdown, title={Countdown regression: sharp and calibrated survival predictions}, author={Avati, Anand and Duan, Tony and Zhou, Sharon and Jung, Kenneth and Shah, Nigam H and Ng, Andrew Y}, booktitle={Uncertainty in Artificial Intelligence}, pages={145--155}, year={2020}, organization={PMLR} } and @inproceedings{ranganath2016deep, title={Deep survival analysis}, author={Ranganath, Rajesh and Perotte, Adler and Elhadad, No{\'e}mie and Blei, David}, booktitle={Machine Learning for Healthcare Conference}, pages={101--114}, year={2016}, organization={PMLR} }
train
[ "_1V9TrAJ0my", "5iY5GPQeiE", "903awm2RU96", "GQfsMJC9BBw", "KLaPvNVCci6", "9zSAz7ZftFS", "rb7K-dC0Egr", "EFCNnqjNARR", "1SirhJALc4", "cv2DAGq9MwL", "vB90SYwAB99", "q67IP4o4k1e", "W272jCwnkPl", "031_u33OiEE" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " Dear Reviewer 4Jiu,\n\nWe are sincerely grateful for your time and energy in the review process. In light of your satisfaction with our response, we wonder whether the reviewer would kindly consider revising the rating.\n\nThank you,\nPaper 4461 Authors", " Thank you for the very detailed answer.\n\nI do not h...
[ -1, -1, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "5iY5GPQeiE", "vB90SYwAB99", "nips_2021_f0_tkoEJV88", "nips_2021_f0_tkoEJV88", "GQfsMJC9BBw", "031_u33OiEE", "903awm2RU96", "GQfsMJC9BBw", "031_u33OiEE", "903awm2RU96", "031_u33OiEE", "GQfsMJC9BBw", "903awm2RU96", "nips_2021_f0_tkoEJV88" ]
nips_2021_CsV-Gms_JKy
Optimal Rates for Nonparametric Density Estimation under Communication Constraints
We consider density estimation for Besov spaces when the estimator is restricted to use only a limited number of bits about each sample. We provide a noninteractive adaptive estimator which exploits the sparsity of wavelet bases, along with a simulate-and-infer technique from parametric estimation under communication constraints. We show that our estimator is nearly rate-optimal by deriving minimax lower bounds that hold even when interactive protocols are allowed. Interestingly, while our wavelet-based estimator is almost rate-optimal for Sobolev spaces as well, it is unclear whether the standard Fourier basis, which arises naturally for those spaces, can be used to achieve the same performance.
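For context on the construction sketched in the abstract, the wavelet expansion underlying such estimators has the standard form below (a textbook identity, not the paper's specific simulate-and-infer protocol); sparsity of the detail coefficients for Besov densities is what makes few-bit encodings of each sample feasible:

```latex
% Wavelet expansion of a density f at coarsest resolution j_0:
f \;=\; \sum_{k} \alpha_{j_0 k}\, \phi_{j_0 k} \;+\; \sum_{j \ge j_0} \sum_{k} \beta_{j k}\, \psi_{j k},
\qquad
\alpha_{j_0 k} = \int f\, \phi_{j_0 k}, \qquad \beta_{j k} = \int f\, \psi_{j k}.
```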
accept
The paper studies estimation of densities in Besov spaces in a distributed interactive setting with communication constraints. The obtained estimator is nearly minimax optimal. This is a nice contribution to the literature on distributed adaptive density estimation. I would encourage the authors to revise the paper to include the reviewers' comments into the final version of the manuscript.
train
[ "-Xi-CvB-OTE", "qD4S-ixo67r", "SD_3B4gV3RG", "Hy4lvqnq76R", "fMEpY47AvXR", "iJ1eGwfuyQs", "3cvXBNrzLt6", "3d8IT5W-afB", "tTxVUc1wpq", "mawwBS93UhB", "gfLzMnHSxZ", "B36wD42QS6g" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " ... we apologize for not having responded point by point to the reviewer's suggestions. However, as mentioned in our response, we read\nthe reviewer's suggestions, and agree -- and (as stated) will take them into account and implement said suggestions. As there were no\nquestions (only suggestions), we didn't fee...
[ -1, 7, -1, 6, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ -1, 3, -1, 3, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "qD4S-ixo67r", "nips_2021_CsV-Gms_JKy", "fMEpY47AvXR", "nips_2021_CsV-Gms_JKy", "iJ1eGwfuyQs", "mawwBS93UhB", "qD4S-ixo67r", "B36wD42QS6g", "gfLzMnHSxZ", "Hy4lvqnq76R", "nips_2021_CsV-Gms_JKy", "nips_2021_CsV-Gms_JKy" ]
nips_2021_bm1Mrc3WHSe
Rank Overspecified Robust Matrix Recovery: Subgradient Method and Exact Recovery
Lijun Ding, Liwei Jiang, Yudong Chen, Qing Qu, Zhihui Zhu
accept
There is a consensus among the reviewers that this paper presents significant results of interest to the conference.
test
[ "dHVuYk6RGur", "SvmClaKLjoM", "bk5PcYrQkLV", "tZM1cKdtRXt", "-JeqsnHibz5", "bg8DsXu2g7q", "yE73LB0JPeh", "YkZw_a0DIcd", "y12Uhdrx1KO" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear authors,\n\nThank you for your detailed response and discussion! After reading the other reviews and author responses, I have decided to raise my score and recommend acceptance. I think that the paper provides a nice contribution in rigorously analyzing this non-smooth and overparameterized formulation of ro...
[ -1, 7, -1, -1, -1, -1, 6, 7, 7 ]
[ -1, 4, -1, -1, -1, -1, 3, 3, 3 ]
[ "-JeqsnHibz5", "nips_2021_bm1Mrc3WHSe", "YkZw_a0DIcd", "y12Uhdrx1KO", "SvmClaKLjoM", "yE73LB0JPeh", "nips_2021_bm1Mrc3WHSe", "nips_2021_bm1Mrc3WHSe", "nips_2021_bm1Mrc3WHSe" ]
nips_2021_2NJstikrGfP
Improving Computational Efficiency in Visual Reinforcement Learning via Stored Embeddings
Recent advances in off-policy deep reinforcement learning (RL) have led to impressive success in complex tasks from visual observations. Experience replay improves sample-efficiency by reusing experiences from the past, and convolutional neural networks (CNNs) process high-dimensional inputs effectively. However, such techniques demand high memory and computational bandwidth. In this paper, we present Stored Embeddings for Efficient Reinforcement Learning (SEER), a simple modification of existing off-policy RL methods, to address these computational and memory requirements. To reduce the computational overhead of gradient updates in CNNs, we freeze the lower layers of CNN encoders early in training due to early convergence of their parameters. Additionally, we reduce memory requirements by storing the low-dimensional latent vectors for experience replay instead of high-dimensional images, enabling an adaptive increase in the replay buffer capacity, a useful technique in constrained-memory settings. In our experiments, we show that SEER does not degrade the performance of RL agents while significantly saving computation and memory across a diverse set of DeepMind Control environments and Atari games.
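Both ideas in the abstract translate directly into code: stop updating the convolutional encoder once its parameters have converged, and from then on store low-dimensional latents rather than raw images in the replay buffer. A minimal sketch, where the freeze step `t_freeze` and the buffer interface are illustrative assumptions:

```python
import torch

def maybe_freeze_encoder(encoder, step, t_freeze):
    """After t_freeze steps, stop updating the convolutional encoder."""
    if step == t_freeze:
        for p in encoder.parameters():
            p.requires_grad = False

def store_transition(buffer, encoder, obs, action, reward, next_obs, frozen):
    """Once frozen, store low-dimensional latents instead of raw images,
    allowing a much larger replay buffer under the same memory budget."""
    if frozen:
        with torch.no_grad():
            obs, next_obs = encoder(obs), encoder(next_obs)
    buffer.add(obs, action, reward, next_obs)
```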
accept
An interesting approach to addressing the problem of dealing with vision input in the reinforcement learning setting. The presented approach is more computationally efficient than its counterparts. The algorithm is also flexible enough and can be applied in the on-policy context, even though in the paper the authors focus on the off-policy setting. Well-written paper. All the reviewers see the importance of the proposed method in dealing with the notoriously difficult problem of computationally-efficient vision-based RL.
train
[ "SgPkkVHJwc", "JjuHfGPO_V", "xu-MKpao7I", "mSPqenAjno", "XgVYoRZtoON", "EWLd8TSwWtM", "a1UAxxIMyM0" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Stored Embeddings for Efficient Reinforcement learning (SEER) is a method to reduce the computational cost of value-based off-policy reinforcement learning, e.g. Q-learning, in settings with pixel-based observations. It uses a convolutional encoder whose lower layers are frozen early in agent training, as there ha...
[ 6, -1, -1, -1, -1, 7, 6 ]
[ 4, -1, -1, -1, -1, 4, 5 ]
[ "nips_2021_2NJstikrGfP", "nips_2021_2NJstikrGfP", "a1UAxxIMyM0", "SgPkkVHJwc", "EWLd8TSwWtM", "nips_2021_2NJstikrGfP", "nips_2021_2NJstikrGfP" ]
nips_2021_oErdeq9ajjX
Learning Generalized Gumbel-max Causal Mechanisms
To perform counterfactual reasoning in Structural Causal Models (SCMs), one needs to know the causal mechanisms, which provide factorizations of conditional distributions into noise sources and deterministic functions mapping realizations of noise to samples. Unfortunately, the causal mechanism is not uniquely identified by data that can be gathered by observing and interacting with the world, so there remains the question of how to choose causal mechanisms. In recent work, Oberst & Sontag (2019) propose Gumbel-max SCMs, which use Gumbel-max reparameterizations as the causal mechanism due to an appealing counterfactual stability property. However, the justification requires appealing to intuition. In this work, we instead argue for choosing a causal mechanism that is best under a quantitative criterion such as minimizing variance when estimating counterfactual treatment effects. We propose a parameterized family of causal mechanisms that generalize Gumbel-max. We show that they can be trained to minimize counterfactual effect variance and other losses on a distribution of queries of interest, yielding lower-variance estimates of counterfactual treatment effect than fixed alternatives, also generalizing to queries not seen at training time.
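The Gumbel-max mechanism referenced above is concrete enough to sketch: a categorical outcome is the argmax of log-probabilities plus Gumbel noise, and a counterfactual reuses the same exogenous noise under intervened probabilities. The snippet below samples a joint (factual, counterfactual) pair; note that conditioning on an already-observed factual outcome would additionally require posterior sampling of the Gumbels, which this sketch elides:

```python
import numpy as np

def gumbel_max_counterfactual(p_factual, p_counterfactual, rng):
    """Sample a factual outcome and its counterfactual under the
    Gumbel-max causal mechanism, sharing the same exogenous noise."""
    g = rng.gumbel(size=len(p_factual))                   # shared noise source
    y = int(np.argmax(np.log(p_factual) + g))             # factual outcome
    y_cf = int(np.argmax(np.log(p_counterfactual) + g))   # counterfactual outcome
    return y, y_cf

rng = np.random.default_rng(0)
y, y_cf = gumbel_max_counterfactual([0.7, 0.2, 0.1], [0.2, 0.7, 0.1], rng)
```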
accept
The authors study couplings between discrete distributions that allow one to infer counterfactuals. Overall, this is an important topic and in many applications, counterfactuals are desired. An inherent difficulty of inferring counterfactuals is the need to make unverifiable assumptions. The authors argue for choosing a causal mechanism that is best under a quantitative criterion such as minimizing the variance when estimating causal effects. The “minimum variance” assumption seems hard to reason about in practice. In contrast, the monotonicity assumption (Pearl, 2000) seems much easier to defend. In the discussion, the authors write that the minimum variance condition is attractive for average treatment effect estimation (see the authors’ answer to a comment by ybLo). This is not clear to me, since average treatment effect estimation does not require inferring counterfactuals. More generally, I am hesitant to deal with non-identifiability by “favoring statistical properties” (response of the authors to gci3 and comment by ybLo). Reviewer XBKm, gci3, and I found the paper hard to read.
train
[ "vujIK7LM43m", "Fu0xgHKy1hI", "6vBhVe_vRFR", "8LFw7ZpR15F", "7654Ln9iw2B", "eMObqssaBJE" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a parameterized family of causal mechanisms that generalize Gumbel-max, which can be trained to minimize counterfactual-effect variance. It is motivated by the fact that the causal mechanism is not uniquely identifiable with Gumbel-max SCMs.\n Due to the nonidentifiability, the authors instead...
[ 5, -1, -1, -1, 5, 3 ]
[ 2, -1, -1, -1, 3, 3 ]
[ "nips_2021_oErdeq9ajjX", "7654Ln9iw2B", "eMObqssaBJE", "vujIK7LM43m", "nips_2021_oErdeq9ajjX", "nips_2021_oErdeq9ajjX" ]
nips_2021_i2bTx7ZWFfI
Bandit Learning with Delayed Impact of Actions
Wei Tang, Chien-Ju Ho, Yang Liu
accept
This paper introduces a novel bandit setting which is certainly of contemporary interest; the delayed impact of actions can well be an important consideration in applications like police patrol policies, and getting this right is important in light of the potential for negative societal impact. The reviewers were uniformly positive on this work. The authors clearly explain why direct application of previous approaches will not achieve the optimal regret, and they successfully devise an algorithm based on a simple intuition (repeatedly pulling the same meta-arm until rewards stabilize) which provably achieves the optimal regret. As this setting is new, another important contribution is proving a lower bound, which allows the authors to claim optimality. I would like to briefly mention a few minor weaknesses, in the hopes that the camera-ready version of the paper can be stronger. First, essentially nothing is said in the main text about how the lower bounds are proved, and I agree with reviewer ZiMd that the paper's main text would benefit from including the intuition behind the lower bound. Also, similar to reviewer ZiMd, I found the assumption that it is the probability of choosing an arm that impacts the evolution of the arm's reward odd in some situations. The authors' response to this point, about security games where the probabilities are revealed, does work, but this only makes sense for certain applications. Therefore, I suggest that the authors provide more justification (or at least discussion) of why it is the probability (and not the pulling of an arm itself) which influences the evolution of an arm’s reward. Overall, this paper provides a very strong contribution and should be of interest not only to researchers in online learning but also the broader community given the motivation for this work. This paper would be a welcome contribution to the NeurIPS 2021 proceedings.
train
[ "xOscAunMYEo", "iaPmzIbzNur", "Y4dtoU2XgKl", "-ZflqO3A11L", "A7WVOgoRYax", "ll_dWG-amO_", "8BAdFOEQUHA", "dLuX9EB0Emf", "LQeO5wfxV2", "UmjG3fAhhie" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors formulate and solve the problem of resource allocation to K groups under the bandit setting where the impact of actions perseveres over time and affects the corresponding reward function. They key idea is to realize that deploying the policy for a certain period of time helps us estimate its utility an...
[ 7, 7, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ 4, 4, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "nips_2021_i2bTx7ZWFfI", "nips_2021_i2bTx7ZWFfI", "xOscAunMYEo", "iaPmzIbzNur", "LQeO5wfxV2", "UmjG3fAhhie", "dLuX9EB0Emf", "nips_2021_i2bTx7ZWFfI", "nips_2021_i2bTx7ZWFfI", "nips_2021_i2bTx7ZWFfI" ]
nips_2021_ui0sz9Y2x9X
A Stochastic Newton Algorithm for Distributed Convex Optimization
We propose and analyze a stochastic Newton algorithm for homogeneous distributed stochastic convex optimization, where each machine can calculate stochastic gradients of the same population objective, as well as stochastic Hessian-vector products (products of an independent unbiased estimator of the Hessian of the population objective with arbitrary vectors), with many such stochastic computations performed between rounds of communication. We show that our method can reduce the number and frequency of required communication rounds compared to existing methods, without hurting performance, by proving convergence guarantees for quasi-self-concordant objectives (e.g., logistic regression), alongside empirical evidence.
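The key primitive assumed above — a stochastic Hessian-vector product — costs about as much as a gradient, and for logistic regression (the quasi-self-concordant example mentioned) it has a simple closed form. Below is a minimal single-machine sketch of the HVP together with an inexact Newton step via conjugate gradients; this illustrates the primitive only, not the paper's distributed algorithm:

```python
import numpy as np

def logistic_hvp(w, v, X, y):
    """Hessian-vector product of the logistic loss on a minibatch (X, y)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    d = p * (1.0 - p)                        # per-sample curvature
    return X.T @ (d * (X @ v)) / len(y)

def newton_step(w, grad, hvp, iters=20):
    """Approximately solve H s = grad with conjugate gradients, return w - s."""
    s = np.zeros_like(grad)
    r = grad.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Hp = hvp(p)
        alpha = rs / (p @ Hp)
        s += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < 1e-8:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return w - s
```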
accept
This paper analyzes a stochastic Newton's method for distributed convex optimization. I share many concerns mentioned by the reviewers, including but not limited to the unusual dependence on parameters in the rate, the poor performance of the algorithm compared to first-order methods with access to more information, and inconsistency between theory and experiments. I agree that the paper is not in publishable form and I regretfully cannot recommend acceptance.
train
[ "eugjLTWMIvk", "vy34liMps4K", "VCdzL9bhXZF", "upu0sXBER1E", "d0nsGw9ggdd", "-_znaQv8on", "9m2bOmsOfmE", "brKWwlAlgZa", "c122rNmTGDI", "IDtxUHylE1a" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for the detailed response. Overall, I find the natural scaling setting improves the readability and interpretability greatly. It would be better if the authors can provide more examples to help simplify the rate and make it clear about how the rates compare with each other. \n\nT...
[ -1, 4, -1, -1, -1, -1, -1, 3, 6, 3 ]
[ -1, 2, -1, -1, -1, -1, -1, 4, 5, 3 ]
[ "9m2bOmsOfmE", "nips_2021_ui0sz9Y2x9X", "upu0sXBER1E", "IDtxUHylE1a", "c122rNmTGDI", "vy34liMps4K", "brKWwlAlgZa", "nips_2021_ui0sz9Y2x9X", "nips_2021_ui0sz9Y2x9X", "nips_2021_ui0sz9Y2x9X" ]
nips_2021_hbHkvGBZB9
Are Transformers more robust than CNNs?
Transformers have emerged as a powerful tool for visual recognition. In addition to demonstrating competitive performance on a broad range of visual benchmarks, recent works also argue that Transformers are much more robust than Convolutional Neural Networks (CNNs). Nonetheless, surprisingly, we find these conclusions are drawn from unfair experimental settings, where Transformers and CNNs are compared at different scales and are applied with distinct training frameworks. In this paper, we aim to provide the first fair & in-depth comparisons between Transformers and CNNs, focusing on robustness evaluations. With our unified training setup, we first challenge the previous belief that Transformers outshine CNNs when measuring adversarial robustness. More surprisingly, we find CNNs can easily be as robust as Transformers at defending against adversarial attacks, if they properly adopt Transformers' training recipes. Regarding generalization on out-of-distribution samples, we show that pre-training on (external) large-scale datasets is not a fundamental requirement for enabling Transformers to achieve better performance than CNNs. Moreover, our ablations suggest such stronger generalization is largely attributable to the Transformer's self-attention-like architecture per se, rather than to other training setups. We hope this work can help the community better understand and benchmark the robustness of Transformers and CNNs. The code and models are publicly available at: https://github.com/ytongbai/ViTs-vs-CNNs.
accept
This paper did a comparative study of the robustness between CNNs and Vision Transformers (ViTs). They found that after carefully adjusting training setups for fair comparisons, there is no difference in the robustness to adversarial attack between the two families of architectures. Meanwhile, ViTs generalize better on OOD samples, which the paper suggested was due to the self-attention architectures rather than training setups. The reviewers generally found those results interesting and valuable to the community. There were concerns about the evaluation setup and sub-par baseline results. The authors responded in the rebuttal that this is mainly due to the lower number of training epochs (100), and reported results with 300 training epochs.
train
[ "kM9vjlSmYd9", "NIsb_g59Cbq", "jKvLfqJIqdS", "fMYi3hRorCA", "MKmkQy9fNJu", "-O8mvqLZqo3", "8CF2J0qbjeM", "KoHUIfImmb", "Jcddp8-B5rE", "A2pUnGBrNv", "RnigRjKiByw", "2cKgX7SlzXi" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your comments. \n\nAs for the experiment and evaluation, we are confident it is correct since \n1) For training, we rely on a open-source codebase to build the experiments, (https://github.com/rwightman/pytorch-image-models), which is very stable and has been tested by many papers. \n2) For robustness ...
[ -1, -1, 7, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "NIsb_g59Cbq", "Jcddp8-B5rE", "nips_2021_hbHkvGBZB9", "KoHUIfImmb", "jKvLfqJIqdS", "RnigRjKiByw", "nips_2021_hbHkvGBZB9", "2cKgX7SlzXi", "A2pUnGBrNv", "nips_2021_hbHkvGBZB9", "nips_2021_hbHkvGBZB9", "nips_2021_hbHkvGBZB9" ]
nips_2021_Wt3unuWMyl
Towards Sharper Generalization Bounds for Structured Prediction
Shaojie Li, Yong Liu
accept
This paper proves generalization bounds for structured prediction that are sharper than existing bounds. In particular, under certain conditions, they obtain the "fast rate" of $\tilde{O}(1/n)$. Also of interest is the fact that the bounds are logarithmically dependent on the cardinality of each output variable; meaning, the bounds accommodate very large label sets -- which is the case for things like sentence completion or object classification. The key technical innovation that enables the tighter bounds seems to be a refined analysis of the factor graph representation. The reviews were mixed. The biggest complaint (shared by multiple reviewers) is that the paper is difficult to read. Having only skimmed it myself, I agree that there are a bunch of grammatical/syntactical errors that should have been caught before submission, but in this case I don't think it's enough (on its own) to warrant rejection; the paper is readable. These types of mistakes can be fixed in a camera-ready after a detailed proofreading. **I _strongly_ suggest the authors go over the paper "with a fine-toothed comb" (or ask someone else to).** Other than the writing, the presentation seems pretty good. I think it was wise of the authors to present the main results in asymptotic notation, to make them easier for the reader to digest. Similarly, it was wise to defer proofs (which are quite involved) to the appendix and give just proof sketches. That said, Reviewer Mdym makes a good point that "it's hard to pinpoint which technical contributions of the work are instrumental to the derivation of the results." The authors responded, and I hope their explanation makes it into the paper. Another complaint is that the paper ignores some related works (London et al., JMLR 2016; Maurer & Pontil, COLT 2009; McAllester, COLT 2013). The authors responded to these suggestions, providing convincing comparisons, during the discussion period. I ask them to please add this discussion to the paper. As for the content, the results are solid. I'm glad that progress is being made on generalization in structured prediction. It's nice that the paper gives bounds for both the margin-rescaling (additive) and slack-rescaling (multiplicative) form of the max-margin loss. It's worth noting that the paper is entirely theoretical, with no experiments. While this situation is not unusual, it's becoming more common these days for theory papers to include a numerical study that tests the assumptions or evaluates the bounds on real data, to see if the bounds are still meaningful. Since the paper's primary claim is that these bounds are tighter than others, it would be interesting to see if the claim holds in practice. Are these bounds meaningful when others are vacuous? So, while I think it's OK to not provide experiments, I think the paper would have been stronger with experiments. I am recommending Accept because the work is interesting and valuable, and I believe the problems (writing and related work) can be worked through for the camera-ready.
train
[ "0x3Y-0LLQ_c", "OAgCHrzs48n", "AjGT1WzB1t", "6Ahufe7DWKF", "OACzDrSGc10", "nv-PnBJxbv7", "YzqGM1ejN7-", "GfIrQCJwUt5", "sx1_gYS6X8v", "EqRj_qK3VGD", "YDtW54TiqR8", "yVRdUGhFvl5", "Z5YKrNu3m4q", "2WZaaiY-J_X", "B7nu2uit6xh", "Bq62UqGQi-5" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Area Chair j4WL: \n\nAs you claimed, instead of using the relative deviation PAC-Bayesian bound, McAllester used a looser form for structured prediction. The bounds presented by McAllester are thus of order $\\widetilde{\\mathcal{O}}\\left( \\frac{1}{\\sqrt{n}} \\right)$ [4]. [5-7] extend [4] to more complex...
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "OAgCHrzs48n", "GfIrQCJwUt5", "OACzDrSGc10", "nv-PnBJxbv7", "nips_2021_Wt3unuWMyl", "Z5YKrNu3m4q", "nips_2021_Wt3unuWMyl", "sx1_gYS6X8v", "nips_2021_Wt3unuWMyl", "OACzDrSGc10", "Bq62UqGQi-5", "B7nu2uit6xh", "2WZaaiY-J_X", "nips_2021_Wt3unuWMyl", "nips_2021_Wt3unuWMyl", "nips_2021_Wt3un...
nips_2021_nWz-Si-uTzt
Automated Discovery of Adaptive Attacks on Adversarial Defenses
Reliable evaluation of adversarial defenses is a challenging task, currently limited to an expert who manually crafts attacks that exploit the defense’s inner workings, or to approaches based on ensembles of fixed attacks, none of which may be effective for the specific defense at hand. Our key observation is that adaptive attacks are composed from a set of reusable building blocks that can be formalized in a search space and used to automatically discover attacks for unknown defenses. We evaluate our approach on 24 adversarial defenses and show that it outperforms AutoAttack, the current state-of-the-art tool for reliable evaluation of adversarial defenses: our tool discovers significantly stronger attacks, producing 3.0%-50.8% additional adversarial examples for 10 models, while obtaining attacks of slightly stronger or similar strength for the remaining models.
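The loop implied by the abstract — enumerate attack configurations from a search space of reusable building blocks and keep the strongest against the defense at hand — can be sketched generically. Every name below (the candidate blocks, the `run_attack` evaluator, the budget handling) is a hypothetical stand-in, not the paper's actual search space or algorithm:

```python
import itertools

# Hypothetical search space of reusable attack building blocks.
LOSSES = ["cross_entropy", "margin", "dlr"]
BACKENDS = ["pgd", "apgd", "fab"]
EOT_SAMPLES = [1, 10]   # expectation-over-transformation, for randomized defenses

def search_adaptive_attack(run_attack, defense, budget):
    """Exhaustively evaluate each candidate on a small sample budget and
    return the configuration achieving the lowest robust accuracy."""
    best, best_acc = None, 1.0
    for loss, backend, eot in itertools.product(LOSSES, BACKENDS, EOT_SAMPLES):
        cfg = {"loss": loss, "backend": backend, "eot": eot}
        acc = run_attack(defense, cfg, n_examples=budget)  # robust accuracy
        if acc < best_acc:
            best, best_acc = cfg, acc
    return best, best_acc
```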
accept
This paper introduces a technique to better perform adaptive attacks on adversarial example defenses. Current evaluations are largely done by hand, and the paper here introduces a technique to automate some of that process. While the reviewers all like the overall tone of the paper, the reviewers are generally concerned about three aspects of the paper. First, the paper does not demonstrate that it can automate all possible hand-specified attacks (for example, it has a limited design space that might not identify redundant neurons). Second, the paper considers only a subset of the attacks used in prior papers. And so saying that this attack can match prior work may not actually be valid. Third, the attack may not yet be practical to use for defenders. Taken as a whole, while I agree that these are definite limitations to this technique, this is the first paper to try these kinds of automated adaptive attacks and is worth publishing. While the implementation is imperfect at the moment and there is more work to be done on evaluation and on improving the attack, this can be done in follow-up work. If it turns out that this idea is not actually better than the human-designed attacks, future work could help identify the cases where this is true or improve the attack to make it more effective. Even though this attack is not likely practically useful for defenders yet, the new direction it opens might help in the future.
train
[ "QFNwb8eWGa", "nw7Rn292cQt", "f7h93pWlYO", "GrybC3rys8z", "RA0w_yKFDyk", "rs_qPeT1K8o", "-g_X1S98jN", "uZ_WxtPnaHq", "eQ64wF9FJFv", "1tAK51bIRY1", "sCRVAYmtkh9", "J4yNIId0iFC", "5z4OI-KOptT", "K7mubI93S-d", "P1n5hrH1UHj", "ox24mY3LTgn", "dRmybAHX_iP", "4pOT4oQccV8", "g5KLzt834wX"...
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_revi...
[ " I have read the author's reply and other reviewer's comments. A^3 is a great framework for discovering adaptive attacks but the contribution is limited. CAA put forward similar ideas before A^3, but A^3 has not been cited and compared with CAA in practice, including different L-norm settings. I prefer to see the ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 4 ]
[ "K7mubI93S-d", "GrybC3rys8z", "sCRVAYmtkh9", "RA0w_yKFDyk", "-g_X1S98jN", "eQ64wF9FJFv", "J4yNIId0iFC", "1tAK51bIRY1", "P1n5hrH1UHj", "nips_2021_nWz-Si-uTzt", "g5KLzt834wX", "nips_2021_nWz-Si-uTzt", "4pOT4oQccV8", "dRmybAHX_iP", "ox24mY3LTgn", "nips_2021_nWz-Si-uTzt", "nips_2021_nWz-...
nips_2021_9N_vdopOU0h
PolarStream: Streaming Object Detection and Segmentation with Polar Pillars
Recent works recognized lidars as an inherently streaming data source and showed that the end-to-end latency of lidar perception models can be reduced significantly by operating on wedge-shaped point cloud sectors rather than the full point cloud. However, due to the use of Cartesian coordinate systems, these methods represent the sectors as rectangular regions, wasting memory and compute. In this work we propose using a polar coordinate system and make two key improvements on this design. First, we increase the spatial context by using multi-scale padding from neighboring sectors: the preceding sector from the current scan and/or the following sector from the past scan. Second, we improve the core polar convolutional architecture by introducing feature undistortion and range stratified convolutions. Experimental results on the nuScenes dataset show significant improvements over other streaming-based methods. We also achieve comparable results to existing non-streaming methods but with lower latencies.
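The coordinate change at the core of the method is elementary: points in a wedge-shaped sector are binned on a regular (range, azimuth) grid rather than a Cartesian one, so no cells are wasted outside the wedge. A minimal sketch of the binning, with illustrative grid resolution and sector bounds:

```python
import numpy as np

def polar_pillar_indices(points, r_max=50.0, theta_min=0.0,
                         theta_max=np.pi / 4, n_r=256, n_theta=64):
    """Map (x, y) points in a wedge-shaped sector to polar pillar indices."""
    x, y = points[:, 0], points[:, 1]
    r = np.sqrt(x**2 + y**2)
    theta = np.arctan2(y, x)
    keep = (r < r_max) & (theta >= theta_min) & (theta < theta_max)
    r_idx = (r[keep] / r_max * n_r).astype(int)
    t_idx = ((theta[keep] - theta_min) / (theta_max - theta_min) * n_theta).astype(int)
    return r_idx, t_idx   # every (r_idx, t_idx) cell lies inside the sector
```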
accept
Four expert reviewers suggest acceptance after rebuttal. This is a good-quality paper making valuable contributions to streaming lidar-based object localization, and it promises to provide an open benchmark and codebase for the community.
train
[ "FNfadn0nsm", "c2t3VUIxLUm", "eYVJ61YhucN", "AEZ4aGbqPMX", "wQLcDllSflg", "ja4Hc7lj51N", "4EWrFAnLgVi", "n4DSOrogzln", "6XsWXh3uobc", "c_UGyTxy1H5", "wpwh8OFMI9q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work proposes a method of unwarping lidar sectors obtaining in a streaming setting (i.e., without waiting for a complete sweep to complete) in the (r,theta) (polar) space, and using standard convolution on top. A new “contextual” padding scheme using ego-motion-compensated previous scan-data is proposed. Fur...
[ 6, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "nips_2021_9N_vdopOU0h", "4EWrFAnLgVi", "n4DSOrogzln", "wQLcDllSflg", "wpwh8OFMI9q", "c_UGyTxy1H5", "FNfadn0nsm", "6XsWXh3uobc", "nips_2021_9N_vdopOU0h", "nips_2021_9N_vdopOU0h", "nips_2021_9N_vdopOU0h" ]
nips_2021_pSNs0PKx0Mw
Representation Costs of Linear Neural Networks: Analysis and Design
Zhen Dai, Mina Karzand, Nathan Srebro
accept
The paper gives a fairly complete list of results covering the representation cost for linear networks of many different architectures. The paper then studies how to design parametrizations to encourage certain regularization effects. The reviewers find the results to be a solid theoretical contribution to the understanding of different parametrizations.
train
[ "31CsA392sY9", "8UDOJTDJKa", "M6aKeYCyVJE", "Z6h4bTr950Z", "A4irAUfvLlP", "AVjQNYX2NN", "vJwzfzrlwg7", "hWBaJ3w6Oj", "qlN0acZ0jEt" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your helpful response. The connection between the grouping layers and transfer learning sounds quite interesting to explore further. I intend to keep my score, and would be happy to see this paper accepted.", " Thanks for the review!\n\nOur current definition of ResNet is indeed not clear now. We wil...
[ -1, -1, -1, -1, -1, 8, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "Z6h4bTr950Z", "qlN0acZ0jEt", "hWBaJ3w6Oj", "vJwzfzrlwg7", "AVjQNYX2NN", "nips_2021_pSNs0PKx0Mw", "nips_2021_pSNs0PKx0Mw", "nips_2021_pSNs0PKx0Mw", "nips_2021_pSNs0PKx0Mw" ]
nips_2021_Ee7IOrpLwT
Teaching via Best-Case Counterexamples in the Learning-with-Equivalence-Queries Paradigm
We study the sample complexity of teaching, termed "teaching dimension" (TD) in the literature, for the learning-with-equivalence-queries (LwEQ) paradigm. More concretely, we consider a learner who asks equivalence queries (i.e., "is the queried hypothesis the target hypothesis?"), and a teacher responds either "yes" or "no" along with a counterexample to the queried hypothesis. This learning paradigm has been extensively studied when the learner receives worst-case or random counterexamples; in this paper, we consider the optimal teacher who picks best-case counterexamples to teach the target hypothesis within a hypothesis class. For this optimal teacher, we introduce LwEQ-TD, a notion of TD capturing the teaching complexity (i.e., the number of queries made) in this paradigm. We show that a significant reduction in queries can be achieved with best-case counterexamples, in contrast to worst-case or random counterexamples, for different hypothesis classes. Furthermore, we establish new connections of LwEQ-TD to the well-studied notions of TD in the learning-from-samples paradigm.
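The LwEQ loop with a best-case teacher is easy to make concrete for a finite class: the learner queries a hypothesis consistent with all counterexamples seen so far, and the teacher returns the counterexample that eliminates the most remaining hypotheses. The greedy teacher below is only an illustrative heuristic — the LwEQ-TD defined above optimizes over entire teaching sequences:

```python
def teach_with_equivalence_queries(hypotheses, target, domain):
    """hypotheses: list of dicts mapping domain points to labels."""
    candidates = list(hypotheses)
    queries = 0
    while True:
        h = candidates[0]                       # learner: any consistent hypothesis
        queries += 1
        if all(h[x] == target[x] for x in domain):
            return queries                      # teacher answers "yes"
        # Teacher: best-case counterexample, greedily chosen to prune
        # as many still-consistent hypotheses as possible.
        wrong = [x for x in domain if h[x] != target[x]]
        cx = max(wrong, key=lambda x: sum(g[x] != target[x] for g in candidates))
        candidates = [g for g in candidates if g[cx] == target[cx]]
```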
accept
Throughout the discussion the reviewers agreed unanimously that the paper provides an interesting theoretical contribution and should be accepted to NeurIPS. We urge the authors to revise the paper according to the comments provided by the reviewers in the rebuttal. Thanks for submitting your work to NeurIPS!
val
[ "Qupe7PPV2Fx", "h8YWHC2jSRf", "BzSdrYasfc6", "StXy_Oij_j", "T0YHyz3xcQU", "XfFlQma7SjR", "tSZgectq41", "0J3Pydhm9lt", "J_oqhO0_rll", "wlhppPZtWbV", "z1Jm5ruGkTx", "7ucXrsW52VX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This work studies the sample complexity of machine teaching finite hypothesis classes with equivalence queries (EQ), a model in which the learner queries hypotheses in the concept class directly, and the “teacher” responds “yes” or “no” along with a counter-example with the goal of eventually teaching the learner ...
[ 8, -1, 6, -1, 7, -1, -1, -1, -1, -1, -1, 7 ]
[ 2, -1, 3, -1, 3, -1, -1, -1, -1, -1, -1, 3 ]
[ "nips_2021_Ee7IOrpLwT", "tSZgectq41", "nips_2021_Ee7IOrpLwT", "XfFlQma7SjR", "nips_2021_Ee7IOrpLwT", "z1Jm5ruGkTx", "BzSdrYasfc6", "7ucXrsW52VX", "nips_2021_Ee7IOrpLwT", "Qupe7PPV2Fx", "T0YHyz3xcQU", "nips_2021_Ee7IOrpLwT" ]
nips_2021_9jRH00HT4-4
Distilling Meta Knowledge on Heterogeneous Graph for Illicit Drug Trafficker Detection on Social Media
Driven by considerable profits, the crime of drug trafficking (a.k.a. illicit drug trading) has co-evolved with modern technologies, e.g., social media such as Instagram has become a popular platform for marketing and selling illicit drugs. The activities of online drug trafficking are nimble and resilient, which call for novel techniques to effectively detect, disrupt, and dismantle illicit drug trades. In this paper, we propose a holistic framework named MetaHG to automatically detect illicit drug traffickers on social media (i.e., Instagram), by tackling the following two new challenges: (1) different from existing works which merely focus on analyzing post content, MetaHG is capable of jointly modeling multi-modal content and relational structured information on social media for illicit drug trafficker detection; (2) in addition, through the proposed meta-learning technique, MetaHG addresses the issue of requiring sufficient data for model training. More specifically, in our proposed MetaHG, we first build a heterogeneous graph (HG) to comprehensively characterize the complex ecosystem of drug trafficking on social media. Then, we employ a relation-based graph convolutional neural network to learn node (i.e., user) representations over the built HG, in which we introduce graph structure refinement to compensate for the sparse connections among entities in the HG for more robust node representation learning. Afterwards, we propose a meta-learning algorithm for model optimization. A self-supervised module and a knowledge distillation module are further designed to exploit unlabeled data for improving the model. Extensive experiments based on the real-world data collected from Instagram demonstrate that the proposed MetaHG outperforms state-of-the-art methods. Our source code is available at {\color{black}{\href{https://github.com/Meta-HG/MetaHG}{https://github.com/Meta-HG/MetaHG}}}.
accept
The reviewers generally agree that the paper is interesting and like the new application with a novel dataset that will be published, thorough experiments that establish a baseline, and a clear presentation. The reviewers provided recommendations that should clearly be incorporated in the paper.
train
[ "KrFR90ODPa8", "cIon9JH4mbK", "pExzCiBN48W", "UG1U9aYGNMP", "UESfW-mJv8s", "ggn6ZpnEHTv", "CHp_dqSJzG9", "xVl27gVrrus", "2k4yinpBgiX", "n7FlL_H8VNO", "Q1QQv8igFJ1", "FRIzBM5aZYY", "LaqW0LHVs3k", "8GlmoOokVkw", "qVYLqufsqL4", "c82pM5b8gif" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for additional comments and we make clarifications to your questions as follows. \n\nQ1: There is no evidence to show that this model can generalize to other datasets/scenarios.\n\nA1: The point is that there is no open data for this important problem, thus we spent a lot of time/resource to creat...
[ -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 8, 8, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "cIon9JH4mbK", "UG1U9aYGNMP", "ggn6ZpnEHTv", "UESfW-mJv8s", "xVl27gVrrus", "FRIzBM5aZYY", "nips_2021_9jRH00HT4-4", "c82pM5b8gif", "8GlmoOokVkw", "LaqW0LHVs3k", "qVYLqufsqL4", "CHp_dqSJzG9", "nips_2021_9jRH00HT4-4", "nips_2021_9jRH00HT4-4", "nips_2021_9jRH00HT4-4", "nips_2021_9jRH00HT4-...
nips_2021_d1FHmxHPEQ0
Curriculum Disentangled Recommendation with Noisy Multi-feedback
Learning disentangled representations for user intentions from multi-feedback (i.e., positive and negative feedback) can enhance the accuracy and explainability of recommendation algorithms. However, learning such disentangled representations from multi-feedback data is challenging because i) multi-feedback is complex: there exist complex relations among different types of feedback (e.g., click, unclick, and dislike) as well as various user intentions, and ii) multi-feedback is noisy: there exists noisy (useless) information both in features and labels, which may deteriorate the recommendation performance. Existing works on disentangled representation learning only focus on positive feedback, failing to handle the complex relations and noise hidden in multi-feedback data. To solve this problem, in this work we propose a Curriculum Disentangled Recommendation (CDR) model that is capable of efficiently learning disentangled representations from complex and noisy multi-feedback for better recommendation. Concretely, we design a co-filtering dynamic routing mechanism that simultaneously captures the complex relations among different behavioral feedback and user intentions and denoises the representations at the feature level. We then present an adjustable self-evaluating curriculum that is able to evaluate sample difficulties for better model training and to conduct denoising at the label level by disregarding useless information. Our extensive experiments on several real-world datasets demonstrate that the proposed CDR model can significantly outperform several state-of-the-art methods in terms of recommendation accuracy.
accept
Overall strong scores, and reviewers are mostly aligned in finding the paper to be above the bar. Scores were initially more mixed, though after the discussion the weakest score was improved (to "borderline"). Reviewers praised the originality of the idea, and found the model performance through the experiments convincing enough. Several issues were raised regarding clarity and other specific details, and an in-depth discussion took place to resolve most of these issues. Ultimately the scores are strong, and the rebuttal was fairly persuasive. The reviewers do highlight several issues (and in spite of the strong scores, the reviews are not particularly gushing in the text). But the issues raised seem mostly in terms of clarity, or otherwise are adequately addressed during the rebuttal phase.
test
[ "O_xA53dQdm5", "PhaaEJRCjit", "b11FkQfaXm4", "o36b_aTu94F", "UuJo1q6Ooqc", "cU63ENpc27m", "GLSTrrMO3cK", "AcVd1_Ve0Fd", "kkf_VwbWWvp", "7onpTAck7Dh", "UADtLvPRlTQ" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ " Thank the reviewer for affirmation to our work and the suggestions that help further to improve our paper.", "The paper proposes a curriculum disentangled recommendation model to learn disentangled representations from multi-feedback data. Specifically, the authors design a co-filtering routing mechanism to cap...
[ -1, 7, -1, -1, 5, 8, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, -1, 4, 4, -1, -1, -1, -1, -1 ]
[ "b11FkQfaXm4", "nips_2021_d1FHmxHPEQ0", "AcVd1_Ve0Fd", "UuJo1q6Ooqc", "nips_2021_d1FHmxHPEQ0", "nips_2021_d1FHmxHPEQ0", "UuJo1q6Ooqc", "PhaaEJRCjit", "UuJo1q6Ooqc", "UuJo1q6Ooqc", "cU63ENpc27m" ]
nips_2021_1AvtkM4H-y7
Interpretable agent communication from scratch (with a generic visual processor emerging on the side)
As deep networks begin to be deployed as autonomous agents, the issue of how they can communicate with each other becomes important. Here, we train two deep nets from scratch to perform realistic referent identification through unsupervised emergent communication. We show that the largely interpretable emergent protocol allows the nets to successfully communicate even about object types they did not see at training time. The visual representations induced as a by-product of our training regime, moreover, show comparable quality, when re-used as generic visual features, to a recent self-supervised learning model. Our results provide concrete evidence of the viability of (interpretable) emergent deep net communication in a more realistic scenario than previously considered, as well as establishing an intriguing link between this field and self-supervised visual learning.
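The referential game described above has a standard minimal form: a sender maps the target's visual features to one discrete symbol, and a receiver scores candidates against the symbol's embedding, with a cross-entropy loss on the target's index. The sketch below uses straight-through Gumbel-softmax to keep the discrete channel differentiable — an assumption for illustration; the paper may train the channel differently:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReferentialGame(nn.Module):
    def __init__(self, feat_dim, vocab_size):
        super().__init__()
        self.sender = nn.Linear(feat_dim, vocab_size)   # features -> symbol logits
        self.embed = nn.Linear(vocab_size, feat_dim)    # symbol -> receiver space

    def forward(self, target_feats, candidate_feats, tau=1.0):
        # Sender: one discrete symbol per target (straight-through Gumbel-softmax).
        symbol = F.gumbel_softmax(self.sender(target_feats), tau=tau, hard=True)
        msg = self.embed(symbol)                        # (B, feat_dim)
        # Receiver: dot-product scores over candidates of shape (B, K, feat_dim).
        return torch.einsum("bd,bkd->bk", msg, candidate_feats)

game = ReferentialGame(feat_dim=128, vocab_size=100)
# loss = F.cross_entropy(scores, target_index)  # target's position among candidates
```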
accept
Although ratings were quite divergent, I find myself in agreement with the reviewer arguing for acceptance. There are several interesting empirical findings in this work, and I find the connections being established between emergent communication and contrastive learning very interesting. I would have liked to see referential games with message lengths greater than one and experiments varying the way distractors were sampled. Though, I do appreciate these being called out explicitly as future work. Please incorporate the additional results and clarifications from your rebuttal/discussions into the camera ready version of the paper. A couple of small suggestions on my side: I think it could be smart to reduce the size of Figure 3 and use the extra space to walk the reader through the abstract connection you are making more thoroughly. The core of the argument and the setup for why comparing with SimCLR makes sense appear to be things that readers could misunderstand, and perhaps the authors can guard against this with a bit more discussion of them. Finally, thank you for your engagement with the reviewers during the rebuttal period.
train
[ "J_hju6r-CSE", "gmsjCIBUaj8", "SJYKqVq07Bt", "9OCPdmQK9a6", "J63ycLGn7D", "VUNtQrBT9v3", "oKyxZdYSYb", "XcIBPGW4V1t", "uuv7Fsd3EwX", "kBi4IcepVS-", "Q45mX-zs0bj", "I--qI2iff08", "N7ub0WPjevn", "8oQFg039vdx" ]
[ "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer rUAU,\nThanks again for your constructive feedback and for engaging in further discussion. Do you feel that, by adding the experiments with augmentations at test time and with the explanations provided in the response (and that we are adding to the paper), we have addressed your main concerns, or ar...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "Q45mX-zs0bj", "oKyxZdYSYb", "9OCPdmQK9a6", "XcIBPGW4V1t", "VUNtQrBT9v3", "uuv7Fsd3EwX", "N7ub0WPjevn", "8oQFg039vdx", "kBi4IcepVS-", "Q45mX-zs0bj", "I--qI2iff08", "nips_2021_1AvtkM4H-y7", "nips_2021_1AvtkM4H-y7", "nips_2021_1AvtkM4H-y7" ]
nips_2021_qwtfY-3ibt7
MAU: A Motion-Aware Unit for Video Prediction and Beyond
Accurately predicting inter-frame motion information plays a key role in video prediction tasks. In this paper, we propose a Motion-Aware Unit (MAU) to capture reliable inter-frame motion information by broadening the temporal receptive field of the predictive units. The MAU consists of two modules, the attention module and the fusion module. The attention module aims to learn an attention map based on the correlations between the current spatial state and the historical spatial states. Based on the learned attention map, the historical temporal states are aggregated into the augmented motion information (AMI). In this way, the predictive unit can perceive more temporal dynamics from a wider receptive field. Then, the fusion module is utilized to further aggregate the augmented motion information (AMI) and the current appearance information (current spatial state) into the final predicted frame. The computational load of MAU is relatively low, and the proposed unit can be easily applied to other predictive models. Moreover, an information recalling scheme is employed in the encoders and decoders to help preserve the visual details of the predictions. We evaluate the MAU on both video prediction and early action recognition tasks. Experimental results show that the MAU outperforms the state-of-the-art methods on both tasks.
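The attention module described above admits a compact sketch: correlate the current spatial state with each historical spatial state, softmax the correlations into an attention map, and use it to aggregate the historical temporal states into the augmented motion information (AMI). The flattening and the concatenation-based fusion below are illustrative assumptions, not necessarily the MAU's exact operations:

```python
import torch
import torch.nn.functional as F

def motion_aware_attention(s_cur, s_hist, t_hist):
    """
    s_cur : (B, C, H, W)     current spatial state
    s_hist: (B, T, C, H, W)  historical spatial states
    t_hist: (B, T, C, H, W)  historical temporal states
    """
    q = s_cur.flatten(1)                                  # (B, CHW)
    k = s_hist.flatten(2)                                 # (B, T, CHW)
    corr = torch.einsum("bd,btd->bt", q, k) / q.shape[1] ** 0.5
    attn = F.softmax(corr, dim=1)                         # attention over history
    ami = torch.einsum("bt,btchw->bchw", attn, t_hist)    # augmented motion info
    return torch.cat([ami, s_cur], dim=1)                 # fuse with appearance
```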
accept
There was a robust discussion among the reviewers on the merits of this work. The author response addressed many of the points the reviewers raised. All in all, I feel this work should be accepted at NeurIPS. While the reviewers as a whole felt that the work is somewhat lacking in novelty, and had some minor constructive feedback (e.g., more justification for not performing multi-step prediction in the TownCentreXVID experiments), the conclusion is that the contributions are sufficient and interesting enough to warrant acceptance.
train
[ "230IkHwcs3x", "6YEQGl29C9", "ZmHandTTVfa", "1vjHjS15FNN", "wa3tdWik88A", "bSX-TIRfEF", "a2vauWnW8J1", "ClAvROjTnq", "zWOcZKcmyd6", "Kf0Cz1cw-Va", "wPpLwikO94q", "PtHp6LL9Nwo" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a method for video prediction that relies on a newly proposed Motion-Aware Unit (MAU). MAU acts as a temporal receptive field based on an attention mechanism that is used to aggregate features from previous frames in order to predict future frames. The aggregate features are combined with “cont...
[ 7, -1, -1, -1, 6, -1, -1, -1, -1, -1, 6, 6 ]
[ 5, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "nips_2021_qwtfY-3ibt7", "ZmHandTTVfa", "1vjHjS15FNN", "ClAvROjTnq", "nips_2021_qwtfY-3ibt7", "a2vauWnW8J1", "PtHp6LL9Nwo", "230IkHwcs3x", "wa3tdWik88A", "wPpLwikO94q", "nips_2021_qwtfY-3ibt7", "nips_2021_qwtfY-3ibt7" ]
nips_2021_rD6ulZFTbf
Successor Feature Landmarks for Long-Horizon Goal-Conditioned Reinforcement Learning
Operating in the real world often requires agents to learn about a complex environment and apply this understanding to achieve a breadth of goals. This problem, known as goal-conditioned reinforcement learning (GCRL), becomes especially challenging for long-horizon goals. Current methods have tackled this problem by augmenting goal-conditioned policies with graph-based planning algorithms. However, they struggle to scale to large, high-dimensional state spaces and assume access to exploration mechanisms for efficiently collecting training data. In this work, we introduce Successor Feature Landmarks (SFL), a framework for exploring large, high-dimensional environments so as to obtain a policy that is proficient for any goal. SFL leverages the ability of successor features (SF) to capture transition dynamics, using them to drive exploration by estimating state novelty and to enable high-level planning by abstracting the state space as a non-parametric landmark-based graph. We further exploit SF to directly compute a goal-conditioned policy for inter-landmark traversal, which we use to execute plans to "frontier" landmarks at the edge of the explored state space. We show in our experiments on MiniGrid and ViZDoom that SFL enables efficient exploration of large, high-dimensional state spaces and outperforms state-of-the-art baselines on long-horizon GCRL tasks.
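As a rough illustration of the landmark-based abstraction described in the abstract, the following Python sketch adds a state as a landmark when its successor-feature (SF) novelty, measured here as the distance to the nearest existing landmark, exceeds a threshold, and connects nearby landmarks so a planner can traverse the resulting graph. The thresholds, the Euclidean metric, and all names are assumptions for illustration, not the paper's exact design.

import numpy as np

class LandmarkGraph:
    def __init__(self, novelty_threshold=1.0, edge_threshold=2.0):
        self.landmarks = []   # one SF vector per landmark
        self.edges = []       # (i, j) pairs between nearby landmarks
        self.novelty_threshold = novelty_threshold
        self.edge_threshold = edge_threshold

    def novelty(self, sf):
        """Distance from an SF vector to the nearest landmark (inf if none)."""
        if not self.landmarks:
            return float("inf")
        return min(np.linalg.norm(sf - lm) for lm in self.landmarks)

    def maybe_add_landmark(self, sf):
        """Add a landmark when the state is sufficiently novel; connect it
        to nearby landmarks to keep the graph traversable for planning."""
        if self.novelty(sf) > self.novelty_threshold:
            new_idx = len(self.landmarks)
            for i, lm in enumerate(self.landmarks):
                if np.linalg.norm(sf - lm) < self.edge_threshold:
                    self.edges.append((i, new_idx))
            self.landmarks.append(sf)
            return True
        return False

Exploration in the full framework would then plan a path over this graph to a "frontier" landmark at the edge of the explored region and act from there; the sketch covers only the graph bookkeeping, not the SF learning or the goal-conditioned policy.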
accept
All reviewers agree that this is a strong, clearly presented empirical work. Although it relies mostly on pre-existing concepts, it combines them in an effective way, and its well-executed experiments demonstrate impressive results on pixel-based benchmarks. The ablation studies conducted as part of the discussion with reviewers will further strengthen this paper.
train
[ "2B89Hb5dJc4", "l2oRbIGngm9", "N90itDXQNYO", "x1GJ2HHVZ5", "5uP9Qo4ch9V", "QSYWr57s_bH", "fgItrvVf2n4", "sZRYPFIS7bw", "YjzkJcY1F4X", "-x8dmlX7V7b", "5APfedAEv42", "ATxjDtCSyGu", "uUSOrjgfEiR", "3sHYeOfnjlq", "ZTftuizD8bX", "N9vfxJNV7uX", "q0-jlR98D9T", "YNVdgSuRD_Q", "wb_VDIjQJw...
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_r...
[ " We will include the clarifications in the paper revision. Thank you again for your time and constructive feedback.", " To more directly measure the degree of exploration achieved by each frontier sampling strategy, we decided to track state coverage. We define state coverage as the thresholded state visitation ...
[ -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "N90itDXQNYO", "fgItrvVf2n4", "3sHYeOfnjlq", "-x8dmlX7V7b", "YjzkJcY1F4X", "fgItrvVf2n4", "cEcYofzCxR9", "nips_2021_rD6ulZFTbf", "-x8dmlX7V7b", "5APfedAEv42", "ZTftuizD8bX", "nips_2021_rD6ulZFTbf", "YNVdgSuRD_Q", "wb_VDIjQJwP", "sZRYPFIS7bw", "q0-jlR98D9T", "fgItrvVf2n4", "nips_202...