paper_id (string, 19-21 chars) | paper_title (string, 8-170 chars) | paper_abstract (string, 8-5.01k chars) | paper_acceptance (string, 18 classes) | meta_review (string, 29-10k chars) | label (string, 3 classes) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
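The list-valued columns above are index-aligned per discussion post: entry i of `review_ids`, `review_writers`, `review_contents`, `review_ratings`, `review_confidences`, and `review_reply_tos` all describe the same post. As a minimal sketch of working with rows of this shape, assuming a JSON-lines serialization and the hypothetical file name `reviews.jsonl` (neither is specified by the table itself):

```python
# Minimal sketch, under the stated assumptions: each line of "reviews.jsonl"
# is one record with six scalar fields and six index-aligned lists.
import json

def load_rows(path="reviews.jsonl"):
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

if __name__ == "__main__":
    for row in load_rows()[:3]:
        print(row["paper_id"], row["paper_acceptance"], row["label"],
              len(row["review_ids"]), "posts")
```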
nips_2021_JnAU9HkXr2 | Efficient Neural Network Training via Forward and Backward Propagation Sparsification | Sparse training is a natural idea to accelerate the training of deep neural networks and save memory usage, especially since large modern neural networks are significantly over-parameterized. However, most of the existing methods cannot achieve this goal in practice because the chain rule based gradient (w.r.t. structure parameters) estimators adopted by previous methods require dense computation at least in the backward propagation step. This paper solves this problem by proposing an efficient sparse training method with completely sparse forward and backward passes. We first formulate the training process as a continuous minimization problem under a global sparsity constraint. We then separate the optimization process into two steps, corresponding to the weight update and the structure parameter update. For the former step, we use the conventional chain rule, which can be made sparse by exploiting the sparse structure. For the latter step, instead of using the chain rule based gradient estimators as in existing methods, we propose a variance reduced policy gradient estimator, which only requires two forward passes without backward propagation, thus achieving completely sparse training. We prove that the variance of our gradient estimator is bounded. Extensive experimental results on real-world datasets demonstrate that compared to previous methods, our algorithm is much more effective in accelerating the training process, up to an order of magnitude faster.
| accept | This paper proposes several effective schemes for ensuring the sparsity of a deep network being trained during backward and forward propagation. The reviewers' recommendations are somewhat polarized and there are some serious disagreements between some reviewers and the authors regarding fair comparison with many existing sparsity-promoting techniques. Despite some concerns, the proposed methods are relatively novel and effective and have been justified with adequate analysis and empirical evidence. The AC believes the quality of the paper is above the acceptance threshold. | train | [
"saNcqpcI976",
"R1jMQRW5Cn5",
"Fud-egL0c0o",
"dmrS44VClxG",
"p3K515XlHNG",
"GxK1LQh1yi",
"1yfXGW1_NrP",
"3JzeP7OOJ0D",
"3yutpWLlRBd",
"n3WFW8MqbV",
"-uQ5s1T6SYJ",
"TMLlkREt_sT",
"r1PWTHaHqjm",
"5WOXEzJN1f",
"rN7JrQiuPm",
"T8OdYMPxiJy",
"2PaOTpCCt5f",
"wOKDBCIYuld",
"xIGewxvbz62",... | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_... | [
" Dear ACs, \n\nThanks for your hard work in reviewing our paper. We would like to report the current status to you due to the approaching deadline of discussion. \n\nWe address the main concerns of negative reviewers as follows:\n\n**1**. Reviewer CRsd and X59J are concerned about the novelty of this paper. They c... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
7,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"nips_2021_JnAU9HkXr2",
"5WOXEzJN1f",
"TMLlkREt_sT",
"r1PWTHaHqjm",
"6igBAAV_Ed5",
"nips_2021_JnAU9HkXr2",
"LCi9XmB06M0",
"JS3C6s0lN8I",
"kaH76vIlf2G",
"nips_2021_JnAU9HkXr2",
"ho1B57bgdZ7",
"-uQ5s1T6SYJ",
"1yfXGW1_NrP",
"3yutpWLlRBd",
"oK5e8owCHl_",
"nips_2021_JnAU9HkXr2",
"nips_202... |
nips_2021_8bxZ7WGg4b2 | Learning to Ground Multi-Agent Communication with Autoencoders | Communication requires having a common language, a lingua franca, between agents. This language could emerge via a consensus process, but it may require many generations of trial and error. Alternatively, the lingua franca can be given by the environment, where agents ground their language in representations of the observed world. We demonstrate a simple way to ground language in learned representations, which facilitates decentralized multi-agent communication and coordination. We find that a standard representation learning algorithm -- autoencoding -- is sufficient for arriving at a grounded common language. When agents broadcast these representations, they learn to understand and respond to each other's utterances and achieve surprisingly strong task performance across a variety of multi-agent communication environments.
| accept | Reviewers found the proposed method a simple yet intriguing contribution to the emergent communication literature. The authors addressed important initial concerns about baselines and ablations in the rebuttal period, and reviewers were satisfied with the new results. There was concern about whether sending autoencoded latents "really" constitutes communication; however, I believe that makes the work all the more thought-provoking for the community. I expect this paper will be a useful contribution for other emergent communication research to use as a baseline or build upon. | train | [
"eiLaZkB-OT7",
"oNmvLzq2XDT",
"luttPSyCNtp",
"1H3BIO6y051",
"EcBE51YOdpi",
"VzNWBqtRL7g",
"xJramJoarKq",
"EDwDDLXJSid",
"ZS7yTtcWFUx",
"jx38svb6IFa",
"TBWryj6pjzn",
"CauKfLrNLrd",
"PO4DJ8FKZ5Z",
"G1b9WnqRZ3t",
"HhAm3uWv-km",
"iBB6BxdU_FJ",
"wNnElFYDrdU",
"sJU4n6Chlv",
"C0ECodRkwk... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"... | [
" I thank the authors for their reply. I appreciate the addition of the baselines (particularly the fc-comm and latent-comm mentioned in a response to another reviewer). I'm now lean towards acceptance, though my score remains a 6.",
"The goal of the paper is to show that learning a representation to ground langu... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"xJramJoarKq",
"nips_2021_8bxZ7WGg4b2",
"EDwDDLXJSid",
"EcBE51YOdpi",
"jx38svb6IFa",
"CauKfLrNLrd",
"MlN3muqhlN",
"f04wMjTDDz_",
"jx38svb6IFa",
"TBWryj6pjzn",
"PO4DJ8FKZ5Z",
"C0ECodRkwk",
"HhAm3uWv-km",
"wNnElFYDrdU",
"wNnElFYDrdU",
"nips_2021_8bxZ7WGg4b2",
"isWvikHwo0M",
"nips_202... |
nips_2021_nlLjIuHsMHp | Large-Scale Wasserstein Gradient Flows | Wasserstein gradient flows provide a powerful means of understanding and solving many diffusion equations. Specifically, Fokker-Planck equations, which model the diffusion of probability measures, can be understood as gradient descent over entropy functionals in Wasserstein space. This equivalence, introduced by Jordan, Kinderlehrer and Otto, inspired the so-called JKO scheme to approximate these diffusion processes via an implicit discretization of the gradient flow in Wasserstein space. Solving the optimization problem associated with each JKO step, however, presents serious computational challenges. We introduce a scalable method to approximate Wasserstein gradient flows, targeted to machine learning applications. Our approach relies on input-convex neural networks (ICNNs) to discretize the JKO steps, which can be optimized by stochastic gradient descent. Contrary to previous work, our method does not require domain discretization or particle simulation. As a result, we can sample from the measure at each time step of the diffusion and compute its probability density. We demonstrate the performance of our algorithm by computing diffusions following the Fokker-Planck equation and apply it to unnormalized density sampling as well as nonlinear filtering.
| accept | This work focuses on a new method to approximate Wasserstein gradient flows, allowing one to sample from and compute the density at each time step. The use of an ICNN to play the role of the convex potential allows the time dimension to be discretized efficiently. Based on the overall evaluation by the reviewers - which is mostly positive - and my own, I recommend accepting this work.
There was a serious and fruitful discussion between reviewers and authors, the latter providing an interesting perspective on the motivation of their work that should be included in the camera-ready version.
"Ft3k-HWg3Cn",
"SMA6bOG4MiX",
"VDUU_9BqajR",
"t76mTAlolk5",
"reUkR-M6pmF",
"vQO0qLLWPx4",
"AMbL1844hk5",
"sRuk_Fw_hV",
"g_6Qk3af_VE",
"mCxKKRtd4KF",
"mMNQhmgiaFM",
"OhQZ5mDm9rP",
"wwKSfg58on2",
"tUTHFIYodXZ",
"wFZICnCC_Tm"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear area chair and reviewers, \n\nBefore discussion closes, we wanted to share some discussion and evidence involving the broad applicability of our work; we do not find that it is too narrow.\n\nOverall, we provide an efficient method to model diffusion processes arising in many practical tasks. We considered a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
4
] | [
"nips_2021_nlLjIuHsMHp",
"VDUU_9BqajR",
"mCxKKRtd4KF",
"vQO0qLLWPx4",
"mMNQhmgiaFM",
"sRuk_Fw_hV",
"g_6Qk3af_VE",
"tUTHFIYodXZ",
"wFZICnCC_Tm",
"wwKSfg58on2",
"OhQZ5mDm9rP",
"nips_2021_nlLjIuHsMHp",
"nips_2021_nlLjIuHsMHp",
"nips_2021_nlLjIuHsMHp",
"nips_2021_nlLjIuHsMHp"
] |
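For reference, the JKO scheme named in this abstract takes the following standard form: each step is an implicit Euler discretization of the gradient flow of a functional $F$ (e.g., an entropy functional) in Wasserstein space. The notation below is the textbook one, not taken from the paper:

```latex
% Standard JKO step with step size \tau and squared 2-Wasserstein
% distance W_2^2; the paper parameterizes this step with ICNNs.
\rho_{k+1} \in \operatorname*{arg\,min}_{\rho} \; F(\rho) + \frac{1}{2\tau} W_2^2(\rho, \rho_k)
```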
nips_2021_cknBzDV6XvN | Who Leads and Who Follows in Strategic Classification? | As predictive models are deployed into the real world, they must increasingly contend with strategic behavior. A growing body of work on strategic classification treats this problem as a Stackelberg game: the decision-maker "leads" in the game by deploying a model, and the strategic agents "follow" by playing their best response to the deployed model. Importantly, in this framing, the burden of learning is placed solely on the decision-maker, while the agents’ best responses are implicitly treated as instantaneous. In this work, we argue that the order of play in strategic classification is fundamentally determined by the relative frequencies at which the decision-maker and the agents adapt to each other’s actions. In particular, by generalizing the standard model to allow both players to learn over time, we show that a decision-maker that makes updates faster than the agents can reverse the order of play, meaning that the agents lead and the decision-maker follows. We observe in standard learning settings that such a role reversal can be desirable for both the decision-maker and the strategic agents. Finally, we show that a decision-maker with the freedom to choose their update frequency can induce learning dynamics that converge to Stackelberg equilibria with either order of play.
| accept | While the scores appear marginal, I still recommend this paper for acceptance. The authors responded well to some points and some issues raised were found not to be much of a problem. It is clear that some work is needed on the writeup in line with the reviewers' comments (e.g., contrasting with the Ball paper), but it is very reasonable to assume this can be done before the camera ready.
While we certainly can't fault the authors for not having a high enough reference count, there are a few other things that could be expanded. For example, there is a paragraph on learning in Stackelberg games, but that literature (not necessarily restricted to the classification context) goes back to 2009 if not further, and not just for zero-sum games. This seems worth discussing. There is quite a bit of additional recent work on learning in non-zero-sum strategic classification settings, including relationships to mechanism design (which is itself a Stackelberg model), as well; see e.g. "Incentive-Aware PAC Learning" and other work cited therein. To my knowledge, none of this makes the authors' work less valuable -- if anything, it makes it more so. Finally, there is work in the economics literature on justifying Stackelberg models/outcomes through different degrees of patience between the players, e.g., "Reputation and Dynamic Stackelberg Leadership in Infinitely Repeated Games" -- it is not quite clear to me what the relationship is to update frequencies, but perhaps something interesting can be said. | train | [
"bQ6Dys0O5m",
"KboFRZ3o8iT",
"ssh6RYRf4hY",
"ObgH6kRxH_y",
"F80Sz0TOTOK",
"h66XPkOcZte",
"7K_8mx-24kT",
"9gy02E6g5e",
"y5fEqW9D4r",
"xBGQwBP7zw",
"G5jsvdTh3k",
"_Rws20T52wC",
"XxWOunBriXW",
"v-G5-BGi6oO",
"V5kMo0TxKxZ",
"JS-cS7d_2De"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies a variant of strategic classification where there is not instantaneous interaction between the learner and the agents, but rather, there is a “slower” and a “faster” player (i.e., the whole interaction is governed by the frequencies according to which the learner and the agents update their deci... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
5,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_cknBzDV6XvN",
"nips_2021_cknBzDV6XvN",
"bQ6Dys0O5m",
"JS-cS7d_2De",
"7K_8mx-24kT",
"9gy02E6g5e",
"XxWOunBriXW",
"y5fEqW9D4r",
"xBGQwBP7zw",
"v-G5-BGi6oO",
"JS-cS7d_2De",
"V5kMo0TxKxZ",
"bQ6Dys0O5m",
"KboFRZ3o8iT",
"nips_2021_cknBzDV6XvN",
"nips_2021_cknBzDV6XvN"
] |
nips_2021_DxXNxZQVcc | Unadversarial Examples: Designing Objects for Robust Vision | We study a class of computer vision settings wherein one can modify the design of the objects being recognized. We develop a framework that leverages this capability---and deep networks' unusual sensitivity to input perturbations---to design "robust objects," i.e., objects that are explicitly optimized to be confidently classified. Our framework yields improved performance on standard benchmarks, a simulated robotics environment, and physical-world experiments.
| accept | Reviewers agreed that this is a solid contribution to NeurIPS. Reviewers agreed that providing more convincing evidence related to the 3D simulation and physics experiments would have significantly strengthened the paper. | train | [
"NN73hYNByCW",
"vallwSzXipl",
"k981NRrJw0I",
"oWlq447AoUB",
"NgwgVBFhjj",
"9GmCCBgnpzb",
"-7k2o9a3ejo",
"uJ0NQZ8dbWI",
"En7V7Oe55N2",
"ki0HB_7sJi0",
"Gjl3fJNdeux",
"GY_jN07MM_I",
"Ubx6kzBdxa"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work proposes and studies two techniques to generate unadversarial images (images which cause a computer vision deep learning model to more reliably give accurate predictions). One technique is for creating unadversarial patches which can be overlaid on testing images to increase the robustness of a pre-train... | [
6,
-1,
9,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"nips_2021_DxXNxZQVcc",
"-7k2o9a3ejo",
"nips_2021_DxXNxZQVcc",
"nips_2021_DxXNxZQVcc",
"9GmCCBgnpzb",
"GY_jN07MM_I",
"Ubx6kzBdxa",
"k981NRrJw0I",
"NN73hYNByCW",
"Gjl3fJNdeux",
"nips_2021_DxXNxZQVcc",
"nips_2021_DxXNxZQVcc",
"nips_2021_DxXNxZQVcc"
] |
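The framework described in this abstract optimizes objects to be confidently classified, i.e., it is the sign-flipped counterpart of an adversarial attack. A hypothetical PyTorch-style sketch; `model`, `loader`, and the `apply_patch` helper are placeholders introduced here, not the paper's code:

```python
# Hedged sketch of the "unadversarial" idea: gradient *descent* on the
# classification loss w.r.t. a patch overlaid on the input, so the fixed
# pretrained model grows more confident in the true labels.
import torch
import torch.nn.functional as F

def train_unadversarial_patch(model, loader, apply_patch,
                              size=32, steps=100, lr=0.1):
    patch = torch.zeros(3, size, size, requires_grad=True)
    opt = torch.optim.SGD([patch], lr=lr)
    for _ in range(steps):
        images, labels = next(iter(loader))
        loss = F.cross_entropy(model(apply_patch(images, patch)), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()  # descending the loss, unlike an adversarial attack
    return patch.detach()
```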
nips_2021_rvKD3iqtBdk | Deep Jump Learning for Off-Policy Evaluation in Continuous Treatment Settings | We consider off-policy evaluation (OPE) in continuous treatment settings, such as personalized dose-finding. In OPE, one aims to estimate the mean outcome under a new treatment decision rule using historical data generated by a different decision rule. Most existing works on OPE focus on discrete treatment settings. To handle continuous treatments, we develop a novel estimation method for OPE using deep jump learning. The key ingredient of our method lies in adaptively discretizing the treatment space using deep discretization, by leveraging deep learning and multi-scale change point detection. This allows us to apply existing OPE methods in discrete treatments to handle continuous treatments. Our method is further justified by theoretical results, simulations, and a real application to Warfarin Dosing.
| accept | The reviewers appreciated the paper and agree it provides a useful new method for off-policy evaluation supported by appropriate theory and that the paper should be accepted. The authors are expected to address the points raised by reviewers in a final version as they outlined in their response, including additional discussion on tightness of results, computation, and comparisons to other work. | train | [
"zypBAGPfoe",
"bMR8MXo4JNj",
"GRO-dCzsRX",
"a4L_v36ejgM",
"AJbBBV6YSIH",
"Kmo3ka0DXN-",
"Ef7-DiGgKs",
"B8M0gACImrn",
"gwPeE6x1_K",
"OnVvtNudmat",
"5obZ_yz0Au",
"ojRMwxwuINu",
"XeLgjP3Yd1i",
"OQT0Xa-GbAU"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We want to thank you again for your thoughtful suggestions and positive comments on our paper. And we are very glad that our response has addressed your concerns. We will add the discussion of computational complexity, and make the linear time complexity (i.e., the number of training a DNN) clear in the revised p... | [
-1,
-1,
-1,
-1,
7,
-1,
6,
7,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
-1,
3,
-1,
4,
3,
-1,
-1,
-1,
-1,
-1,
3
] | [
"gwPeE6x1_K",
"5obZ_yz0Au",
"XeLgjP3Yd1i",
"Kmo3ka0DXN-",
"nips_2021_rvKD3iqtBdk",
"ojRMwxwuINu",
"nips_2021_rvKD3iqtBdk",
"nips_2021_rvKD3iqtBdk",
"OnVvtNudmat",
"B8M0gACImrn",
"Ef7-DiGgKs",
"AJbBBV6YSIH",
"OQT0Xa-GbAU",
"nips_2021_rvKD3iqtBdk"
] |
nips_2021_WVYzd7GvaOM | Attention Approximates Sparse Distributed Memory | While Attention has come to be an important mechanism in deep learning, there remains limited intuition for why it works so well. Here, we show that Transformer Attention can be closely related under certain data conditions to Kanerva's Sparse Distributed Memory (SDM), a biologically plausible associative memory model. We confirm that these conditions are satisfied in pre-trained GPT2 Transformer models. We discuss the implications of the Attention-SDM map and provide new computational and biological interpretations of Attention.
| accept | This paper shows connections between the attention function and sparse distributed memory. The insights are interesting and could be useful for those who want to understand why it works so well as well as those who want to extend it further. All reviewers generally agree that this is a good paper. Reviewer PZYv has concerns about the clarity of the paper and the significance of the insights. While I agree that the writing could be improved to make it easier to read for general readers (which the authors promised they will address in the final version), I believe that the insights from this paper are useful. The authors also promised to add more discussions based on feedback from Reviewer PZYv, as summarized in their author response. I recommend accepting the paper. | train | [
"l-hU8Fav3x",
"Lw7f3Jr_bEG",
"vr0zn-lbvhW",
"yxnbeDnsHhP",
"1e04lxbfy5",
"rrM-DkDDcr6",
"Q3ff0QFReCq",
"-lsiWGrmdr9",
"EfatnIQ4S98",
"CtAc4re8tLY",
"UKxK23X9oyG",
"fK_oizWjXLO"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their continued engagement, discussions, and improvements to the paper.\n\n* **\"I understand that under certain assumptions, learned and data-dependent keys (Transformers, etc.) can be similar to random keys (SDM), but I'm not convinced that this approximation holds well in many importa... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
8,
6,
7
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"Lw7f3Jr_bEG",
"vr0zn-lbvhW",
"1e04lxbfy5",
"nips_2021_WVYzd7GvaOM",
"rrM-DkDDcr6",
"yxnbeDnsHhP",
"fK_oizWjXLO",
"CtAc4re8tLY",
"UKxK23X9oyG",
"nips_2021_WVYzd7GvaOM",
"nips_2021_WVYzd7GvaOM",
"nips_2021_WVYzd7GvaOM"
] |
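For readers weighing the claimed map, the Transformer attention operation that the abstract relates to SDM is the standard scaled dot-product form; this equation is the usual definition, not a result of the paper:

```latex
% Standard softmax attention; the paper argues the softmax over query-key
% similarities approximates SDM's read over nearby memory addresses.
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left( \frac{Q K^{\top}}{\sqrt{d_k}} \right) V
```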
nips_2021_XiZYCewdxMQ | Augmented Shortcuts for Vision Transformers | Transformer models have achieved great progress on computer vision tasks recently. The rapid development of vision transformers is mainly driven by their high representation ability for extracting informative features from input images. However, the mainstream transformer models are designed with deep architectures, and the feature diversity will be continuously reduced as the depth increases, i.e., feature collapse. In this paper, we theoretically analyze the feature collapse phenomenon and study the relationship between shortcuts and feature diversity in these transformer models. Then, we present an augmented shortcut scheme, which inserts additional paths with learnable parameters in parallel on the original shortcuts. To save the computational costs, we further explore an efficient approach that uses the block-circulant projection to implement augmented shortcuts. Extensive experiments conducted on benchmark datasets demonstrate the effectiveness of the proposed method, which brings an accuracy increase of about 1% to state-of-the-art visual transformers without obviously increasing their parameters and FLOPs.
| accept | This paper got mixed ratings initially. All the reviewers agree with the motivation and effectiveness of the proposed method, but they also raise concerns about the novelty and technical contributions of this work. In particular, the feature collapse issue has been identified by prior work [8]. The theorem listed in the paper was not developed by this work either. The reviewers also suggested adding more ablation experiments. After the response, the authors addressed these concerns and all the reviewers gave positive ratings. The AC has read the submission, reviews, and authors' response. Although feature collapse was not first identified by this work, the proposed shortcut augmentation is simple and the performance improvement is significant. Thus, the AC agrees with the reviewers that this work would be valuable for the community and recommends acceptance. | train | [
"46yB4II4sw7",
"XK4hheqzA_0",
"rHKbnaSHYcd",
"1HIdo7jYc50",
"9dL3Tbj8FH1",
"Qp6kosHjRq",
"4Mu5Y0zOR0",
"-7eykr6fTi",
"2UzkWJJwYn",
"edJ51A34tr",
"cXG6_kmhSx"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper introduce a universal module to improve vision transformers. They show that vision transformers suffer feature collapse, which mean that features in deep layers will become similar. To address, this paper presents an augmented shortcut module to be flexibly embedded into various transformer models. Ext... | [
8,
-1,
7,
6,
6,
-1,
-1,
-1,
-1,
-1,
7
] | [
5,
-1,
5,
4,
3,
-1,
-1,
-1,
-1,
-1,
5
] | [
"nips_2021_XiZYCewdxMQ",
"-7eykr6fTi",
"nips_2021_XiZYCewdxMQ",
"nips_2021_XiZYCewdxMQ",
"nips_2021_XiZYCewdxMQ",
"1HIdo7jYc50",
"46yB4II4sw7",
"cXG6_kmhSx",
"rHKbnaSHYcd",
"9dL3Tbj8FH1",
"nips_2021_XiZYCewdxMQ"
] |
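Schematically, the augmented shortcut scheme in this abstract adds learnable parallel paths next to the identity shortcut of each block. In our own notation (which may differ from the paper's), with $T$ augmented paths $h(\cdot;\Theta_{\ell,i})$ implemented via block-circulant projections:

```latex
% Schematic augmented-shortcut block (notation ours): residual branch,
% identity shortcut, and T learnable parallel paths.
z_{\ell+1} = \mathrm{MSA}(z_{\ell}) + z_{\ell} + \sum_{i=1}^{T} h\!\left(z_{\ell}; \Theta_{\ell,i}\right)
```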
nips_2021_4hBXGTdS6Lc | Finding Regions of Heterogeneity in Decision-Making via Expected Conditional Covariance | Individuals often make different decisions when faced with the same context, due to personal preferences and background. For instance, judges may vary in their leniency towards certain drug-related offenses, and doctors may vary in their preference for how to start treatment for certain types of patients. With these examples in mind, we present an algorithm for identifying types of contexts (e.g., types of cases or patients) with high inter-decision-maker disagreement. We formalize this as a causal inference problem, seeking a region where the assignment of decision-maker has a large causal effect on the decision. Our algorithm finds such a region by maximizing an empirical objective, and we give a generalization bound for its performance. In a semi-synthetic experiment, we show that our algorithm recovers the correct region of heterogeneity accurately compared to baselines. Finally, we apply our algorithm to real-world healthcare datasets, recovering variation that aligns with existing clinical knowledge.
| accept | In this work the authors propose a method for identifying regions (types of instances) where there is a high level of heterogeneity in human decision making. Their approach is grounded in a causal inference framework that models high-heterogeneity regions as ones where the choice of decision maker ("treatment") has a large effect on the resulting decision.
This paper received a number of excellent high-quality reviews that raised important concerns and requests for clarification. The authors responded well to the major concerns raised by the different reviewers and have clearly articulated how they will revise the manuscript to improve clarity and further contextualize the work within a broader set of relevant literature. I agree with the reviewers that this paper tackles an important problem and that the work meets the bar both in terms of technical contribution and exposition. In revising the manuscript, the authors should seek to respond to as many of the reviewer suggestions as possible, with an emphasis on:
- Adding and briefly discussing the related work suggested by reviewers
- Making assumptions more tangible by giving clear examples of when they may or may not hold in practice
- Clarifying the particular type of overlap assumption upon which the work relies, and contrasting with all-agent overlap
- Clarifying why agent-specific biases do not cancel out in the aggregate objective | train | [
"Q9Pb0Gq80ME",
"I-kyBKn9Mir",
"R4vXURT9iZ",
"sc9DIpJngdw",
"i0javXnvDvG",
"X73lGJjnejt",
"1KSn21sHH-",
"_dAX7k7ND7",
"A4AdrGJcReO",
"pYY0o11tuP",
"DY0JvQZpcx_"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Summary:\n\nThis paper considers settings where decisions for individuals are each made by different decision makers (agents). The paper defines the problem of finding a region in feature space in which the agents administer highly varying decisions. For example, perhaps judges mete out different sentences for mis... | [
7,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
9
] | [
4,
5,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_4hBXGTdS6Lc",
"nips_2021_4hBXGTdS6Lc",
"A4AdrGJcReO",
"nips_2021_4hBXGTdS6Lc",
"_dAX7k7ND7",
"1KSn21sHH-",
"Q9Pb0Gq80ME",
"sc9DIpJngdw",
"DY0JvQZpcx_",
"I-kyBKn9Mir",
"nips_2021_4hBXGTdS6Lc"
] |
nips_2021_ByPR_hOE_EY | Identifying and Benchmarking Natural Out-of-Context Prediction Problems | Deep learning systems frequently fail at out-of-context (OOC) prediction, the problem of making reliable predictions on uncommon or unusual inputs or subgroups of the training distribution. To this end, a number of benchmarks for measuring OOC performance have been recently introduced. In this work, we introduce a framework unifying the literature on OOC performance measurement, and demonstrate how rich auxiliary information can be leveraged to identify candidate sets of OOC examples in existing datasets. We present NOOCh: a suite of naturally-occurring "challenge sets", and show how varying notions of context can be used to probe specific OOC failure modes. Experimentally, we explore the tradeoffs between various learning approaches on these challenge sets and demonstrate how the choices made in designing OOC benchmarks can yield varying conclusions.
| accept | Meta-review: In the context of image classification, the authors propose a method for constructing challenge sets of natural out-of-context examples from bounding-box annotations. They apply the method to COCO to construct a suite of challenge test sets, and evaluate their new benchmark task against other algorithms. All reviewers agreed on technical soundness. Most reviewers (h5rh, LYwb, wXHX) agreed that a benchmark of natural out-of-context examples is a valuable contribution. Reviewer 25HS questions the motivation given that other natural challenge sets exist, but in their rebuttal the authors differentiate by focusing on context shifts specifically. Reviewers 25HS and h5rh argue that the method is too specific to COCO (e.g. due to reliance on bounding boxes), but after rebuttal, h5rh doesn't think this detracts significantly and I tend to agree: the methodology seems secondary to the result, i.e. a potentially-useful new benchmark. On balance, I recommend acceptance. | val | [
"owDEWSuL1hx",
"ynF7hJgPDzG",
"wt1CZqSs0kU",
"BIhvSp0uHti",
"R0e-lO7yO1e",
"HmgFLrTGGiL",
"mA_mMeu8en",
"r0ttojBFTvJ",
"lHebHS5tvBC",
"H_xI9WNdKhj",
"xoNy2GCdSs0",
"OI4d-Ewj1yR",
"ctKOHiW5DDF",
"acxAtsR0Qp_",
"daBgMenZNG0"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response! We're glad the rebuttal was helpful for improving your understanding of the paper, we appreciate the feedback and plan to incorporate it heavily in future versions of the paper. ",
" Thank you for your clarifications in response to this review as well as the others in this paper, espec... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
4
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"ynF7hJgPDzG",
"r0ttojBFTvJ",
"R0e-lO7yO1e",
"nips_2021_ByPR_hOE_EY",
"lHebHS5tvBC",
"mA_mMeu8en",
"xoNy2GCdSs0",
"daBgMenZNG0",
"BIhvSp0uHti",
"acxAtsR0Qp_",
"ctKOHiW5DDF",
"nips_2021_ByPR_hOE_EY",
"nips_2021_ByPR_hOE_EY",
"nips_2021_ByPR_hOE_EY",
"nips_2021_ByPR_hOE_EY"
] |
nips_2021_eUTd06-qCrT | Label Disentanglement in Partition-based Extreme Multilabel Classification | Partition-based methods are increasingly used in extreme multi-label classification (XMC) problems due to their scalability to large output spaces (e.g., millions or more). However, existing methods partition the large label space into mutually exclusive clusters, which is sub-optimal when labels have multi-modality and rich semantics. For instance, the label “Apple” can be the fruit or the brand name, which leads to the following research question: can we disentangle these multi-modal labels with non-exclusive clustering tailored for downstream XMC tasks? In this paper, we show that the label assignment problem in partition-based XMC can be formulated as an optimization problem, with the objective of maximizing precision rates. This leads to an efficient algorithm to form flexible and overlapped label clusters, and a method that alternately optimizes the cluster assignments and the model parameters for partition-based XMC. Experimental results on synthetic and real datasets show that our method can successfully disentangle multi-modal labels, leading to state-of-the-art (SOTA) results on four XMC benchmarks.
| accept | The authors propose a simple plug-in technique to improve the performance of label-partition based extreme classification algorithms. The improvements in performance are small but consistent. The paper is clear and easy to follow. On the downside, the contribution is a bit limited and the method does incur a computational overhead, which might be significant for more efficient XC algorithms. The theoretical results in the paper are rather obvious, and under unrealistic assumptions, but they do help motivate the method. The paper is also missing a comparison on the Wikipedia 500K dataset, which is a standard dataset in extreme classification.
| train | [
"WLq081Qq6U",
"KHeCou7yVGe",
"sI2T46oOBGj",
"rwxMmMl4Xe",
"sYyvBgq01io",
"wnkqCeRgOTF",
"NgenBWc_3pw",
"N1ijP-L-rJc",
"DQZlZ3oq2yw",
"urdKglYtqQ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose a new method for partitioning labels space that can be used with a family of currently popular partition-based/label tree-based extreme multi-label classifiers, which organize labels in the form of the tree. The proposed method introduces redundancy of labels in the different label clusters, wh... | [
7,
-1,
-1,
6,
-1,
-1,
-1,
-1,
7,
4
] | [
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_eUTd06-qCrT",
"NgenBWc_3pw",
"urdKglYtqQ",
"nips_2021_eUTd06-qCrT",
"urdKglYtqQ",
"rwxMmMl4Xe",
"WLq081Qq6U",
"DQZlZ3oq2yw",
"nips_2021_eUTd06-qCrT",
"nips_2021_eUTd06-qCrT"
] |
nips_2021_wGRNAqVBQT2 | Leveraging SE(3) Equivariance for Self-supervised Category-Level Object Pose Estimation from Point Clouds | Category-level object pose estimation aims to find 6D object poses of previously unseen object instances from known categories without access to object CAD models. To reduce the huge amount of pose annotations needed for category-level learning, we propose for the first time a self-supervised learning framework to estimate category-level 6D object pose from single 3D point clouds. During training, our method assumes no ground-truth pose annotations, no CAD models, and no multi-view supervision. The key to our method is to disentangle shape and pose through an invariant shape reconstruction module and an equivariant pose estimation module, empowered by SE(3) equivariant point cloud networks. The invariant shape reconstruction module learns to perform aligned reconstructions, yielding a category-level reference frame without using any annotations. In addition, the equivariant pose estimation module achieves category-level pose estimation accuracy that is comparable to some fully supervised methods. Extensive experiments demonstrate the effectiveness of our approach on both complete and partial depth point clouds from the ModelNet40 benchmark, and on real depth point clouds from the NOCS-REAL 275 dataset. The project page with code and visualizations can be found at: dragonlong.github.io/equi-pose.
| accept |
This paper proposes a self-supervised category-level 6D pose estimation method in which canonical shapes for the input objects are predicted without explicit supervision of orientation. It introduces an equivariant network architecture and shows its importance for canonical shape estimation. Reviewers point out missing citations of previous works that consider a similar setup (yet with different neural architectures), specifically:
[1] Unsupervised Learning of Shape and Pose with Differentiable Point Clouds
[2] Multi-view Consistency as Supervisory Signal for Learning Shape and Pose Prediction
[3] Weakly-supervised 3d shape completion in the wild
and the authors are encouraged to reference these and discuss in detail how their method differs from the above. Reviewers agree on the good empirical performance of the method and the detailed ablations contained in the paper, and the paper is suggested for publication.
| train | [
"G89FvN5o--G",
"uu1jPeNeN44",
"ioJUHy7gLuD",
"551okfI8raG",
"ZwSl71U4QSY",
"KMrSjfrLUW",
"AfikWcN8_sV",
"2_Obt_zdnta",
"fYbkd2Vlk79",
"4ZaSlKnTuEp"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" The authors' reply has addressed all my concern.",
"This paper introduces a self-supervised category-level 6D pose estimation to minimize ideally canonical reconstructed point cloud and input point cloud based on rotation-equivariant features. Experiment results can verify the effectiveness on both complete and... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"ioJUHy7gLuD",
"nips_2021_wGRNAqVBQT2",
"551okfI8raG",
"ZwSl71U4QSY",
"2_Obt_zdnta",
"fYbkd2Vlk79",
"4ZaSlKnTuEp",
"uu1jPeNeN44",
"nips_2021_wGRNAqVBQT2",
"nips_2021_wGRNAqVBQT2"
] |
nips_2021_DyE8hmj2dse | A Theoretical Analysis of Fine-tuning with Linear Teachers | Fine-tuning is a common practice in deep learning, achieving excellent generalization results on downstream tasks using relatively little training data. Although widely used in practice, it is not well understood theoretically. Here we analyze the sample complexity of this scheme for regression with linear teachers in several settings. Intuitively, the success of fine-tuning depends on the similarity between the source tasks and the target task. But what is the right way of measuring this similarity? We show that the relevant measure has to do with the relation between the source task, the target task and the covariance structure of the target data. In the setting of linear regression, we show that under realistic settings there can be substantial sample complexity reduction when the above measure is low. For deep linear regression, we propose a novel result regarding the inductive bias of gradient-based training when the network is initialized with pretrained weights. Using this result we show that the similarity measure for this setting is also affected by the depth of the network. We conclude with results on shallow ReLU models, and analyze the dependence of sample complexity there on source and target tasks. We empirically demonstrate our results for both synthetic and realistic data.
| accept | This paper presents new results for fine-tuning in the linear regime (linear parameterization and linearized NN). While there is still a gap between the linear regime and practice, the reviewers and the AC believe this paper gives an important initial step towards building a comprehensive theory for fine-tuning. | test | [
"ErRqb38XlPD",
"HVdG_M38FjQ",
"T76q4sIlBvi",
"zryQTLHdYai",
"RfPpNata2QI",
"5W1fploEVRS",
"YXHeTP1yLq6",
"b6Ob0MuG6oT"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are grateful for the time and effort you put into this review, and for the helpful comments and suggestions.\n \n> Quality.1. How are the results in Section 5 impacted by weakening the 0-balancedness assumption on the weights to a $\\delta$-approximate balancedness? According to [1], initializing with small va... | [
-1,
-1,
-1,
-1,
6,
7,
4,
7
] | [
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"RfPpNata2QI",
"5W1fploEVRS",
"b6Ob0MuG6oT",
"YXHeTP1yLq6",
"nips_2021_DyE8hmj2dse",
"nips_2021_DyE8hmj2dse",
"nips_2021_DyE8hmj2dse",
"nips_2021_DyE8hmj2dse"
] |
nips_2021_ohZjthN1ncg | Overinterpretation reveals image classification model pathologies | Image classifiers are typically scored on their test set accuracy, but high accuracy can mask a subtle type of model failure. We find that high scoring convolutional neural networks (CNNs) on popular benchmarks exhibit troubling pathologies that allow them to display high accuracy even in the absence of semantically salient features. When a model provides a high-confidence decision without salient supporting input features, we say the classifier has overinterpreted its input, finding too much class-evidence in patterns that appear nonsensical to humans. Here, we demonstrate that neural networks trained on CIFAR-10 and ImageNet suffer from overinterpretation, and we find models on CIFAR-10 make confident predictions even when 95% of input images are masked and humans cannot discern salient features in the remaining pixel-subsets. We introduce Batched Gradient SIS, a new method for discovering sufficient input subsets for complex datasets, and use this method to show the sufficiency of border pixels in ImageNet for training and testing. Although these patterns portend potential model fragility in real-world deployment, they are in fact valid statistical patterns of the benchmark that alone suffice to attain high test accuracy. Unlike adversarial examples, overinterpretation relies upon unmodified image pixels. We find ensembling and input dropout can each help mitigate overinterpretation.
| accept | This work argues that deep neural networks overinterpret the input, i.e., they rely on overly small portions of the image to make a decision. This is a novel insight and I am enthusiastic that it will help the community better understand deep networks. All reviewers agree that the paper should be accepted, and I am glad to support this decision. Please remember to address all reviewers' comments in the camera-ready version. | train | [
"BEfK5nXlK9",
"5r1f1izsNH1",
"A7v4FRDbLz4",
"CEwNDhpzA2v",
"sdFc6VIV64k",
"nKdtoWNeP7",
"SxDjFVASLgn",
"k6stsV7y2AM",
"Qulel2kkbFL",
"bxt_F6cOgyz",
"IG41Lu4Oz00",
"c_HWE_Kfc4"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper identifies sparse sets of pixels which can be used with deep neural nets to classify images with high confidence. The authors demonstrate results with CIFAR-10, an augmentation corrupted version of it and with ImageNet. They show a number of experiments to obtain insights in why these spurious subsets ar... | [
8,
-1,
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
4,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_ohZjthN1ncg",
"IG41Lu4Oz00",
"nips_2021_ohZjthN1ncg",
"k6stsV7y2AM",
"nips_2021_ohZjthN1ncg",
"bxt_F6cOgyz",
"Qulel2kkbFL",
"A7v4FRDbLz4",
"c_HWE_Kfc4",
"sdFc6VIV64k",
"BEfK5nXlK9",
"nips_2021_ohZjthN1ncg"
] |
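As context for the sufficient input subsets discussed above, a generic backward-elimination search in their spirit can be sketched as follows. This is not the paper's Batched Gradient SIS; `score` is any callable returning the model's confidence for the target class, and the quadratic number of score calls is why a faster variant is needed in practice:

```python
import numpy as np

# Hedged sketch of a sufficient-input-subset style search: greedily mask the
# feature whose removal hurts confidence least, while confidence stays above
# `thresh`; the surviving indices form a small subset that still suffices.
def backward_sis(score, x, mask_val=0.0, thresh=0.9):
    x = np.array(x, dtype=float)
    flat = x.ravel()                      # view: masking flat[i] masks x
    alive = set(range(flat.size))
    while len(alive) > 1:
        best_i, best_s = None, -np.inf
        for i in alive:                   # try masking each remaining feature
            old, flat[i] = flat[i], mask_val
            s = score(x)
            flat[i] = old
            if s > best_s:
                best_i, best_s = i, s
        if best_s < thresh:
            break                         # next removal would break sufficiency
        flat[best_i] = mask_val
        alive.remove(best_i)
    return sorted(alive)
```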
nips_2021_O4TE57kehc1 | Neural Circuit Synthesis from Specification Patterns | We train hierarchical Transformers on the task of synthesizing hardware circuits directly out of high-level logical specifications in linear-time temporal logic (LTL). The LTL synthesis problem is a well-known algorithmic challenge with a long history and an annual competition is organized to track the improvement of algorithms and tooling over time. New approaches using machine learning might open a lot of possibilities in this area, but suffer from the lack of sufficient amounts of training data. In this paper, we consider a method to generate large amounts of additional training data, i.e., pairs of specifications and circuits implementing them. We ensure that this synthetic data is sufficiently close to human-written specifications by mining common patterns from the specifications used in the synthesis competitions. We show that hierarchical Transformers trained on this synthetic data solve a significant portion of problems from the synthesis competitions, and even out-of-distribution examples from a recent case study.
| accept | This paper elicited significant discussion. On the one hand, circuit synthesis is a significant and interesting application for DL, and the technical approach is reasonable. On the other hand, there were some concerns about the level of rigor in the experiments (see the reviews for more details) and about the scalability of the approach. In the end, the additional clarifications/results that the authors provided during the author feedback period persuaded the reviewers and me. I am therefore recommending acceptance. Please make sure to incorporate the new results presented in your author response into the final paper. | train | [
"-tChH4F1jx",
"K_wWA82sNiJ",
"iGfMNAHg7wS",
"bL3MkKGp9Tj",
"Hl-gzlLiWp",
"RL5wm26EClM",
"-3yCYqtZGEH",
"PIQLUYKTQWL",
"6jtdeO4ufTg",
"TjgLQ5GMRj",
"fX4Lg15R6x",
"uUGVOj0q2uv",
"nKdOzyWnd7j",
"JtlxrlfYOjz",
"E252a7697jh",
"rmOfnmWhs-v"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for adding the new results. They certainly increase my confidence in the results. \n\nRegarding more input/output variables --- I feel the paper will be stronger with a thorough discussion on this and if possible even repeating the experiments with a larger limit (say 8 or 10). It is fine even if the appro... | [
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"K_wWA82sNiJ",
"-3yCYqtZGEH",
"fX4Lg15R6x",
"uUGVOj0q2uv",
"nips_2021_O4TE57kehc1",
"6jtdeO4ufTg",
"PIQLUYKTQWL",
"nKdOzyWnd7j",
"TjgLQ5GMRj",
"rmOfnmWhs-v",
"E252a7697jh",
"Hl-gzlLiWp",
"JtlxrlfYOjz",
"nips_2021_O4TE57kehc1",
"nips_2021_O4TE57kehc1",
"nips_2021_O4TE57kehc1"
] |
nips_2021_ZRu0_3azrCd | Directional Message Passing on Molecular Graphs via Synthetic Coordinates | Graph neural networks that leverage coordinates via directional message passing have recently set the state of the art on multiple molecular property prediction tasks. However, they rely on atom position information that is often unavailable, and obtaining it is usually prohibitively expensive or even impossible. In this paper we propose synthetic coordinates that enable the use of advanced GNNs without requiring the true molecular configuration. We propose two distances as synthetic coordinates: Distance bounds that specify the rough range of molecular configurations, and graph-based distances using a symmetric variant of personalized PageRank. To leverage both distance and angular information we propose a method of transforming normal graph neural networks into directional MPNNs. We show that with this transformation we can reduce the error of a normal graph neural network by 55% on the ZINC benchmark. We furthermore set the state of the art on ZINC and coordinate-free QM9 by incorporating synthetic coordinates in the SMP and DimeNet++ models. Our implementation is available online.
| accept | The ratings were 3, 6, 4, 6.
The reviewer who gave the 3 wrote "it is simply meaningless to predict a property of the 3d structure independent of the 3d structure", however the authors gave a detailed response that suggests there might have been some confusion, and the reviewer didn't follow up. Moreover, this approach effectively uses carefully chosen inductive biases, which seems to go beyond "some inefficiency in the GNNs compared to was exploited" as the reviewer suggests.
For the reviewer who gave the 4, since the authors provide some new results in response, the score probably should be elevated.
While this is still a borderline paper, I recommend "accept" given that the results are pretty good, there doesn't seem to be any technical problem, and there is clear novelty. | train | [
"HJU8v_iONJh",
"nj27_2zjbuN",
"aWepqTo5ZXp",
"QMkM4XG-Gzk",
"iZ9DC1sJ0l5",
"D26jtnSuGsR",
"9onNONXYL9A",
"ot4UMHkQ1C",
"mXr_rHEVJt"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We suspect that your preliminary review is largely based on an **unfortunate misunderstanding**. We are aware of the mentioned issues and discuss them extensively in the paper:\n1. We discuss the issue associated with the ambiguity of conformers at length in Section 3. We also discuss the best accuracy we can exp... | [
-1,
-1,
-1,
-1,
-1,
3,
6,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
5,
3,
4,
4
] | [
"D26jtnSuGsR",
"nips_2021_ZRu0_3azrCd",
"mXr_rHEVJt",
"ot4UMHkQ1C",
"9onNONXYL9A",
"nips_2021_ZRu0_3azrCd",
"nips_2021_ZRu0_3azrCd",
"nips_2021_ZRu0_3azrCd",
"nips_2021_ZRu0_3azrCd"
] |
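The "symmetric variant of personalized PageRank" used as a synthetic coordinate can be pictured with a generic dense sketch. The symmetrization by averaging and the reciprocal mapping from scores to distances below are our assumptions for illustration, not the paper's definitions:

```python
import numpy as np

# Generic dense personalized PageRank on a small (molecular-sized) graph,
# with teleport probability alpha; symmetrized by averaging, and higher
# PPR score treated as "closer". All choices here are illustrative.
def sym_ppr(adj, alpha=0.15):
    n = adj.shape[0]
    deg = adj.sum(axis=1, keepdims=True)
    trans = adj / np.maximum(deg, 1e-12)               # row-stochastic walk
    ppr = alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * trans)
    return 0.5 * (ppr + ppr.T)                         # symmetric variant

def ppr_distance(adj, alpha=0.15, eps=1e-12):
    return 1.0 / np.maximum(sym_ppr(adj, alpha), eps)  # score -> distance
```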
nips_2021_YCqx6zhEzRp | Federated Multi-Task Learning under a Mixture of Distributions | The increasing size of data generated by smartphones and IoT devices motivated the development of Federated Learning (FL), a framework for on-device collaborative training of machine learning models. First efforts in FL focused on learning a single global model with good average performance across clients, but the global model may be arbitrarily bad for a given client, due to the inherent heterogeneity of local data distributions. Federated multi-task learning (MTL) approaches can learn personalized models by formulating an opportune penalized optimization problem. The penalization term can capture complex relations among personalized models, but eschews clear statistical assumptions about local data distributions. In this work, we propose to study federated MTL under the flexible assumption that each local data distribution is a mixture of unknown underlying distributions. This assumption encompasses most of the existing personalized FL approaches and leads to federated EM-like algorithms for both client-server and fully decentralized settings. Moreover, it provides a principled way to serve personalized models to clients not seen at training time. The algorithms' convergence is analyzed through a novel federated surrogate optimization framework, which can be of general interest. Experimental results on FL benchmarks show that our approach provides models with higher accuracy and fairness than state-of-the-art methods.
| accept | The reviewers are generally in favor of accepting the paper for its algorithmic and theoretical contributions on federated multi-task learning. Based on that, I recommend acceptance. However, please make sure to incorporate the reviewers' comments and the rebuttal into the final version. | test | [
"Y2pkLbfXSuL",
"x1eZzZKQPjO",
"atk98QiOmXc",
"t5VwvqlzuRo",
"9zz8oUUsKQC",
"3Cc_5kn4L5C",
"RD8qd81ir3",
"jBU-YXuPevP",
"JtFWfdKhPZ",
"lGuEUgFIzt",
"3N9fonVEBJX",
"D6WP50MJd9X",
"PjsN-zLa2H",
"sDOddXbDpFn",
"5kUhEKQxVVm",
"KJurSs2DpcQ",
"QpY8aSPGL4l"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We implemented MOCHA in Python following the official implementation written in MATLAB; the corresponding code will be made publicly available. \n\nWe tuned the parameter $\\lambda$ of MOCHA on a holdout validation set via grid search in $\\\\{10^{1}, 10^{0}, 10^{-1}, 10^{-2}, 10^{-3}\\\\}$, and we found that the... | [
-1,
-1,
-1,
7,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4
] | [
-1,
-1,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"x1eZzZKQPjO",
"5kUhEKQxVVm",
"lGuEUgFIzt",
"nips_2021_YCqx6zhEzRp",
"jBU-YXuPevP",
"JtFWfdKhPZ",
"nips_2021_YCqx6zhEzRp",
"PjsN-zLa2H",
"sDOddXbDpFn",
"nips_2021_YCqx6zhEzRp",
"RD8qd81ir3",
"QpY8aSPGL4l",
"RD8qd81ir3",
"t5VwvqlzuRo",
"KJurSs2DpcQ",
"nips_2021_YCqx6zhEzRp",
"nips_202... |
nips_2021_LoUdcqLuPej | Learning Generative Vision Transformer with Energy-Based Latent Space for Saliency Prediction | Vision transformer networks have shown superiority in many computer vision tasks. In this paper, we take a step further by proposing a novel generative vision transformer with latent variables following an informative energy-based prior for salient object detection. Both the vision transformer network and the energy-based prior model are jointly trained via Markov chain Monte Carlo-based maximum likelihood estimation, in which the sampling from the intractable posterior and prior distributions of the latent variables is performed by Langevin dynamics. Further, with the generative vision transformer, we can easily obtain a pixel-wise uncertainty map from an image, which indicates the model confidence in predicting saliency from the image. Different from the existing generative models which define the prior distribution of the latent variables as a simple isotropic Gaussian distribution, our model uses an energy-based informative prior which can be more expressive in capturing the latent space of the data. We apply the proposed framework to both RGB and RGB-D salient object detection tasks. Extensive experimental results show that our framework can achieve not only accurate saliency predictions but also meaningful uncertainty maps that are consistent with the human perception.
| accept | This paper describes a novel transformer-based latent variable model for salient object detection. Reviewers were generally positive on the approach and results, and the work was deemed to be technically sound. The reviewers and authors had a productive discussion which we hope will lead to some clarifications that will enhance the paper. One reviewer provided a score below the threshold for acceptance, but this reviewer appears to have had some confusion about the nature of the method and did not reengage with the authors when they sought to clarify; as such, this review was down-weighted in the overall evaluation. All in all, this is an interesting method producing state-of-the-art results, with solid theoretical grounding. | train | [
"YzFmuPdLvFH",
"jjgWYyByRlR",
"4Mxv138_c83",
"D1y4_wCs0Zm",
"2v8KSUDwq6s",
"l4H8-MiSEWE",
"dnC9WmuwRl",
"83EbGk5LVp",
"20n_kL6X5me",
"AWva9NVlFaS",
"Y92CeSwfeFJ",
"T2OmCPQN32E",
"TBT96vTHdV",
"t4LYp_2xKqf",
"3AIV9v7Tws5",
"snXcCoETsLP",
"d5TggMDv-4",
"8mHnm-wtVd",
"Z7fQWGVZQpj",
... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" My questions and concerns are solved based on the author's detailed reply. Thus i re-scored this paper.",
"The author construct a generative model for salient object detection in the form of top-down conditional latent variable model.\nA generative vision transformer is used by adding latent variables into the ... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"2v8KSUDwq6s",
"nips_2021_LoUdcqLuPej",
"2v8KSUDwq6s",
"snXcCoETsLP",
"l4H8-MiSEWE",
"AWva9NVlFaS",
"Z7fQWGVZQpj",
"Y92CeSwfeFJ",
"TBT96vTHdV",
"Y92CeSwfeFJ",
"3AIV9v7Tws5",
"nips_2021_LoUdcqLuPej",
"MC1ud1wnUk6",
"nips_2021_LoUdcqLuPej",
"jjgWYyByRlR",
"d5TggMDv-4",
"xzb2u2WmxV",
... |
nips_2021_8v4Sev9pXv | Regularization in ResNet with Stochastic Depth | Regularization plays a major role in modern deep learning. From classic techniques such as L1, L2 penalties to other noise-based methods such as Dropout, regularization often yields better generalization properties by avoiding overfitting. Recently, Stochastic Depth (SD) has emerged as an alternative regularization technique for residual neural networks (ResNets) and has proven to boost the performance of ResNet on many tasks [Huang et al., 2016]. Despite the recent success of SD, little is known about this technique from a theoretical perspective. This paper provides a hybrid analysis combining perturbation analysis and signal propagation to shed light on different regularization effects of SD. Our analysis allows us to derive principled guidelines for choosing the survival rates used for training with SD.
| accept | This paper gives a theoretical analysis of the stochastic depth (SD) technique for ResNet. It is shown that SD mitigates the gradient explosion phenomenon. Moreover, it is shown that SD behaves like a Gaussian noise injection to the features in the internal layers and thus it works as a regularization. Finally, the authors proposed a method called SenseMode that determines the mode under a small computational budget limitation.
The analysis given in this paper is interesting. It sheds light on what SD is essentially doing. The proposed SenseMode is also useful.
On the other hand, as the reviewers pointed out, there are some issues in its writing, i.e., typos, confusing notations and unclear descriptions. In particular, the definition of the linear mapping $\varphi$ in the definition of SenseMode should be explicitly given in the final version. I guess everybody would be confused at this point.
In summary, this paper has sufficient novelty and gives good insight into SD, which is beneficial to the community. I think this paper can be accepted at NeurIPS.
"klNFCXY3gFQ",
"3NPQamGQZoW",
"p6TCkJHQ4Dm",
"uxFkTLgo2BH",
"RiGzgNLctR",
"4cs9aIBMJOD",
"wjAPh-xks8b",
"JZJ7goztsT0",
"2cwt_LMgLP",
"34ufxAhaaoV",
"VZqhG-cZiwd",
"313xAYOVxh2"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper focuses on understanding the regularization effect of stochastic depth in residual neural networks. The authors give 3 main results: \n\n(a) Stochastic depth helps to mitigate (to some extent) the gradient explosion phenomena observed at initialization in the Vanilla ResNet model, \n\n(b) Stochastic dept... | [
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_8v4Sev9pXv",
"JZJ7goztsT0",
"nips_2021_8v4Sev9pXv",
"p6TCkJHQ4Dm",
"klNFCXY3gFQ",
"wjAPh-xks8b",
"RiGzgNLctR",
"p6TCkJHQ4Dm",
"313xAYOVxh2",
"VZqhG-cZiwd",
"nips_2021_8v4Sev9pXv",
"nips_2021_8v4Sev9pXv"
] |
nips_2021_6Ab68Ip4Mu | ResT: An Efficient Transformer for Visual Recognition | This paper presents an efficient multi-scale vision Transformer, called ResT, that capably serves as a general-purpose backbone for image recognition. Unlike existing Transformer methods, which employ standard Transformer blocks to tackle raw images with a fixed resolution, our ResT has several advantages: (1) A memory-efficient multi-head self-attention is built, which compresses the memory by a simple depth-wise convolution and projects the interaction across the attention-heads dimension while keeping the diversity of the multiple heads; (2) Positional encoding is constructed as spatial attention, which is more flexible and can handle input images of arbitrary size without interpolation or fine-tuning; (3) Instead of straightforward tokenization at the beginning of each stage, we design the patch embedding as a stack of overlapping convolution operations with stride on the token map. We comprehensively validate ResT on image classification and downstream tasks. Experimental results show that the proposed ResT can outperform recent state-of-the-art backbones by a large margin, demonstrating the potential of ResT as a strong backbone. The code and models will be made publicly available at https://github.com/wofmanaf/ResT.
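To illustrate the memory-efficient attention the abstract describes, here is a hedged PyTorch sketch in which keys and values are spatially compressed by a strided depth-wise convolution before attention. Module and argument names are assumptions, and the abstract's head-interaction projection is omitted; this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class EfficientMHSA(nn.Module):
    """Sketch: attention with depth-wise-conv downsampled keys/values."""
    def __init__(self, dim: int, heads: int = 8, sr_ratio: int = 2):
        super().__init__()
        self.heads, self.scale = heads, (dim // heads) ** -0.5
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        # Depth-wise conv with stride compresses the (H, W) token map,
        # shrinking the attention matrix from N x N to N x (N / sr^2).
        self.sr = nn.Conv2d(dim, dim, kernel_size=sr_ratio,
                            stride=sr_ratio, groups=dim)

    def forward(self, x, H, W):
        B, N, C = x.shape                     # N = H * W tokens
        q = self.q(x).view(B, N, self.heads, C // self.heads).transpose(1, 2)
        xr = self.sr(x.transpose(1, 2).reshape(B, C, H, W))
        xr = xr.flatten(2).transpose(1, 2)    # B x N' x C with N' < N
        k, v = self.kv(xr).chunk(2, dim=-1)
        k = k.view(B, -1, self.heads, C // self.heads).transpose(1, 2)
        v = v.view(B, -1, self.heads, C // self.heads).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        return (attn @ v).transpose(1, 2).reshape(B, N, C)
```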
| accept | Three reviewers recommend borderline acceptance; one reviewer recommends borderline rejection. Most remaining concerns stem from a perceived lack of novelty relative to a very rapidly moving body of (partially concurrent) work on vision transformers. However, the AC agrees with the majority of reviewers that there is enough merit in the presented work to justify acceptance. | train | [
"s-8q33FFEp",
"_PM6M6Wcf1D",
"2cSPkLkLZdW",
"nHdhn0n6-9",
"5tCqqMZa4Z-",
"JODJ13I5BgZ",
"lJjX_BDaUQB",
"ZBfhonR7_Dz",
"5YPJhuQNKOj",
"N81J6zpgfm2",
"Fz_Nqfv2TPX",
"PfU8Kfrqu73",
"_9hOLXoToD"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your follow-up rebuttal. As you show here, with the main setting, even though the relative order still holds but the gain becomes much smaller (0.68% -> 0.25%, DWconv vs. Max.). ",
" We greatly appreciate your precious feedback for our research. The mathematically defined of GL will be included in th... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"2cSPkLkLZdW",
"JODJ13I5BgZ",
"5tCqqMZa4Z-",
"nips_2021_6Ab68Ip4Mu",
"ZBfhonR7_Dz",
"N81J6zpgfm2",
"PfU8Kfrqu73",
"nHdhn0n6-9",
"_9hOLXoToD",
"Fz_Nqfv2TPX",
"nips_2021_6Ab68Ip4Mu",
"nips_2021_6Ab68Ip4Mu",
"nips_2021_6Ab68Ip4Mu"
] |
nips_2021_2j3B_YkC8r | Adversarial Examples for k-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams | Chawin Sitawarin, Evgenios Kornaropoulos, Dawn Song, David Wagner | accept | This paper studies algorithms to find adversarial examples for k-NN classifiers in a white-box fashion. The reviewers have generally agreed that the work is novel, and that the experiments demonstrate an improvement over prior work. However, some comments have not been addressed in the paper, and there are many parts of the discussion that should inform improvements to the next version of the paper. For example, when designing new attack algorithms, it is important to evaluate against good defense algorithms, such as the one from Yang et al., which appears to be missing. Similarly, it is important to provide reproducible results and error bars, especially when the improvement over existing work is small. Overall, I believe the merits outweigh the shortcomings, and that this paper provides a number of contributions to the literature around k-NN adversarial examples. I weakly recommend acceptance of this paper. | train | [
"u2jahJeegrr",
"kvqqcR5s-M",
"cqZxQ48b8QA",
"zCaLt-vzRwf",
"KZouAlleyZf",
"zK0A5LZKlAA",
"1mbEyfDRJsh",
"kKucnRurerZ",
"StfZl9HgJMP",
"22NuuvBrFaM",
"ZmJYFkY_gWp",
"SwAKLgN10h",
"H865PcuM84N",
"6dHf6QfaCwK",
"pSy4r93auY",
"nTKWNQywImV",
"3OrR185-Mz"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The tables below display the mean adversarial distance and the runtime with error bars (10 repeated runs with different train-test data shufflings to compute confidence intervals) for $k=3$. We try to “tune” hyperparameters of the baselines so that their runtimes are approximately less than one magnitude away fro... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"kvqqcR5s-M",
"1mbEyfDRJsh",
"kKucnRurerZ",
"zK0A5LZKlAA",
"nips_2021_2j3B_YkC8r",
"H865PcuM84N",
"StfZl9HgJMP",
"ZmJYFkY_gWp",
"3OrR185-Mz",
"3OrR185-Mz",
"nTKWNQywImV",
"pSy4r93auY",
"KZouAlleyZf",
"nips_2021_2j3B_YkC8r",
"nips_2021_2j3B_YkC8r",
"nips_2021_2j3B_YkC8r",
"nips_2021_2... |
nips_2021_srHp6A1c2z- | Adversarially Robust 3D Point Cloud Recognition Using Self-Supervisions | 3D point cloud data is increasingly used in safety-critical applications such as autonomous driving. Thus, the robustness of 3D deep learning models against adversarial attacks becomes a major consideration. In this paper, we systematically study the impact of various self-supervised learning proxy tasks on different architectures and threat models for 3D point clouds with adversarial training. Specifically, we study MLP-based (PointNet), convolution-based (DGCNN), and transformer-based (PCT) 3D architectures. Through extensive experimentation, we demonstrate that appropriate applications of self-supervision can significantly enhance the robustness in 3D point cloud recognition, achieving considerable improvements compared to the standard adversarial training baseline. Our analysis reveals that local feature learning is desirable for adversarial robustness in point clouds since it limits the adversarial propagation between the point-level input perturbations and the model's final output. This insight also explains the success of DGCNN and the jigsaw proxy task in achieving stronger 3D adversarial robustness.
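As background for the adversarial-training baseline the abstract refers to, below is a generic L-infinity PGD inner loop on point coordinates. `model` is an assumed point-cloud classifier and the hyperparameters are illustrative; in the paper's setup the resulting adversarial batch would be trained on jointly with the self-supervised proxy losses.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, points, labels, eps=0.05, alpha=0.01, steps=7):
    """Standard L-infinity PGD on point coordinates (generic sketch)."""
    adv = points.detach() + torch.empty_like(points).uniform_(-eps, eps)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = points + (adv - points).clamp(-eps, eps)  # project to eps-ball
    return adv.detach()
```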
| accept | After some discussion, four reviewers, all of them quite thorough, recommend accepting this paper.
I see nothing that would cause me to want to contradict this unanimous decision, and so I am recommending acceptance. | train | [
"SXqk1f6341g",
"masBgvTM-tC",
"gV1Xdck8gju",
"jKrddjF_11G",
"m5nXGme3CrI",
"IQ91q7AosOl",
"lQyBZRwOUcq",
"lznxjMReBR",
"EYtFUbbCkxC",
"EwZszwKTQsr",
"6exiejkWegd",
"M_roRMzqct",
"CEtA6qSuNp",
"lgA4Q54pdD",
"ztA9WTn5unq",
"0XjeBZtp3-o",
"z0ebp00fmNx",
"ENKIkuTmlRw",
"ziRV4SALWd3",... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_r... | [
"This paper presents a self-supervision based method for adversarially robust 3D point cloud recognition, in which three types of self-supervised tasks are dealt with to help learn 3D point features: 3D rotation, 3D jigsaw and autoencoder. Empirical results demonstrate the effectiveness of the proposed method. Besi... | [
7,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
5,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_srHp6A1c2z-",
"CEtA6qSuNp",
"EwZszwKTQsr",
"IQ91q7AosOl",
"nips_2021_srHp6A1c2z-",
"ztA9WTn5unq",
"lznxjMReBR",
"z0ebp00fmNx",
"EwZszwKTQsr",
"6exiejkWegd",
"M_roRMzqct",
"SXqk1f6341g",
"lgA4Q54pdD",
"A622l2KAQ30",
"0XjeBZtp3-o",
"m5nXGme3CrI",
"ENKIkuTmlRw",
"ziRV4SALWd... |
nips_2021_no-Jsrx9ytl | Tuning Mixed Input Hyperparameters on the Fly for Efficient Population Based AutoRL | Despite a series of recent successes in reinforcement learning (RL), many RL algorithms remain sensitive to hyperparameters. As such, there has recently been interest in the field of AutoRL, which seeks to automate design decisions to create more general algorithms. Recent work suggests that population based approaches may be effective AutoRL algorithms, by learning hyperparameter schedules on the fly. In particular, the PB2 algorithm is able to achieve strong performance in RL tasks by formulating online hyperparameter optimization as a time-varying GP-bandit problem, while also providing theoretical guarantees. However, PB2 is only designed to work for \emph{continuous} hyperparameters, which severely limits its utility in practice. In this paper we introduce a new (provably) efficient hierarchical approach for optimizing \emph{both continuous and categorical} variables, using a new time-varying bandit algorithm specifically designed for the population based training regime. We evaluate our approach on the challenging Procgen benchmark, where we show that explicitly modelling dependence between data augmentation and other hyperparameters improves generalization.
| accept | There was extensive discussion between the authors and reviewers, and amongst the reviewers privately. Many concerns were raised, and largely addressed. This is particularly the case with the number of runs and statistical significance, which is a concern that has largely been allayed.
The primary remaining concern is simply the simplicity of the BO (Bayesian optimization) results. An expert in BO assessed the BO theory to be relatively solid, but not particularly exciting within the BO literature. The claims about novelty for addressing both categorical and continuous variables are a bit overstated, considering there are several works that already address this problem in BO (and that could be used here). This work extends an existing algorithm, developed for the RL setting, that only works for continuous hyperparameters, so the contribution is placed relative to that. But the limitation of this previous approach does not imply that these existing BO algorithms could not have already been applied to RL to solve this issue of mixed hyperparameters. This claim on the technical novelty should be clarified, especially given that there appear to be more effective BO algorithms for mixed hyperparameters.
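For readers unfamiliar with bandit-based selection of categorical hyperparameters, a minimal EXP3 sketch follows. The paper's time-varying, population-based algorithm is considerably more elaborate; all names and constants here are assumptions.

```python
import numpy as np

class Exp3:
    """Generic EXP3 bandit for picking a categorical hyperparameter
    (e.g., an augmentation type) each round; basic layer only."""
    def __init__(self, n_arms: int, gamma: float = 0.1):
        self.n, self.gamma = n_arms, gamma
        self.w = np.ones(n_arms)

    def probs(self) -> np.ndarray:
        # Mixture of exponential weights and uniform exploration.
        return (1 - self.gamma) * self.w / self.w.sum() + self.gamma / self.n

    def select(self, rng: np.random.Generator) -> int:
        return int(rng.choice(self.n, p=self.probs()))

    def update(self, arm: int, reward: float) -> None:
        # Importance-weighted reward keeps the estimate unbiased;
        # rewards are assumed normalized to [0, 1].
        p = self.probs()[arm]
        self.w[arm] *= np.exp(self.gamma * reward / (self.n * p))
```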
Additionally, the work begins by highlighting the importance of making RL more usable in practice. But the work focuses on the setting where you can have populations of agents learning in parallel. The primary setting where it is feasible to train many agents in parallel is in simulation. Many real-world RL settings are restricted to a single stream of experience. This contrasts with AutoML, which is typically done on a dataset, and so many hyperparameters can be tested. In RL, it is often not feasible to test many hyperparameters, except (a) in experiments to better understand our algorithms (not a deployment setting) and (b) when learning in simulators (a useful but relatively restricted setting). The paper should better place the problem setting for which you are developing AutoRL algorithms. This statement may seem to be more generally about AutoRL (where many papers do not do this); but we are here to discuss and improve this paper, and clarity of problem setting is critical.
| train | [
"Fu3SwyngwJt",
"q6MLoHv9aWW",
"Vbb8_7embk6",
"QU9im6HMvcX",
"QiHQtJbLI8j",
"RXTx6bOoJRd",
"HOy5Q0VCBGA",
"8SaftCXY7Tn",
"HK9H0oMRZm",
"nD17GL7bEf",
"v6IsRS35zWr",
"KSGsDwWi9i",
"c6rHUZ5wuOs",
"NPT71fhrH_4",
"ZgEd-dSf2Rs",
"wR8lT0FJbjh",
"F7LEMwVNuSj",
"-NyqDUKHMYa"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thanks for introducing this paper. Based on this final comment, I improved my score to 6.",
"This paper extends the existing AutoRL algorithm, PB2, that was specifically designed to work with continuous hyperparameters by enabling it to efficiently search over both continuous and categorical hyperparameters of ... | [
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
-1,
-1,
-1,
2,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"Vbb8_7embk6",
"nips_2021_no-Jsrx9ytl",
"QU9im6HMvcX",
"HOy5Q0VCBGA",
"ZgEd-dSf2Rs",
"nips_2021_no-Jsrx9ytl",
"wR8lT0FJbjh",
"HK9H0oMRZm",
"v6IsRS35zWr",
"nips_2021_no-Jsrx9ytl",
"KSGsDwWi9i",
"NPT71fhrH_4",
"nips_2021_no-Jsrx9ytl",
"nD17GL7bEf",
"RXTx6bOoJRd",
"q6MLoHv9aWW",
"-NyqDU... |
nips_2021_K5YKjaMjbja | Neural Algorithmic Reasoners are Implicit Planners | Implicit planning has emerged as an elegant technique for combining learned models of the world with end-to-end model-free reinforcement learning. We study the class of implicit planners inspired by value iteration, an algorithm that is guaranteed to yield perfect policies in fully-specified tabular environments. We find that prior approaches either assume that the environment is provided in such a tabular form---which is highly restrictive---or infer "local neighbourhoods" of states to run value iteration over---for which we discover an algorithmic bottleneck effect. This effect is caused by explicitly running the planning algorithm based on scalar predictions in every state, which can be harmful to data efficiency if such scalars are improperly predicted. We propose eXecuted Latent Value Iteration Networks (XLVINs), which alleviate the above limitations. Our method performs all planning computations in a high-dimensional latent space, breaking the algorithmic bottleneck. It maintains alignment with value iteration by carefully leveraging neural graph-algorithmic reasoning and contrastive self-supervised learning. Across seven low-data settings---including classical control, navigation and Atari---XLVINs provide significant improvements to data efficiency against value iteration-based implicit planners, as well as relevant model-free baselines. Lastly, we empirically verify that XLVINs can closely align with value iteration.
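Since this line of work is anchored on value iteration, a compact tabular reference implementation may help fix ideas; this is the classical algorithm that VIN-style implicit planners emulate, not the paper's latent-space variant, and the array layout is an assumption.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Classical tabular value iteration. P has shape (A, S, S) with
    transition probabilities, R has shape (S, A) with rewards."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * np.einsum('asn,n->sa', P, V)   # Bellman backup
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=1)             # values, greedy policy
        V = V_new
```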
| accept | The paper proposes eXecuted Latent Value Iteration Networks (XLVINs) to relax the assumptions of Value Iteration Networks. While most neural architectures in deep RL are not well motivated, this paper builds on an important line of work that treats value representations as implicit planners. This is very nice work that generalizes prior approaches. Well done! | train | [
"ImXNvDCmx6U",
"lAty1NisEA9",
"WR2i-wIp_x5",
"1KmXbaDOv0u",
"96BO5nyUCd0",
"dFEpfvawuP8",
"Im3FVU_WywC",
"tE2ruHcitKs",
"E9gYYCK1RN9",
"1fANSC1l3md",
"z7vF6mxCCg8",
"7_FdYG0iLzM",
"_J0KL-Z_3K"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank all the reviewers for their careful consideration of our work, their insightful comments, and their kind remarks on our responses. \n\nAs mentioned, we are very happy to revise the paper incorporating all of the discussion points raised!",
" Thank you for the acknowledgement and the addit... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
7
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3
] | [
"nips_2021_K5YKjaMjbja",
"96BO5nyUCd0",
"Im3FVU_WywC",
"nips_2021_K5YKjaMjbja",
"E9gYYCK1RN9",
"1fANSC1l3md",
"z7vF6mxCCg8",
"1KmXbaDOv0u",
"_J0KL-Z_3K",
"7_FdYG0iLzM",
"nips_2021_K5YKjaMjbja",
"nips_2021_K5YKjaMjbja",
"nips_2021_K5YKjaMjbja"
] |
nips_2021_0HW7A5YZjq7 | Self-Supervised Learning with Kernel Dependence Maximization | We approach self-supervised learning of image representations from a statistical dependence perspective, proposing Self-Supervised Learning with the Hilbert-Schmidt Independence Criterion (SSL-HSIC). SSL-HSIC maximizes dependence between representations of transformations of an image and the image identity, while minimizing the kernelized variance of those representations. This framework yields a new understanding of InfoNCE, a variational lower bound on the mutual information (MI) between different transformations. While the MI itself is known to have pathologies which can result in learning meaningless representations, its bound is much better behaved: we show that it implicitly approximates SSL-HSIC (with a slightly different regularizer). Our approach also gives us insight into BYOL, a negative-free SSL method, since SSL-HSIC similarly learns local neighborhoods of samples. SSL-HSIC allows us to directly optimize statistical dependence in time linear in the batch size, without restrictive data assumptions or indirect mutual information estimators. Trained with or without a target network, SSL-HSIC matches the current state-of-the-art for standard linear evaluation on ImageNet, semi-supervised learning and transfer to other classification and vision tasks such as semantic segmentation, depth estimation and object recognition.
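For concreteness, a small NumPy sketch of the (biased) empirical HSIC estimator underlying the objective is given below. SSL-HSIC itself uses a scalable estimator that runs in time linear in the batch size, so this is only the textbook quantity, with assumed RBF kernels and bandwidths.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def hsic_biased(X, Y, sigma_x=1.0, sigma_y=1.0):
    """Biased empirical HSIC: tr(K H L H) / (n - 1)^2,
    where H = I - 11^T / n is the centering matrix."""
    n = X.shape[0]
    K, L = rbf_kernel(X, sigma_x), rbf_kernel(Y, sigma_y)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```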
| accept | This paper studies the HSIC loss for self-supervised learning. While there was significant enthusiasm about this paper from the reviewers, there were also major concerns surrounding the conclusions of the experiments and questions about what to take away from the results. The authors do provide some good clarifications in their general response regarding the fact that they don’t expect that HSIC should beat other methods and that this is not the point of the paper. However, the exact conclusions are still a bit muddy in the text.
An important point: there is a high degree of similarity between the objective underlying SSL-HSIC and that of Barlow Twins. This must be clarified in the paper. Currently, the description is rather vague and there are no explicit ablations to compare or understand the differences between the two.
| train | [
"zzk98cpOzB5",
"Pl7JpNltDc4",
"fGmvZI-w2DZ",
"39ESRH-OhjQ",
"3oYUNGFDLHA",
"VidF8Bd6j4i",
"nGW5lVBZHjT",
"2nbtZDBAWy1",
"0DlLqtlXZcc",
"CTFTsxCBQFD",
"lzUqLVjXREw",
"xOjajEmCApv",
"AtG-phnmeMY",
"O1OlO75L_dI",
"drTgvAwbXdJ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposed a new Self-Supervised Learning framework with the Hilbert Schmidt Independence Criterion(SSL-HSIC). SSL-HSIC yields a new understanding of InfoNCE, while can optimize statistical dependence in time linear in the batch size. This paper is well written with detailed proofs and theoretical resul... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"nips_2021_0HW7A5YZjq7",
"fGmvZI-w2DZ",
"3oYUNGFDLHA",
"VidF8Bd6j4i",
"0DlLqtlXZcc",
"nips_2021_0HW7A5YZjq7",
"nips_2021_0HW7A5YZjq7",
"drTgvAwbXdJ",
"AtG-phnmeMY",
"O1OlO75L_dI",
"zzk98cpOzB5",
"nips_2021_0HW7A5YZjq7",
"nips_2021_0HW7A5YZjq7",
"nips_2021_0HW7A5YZjq7",
"nips_2021_0HW7A5Y... |
nips_2021_JW2nIBL2tzN | CROCS: Clustering and Retrieval of Cardiac Signals Based on Patient Disease Class, Sex, and Age | The process of manually searching for relevant instances in, and extracting information from, clinical databases underpin a multitude of clinical tasks. Such tasks include disease diagnosis, clinical trial recruitment, and continuing medical education. This manual search-and-extract process, however, has been hampered by the growth of large-scale clinical databases and the increased prevalence of unlabelled instances. To address this challenge, we propose a supervised contrastive learning framework, CROCS, where representations of cardiac signals associated with a set of patient-specific attributes (e.g., disease class, sex, age) are attracted to learnable embeddings entitled clinical prototypes. We exploit such prototypes for both the clustering and retrieval of unlabelled cardiac signals based on multiple patient attributes. We show that CROCS outperforms the state-of-the-art method, DTC, when clustering and also retrieves relevant cardiac signals from a large database. We also show that clinical prototypes adopt a semantically meaningful arrangement based on patient attributes and thus confer a high degree of interpretability.
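A hedged sketch of the prototype-attraction idea described in the abstract: embeddings are pulled toward learnable attribute prototypes through a softmax over cosine similarities. Function and argument names are assumptions; in a CROCS-style setup one such term per attribute (disease class, sex, age bin) would be summed.

```python
import torch
import torch.nn.functional as F

def prototype_loss(z, prototypes, attr_idx, tau=0.1):
    """Attract signal embeddings z (B x d) to prototypes (P x d);
    attr_idx gives each sample's target prototype index."""
    z = F.normalize(z, dim=-1)
    p = F.normalize(prototypes, dim=-1)
    logits = z @ p.t() / tau          # B x P similarity logits
    return F.cross_entropy(logits, attr_idx)
```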
| accept | This is a well-written paper which presents a framework for information retrieval and clustering of patients based on unannotated ECG/physiological data. In discussion, there were mixed reviews with regard to novelty, experimental evaluation and baseline comparisons to related work. However, after evaluating the paper, the discussion and the reviewers' comments, I agree that the proposed approach presents a supervised technique for learning a compact representation which respects the attributes (or weak labelling) available. Such a representation could be used for other tasks for which access to a large database of (strongly) annotated data is quite difficult. This is a non-trivial task in the healthcare domain and highlights an important application of ML for real-world impact, which is potentially generalisable to other data settings. Although there were concerns around over-fitting this model to a very stringent problem, the novelty of the approach and problem formulation from an ML perspective far outweighs any potential downsides. As both the authors and one of the reviewers highlight, this work may have a positive impact on a multitude of stakeholders within healthcare and could inform researchers working on fairness and thus better serve patients from underrepresented backgrounds. | train | [
"9g2mInckqCR",
"k8iy9k_Obm",
"0pb7U1KhlSV",
"G6WnS9Q796k",
"SlVXDC2swzl",
"OMbCvF0Uit",
"Yo-F5sAtwcw",
"Rm-mrEERXK6",
"vlqFstsTJiO",
"9JQm58nJ4lH",
"xF7ZsL110el",
"n8HXTAiO-e"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Paper presents for retrieving cardiac signal data corresponding to a patient described by a set (3) of attributes (Sex, age, disease type). This is built on a model constructed by clustering such cardiac signals representations and learning their corresponding patient attributes.\n\tAlthough well written, the pap... | [
5,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_JW2nIBL2tzN",
"nips_2021_JW2nIBL2tzN",
"Rm-mrEERXK6",
"SlVXDC2swzl",
"nips_2021_JW2nIBL2tzN",
"xF7ZsL110el",
"OMbCvF0Uit",
"n8HXTAiO-e",
"k8iy9k_Obm",
"9g2mInckqCR",
"nips_2021_JW2nIBL2tzN",
"nips_2021_JW2nIBL2tzN"
] |
nips_2021_U8xJty6RHH | Representing Hyperbolic Space Accurately using Multi-Component Floats | Hyperbolic space is particularly useful for embedding data with hierarchical structure; however, representing hyperbolic space with ordinary floating-point numbers greatly affects the performance due to its \emph{ineluctable} numerical errors. Simply increasing the precision of floats fails to solve the problem and incurs a high computation cost for simulating greater-than-double-precision floats on hardware such as GPUs, which does not support them. In this paper, we propose a simple, feasible-on-GPUs, and easy-to-understand solution for numerically accurate learning on hyperbolic space. We do this with a new approach to represent hyperbolic space using multi-component floating-point (MCF) in the Poincar{\'e} upper-half space model. Theoretically and experimentally we show our model has small numerical error, and on embedding tasks across various datasets, models represented by multi-component floating-points gain more capacity and run significantly faster on GPUs than prior work.
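To fix ideas about multi-component floats, below is Knuth's TwoSum, the standard error-free primitive from which double-double (and general MCF) arithmetic is assembled: a value is kept as an unevaluated sum of components, and rounding errors are propagated into lower-order components. This is generic background, not the paper's GPU kernels.

```python
def two_sum(a: float, b: float):
    """Error-free transformation: returns (s, e) with s = fl(a + b)
    and a + b = s + e exactly in floating point."""
    s = a + b
    bv = s - a            # the part of b actually absorbed into s
    av = s - bv           # the part of a actually absorbed into s
    return s, (a - av) + (b - bv)

# Example: two_sum(1.0, 1e-17) returns (1.0, 1e-17), recovering the
# low-order bits that a single double-precision addition would lose.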
| accept | The paper proposes a new approach to numerically represent hyperbolic space with high precision using multi-component floats. Insufficient precision is known to cause problems for hyperbolic representation learning and this paper proposes an interesting method to alleviate this issue. The contributions are somewhat incremental as there exists closely related prior work (also in terms of the organization of the paper) on improving numerical accuracy for hyperbolic embeddings by Yu and De Sa (2019). However, an important advantage of the proposed method is that it allows for easy GPU training, which significantly expands its applicability compared to prior work. Reviewers also highlighted the sound mathematical analysis and empirical evaluation in the manuscript. After the rebuttal, all reviewers therefore supported acceptance of the paper. When preparing the camera-ready version of the manuscript, please take the feedback and comments from the reviewers into account to improve the paper. | train | [
"yKMAf6eM58o",
"vrZ_U06ABVS",
"4Fr_O_pG0J",
"91zceTxooa3",
"H2lLN_HoEgc",
"nhdih9BkMJ9"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"To solve the 'NaN' problem in hyperbolic embeddings, the paper proposes to learn the hyperbolic embeddings in the Poincare upper-half space model using multi-component floating-point (MCF). Theoretically, the paper proves that numerical errors can be reduced to any degree by simply increasing the number of compone... | [
6,
-1,
-1,
-1,
7,
7
] | [
3,
-1,
-1,
-1,
3,
2
] | [
"nips_2021_U8xJty6RHH",
"yKMAf6eM58o",
"nhdih9BkMJ9",
"H2lLN_HoEgc",
"nips_2021_U8xJty6RHH",
"nips_2021_U8xJty6RHH"
] |
nips_2021_cDPFOsj2G6B | Dimensionality Reduction for Wasserstein Barycenter | Zachary Izzo, Sandeep Silwal, Samson Zhou | accept | All the reviewers mentioned that it is an interesting paper, which is clearly written, with simple-to-understand yet nontrivial theoretical results. The numerical simulations are a bit weak, but setting this aside, all the reviewers are supportive of acceptance. As a side note, a paper which is related to the proposed method and is worth discussing is “B. Muzellec, MC, Subspace Detours: Building Transport Plans that are Optimal on Subspace Projections, NeurIPS 2019.” | train | [
"cl5E3r1hq1Y",
"aEVvM3lD1ao",
"J_jUQxqdAN0",
"bDYdvL7RXL8",
"V-YoBInUzYX",
"Gm3aeccMQ8g",
"TRgkB2sjc3I",
"4k4p5ER7dYR",
"Go-1xhZBuQp",
"6KGrbE-mIp_",
"OplDVyThAh3",
"5aZMg1NJsjr",
"RL8rM5mtPkC",
"3LDzgoqheo"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the authors for their answers to all the reviews, and we confirm our score.",
" Thank you, authors, for your response. I have read your responses, as well as other reviews. I understand that the focus of the paper is on theory, but given that the demonstrated empirical improvement is only shown on two ... | [
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"TRgkB2sjc3I",
"Go-1xhZBuQp",
"TRgkB2sjc3I",
"OplDVyThAh3",
"nips_2021_cDPFOsj2G6B",
"4k4p5ER7dYR",
"5aZMg1NJsjr",
"3LDzgoqheo",
"RL8rM5mtPkC",
"nips_2021_cDPFOsj2G6B",
"V-YoBInUzYX",
"nips_2021_cDPFOsj2G6B",
"nips_2021_cDPFOsj2G6B",
"nips_2021_cDPFOsj2G6B"
] |
nips_2021_BfcE_TDjaG6 | Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception | Adversarial examples are often cited by neuroscientists and machine learning researchers as an example of how computational models diverge from biological sensory systems. Recent work has proposed adding biologically-inspired components to visual neural networks as a way to improve their adversarial robustness. One surprisingly effective component for reducing adversarial vulnerability is response stochasticity, like that exhibited by biological neurons. Here, using recently developed geometrical techniques from computational neuroscience, we investigate how adversarial perturbations influence the internal representations of standard, adversarially trained, and biologically-inspired stochastic networks. We find distinct geometric signatures for each type of network, revealing different mechanisms for achieving robust representations. Next, we generalize these results to the auditory domain, showing that neural stochasticity also makes auditory models more robust to adversarial perturbations. Geometric analysis of the stochastic networks reveals overlap between representations of clean and adversarially perturbed stimuli, and quantitatively demonstrates that competing geometric effects of stochasticity mediate a tradeoff between adversarial and clean performance. Our results shed light on the strategies of robust perception utilized by adversarially trained and stochastic networks, and help explain how stochasticity may be beneficial to machine and biological computation.
| accept | This is an excellent paper with good consensus and engagement among the reviewers in favor of acceptance. It derives novel insights into mechanisms of robust perception, and will likely inspire researchers in the fields of adversarial robustness, biologically-inspired neural networks, and computational neuroscience, all important and active sub-fields of the NeurIPS community. | train | [
"KbxKam7vQSL",
"9BC3Fh1LYZ4",
"u4bJIAkp1p",
"McpsZADnF9",
"FFVLh0tMb7h",
"fy7oCOP9ZKo",
"ahzwS1Yn9vx",
"sjuejD00gaa",
"h3J-fyE1p9",
"bWC7fYIreQm",
"WRSAp61QfwC",
"Sf7wjd-jTih",
"gmyxZaQ1FHn"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper compares networks that are adversarially trained with networks that have biologically inspired stochasticity in the first layer, including previous visual object-recognition networks and a new auditory network. The analysis is based on manifolds of classes and exemplars in representation space, and consi... | [
7,
7,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
3,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_BfcE_TDjaG6",
"nips_2021_BfcE_TDjaG6",
"fy7oCOP9ZKo",
"nips_2021_BfcE_TDjaG6",
"h3J-fyE1p9",
"sjuejD00gaa",
"WRSAp61QfwC",
"bWC7fYIreQm",
"McpsZADnF9",
"9BC3Fh1LYZ4",
"KbxKam7vQSL",
"gmyxZaQ1FHn",
"nips_2021_BfcE_TDjaG6"
] |
nips_2021_2RgFZHCrI0l | Unsupervised Learning of Compositional Energy Concepts | Humans are able to rapidly understand scenes by utilizing concepts extracted from prior experience. Such concepts are diverse, and include global scene descriptors, such as the weather or lighting, as well as local scene descriptors, such as the color or size of a particular object. So far, unsupervised discovery of concepts has focused on either modeling the global scene-level or the local object-level factors of variation, but not both. In this work, we propose COMET, which discovers and represents concepts as separate energy functions, enabling us to represent both global concepts as well as objects under a unified framework. COMET discovers energy functions through recomposing the input image, which we find captures independent factors without additional supervision. Sample generation in COMET is formulated as an optimization process on underlying energy functions, enabling us to generate images with permuted and composed concepts. Finally, discovered visual concepts in COMET generalize well, enabling us to compose concepts between separate modalities of images as well as with other concepts discovered by a separate instance of COMET trained on a different dataset. Code and data available at https://energy-based-model.github.io/comet/.
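A minimal sketch of the generation procedure the abstract describes, where sampling is optimization on summed per-concept energies, so composing concepts amounts to summing their energy functions. The `energies` callables are assumptions; real COMET uses learned energy networks with a tuned optimization schedule.

```python
import torch

def compose_and_generate(energies, x0, steps=60, step_size=0.05):
    """Generate an image by gradient descent on a sum of per-concept
    energy functions E_k(x) -> scalar (illustrative sketch only)."""
    x = x0.detach().clone()
    for _ in range(steps):
        x.requires_grad_(True)
        total = sum(E(x) for E in energies)       # composition = sum
        grad = torch.autograd.grad(total.sum(), x)[0]
        x = (x - step_size * grad).detach()
    return x
```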
| accept | There is a reviewer consensus that the paper is just above the bar. I concur with that. | train | [
"yRUJeVEbWSb",
"C3PfDbqprHG",
"xDpeJcBeph-",
"-ONx6fXw8AB",
"Uo8b50qdF4x",
"sIcV9ZaBZ3",
"xmiKqvUoPrk",
"5W6FIVg5efY"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper proposes a novel unsupervised learning method called COMET that discovers global and local factors of variation in image datasets by representing these factors as energy functions. The experiments show that the proposed approach performs better than beta-VAE in terms of disentanglement evaluation. Qualit... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_2RgFZHCrI0l",
"nips_2021_2RgFZHCrI0l",
"nips_2021_2RgFZHCrI0l",
"nips_2021_2RgFZHCrI0l",
"5W6FIVg5efY",
"C3PfDbqprHG",
"yRUJeVEbWSb",
"nips_2021_2RgFZHCrI0l"
] |
nips_2021_bhCEPHQR0hB | Nearly Horizon-Free Offline Reinforcement Learning | Tongzheng Ren, Jialian Li, Bo Dai, Simon S. Du, Sujay Sanghavi | accept | According to the reviews, which are all in favor of accepting the paper (although most of them only slightly), this is a solid contribution for which I recommend acceptance. The points raised in the reviews and discussed in the rebuttal (e.g. concerning motivation and some of the debated technical details) shall be taken into account when preparing the final version. | test | [
"_zMBleuSpyL",
"YA_gtvkGt94",
"Mnemlh9nIeB",
"8ZvLlsqcpR0",
"4KtCkj40dv6",
"LbTMtlDyhlY",
"QMJ9B6F4txn",
"_V4NwuZ3wEt",
"HpQxy7-AAv0",
"NegkdlMQI-",
"z9AqbbzMIxv",
"FwirHNY3sil",
"DYoV0aOAu-w",
"DZNzJSk7-jO",
"o4Fqi9zOlu3",
"geaY7CJp2d8",
"PhqHVQaG04",
"Z-vnYe4SIf",
"BG9Fdrb229u"... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"... | [
" We would like to thank the reviewer for going through over our responses. We address the new concern below.\n\n* We are afraid we don't quite understand the issue raised by the reviewer, so we would like to discuss based on our understanding of the reviewer's question.\n + *If the data collection policy was un... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
9,
6
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"FwirHNY3sil",
"z9AqbbzMIxv",
"4KtCkj40dv6",
"nips_2021_bhCEPHQR0hB",
"PhqHVQaG04",
"NegkdlMQI-",
"_V4NwuZ3wEt",
"_zMBleuSpyL",
"nips_2021_bhCEPHQR0hB",
"Z-vnYe4SIf",
"Z-vnYe4SIf",
"BG9Fdrb229u",
"DZNzJSk7-jO",
"o4Fqi9zOlu3",
"geaY7CJp2d8",
"0ggwapK2_8U",
"8ZvLlsqcpR0",
"HpQxy7-AAv... |
nips_2021_70eD741FHyI | Combinatorial Optimization for Panoptic Segmentation: A Fully Differentiable Approach | We propose a fully differentiable architecture for simultaneous semantic and instance segmentation (a.k.a. panoptic segmentation) consisting of a convolutional neural network and an asymmetric multiway cut problem solver. The latter solves a combinatorial optimization problem that elegantly incorporates semantic and boundary predictions to produce a panoptic labeling. Our formulation allows us to directly maximize a smooth surrogate of the panoptic quality metric by backpropagating the gradient through the optimization problem. Experimental evaluation shows improvement by backpropagating through the optimization problem w.r.t. comparable approaches on the Cityscapes and COCO datasets. Overall, our approach of combinatorial optimization for panoptic segmentation (COPS) shows the utility of using optimization in tandem with deep learning in a challenging large-scale real-world problem and showcases benefits and insights into training such an architecture.
| accept | The authors propose a method for simultaneous semantic and instance segmentation which can be optimized end-to-end. In particular, the authors make use of predictions obtained by solving the asymmetric multiway-cut problem, coupled with a surrogate loss whose gradient is estimated using a perturbation-based approach. Promising empirical results were presented on the COCO and Cityscapes datasets.
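To illustrate the perturbation-based gradient estimation mentioned above, here is a generic Monte-Carlo sketch in the spirit of perturbed optimizers (e.g., Berthet et al.): the loss gradient is pushed through a black-box combinatorial solver by Gaussian perturbation of its input costs. Names and the exact estimator form are assumptions, not the paper's method.

```python
import numpy as np

def perturbed_grad(solver, theta, dL_dy, sigma=0.5, n_samples=16, rng=None):
    """Estimate dL/dtheta for L(y*(theta)), using the smoothed solution
    E[y*(theta + sigma Z)] whose Jacobian is E[y z^T] / sigma.
    theta, dL_dy and the solver output are flat arrays of equal size."""
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(theta)
    for _ in range(n_samples):
        z = rng.standard_normal(theta.shape)
        y = solver(theta + sigma * z)          # discrete solver call
        g += z * float(y @ dL_dy) / sigma      # z * (y^T dL/dy) / sigma
    return g / n_samples
```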
Reviewers appreciated the clarity of exposition and felt that the proposed approach was a rather elegant solution, notwithstanding the fact that the empirical results were not state-of-the-art. The rebuttals did a great job addressing most concerns, but scalability and practicality remain one of the key drawbacks. Nevertheless, the reviewers felt that the contribution is significant enough to merit acceptance. I agree and I would like to see the authors include a detailed limitations section to highlight the remaining issues. | val | [
"qYIPi_aiziz",
"pWYpGXKCjrm",
"npZmxD0YCRK",
"6LW_5cfbu68",
"zvywNf2s2u",
"kmSiAk6CyrS",
"J_EyvOUuus",
"_ZN19_k-ESW",
"ts_tITPOM4P",
"CPoOrI7-eo",
"HkUSMeCn5vj",
"0XDDo6cYYgg",
"OhE_mw4Pip"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Sorry for the last-minute response.\n\nThank you to the authors for clarifying some points of the paper I thought were missing or found confusing. Generally, the further details in the rebuttal make sense to me. I continue to recommend acceptance.",
"This paper focuses on the problem of panoptic segmentation. T... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"ts_tITPOM4P",
"nips_2021_70eD741FHyI",
"_ZN19_k-ESW",
"J_EyvOUuus",
"kmSiAk6CyrS",
"OhE_mw4Pip",
"0XDDo6cYYgg",
"pWYpGXKCjrm",
"HkUSMeCn5vj",
"nips_2021_70eD741FHyI",
"nips_2021_70eD741FHyI",
"nips_2021_70eD741FHyI",
"nips_2021_70eD741FHyI"
] |
nips_2021_jgze2dDL9y8 | Reinforcement Learning with State Observation Costs in Action-Contingent Noiselessly Observable Markov Decision Processes | Many real-world problems that require making optimal sequences of decisions under uncertainty involve costs when the agent wishes to obtain information about its environment. We design and analyze algorithms for reinforcement learning (RL) in Action-Contingent Noiselessly Observable MDPs (ACNO-MDPs), a special class of POMDPs in which the agent can choose to either (1) fully observe the state at a cost and then act; or (2) act without any immediate observation information, relying on past observations to infer the underlying state. ACNO-MDPs arise frequently in important real-world application domains like healthcare, in which clinicians must balance the value of information gleaned from medical tests (e.g., blood-based biomarkers) with the costs of gathering that information (e.g., the costs of labor and materials required to administer such tests). We develop a PAC RL algorithm for tabular ACNO-MDPs that provides substantially tighter bounds, compared to generic POMDP-RL algorithms, on the total number of episodes exhibiting worse than near-optimal performance. For continuous-state, continuous-action ACNO-MDPs, we propose a novel method of incorporating observation information that, when coupled with modern RL algorithms, yields significantly faster learning compared to other POMDP-RL algorithms in several simulated environments.
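A toy sketch of the ACNO-MDP interaction protocol described in the abstract: at each step the agent chooses an environment action plus a binary observe flag, and observing reveals the true state at a cost. The gym-style `step` signature and class name are assumptions.

```python
class ACNOWrapper:
    """Wrap an MDP so that observations are optional and cost c."""
    def __init__(self, env, obs_cost: float):
        self.env, self.c = env, obs_cost

    def step(self, action, observe: bool):
        state, reward, done, info = self.env.step(action)
        if observe:
            # Fully observe the state, paying the observation cost.
            return state, reward - self.c, done, info
        # Act blindly: no observation is returned this step.
        return None, reward, done, info
```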
| accept | The paper proposes a new framework for partially observable RL where agents can pay an additional price to fully observe the environment. This is a useful paradigm for several application domains. The paper makes contributions at both the algorithmic and theoretical levels. Well done! | train | [
"aNb9I0Aba0R",
"AYuYXv0TAAM",
"qeEknMIy9B",
"nF_7VKOTffe",
"KrOgBxahsc",
"BYUsgzHZNP",
"hsZfQxWQeKR",
"cdZbBOlPKFz",
"FuNIR-TT7OY"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper presents a framework to handle a class of problems where an agent, operating in an uncertain and partially observable environment, makes decisions that attempt to maximize a discounted finite-horizon reward. The framework assumes that the agent has some ability to fully infer the state of the system thr... | [
6,
7,
6,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
3,
3,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_jgze2dDL9y8",
"nips_2021_jgze2dDL9y8",
"nips_2021_jgze2dDL9y8",
"cdZbBOlPKFz",
"AYuYXv0TAAM",
"aNb9I0Aba0R",
"qeEknMIy9B",
"FuNIR-TT7OY",
"nips_2021_jgze2dDL9y8"
] |
nips_2021_vuFJO_W85VU | Iterative Amortized Policy Optimization | Policy networks are a central feature of deep reinforcement learning (RL) algorithms for continuous control, enabling the estimation and sampling of high-value actions. From the variational inference perspective on RL, policy networks, when used with entropy or KL regularization, are a form of amortized optimization, optimizing network parameters rather than the policy distributions directly. However, direct amortized mappings can yield suboptimal policy estimates and restricted distributions, limiting performance and exploration. Given this perspective, we consider the more flexible class of iterative amortized optimizers. We demonstrate that the resulting technique, iterative amortized policy optimization, yields performance improvements over direct amortization on benchmark continuous control tasks.
| accept | All the reviewers think that introducing advanced variational inference to make the policy more flexible, by exploiting the connection between RL and variational inference, is interesting.
Although I recommend acceptance for this submission based on the reviewers' feedback, there are several issues that should be discussed more extensively:
- EBM closed form for the policy parametrization: in fact, with entropy regularization, the policy will lie in an energy-based model family. Therefore, natural competitors are advanced sampling algorithms, e.g., Langevin, HMC, SVGD [1], Wasserstein flow [2], etc., based on the learned EBM; a minimal Langevin sketch follows this list.
- Performance tradeoff in computation and sample complexity vs. flexibility: as the more complicated parametrization is introduced, the computational cost and sample complexity will increase. This should be carefully discussed, as almost all reviewers pointed out. From the empirical study, it seems the flexible policy family does not provide significant benefits, which diminishes the significance of this paper.
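As referenced above, a minimal unadjusted Langevin sketch for sampling actions from an energy-based policy $\pi(a \mid s) \propto \exp(-E(a))$; `energy` is an assumed callable mapping a batch of actions to per-sample energies, and the step sizes are illustrative.

```python
import torch

def langevin_sample(energy, a0, steps=50, step_size=0.01):
    """Unadjusted Langevin dynamics: a <- a - eta * grad E(a) + noise,
    with noise scale sqrt(2 * eta); a generic EBM sampler sketch."""
    a = a0.detach().clone()
    for _ in range(steps):
        a.requires_grad_(True)
        grad = torch.autograd.grad(energy(a).sum(), a)[0]
        a = (a.detach() - step_size * grad
             + (2 * step_size) ** 0.5 * torch.randn_like(a))
    return a.detach()
```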
In sum, I think this paper is interesting and could be better if the authors carefully address these questions.
[1] Haarnoja, Tuomas, Haoran Tang, Pieter Abbeel, and Sergey Levine. "Reinforcement learning with deep energy-based policies." In International Conference on Machine Learning, pp. 1352-1361. PMLR, 2017.
[2] Zhang, Ruiyi, Changyou Chen, Chunyuan Li, and Lawrence Carin. "Policy optimization as wasserstein gradient flows." In International Conference on Machine Learning, pp. 5737-5746. PMLR, 2018. | train | [
"RAicCjP08tG",
"X0NXaq2af8I",
"eKc-Saz_0t0",
"KlMS7DR8ZI-",
"F1CiEOGA7-S",
"qX4uPo4ByVV",
"65v7oyiXRbt",
"4Vfdb53YyZD",
"5faDgw3kUQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I want to thank the authors for addressing my concerns. I am satisfied with the authors' response and I am keeping my original score. Good work!",
" Thank you for your review. We are glad that you found the paper novel, thorough, and technically sound. To address your questions:\n\nWe will include an additional... | [
-1,
-1,
-1,
-1,
-1,
6,
8,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
2,
3,
4,
4
] | [
"KlMS7DR8ZI-",
"5faDgw3kUQ",
"4Vfdb53YyZD",
"65v7oyiXRbt",
"qX4uPo4ByVV",
"nips_2021_vuFJO_W85VU",
"nips_2021_vuFJO_W85VU",
"nips_2021_vuFJO_W85VU",
"nips_2021_vuFJO_W85VU"
] |
nips_2021_QRBvLayFXI | Revisiting the Calibration of Modern Neural Networks | Accurate estimation of predictive uncertainty (model calibration) is essential for the safe application of neural networks. Many instances of miscalibration in modern neural networks have been reported, suggesting a trend that newer, more accurate models produce poorly calibrated predictions. Here, we revisit this question for recent state-of-the-art image classification models. We systematically relate model calibration and accuracy, and find that the most recent models, notably those not using convolutions, are among the best calibrated. Trends observed in prior model generations, such as decay of calibration with distribution shift or model size, are less pronounced in recent architectures. We also show that model size and amount of pretraining do not fully explain these differences, suggesting that architecture is a major determinant of calibration properties.
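Since the paper's central metric is the expected calibration error, a compact reference implementation of top-label ECE follows; equal-width confidence bins are assumed here, and binning details vary across papers.

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=15):
    """Top-label ECE: bin predictions by confidence and average the
    |accuracy - confidence| gap, weighted by bin mass. `conf` holds
    max-softmax confidences, `correct` is a boolean array."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece
```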
| accept | This paper studies the question of calibration in neural network image classifiers. Prior and widely cited empirical work in this area showed that state-of-the-art models (at the time) could exhibit quite poor calibration properties. This paper empirically demonstrates that these same issues appear to be much less of a concern with recent, high-performing deep architectures including the MLP-Mixer and Vision Transformers in both the in-distribution setting and the out-of-distribution setting. The primary novelty in this work comes from re-visiting the question of calibration in light of recent advances in model architectures. The empirical results have strong practical significance as they indicate that past findings about calibration in convolutional models appear not to apply to recent deep architectures. The experiments conducted and metrics chosen are sound. Following the author response and discussion, the consensus is that the paper should be accepted. In revising the manuscript, the authors should include the additional results described during the discussion and take the reviewers' suggestions on the partitioning of material between the main body of the paper and the supplemental material into account. | train | [
"jh0tiS_HvHZ",
"5TpS_dMIpAE",
"l2jBiJcMHB",
"M8smBCZWktk",
"JFylLveH7a_",
"GOKXFt2MZv",
"g268vdCu7b"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper provides a systematic benchmark of recent deep classifiers in terms of the level of calibration.\n\nOn the calibration side, this paper focus on the definition of top-label (argmax, confidence) calibration. As a result, the main evaluation metrics are based on the top-label (confidence) ECE. The authors... | [
7,
8,
-1,
-1,
-1,
-1,
6
] | [
4,
4,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_QRBvLayFXI",
"nips_2021_QRBvLayFXI",
"M8smBCZWktk",
"5TpS_dMIpAE",
"g268vdCu7b",
"jh0tiS_HvHZ",
"nips_2021_QRBvLayFXI"
] |
nips_2021_026hEw26i3- | The decomposition of the higher-order homology embedding constructed from the $k$-Laplacian | Yu-Chia Chen, Marina Meila | accept | This paper investigates the reconstruction of a connected sum of (prime) manifolds from finite samples. It exploits the connection between the null space of the combinatorial Laplacian and the homology of a manifold, in order to recover the homological basis corresponding to the prime manifolds. Theoretical results are proven as to when the reconstruction will be successful. The method is applied to the shortest homologous cycle problem, and results are shown on various datasets.
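For background on the object at the heart of the method, the k-th combinatorial Hodge Laplacian can be assembled from boundary matrices as sketched below; `Bk` and `Bk1` are assumed boundary matrices of a simplicial complex built on the samples.

```python
import numpy as np

def hodge_laplacian(Bk, Bk1):
    """k-th combinatorial Hodge Laplacian from boundary matrices:
    L_k = B_k^T B_k + B_{k+1} B_{k+1}^T. By discrete Hodge theory,
    dim null(L_k) equals the k-th Betti number, which is what the
    homology embedding exploits."""
    return Bk.T @ Bk + Bk1 @ Bk1.T

# The harmonic (homology) embedding is spanned by the near-zero
# eigenvectors, e.g.: w, V = np.linalg.eigh(hodge_laplacian(B1, B2))
```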
All reviewers highly rated the paper. They considered it original, well written, providing strong theoretical results, and delivering inspiring connections between topological data analysis and spectral analysis. | train | [
"DdqP1gPazNg",
"mignCLXl_d",
"fULl03yxtOE",
"70zK2xD-BD",
"q4JGjyNtRmv",
"f-Agh3bbi61",
"bmw4gkQzxN",
"K2SUcpU7_Bv",
"NcxHDcjUU1_",
"V2mBVRBxgFE",
"u9wMifDOBQ6",
"xTWrGgyd_g5",
"Nf2URndFhtD",
"aRXoaxzZYYZ",
"f2uj83qvgqS"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to first thank you for your response!\n\nWe created a new environment and installed all the packages; now, we can reproduce the bug. This bug comes from the fact that `dat.scx.triangles` is an empty array (you can verify it by `print(dat.scx.triangles.shape)`, it will show shape = `(0,)`). After som... | [
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
9,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"fULl03yxtOE",
"70zK2xD-BD",
"u9wMifDOBQ6",
"xTWrGgyd_g5",
"bmw4gkQzxN",
"nips_2021_026hEw26i3-",
"K2SUcpU7_Bv",
"f-Agh3bbi61",
"nips_2021_026hEw26i3-",
"Nf2URndFhtD",
"f2uj83qvgqS",
"aRXoaxzZYYZ",
"nips_2021_026hEw26i3-",
"nips_2021_026hEw26i3-",
"nips_2021_026hEw26i3-"
] |
nips_2021_fHfdquSc2kT | Breaking the Moments Condition Barrier: No-Regret Algorithm for Bandits with Super Heavy-Tailed Payoffs | Han Zhong, Jiayi Huang, Lin Yang, Liwei Wang | accept | Dear authors,
There was an intense discussion and still some disagreement between the reviewers.
Hence I decided to take a look at the paper.
The article requires serious polishing as there are a lot of typos (both in the text and in the math).
The considered idea is interesting.
When applied to the bandit scenario, one difficulty when working with means of medians instead of means is the possible inversion of the order of arms; that is, we may solve the wrong problem, since we are no longer targeting an arm with the highest mean (when this exists).
Now, I appreciate that the authors mention the symmetry assumption, as it indeed seems crucial to avoid such an issue.
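For concreteness, a generic mean-of-medians estimator is sketched below; the paper's estimator and analysis differ in their details, but under the symmetry assumption discussed here the group medians concentrate (and preserve the order of arms) even when raw moments do not exist. All names are assumptions.

```python
import numpy as np

def mean_of_medians(samples, group_size=5, rng=None):
    """Shuffle the samples, split them into groups, take each group's
    median, and average the medians (generic sketch)."""
    rng = rng or np.random.default_rng(0)
    x = rng.permutation(np.asarray(samples, dtype=float))
    n = (len(x) // group_size) * group_size
    groups = x[:n].reshape(-1, group_size)
    return np.median(groups, axis=1).mean()
```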
Actually, I realize that the proof of Theorem 4.1 does not mention this important fact: when doing the reduction, it is not enough that the problem reduces to the regret of an algorithm facing sub-Gaussian arms; the order of the arms should also be the same.
Luckily, this seems to be true. There also seems to be an important missing step on l.252, going from applying Theorem 3.2 to some property of the noise. I believe the statement should be about the mean of medians first.
[Note that on l.151, when you say that for alpha>1, we can relax the symmetry assumption to only considering centered noise, I am not sure this is correct, and suggest you remove this claim.]
The final result shows a reduction that, in principle, could be arbitrarily bad (if $\\tilde n$ is too close to $T$, for instance).
Now, Corollary 4.5 shows that in some situations the final regret is controlled. What can be criticised is that the choice of $\\tilde n$ should depend on $T$, hence making the algorithm no longer anytime.
All in all, despite these points, I believe the authors have done decent work and that the paper can be accepted.
However, I strongly encourage the authors to polish the wording and maths of the article.
| train | [
"AO-bOpImjsX",
"x1jMCKnO9iS",
"hG8l3CWDTqR",
"egkL4FpnkyY",
"EEZm2f_mjR9",
"JcjcIVtN6Xw",
"oe740ZurYDh",
"Q58kZxI2Oe6",
"-gYE_03XU6R",
"IBoq29iIe-",
"b2L8knZuyA",
"eyGKSSo69g5",
"PugF9p2f9M",
"cI92m2zycMI",
"5En63r6vE-c",
"HlsOxpgCD7h",
"FWUM0hXb7zm",
"YVu1s7Y5NIo",
"hNfy-_qvWt6"... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
... | [
" Thanks for your patience and valuable advice during the discussion. We will incorporate your suggestions in the next revision of our paper. ",
"This paper addresses Multi-Armed Bandit (MAB) problems when rewards are \"super heavy-tailed\", meaning when the mean (first order moment) might not exist. Recall that ... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"hG8l3CWDTqR",
"nips_2021_fHfdquSc2kT",
"egkL4FpnkyY",
"EEZm2f_mjR9",
"JcjcIVtN6Xw",
"oe740ZurYDh",
"HlsOxpgCD7h",
"-gYE_03XU6R",
"b2L8knZuyA",
"nips_2021_fHfdquSc2kT",
"eyGKSSo69g5",
"hNfy-_qvWt6",
"x1jMCKnO9iS",
"5En63r6vE-c",
"YVu1s7Y5NIo",
"x1jMCKnO9iS",
"VJCuwUHil9P",
"bFJZHu_... |
nips_2021_bo75bBsQzeZ | A nonparametric method for gradual change problems with statistical guarantees | We consider the detection and localization of gradual changes in the distribution of a sequence of time-ordered observations. Existing literature focuses mostly on the simpler abrupt setting which assumes a discontinuity jump in distribution, and is unrealistic for some applied settings. We propose a general method for detecting and localizing gradual changes that does not require any specific data generating model, any particular data type, or any prior knowledge about which features of the distribution are subject to change. Despite relaxed assumptions, the proposed method possesses proven theoretical guarantees for both detection and localization.
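The paper's statistic is tailored to gradual changes, but a plain kernel two-sample quantity such as the squared MMD below conveys the flavor of nonparametric, kernel-based change detection (compare windows before and after a candidate location). Kernel and bandwidth choices are assumptions.

```python
import numpy as np

def mmd2(X, Y, sigma=1.0):
    """Biased squared MMD with an RBF kernel between samples X (n x d)
    and Y (m x d); large values suggest a distributional change."""
    def k(A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```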
| accept | The authors propose a general method for detecting and localizing gradual changes without a specific data model, data type, or prior knowledge of the features. Despite these relaxations, they provide a new method with guarantees, which they demonstrate empirically. Overall, this is great scholarly work, from its writing to its empirical demonstrations, and the work should be accepted. | train | [
"VbRP9yQ6qC",
"ELNdNrFme0t",
"6wjPpZBMEaG",
"UmhxdtsYZkf",
"Zrrs64JuqrI",
"TNxFYcVVgUp",
"7nF5z1lTx7S",
"E9PXuGQ_wHR",
"TF8xn4sRdw4",
"Li-HwI7H_NM",
"DoaOmftbWv9",
"UJ65gxA4YKM",
"8IHML13hiRV"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a nonparametric, kernel-based approach for detecting and localizing gradual changes in time-series data, provides theoretical guarantees on its performance, and validates its efficacy via numerical experiments with both synthetic and real data. I think this work makes solid contributions to th... | [
7,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
7,
7,
5
] | [
2,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"nips_2021_bo75bBsQzeZ",
"8IHML13hiRV",
"E9PXuGQ_wHR",
"Zrrs64JuqrI",
"nips_2021_bo75bBsQzeZ",
"8IHML13hiRV",
"VbRP9yQ6qC",
"UJ65gxA4YKM",
"DoaOmftbWv9",
"Zrrs64JuqrI",
"nips_2021_bo75bBsQzeZ",
"nips_2021_bo75bBsQzeZ",
"nips_2021_bo75bBsQzeZ"
] |
nips_2021_7_eLEvFjCi3 | Nested Graph Neural Networks | The success of graph neural networks (GNNs) in graph classification is closely related to the Weisfeiler-Lehman (1-WL) algorithm. By iteratively aggregating neighboring node features to a center node, both 1-WL and GNN obtain a node representation that encodes a rooted subtree around the center node. These rooted subtree representations are then pooled into a single representation to represent the whole graph. However, rooted subtrees have limited expressiveness for representing a non-tree graph. To address this, we propose Nested Graph Neural Networks (NGNNs). NGNN represents a graph with rooted subgraphs instead of rooted subtrees, so that two graphs sharing many identical subgraphs (rather than subtrees) tend to have similar representations. The key is to make each node representation encode a subgraph around it rather than just a subtree. To achieve this, NGNN extracts a local subgraph around each node and applies a base GNN to each subgraph to learn a subgraph representation. The whole-graph representation is then obtained by pooling these subgraph representations. We provide a rigorous theoretical analysis showing that NGNN is strictly more powerful than 1-WL. In particular, we prove that NGNN can discriminate almost all r-regular graphs, where 1-WL always fails. Moreover, unlike other more powerful GNNs, NGNN only introduces a constant-factor higher time complexity than standard GNNs. NGNN is a plug-and-play framework that can be combined with various base GNNs. We test NGNN with different base GNNs on several benchmark datasets. NGNN uniformly improves their performance and shows highly competitive performance on all datasets.
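A schematic sketch of the NGNN recipe from the abstract, using networkx ego graphs for the rooted-subgraph extraction; `base_gnn` and `pool` are assumed callables standing in for the learned components.

```python
import networkx as nx

def nested_gnn_embedding(G, base_gnn, pool, radius=2):
    """NGNN-style sketch: embed each node by running a base GNN on its
    rooted r-hop subgraph, then pool the subgraph embeddings into a
    whole-graph embedding. `base_gnn(subgraph, root=v)` and
    `pool(list_of_embeddings)` are assumed callables."""
    subgraph_reprs = []
    for v in G.nodes:
        sub = nx.ego_graph(G, v, radius=radius)   # rooted r-hop subgraph
        subgraph_reprs.append(base_gnn(sub, root=v))
    return pool(subgraph_reprs)
```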
| accept | This work proposes a way to overcome some of the known expressivity issues of permutation invariant message-passing GNN. It does so by learning node features by applying a GNN on a subgraph around each node.
The reviewers agreed that the paper is well written and that it makes an interesting theoretical contribution. Moreover, the empirical evaluation (taking also into account the results presented during the rebuttal period) demonstrates that the proposed architecture can bring practical benefits in terms of accuracy. The main catch seems to be that the computational/memory complexity of the proposed method might be prohibitive for large graphs.
The reviewers were divided in their evaluations. The main issue expressed was that the proposed idea is not surprising given the literature and that a thorough discussion of relevant works was missing from the submitted paper. Even though these concerns are justified, it seems that the specific idea proposed has not been considered/tested before. Moreover, the authors acknowledged and explained the connections to previous works in the rebuttal period. Thus, I don't see an issue with accepting the work under the condition that the camera-ready version is updated appropriately. | train | [
"ca6p3Rj8J0x",
"S0HUdYhWcpm",
"TvMpD1k32dH",
"U7idf04taBa",
"fdmbACeAkPc",
"LJzxbMUQJYx",
"dKrnfi2cBCP",
"aIRGdQByBaq",
"-vyFarbJ4un",
"lMOKGIXNXgf",
"qugX5W_eDYf",
"TAgvmsY5QJ4",
"RfXVvF4Fj5u",
"LtKFxjvLmaz"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a new architecture to strengthen message passing GNNs. Since the weaknesses of GNNs usually come from the unidentifability of nodes, the authors suggest to first preprocess given graphs and add each node a new initial attribute, and then apply the outer GNN. The new node attributes come from a... | [
4,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
7
] | [
5,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"nips_2021_7_eLEvFjCi3",
"TAgvmsY5QJ4",
"ca6p3Rj8J0x",
"nips_2021_7_eLEvFjCi3",
"nips_2021_7_eLEvFjCi3",
"U7idf04taBa",
"TAgvmsY5QJ4",
"LtKFxjvLmaz",
"lMOKGIXNXgf",
"ca6p3Rj8J0x",
"RfXVvF4Fj5u",
"nips_2021_7_eLEvFjCi3",
"nips_2021_7_eLEvFjCi3",
"nips_2021_7_eLEvFjCi3"
] |
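To illustrate the NGNN idea from the abstract above with a runnable toy: extract a rooted subgraph around every node, encode each subgraph, and pool. The degree-histogram encoder below is a deliberately crude stand-in for the paper's trained base GNN, and the example pair (two triangles vs. a 6-cycle) is a classic case of 2-regular graphs that 1-WL cannot distinguish but rooted subgraphs can.

```python
# A toy sketch of the NGNN pipeline from the abstract: extract a rooted
# subgraph around every node, encode each subgraph (here a degree-histogram
# stand-in for the learned base GNN), and pool the subgraph representations
# into a whole-graph representation.
import numpy as np
import networkx as nx

def subgraph_encoding(sub, max_deg=8):
    # Stand-in for the base GNN: normalized histogram of degrees inside the subgraph.
    hist = np.zeros(max_deg + 1)
    for _, d in sub.degree():
        hist[min(d, max_deg)] += 1
    return hist / max(sub.number_of_nodes(), 1)

def ngnn_graph_repr(graph, hops=2):
    reps = [subgraph_encoding(nx.ego_graph(graph, v, radius=hops))
            for v in graph.nodes()]
    return np.mean(reps, axis=0)  # mean pooling over rooted subgraphs

# Two 1-WL-indistinguishable graphs: two triangles vs. one 6-cycle.
g1 = nx.disjoint_union(nx.cycle_graph(3), nx.cycle_graph(3))
g2 = nx.cycle_graph(6)
print(ngnn_graph_repr(g1, hops=1))
print(ngnn_graph_repr(g2, hops=1))  # differs: rooted subgraphs see the triangles
```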
nips_2021_6fmgB38rLI1 | Multimodal and Multilingual Embeddings for Large-Scale Speech Mining | We present an approach to encode a speech signal into a fixed-size representation which minimizes the cosine loss with the existing massively multilingual LASER text embedding space. Sentences are close in this embedding space, independently of their language and modality, either text or audio. Using a similarity metric in that multimodal embedding space, we perform mining of audio in German, French, Spanish and English from Librivox against billions of sentences from Common Crawl. This yielded more than twenty thousand hours of aligned speech translations. To evaluate the automatically mined speech/text corpora, we train neural speech translation systems for several language pairs. Adding the mined data achieves significant improvements in the BLEU score on the CoVoST2 and the MUST-C test sets with respect to a very competitive baseline. Our approach can also be used to directly perform speech-to-speech mining, without the need to first transcribe or translate the data. We obtain more than one thousand three hundred hours of aligned speech in French, German, Spanish and English. This speech corpus has the potential to boost research in speech-to-speech translation, which suffers from a scarcity of natural end-to-end training data. All the mined multimodal corpora will be made freely available.
| accept | Reviewers unanimously voted to accepted this paper with high scores. | train | [
"YIfIrLIzEt",
"FZ__7-WMw7",
"prVKt4WoZM",
"aL1vwe7-18d",
"DDq0Tn5GNaM",
"wyuOvw_0w-j",
"5rTWO9wTstR",
"nc8j-_BY5z",
"kYC0ngewBgf",
"bLRZPdyT-o",
"kZvdawImdrE"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank the reviewer for taking the time to read our comments.\nWe have not updated the paper since we understood that the submission rules in the NeurIPS CFP do not allow to update the paper during the review process.\n\nWe are providing some details below.\n\n1) number of hours when aligning Engl... | [
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
7,
7,
8
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"prVKt4WoZM",
"wyuOvw_0w-j",
"DDq0Tn5GNaM",
"nips_2021_6fmgB38rLI1",
"kYC0ngewBgf",
"bLRZPdyT-o",
"aL1vwe7-18d",
"kZvdawImdrE",
"nips_2021_6fmgB38rLI1",
"nips_2021_6fmgB38rLI1",
"nips_2021_6fmgB38rLI1"
] |
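The mining step in the abstract above scores speech/text pairs by similarity in the shared embedding space. The sketch below uses the margin criterion commonly paired with LASER-style embeddings (cosine similarity normalized by each side's average nearest-neighbor similarity); the threshold, k, and the toy embeddings are illustrative assumptions, not the paper's settings.

```python
# A simplified sketch of similarity-based mining in a joint embedding space:
# each candidate pair's cosine score is divided by the average similarity to
# each side's k nearest cross-modal neighbors (a ratio-margin score), and
# pairs above a threshold are kept.
import numpy as np

def cosine_matrix(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def margin_scores(speech_emb, text_emb, k=4):
    sim = cosine_matrix(speech_emb, text_emb)
    # Average similarity of each item to its k nearest cross-modal neighbors.
    nn_s = np.sort(sim, axis=1)[:, -k:].mean(axis=1, keepdims=True)
    nn_t = np.sort(sim, axis=0)[-k:, :].mean(axis=0, keepdims=True)
    return sim / (0.5 * (nn_s + nn_t))

def mine_pairs(speech_emb, text_emb, threshold=1.04, k=4):
    scores = margin_scores(speech_emb, text_emb, k)
    pairs = [(i, int(scores[i].argmax()), float(scores[i].max()))
             for i in range(len(speech_emb))]
    return [p for p in pairs if p[2] >= threshold]

rng = np.random.default_rng(0)
speech = rng.normal(size=(6, 16))
# First three text embeddings are noisy copies of speech 0..2 (true pairs).
text = np.vstack([speech[:3] + 0.05 * rng.normal(size=(3, 16)),
                  rng.normal(size=(5, 16))])
print(mine_pairs(speech, text, threshold=1.0, k=2))
```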
nips_2021_s6MWPKgL5XB | Necessary and sufficient graphical conditions for optimal adjustment sets in causal graphical models with hidden variables | The problem of selecting optimal backdoor adjustment sets to estimate causal effects in graphical models with hidden and conditioned variables is addressed. Previous work has defined optimality as achieving the smallest asymptotic estimation variance and derived an optimal set for the case without hidden variables. For the case with hidden variables there can be settings where no optimal set exists and currently only a sufficient graphical optimality criterion of limited applicability has been derived. In the present work optimality is characterized as maximizing a certain adjustment information, which allows us to derive a necessary and sufficient graphical criterion for the existence of an optimal adjustment set together with a definition and algorithm to construct it. Further, the optimal set is valid if and only if a valid adjustment set exists, and it has adjustment information higher than (or equal to) that of the Adjust-set proposed in Perkovi{\'c} et~al. [Journal of Machine Learning Research, 18: 1--62, 2018] for any graph. The results translate to minimal asymptotic estimation variance for a class of estimators whose asymptotic variance follows a certain information-theoretic relation. Numerical experiments indicate that the asymptotic results also hold for relatively small sample sizes and that the optimal adjustment set or minimized variants thereof often yield better variance even beyond that estimator class. Surprisingly, among the randomly created setups more than 90\% fulfill the optimality conditions, indicating that graphical optimality may also hold in many real-world scenarios.
| accept | The paper aims to get around a known negative result in the theory of covariate adjustment due to Rotnitzky and Smucler: in hidden variable causal models, an adjustment set that minimizes the variance of the target parameter does not depend only on the model, but also on the element within the model (in other words, not only on the graph, but also on particular parameter settings).
To address this, the authors propose a criterion based on information theory that aims to maximally constrain the outcome, and minimally constrain the treatment. They then investigate cases when this criterion yields a variance-minimizing adjustment set.
While initially reviewer opinion on this paper was split, discussions with the authors led to a consensus that the paper is an interesting, novel, and worthwhile addition to NeurIPS proceedings. | train | [
"iRyeYbItBUF",
"5jDlkzKFDST",
"_BL14hI5qg",
"jhtrxwWXjaN",
"EdM2UwYAE1l",
"T27Mmk8AHul",
"lY-tYgSf9Ub",
"2ZSN1G72ucp",
"6oNtdav6jT4",
"Zyo1Y99tncZ",
"OK6igyXMJjU",
"W6CV1Bf7o0",
"zglqRAiywRj"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Dear Meta-Reviewer and Reviewer wcA7,\n\nI thought again about the comment to remove the parts on MAGs and focus on the more standard ADMGs in the paper. You wrote: \" Since MAG, PAG, Markov equivalence, \"visible edge\" are difficult concepts, I recommend to drop those in your work unless it's essential.\"\n\nI ... | [
-1,
-1,
7,
-1,
-1,
7,
7,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
-1,
2,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
3
] | [
"EdM2UwYAE1l",
"OK6igyXMJjU",
"nips_2021_s6MWPKgL5XB",
"6oNtdav6jT4",
"Zyo1Y99tncZ",
"nips_2021_s6MWPKgL5XB",
"nips_2021_s6MWPKgL5XB",
"W6CV1Bf7o0",
"_BL14hI5qg",
"T27Mmk8AHul",
"zglqRAiywRj",
"lY-tYgSf9Ub",
"nips_2021_s6MWPKgL5XB"
] |
nips_2021_Rizxjst0_2B | On Blame Attribution for Accountable Multi-Agent Sequential Decision Making | Blame attribution is one of the key aspects of accountable decision making, as it provides means to quantify the responsibility of an agent for a decision making outcome. In this paper, we study blame attribution in the context of cooperative multi-agent sequential decision making. As a particular setting of interest, we focus on cooperative decision making formalized by Multi-Agent Markov Decision Processes (MMDPs), and we analyze different blame attribution methods derived from or inspired by existing concepts in cooperative game theory. We formalize desirable properties of blame attribution in the setting of interest, and we analyze the relationship between these properties and the studied blame attribution methods. Interestingly, we show that some of the well-known blame attribution methods, such as the Shapley value, are not performance-incentivizing, while others, such as the Banzhaf index, may over-blame agents. To mitigate these value misalignment and fairness issues, we introduce a novel blame attribution method, unique in the set of properties it satisfies, which trades off explanatory power (by under-blaming agents) for the aforementioned properties. We further show how to account for uncertainty about agents' decision making policies, and we experimentally: a) validate the qualitative properties of the studied blame attribution methods, and b) analyze their robustness to uncertainty.
| accept | All of the reviewers appreciated the general problem being tackled by this paper, and view it as an interesting one for multiagent systems and accountability/interpretability in AI. The axiomatic approach and the new AP mechanism for blame attribution, along with its analysis, are both novel and interesting. The reviews express some concerns around presentation and explanation of some of the details (e.g., experiment motivation, the remarks around consequentialist vs. deontic perspective on the problem, etc.). The author response helped clarify a few of these issues, and nudged the consensus to acceptance. It is critical, however, that the authors make the revisions needed to address/clarify the issues raised. While this type of paper is somewhat outside of the (current) set of mainstream topics for NeurIPS, the topic is of relevance to the community (and connects to interpretability and fairness, etc. that are becoming more mainstream in NeurIPS).
| train | [
"JijV_74Bt9",
"fzJLSlXXKPV",
"Bz3cObDYK1",
"HP4lNgNoC5f",
"OvXYKSzi2QJ",
"uv0BHsHUPiN",
"yMvJPUSWDhQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper furthers the study of blame in accountability in cooperative multi-agent systems. They provide many natural desiderata, some natural attribution techniques, and characterizations of how these relate, as well as how they relate to standard game theory ideas. I enjoyed this paper, and thought it was gen... | [
7,
7,
7,
-1,
-1,
-1,
-1
] | [
4,
4,
2,
-1,
-1,
-1,
-1
] | [
"nips_2021_Rizxjst0_2B",
"nips_2021_Rizxjst0_2B",
"nips_2021_Rizxjst0_2B",
"nips_2021_Rizxjst0_2B",
"Bz3cObDYK1",
"JijV_74Bt9",
"fzJLSlXXKPV"
] |
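Two of the attribution methods discussed above, the Shapley value and the Banzhaf index, can be computed exactly from a characteristic function v(S) giving the performance of each agent coalition. The toy v below is invented purely for illustration; only the two formulas are standard.

```python
# A self-contained toy comparing two of the attribution methods discussed:
# Shapley value and Banzhaf index, computed exactly from a characteristic
# function v(S) over agent coalitions. The values of v are made up.
from itertools import combinations
from math import factorial

agents = (0, 1, 2)
v = {frozenset(s): val for s, val in [
    ((), 0.0), ((0,), 1.0), ((1,), 1.0), ((2,), 0.0),
    ((0, 1), 3.0), ((0, 2), 1.0), ((1, 2), 1.0), ((0, 1, 2), 3.0)]}

def coalitions_without(i):
    others = [a for a in agents if a != i]
    for r in range(len(others) + 1):
        yield from combinations(others, r)

def shapley(i, n=len(agents)):
    total = 0.0
    for s in coalitions_without(i):
        # Standard Shapley weight |S|!(n-|S|-1)!/n! times the marginal gain.
        w = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
        total += w * (v[frozenset(s) | {i}] - v[frozenset(s)])
    return total

def banzhaf(i, n=len(agents)):
    # Unweighted average of the marginal contributions over all coalitions.
    deltas = [v[frozenset(s) | {i}] - v[frozenset(s)]
              for s in coalitions_without(i)]
    return sum(deltas) / 2 ** (n - 1)

for i in agents:
    print(f"agent {i}: shapley={shapley(i):.3f}  banzhaf={banzhaf(i):.3f}")
```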
nips_2021__WnGcwXLYOE | FLEX: Unifying Evaluation for Few-Shot NLP | Few-shot NLP research is highly active, yet conducted in disjoint research threads with evaluation suites that lack challenging-yet-realistic testing setups and fail to employ careful experimental design. Consequently, the community does not know which techniques perform best or even if they outperform simple baselines. In response, we formulate the FLEX Principles, a set of requirements and best practices for unified, rigorous, valid, and cost-sensitive few-shot NLP evaluation. These principles include Sample Size Design, a novel approach to benchmark design that optimizes statistical accuracy and precision while keeping evaluation costs manageable. Following the principles, we release the FLEX benchmark, which includes four few-shot transfer settings, zero-shot evaluation, and a public leaderboard that covers diverse NLP tasks. In addition, we present UniFew, a prompt-based model for few-shot learning that unifies pretraining and finetuning prompt formats, eschewing complex machinery of recent prompt-based approaches in adapting downstream task formats to language model pretraining objectives. We demonstrate that despite simplicity, UniFew achieves results competitive with both popular meta-learning and prompt-based approaches.
| accept | This paper introduces a family of multi-task benchmarks for few-shot learning in NLP. It appears to be built on existing data with relatively lightweight new methodological contributions, but reviewers agreed that it represents a useful tool and a clear improvement over current standard practice.
I'd urge the authors to add some discussion of the licences that apply to the datasets used. This issue came up only toward the end of reviewer discussion, and so won't influence our decision about the paper, but it's not currently clear whether the required data for the benchmark is guaranteed to remain legally available or whether private firms can use it. | test | [
"xQ2AeqW4NRs",
"4RKMhtABMEn",
"aDK7nQbGDRe",
"b1AB9-rmp48",
"bieuG6_f-Gp",
"qqZQxyYcB7_",
"oJZnU2nOl4D",
"sTHUshDLGmM",
"w4IyIFZlUJP"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Fleet is a new benchmark and leaderboard that combines previous work on few-shot text classification. While there are no new tasks, the new benhmark adds value by combining complimentary lines of work in a well designed evaluation framework.\n This paper presents a new benchmark and associated leaderboard that ag... | [
7,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"nips_2021__WnGcwXLYOE",
"qqZQxyYcB7_",
"w4IyIFZlUJP",
"sTHUshDLGmM",
"oJZnU2nOl4D",
"xQ2AeqW4NRs",
"nips_2021__WnGcwXLYOE",
"nips_2021__WnGcwXLYOE",
"nips_2021__WnGcwXLYOE"
] |
nips_2021_1yeYYtLqq7K | A flow-based latent state generative model of neural population responses to natural images | We present a joint deep neural system identification model for two major sources of neural variability: stimulus-driven and stimulus-conditioned fluctuations. To this end, we combine (1) state-of-the-art deep networks for stimulus-driven activity and (2) a flexible, normalizing flow-based generative model to capture the stimulus-conditioned variability including noise correlations. This allows us to train the model end-to-end without the need for sophisticated probabilistic approximations associated with many latent state models for stimulus-conditioned fluctuations. We train the model on the responses of thousands of neurons from multiple areas of the mouse visual cortex to natural images. We show that our model outperforms previous state-of-the-art models in predicting the distribution of neural population responses to novel stimuli, including shared stimulus-conditioned variability. Furthermore, it successfully learns known latent factors of the population responses that are related to behavioral variables such as pupil dilation, and other factors that vary systematically with brain area or retinotopic location. Overall, our model accurately accounts for two critical sources of neural variability while avoiding several complexities associated with many existing latent state models. It thus provides a useful tool for uncovering the interplay between different factors that contribute to variability in neural activity.
| accept | This paper introduces a system identification model for stimulus-driven and stimulus-conditioned fluctuations. They propose an architecture based on factor analysis and a flow model, and validate their performance on synthetic and mouse visual calcium imaging data. The paper addresses an important question in neuroscience, it is well written, introduces a new ML approach to computational neuroscience, and the authors have done a great deal to address all the concerns of the reviewers and convince them of the points they intended to make. Given this, I recommend this paper for acceptance and ask the authors to make all the modifications they promised in the final manuscript.
| train | [
"yehioafD92i",
"7wF6BO0mbTm",
"uU38WFnQQEC",
"bKtDfF_wpKl",
"Wa-GHOEB_Z",
"M9taSBnf4pK",
"i93RHOyTbH9",
"8vvraOFtrJB",
"2XE-Ul4rFYU",
"Q9KYhe2iSw",
"1I5Skl2eiAD",
"V2uZw6XbFvn",
"_umv1RnMeII",
"rkALXbWwj8U",
"J5xoPqC1Zh",
"oD24zYHOR4",
"NSNpEiSV9d",
"D3a9J6F0TEJ",
"lApw4QlAXK3"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"The authors present a Factor Analysis + flow model for modeling stimulus dependent neural responses. The FA + flow architecture allows the model to capture stimulus dependence, trial to trial correlated variability, and some features of the conditional and marginal response distributions. Validation is performed o... | [
8,
-1,
7,
-1,
6,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
-1,
3,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"nips_2021_1yeYYtLqq7K",
"1I5Skl2eiAD",
"nips_2021_1yeYYtLqq7K",
"M9taSBnf4pK",
"nips_2021_1yeYYtLqq7K",
"J5xoPqC1Zh",
"2XE-Ul4rFYU",
"nips_2021_1yeYYtLqq7K",
"oD24zYHOR4",
"1I5Skl2eiAD",
"lApw4QlAXK3",
"_umv1RnMeII",
"NSNpEiSV9d",
"nips_2021_1yeYYtLqq7K",
"Wa-GHOEB_Z",
"8vvraOFtrJB",
... |
nips_2021_R0h3NUMao_U | Learnable Fourier Features for Multi-dimensional Spatial Positional Encoding | Yang Li, Si Si, Gang Li, Cho-Jui Hsieh, Samy Bengio | accept | The majority of reviewers agreed that this paper presents an interesting idea related to a relevant application. | train | [
"EAqEBlECzK",
"mVZ6dSEf5DX",
"fGUIoMF1PPj",
"trgRX3S9vzG",
"ljJ56UqGzvX",
"84GvZuzaFUJ",
"EZk2D1uooxk",
"tPctXJ2pw9R",
"Vzr8vK7_Igv",
"0rBRM27FtXJ",
"7jol-tEBKp",
"2BByYFmn4ZC",
"rEhbmAHzrgo",
"s8vDp0xhyP6",
"LNzBRgrGKAz"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the rebuttal and still think the paper present an interesting idea. Thus, I retain my score.",
"This paper proposes a learnable positional encoding based on Fourier features for capturing not only positions but also complex positional relationships in the vision domain. The experimental results demo... | [
-1,
5,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
9,
6
] | [
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
3
] | [
"Vzr8vK7_Igv",
"nips_2021_R0h3NUMao_U",
"ljJ56UqGzvX",
"nips_2021_R0h3NUMao_U",
"tPctXJ2pw9R",
"LNzBRgrGKAz",
"mVZ6dSEf5DX",
"trgRX3S9vzG",
"2BByYFmn4ZC",
"rEhbmAHzrgo",
"s8vDp0xhyP6",
"nips_2021_R0h3NUMao_U",
"nips_2021_R0h3NUMao_U",
"nips_2021_R0h3NUMao_U",
"nips_2021_R0h3NUMao_U"
] |
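A compact sketch of the idea named in the title above: positions are passed through a trainable linear projection, turned into cos/sin Fourier features, and fed to a small MLP. The layer sizes and the exact normalization below are our assumptions; consult the paper for the precise parameterization.

```python
# A PyTorch sketch of a learnable Fourier-feature positional encoding for
# multi-dimensional positions: a trainable frequency projection followed by
# cos/sin features and a small MLP. Dimensions are illustrative.
import math
import torch
import torch.nn as nn

class LearnableFourierPE(nn.Module):
    def __init__(self, pos_dim=2, fourier_dim=64, hidden=128, out_dim=256):
        super().__init__()
        # Trainable frequencies mapping positions to fourier_dim/2 phases.
        self.w_r = nn.Linear(pos_dim, fourier_dim // 2, bias=False)
        self.mlp = nn.Sequential(nn.Linear(fourier_dim, hidden), nn.GELU(),
                                 nn.Linear(hidden, out_dim))
        self.scale = 1.0 / math.sqrt(fourier_dim)

    def forward(self, pos):            # pos: (..., pos_dim), e.g. (x, y)
        phase = self.w_r(pos)          # (..., fourier_dim // 2)
        feats = self.scale * torch.cat([torch.cos(phase),
                                        torch.sin(phase)], dim=-1)
        return self.mlp(feats)         # (..., out_dim), added to token embeddings

pe = LearnableFourierPE()
xy = torch.rand(4, 100, 2)             # a batch of 100 two-dimensional positions
print(pe(xy).shape)                    # torch.Size([4, 100, 256])
```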
nips_2021_WBVbl8POq8v | Doubly Robust Thompson Sampling with Linear Payoffs | Wonyoung Kim, Gi-Soo Kim, Myunghee Cho Paik | accept | This paper combines Thompson Sampling and doubly robust estimators for the linear contextual bandits. We have had many discussions and comments. The authors successfully answered all the questions raised during the rebuttal. All the reviewers agree that this is a nice paper with a solid contribution. | val | [
"no1VVuLsuR",
"DgAky18RxVW",
"mCt9Ab029G",
"5_wqOtNz8lR",
"g0EWds2WFr-",
"9cEtLDFXmmo",
"hHiklnMhQn2",
"5ZMXzq-PWw",
"6tc72rOrqg8",
"9tk_60m67Yo",
"cMaAhOlKIbE",
"sfRAXb--BuT",
"SSJXpLkCvpl",
"HUryAfnMlZz",
"_yPoVoGeIf4",
"gRfyLGDgQDo"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their response. My concerns have been addressed, and I would like to raise my score to 7.",
" Selecting $\\check{\\beta}\\_t$ as $0$ yields the regret bound stated in (3).\nIntuitively it is fine because the term $X\\_i(t)^{T}\\check{\\beta}\\_t$ in the definition of $Y^{DR... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"SSJXpLkCvpl",
"mCt9Ab029G",
"g0EWds2WFr-",
"9tk_60m67Yo",
"9cEtLDFXmmo",
"cMaAhOlKIbE",
"5ZMXzq-PWw",
"nips_2021_WBVbl8POq8v",
"5ZMXzq-PWw",
"gRfyLGDgQDo",
"HUryAfnMlZz",
"nips_2021_WBVbl8POq8v",
"_yPoVoGeIf4",
"nips_2021_WBVbl8POq8v",
"nips_2021_WBVbl8POq8v",
"nips_2021_WBVbl8POq8v"
... |
nips_2021_fxGT4XaLkpX | A Computationally Efficient Method for Learning Exponential Family Distributions | Abhin Shah, Devavrat Shah, Gregory Wornell | accept | This paper studies the problem of learning the natural parameters of a k-parameter (minimal) exponential family, given i.i.d. samples from the underlying distribution. The main result is a computationally and statistically efficient (polynomial in k/eps)
learning algorithm that outputs an estimate of the parameters within error eps under some additional assumptions. Instead of directly analyzing the log-likelihood of the minimal exponential family (which is computationally hard in general), this work uses projected gradient descent applied to a new loss function. A caveat pointed out by the reviewers (and acknowledged by the authors) is that computing the gradient of the objective is not necessarily in polynomial-time, unless additional assumptions are made on the distribution. Efficient computability might be true for specific cases, but it is not true in the full generality stated in the paper. While the reviewers leaned towards accepting the paper, they believe that this point needs to be addressed in the final version. | train | [
"rJKDyZ5lzuI",
"eUeN0HlJRf1",
"WXPvW99iE1e",
"cdOXqAjWtVT",
"P17pI-vLtsC",
"DxxLkDww_WL",
"OGCkwvzp16r",
"Hg5FJiiJ3jH",
"I1BHvPsg63U",
"UOqLXmrs4CX"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We thank Reviewer nf2S for their detailed feedback and comments.\n\nWe respond to the main review below:\n\n1. *Dimension $p$ of the vector of $x$*: We do not assume that $p$ is a constant. We think of $k_1$ and $k_2$ as implicit functions of $p$ i.e., we let $k_1$ and $k_2$ scale with $p$. Typically, for an expo... | [
-1,
5,
-1,
6,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
4,
3
] | [
"eUeN0HlJRf1",
"nips_2021_fxGT4XaLkpX",
"P17pI-vLtsC",
"nips_2021_fxGT4XaLkpX",
"Hg5FJiiJ3jH",
"UOqLXmrs4CX",
"I1BHvPsg63U",
"cdOXqAjWtVT",
"nips_2021_fxGT4XaLkpX",
"nips_2021_fxGT4XaLkpX"
] |
nips_2021_je4ymjfb5LC | Rethinking Neural Operations for Diverse Tasks | An important goal of AutoML is to automate-away the design of neural networks on new tasks in under-explored domains. Motivated by this goal, we study the problem of enabling users to discover the right neural operations given data from their specific domain. We introduce a search space of operations called XD-Operations that mimic the inductive bias of standard multi-channel convolutions while being much more expressive: we prove that it includes many named operations across multiple application areas. Starting with any standard backbone such as ResNet, we show how to transform it into a search space over XD-operations and how to traverse the space using a simple weight sharing scheme. On a diverse set of tasks—solving PDEs, distance prediction for protein folding, and music modeling—our approach consistently yields models with lower error than baseline networks and often even lower error than expert-designed domain-specific approaches.
| accept | This work is a well-presented and motivated perspective on NAS, where a reparameterisation of the search space using K-matrices yields a method that is able to operate across a number of interesting domains. The only major concern raised by the reviewers is, naturally, the choice of domains. The reviewers all agreed the authors have selected a diverse range of tasks and demonstrated their method on each. Naturally, given this demonstration, there is some anticipation of demonstrating this method on a diverse set of larger-scale problems, perhaps this will be future work. | train | [
"XJ7Lj25kGEk",
"wpAOvu4SyEY",
"6g0YpPOZYQ",
"RtKzhF5L_10",
"VDVoqHGUYOk",
"465Ft0QGuGN",
"lysV_FcvOpc",
"42jxkrVTdfL",
"SLS27rrFfSK",
"ipdDg6v1Ouk",
"_LAGV3hC3wG",
"6BGmjB5SzDN"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your evaluation and feedback. We agree that efficiency at evaluation-time is an important direction for improvement. In revision we will add the requested note on the lack of sparse discretization and comparison to standard methods to the discussion at the end of Section 2.",
"This paper explores ... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
7
] | [
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"6g0YpPOZYQ",
"nips_2021_je4ymjfb5LC",
"ipdDg6v1Ouk",
"lysV_FcvOpc",
"465Ft0QGuGN",
"SLS27rrFfSK",
"42jxkrVTdfL",
"6BGmjB5SzDN",
"_LAGV3hC3wG",
"wpAOvu4SyEY",
"nips_2021_je4ymjfb5LC",
"nips_2021_je4ymjfb5LC"
] |
nips_2021_gwGYN1fQY8H | Motif-based Graph Self-Supervised Learning for Molecular Property Prediction | Predicting molecular properties with data-driven methods has drawn much attention in recent years. Particularly, Graph Neural Networks (GNNs) have demonstrated remarkable success in various molecular generation and prediction tasks. In cases where labeled data is scarce, GNNs can be pre-trained on unlabeled molecular data to first learn the general semantic and structural information before being finetuned for specific tasks. However, most existing self-supervised pretraining frameworks for GNNs only focus on node-level or graph-level tasks. These approaches cannot capture the rich information in subgraphs or graph motifs. For example, functional groups (frequently occurring subgraphs in molecular graphs) often carry indicative information about the molecular properties. To bridge this gap, we propose Motif-based Graph Self-supervised Learning (MGSSL) by introducing a novel self-supervised motif generation framework for GNNs. First, for motif extraction from molecular graphs, we design a molecule fragmentation method that leverages the retrosynthesis-based algorithm BRICS and additional rules for controlling the size of the motif vocabulary. Second, we design a general motif-based generative pretraining framework in which GNNs are asked to make topological and label predictions. This generative framework can be implemented in two different ways, i.e., breadth-first or depth-first. Finally, to take the multi-scale information in molecular graphs into consideration, we introduce a multi-level self-supervised pre-training. Extensive experiments on various downstream benchmark tasks show that our methods outperform all state-of-the-art baselines.
| accept | This paper presents a novel motif-level GNN pretraining. While most existing self-supervised pre-training frameworks for GNNs are only defined as node-level or graph-level tasks, molecular graphs would follow a compositionality principle, and the complex structures are defined by their parts (motifs/fragments/scaffolds) and how to combine them. The authors show that by leveraging the BRICS fragmentation algorithm to decompose a molecule into a "motif tree" representation, the pretraining could become more effective.
This paper triggered a lot of discussions between the authors and the reviewers, as well as among the reviewers themselves. One reviewer raised very serious concerns about the technical novelty (whether leveraging an existing algorithm for molecule decomposition could provide sufficient novelty to the pretraining task) and about the choice of GIN as the backbone model in the experiments (many stronger GNN models were developed recently). On the other hand, other reviewers believed that the introduction of motifs to molecular graph learning was novel in its own right, especially for this interdisciplinary area, and that the use of GIN makes sense and does not affect the main conclusion. Although a consensus was not reached in the end, I tend to agree that this paper has value for molecular graph representation (and sheds some light on AI for molecular science), and there is no serious issue with the choice of GIN. Therefore, my recommendation is ACCEPT as a poster.
| train | [
"-rRSD8raiE",
"uikWoPzMck",
"GGX4SI2Qxjg",
"jYNFbAegnqD",
"Oy5PqUBdDuO",
"O13l-FjBmz8",
"19U3dBla7Sf",
"FAPQl8bma7B",
"Z7NnC4ptH-",
"MTyGV9Us-tB",
"4T096-iWrTg",
"B68U13MSUn-",
"GAcQqwm3C2a",
"BSb93H9STm",
"5mLX7k7f6TU"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We thank the reviewer for the valuable suggestions.\n\nIn Table 2 we tried four additional GNN models with MGSSL and observed the performance gains compared with finetuning from scratch. We admit that comparing MGSSL with other baseline pre-training methods on additional GNN models would make the paper more compl... | [
-1,
-1,
8,
4,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
7
] | [
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
4
] | [
"uikWoPzMck",
"GAcQqwm3C2a",
"nips_2021_gwGYN1fQY8H",
"nips_2021_gwGYN1fQY8H",
"O13l-FjBmz8",
"BSb93H9STm",
"FAPQl8bma7B",
"4T096-iWrTg",
"B68U13MSUn-",
"nips_2021_gwGYN1fQY8H",
"5mLX7k7f6TU",
"MTyGV9Us-tB",
"GGX4SI2Qxjg",
"jYNFbAegnqD",
"nips_2021_gwGYN1fQY8H"
] |
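The motif-extraction step described above builds on the off-the-shelf BRICS retrosynthetic fragmentation in RDKit. The sketch below shows only that base step (MGSSL's additional vocabulary-control rules are omitted) and assumes RDKit is installed.

```python
# Base motif extraction via RDKit's BRICS decomposition; MGSSL layers extra
# rules on top of this, which are not reproduced here.
from rdkit import Chem
from rdkit.Chem import BRICS

def brics_motifs(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return []
    # Fragment SMILES with dummy atoms ([n*]) marking the broken bonds.
    return sorted(BRICS.BRICSDecompose(mol))

# Aspirin: BRICS splits it into ester / aromatic / acid fragments.
for frag in brics_motifs("CC(=O)Oc1ccccc1C(=O)O"):
    print(frag)
```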
nips_2021_HWshP75OfKR | On Inductive Biases for Heterogeneous Treatment Effect Estimation | We investigate how to exploit structural similarities of an individual's potential outcomes (POs) under different treatments to obtain better estimates of conditional average treatment effects in finite samples. Especially when it is unknown whether a treatment has an effect at all, it is natural to hypothesize that the POs are similar -- yet, some existing strategies for treatment effect estimation employ regularization schemes that implicitly encourage heterogeneity even when it does not exist and fail to fully make use of shared structure. In this paper, we investigate and compare three end-to-end learning strategies to overcome this problem -- based on regularization, reparametrization and a flexible multi-task architecture -- each encoding inductive bias favoring shared behavior across POs. To build understanding of their relative strengths, we implement all strategies using neural networks and conduct a wide range of semi-synthetic experiments. We observe that all three approaches can lead to substantial improvements upon numerous baselines and gain insight into performance differences across various experimental settings.
| accept | This paper presents a novel framing and approach to the problem of conditional average treatment effect estimation, and validates it on synthetic data and a small sample of natural data. Reviewers generally found it to be a technically-simple but interesting contribution, and raised only minor concerns. | train | [
"V2Qd16n2n3l",
"H4yjpZ2JnfQ",
"mCD8Tigpdyo",
"uSC2qV6KfCo",
"71ae1irj2v",
"N1qN-47vsCM",
"fcnZib9ZoGL",
"iJTSFYCUWfG",
"kEl417VI9E2",
"uQ1hBStm84B",
"gAHZ-sxL8Nx",
"6RhVP_R8LBV"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"In this paper, the authors proposed methods to encourage structural similarities in the potential outcomes (POs) under different treatments when learning the conditional average treatment effects (CATEs). Specifically, three sets of different structure/model modifications were proposed on top of the existing TNet/... | [
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_HWshP75OfKR",
"kEl417VI9E2",
"nips_2021_HWshP75OfKR",
"uQ1hBStm84B",
"mCD8Tigpdyo",
"V2Qd16n2n3l",
"V2Qd16n2n3l",
"gAHZ-sxL8Nx",
"V2Qd16n2n3l",
"mCD8Tigpdyo",
"6RhVP_R8LBV",
"nips_2021_HWshP75OfKR"
] |
nips_2021_NlLynLBBi01 | DP-SSL: Towards Robust Semi-supervised Learning with A Few Labeled Samples | The scarcity of labeled data is a critical obstacle to deep learning. Semi-supervised learning (SSL) provides a promising way to leverage unlabeled data by pseudo labels. However, when the size of labeled data is very small (say a few labeled samples per class), SSL performs poorly and unstably, possibly due to the low quality of learned pseudo labels. In this paper, we propose a new SSL method called DP-SSL that adopts an innovative data programming (DP) scheme to generate probabilistic labels for unlabeled data. Different from existing DP methods that rely on human experts to provide initial labeling functions (LFs), we develop a multiple-choice learning~(MCL) based approach to automatically generate LFs from scratch in SSL style. With the noisy labels produced by the LFs, we design a label model to resolve the conflict and overlap among the noisy labels, and finally infer probabilistic labels for unlabeled samples. Extensive experiments on four standard SSL benchmarks show that DP-SSL can provide reliable labels for unlabeled data and achieve better classification performance on test sets than existing SSL methods, especially when only a small number of labeled samples are available. Concretely, for CIFAR-10 with only 40 labeled samples, DP-SSL achieves 93.82% annotation accuracy on unlabeled data and 93.46% classification accuracy on test data, which are higher than the SOTA results.
| accept | The paper proposes an approach to semi-supervised learning that trains a number of classification heads with different subsets of classes, as well as a label model that aggregates the predictions of these classification heads. This approach seems to work well in a self-training setting, especially when only a few labeled examples are available. Overall, the paper is successful in building a very competitive system for semi-supervised learning (SSL) and it achieves SOTA on several SSL benchmarks. That said, the method has many components and hyper-parameters and it is not very clear where exactly the gains come from. Nevertheless, all of the reviewers have rated the paper slightly above the acceptance bar after useful feedback from the authors, and all reviewers are happy to see the paper published. One of the paper's contributions is uniting existing work on data programming and SSL, and I believe the ideas discussed in this paper can benefit the research community and help develop even better and simpler SSL algorithms. Hence, I recommend acceptance as a poster. Many comments about clarity and ablations have been raised by reviewers. I encourage the authors to address those comments in the camera-ready version. | train | [
"TqjrgfRl22-",
"v2wU5iy4JC",
"0y_59psv63e",
"MyK7iyzGs9z",
"iwwMYy0X9A6",
"YIrWB1pYc4f",
"s369oBPnlNK",
"ujuPkRUmuWp",
"ISxvYMZc2G",
"H3_CqzGbrCx",
"VqxNWDLOFeg",
"HfaxsKUPR6Y",
"bnc-QQckWgT"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Q1: **good to compare with self-training methods such as [1].**\n\nResponse: Thanks for your comment. For ReMixMatch, Fixmatch, USADTM and our method, all these methods outperform [1] with 250 labeled samples in CIFAR-10 as shown in Figure G.1. Actually, [1] benefits a lot from the self-supervised and fine-tune t... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3
] | [
"v2wU5iy4JC",
"H3_CqzGbrCx",
"iwwMYy0X9A6",
"nips_2021_NlLynLBBi01",
"s369oBPnlNK",
"bnc-QQckWgT",
"MyK7iyzGs9z",
"YIrWB1pYc4f",
"HfaxsKUPR6Y",
"VqxNWDLOFeg",
"nips_2021_NlLynLBBi01",
"nips_2021_NlLynLBBi01",
"nips_2021_NlLynLBBi01"
] |
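A deliberately simplified picture of the label-model step discussed above (not DP-SSL's actual model): each labeling function votes a class or abstains, votes are weighted by a per-LF accuracy, and a softmax over the weighted counts yields probabilistic labels. The accuracies here are given rather than learned, and -1-as-abstain is our convention.

```python
# A simplified label model: accuracy-weighted voting over noisy labeling
# functions, producing probabilistic labels for unlabeled samples.
import numpy as np

def probabilistic_labels(votes, lf_accuracy, n_classes):
    """votes: (n_samples, n_lfs) with entries in {-1 (abstain), 0..n_classes-1}."""
    # Log-odds style weight per LF; abstentions contribute nothing.
    w = np.log(lf_accuracy / (1.0 - lf_accuracy))
    scores = np.zeros((votes.shape[0], n_classes))
    for j in range(votes.shape[1]):
        mask = votes[:, j] >= 0
        scores[mask, votes[mask, j]] += w[j]
    z = np.exp(scores - scores.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

votes = np.array([[0, 0, -1],      # two LFs agree on class 0, one abstains
                  [1, 2, 1],       # conflict resolved toward class 1
                  [-1, -1, 2]])    # only the weakest LF fires
lf_accuracy = np.array([0.9, 0.8, 0.6])
print(probabilistic_labels(votes, lf_accuracy, n_classes=3).round(3))
```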
nips_2021_iFODavhthGZ | Transformer in Transformer | Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing XU, Yunhe Wang | accept | In this paper, the authors propose a simple but effective ViT network architecture, which introduces hierarchical structure in transformers. The architecture is motivated well and the paper is well-organized.
Although the empirical study in the submission was not sufficient to demonstrate the advantages of the proposed model, the authors did a great job of providing more evidence during the rebuttal period. This paper is a clear accept. Please include all the additional empirical results in the final version, especially the running time in the training and inference stages and the necessity of the inner transformer. | train | [
"D2NbqWYImqg",
"Gb_l_1injt",
"ZPq-W05ADC9",
"AElIDcNyJbr",
"5yOqBGZow7_",
"Q6__WbjO-R6",
"WlGbtMZGQZi",
"YtFjYMYjb0J",
"iAF7VJN-Lin",
"oJmOl-zdbI",
"lR47F5vngIq",
"7QOR78FvrI",
"dUqTXIPh2nB"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your support. How to reduce the memory useage on high-resolution images caused by attention mechanism while maintaining high performance is an interesting topic. And we will continue to address this issue in the future work.",
" The authors' responses partially resolve my concerns.\n\nSpecifically, A... | [
-1,
-1,
7,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4,
5
] | [
"Gb_l_1injt",
"iAF7VJN-Lin",
"nips_2021_iFODavhthGZ",
"oJmOl-zdbI",
"Q6__WbjO-R6",
"YtFjYMYjb0J",
"nips_2021_iFODavhthGZ",
"WlGbtMZGQZi",
"ZPq-W05ADC9",
"dUqTXIPh2nB",
"7QOR78FvrI",
"nips_2021_iFODavhthGZ",
"nips_2021_iFODavhthGZ"
] |
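A condensed sketch of one Transformer-in-Transformer block as summarized above: an inner transformer attends over pixel-level "word" embeddings inside each patch, a linear layer folds them back into the patch ("sentence") embedding, and an outer transformer mixes patches. Standard library layers and the dimensions stand in for the paper's exact block.

```python
# One TNT-style block built from stock PyTorch layers; sizes are illustrative.
import torch
import torch.nn as nn

class TNTBlock(nn.Module):
    def __init__(self, inner_dim=24, n_words=16, outer_dim=192, heads=4):
        super().__init__()
        self.inner = nn.TransformerEncoderLayer(inner_dim, heads,
                                                batch_first=True)
        self.fold = nn.Linear(inner_dim * n_words, outer_dim)
        self.outer = nn.TransformerEncoderLayer(outer_dim, heads,
                                                batch_first=True)

    def forward(self, words, patches):
        # words: (B * n_patches, n_words, inner_dim); patches: (B, n_patches, outer_dim)
        b, n_patches, _ = patches.shape
        words = self.inner(words)                     # intra-patch attention
        folded = self.fold(words.flatten(1))          # (B * n_patches, outer_dim)
        patches = patches + folded.view(b, n_patches, -1)
        return words, self.outer(patches)             # inter-patch attention

block = TNTBlock()
words = torch.randn(2 * 196, 16, 24)    # 14x14 patches, 4x4 sub-patches each
patches = torch.randn(2, 196, 192)
words, patches = block(words, patches)
print(words.shape, patches.shape)
```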
nips_2021_ioyq7NsR1KJ | Adversarial Graph Augmentation to Improve Graph Contrastive Learning | Self-supervised learning of graph neural networks (GNN) is in great need because of the widespread label scarcity issue in real-world graph/network data. Graph contrastive learning (GCL), by training GNNs to maximize the correspondence between the representations of the same graph in its different augmented forms, may yield robust and transferable GNNs even without using labels. However, GNNs trained by traditional GCL often risk capturing redundant graph features and thus may be brittle and provide sub-par performance in downstream tasks. Here, we propose a novel principle, termed adversarial-GCL (\textit{AD-GCL}), which enables GNNs to avoid capturing redundant information during the training by optimizing adversarial graph augmentation strategies used in GCL. We pair AD-GCL with theoretical explanations and design a practical instantiation based on trainable edge-dropping graph augmentation. We experimentally validate AD-GCL by comparing with the state-of-the-art GCL methods and achieve performance gains of up-to~14\% in unsupervised, ~6\% in transfer and~3\% in semi-supervised learning settings overall with 18 different benchmark datasets for the tasks of molecule property regression and classification, and social network classification.
| accept | The paper combines the ideas of adversarial training (on augmentation) and graph contrastive learning to further boost the unsupervised performance, targeting at representing less redundant information. It is empirically demonstrated with significant performance gains in multiple experiment settings of unsupervised, semi-supervised and transfer learning.
After the discussion, all reviewers agreed that this paper is above the acceptance threshold. The authors are encouraged to improve this work based on the feedback from the reviewers, especially the following ones.
1. In experiments, compare with a recent work -- "Graph Contrastive Learning Automated", which also leverages adversarial training for GCL. In addition, the authors can compare with the following two works:
- Graph Contrastive Learning with Adaptive Augmentation
- Graph Contrastive Learning Automated
2. The performance gain of the proposed method is not consistent across datasets, especially when compared to its direct baseline (GraphCL). This makes the authors' claim and the proposed method less convincing. The authors may need more experiments to show the effectiveness of their methods on different tasks.
| train | [
"ZFIBNxdAdNP",
"wtlbqj-ZrTE",
"vXMrAXfNVKN",
"jJWoPoF_hFT",
"sRuDK5WB17",
"iZeKDl745v",
"bBo_ynOw95",
"zD0NIOyWGQw",
"vKYEw08Zsi",
"fWf-TXCMOt0",
"oEVIWt0Dhqq",
"8JcDZWKodLZ"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Current contrastive learning (CL) methods are based on various graph augmentations to generate multiple views for contrast (maximizing MI). To discover the optimal augmentations, the authors propose an adversarial approach to automatically learn views generation that tries to minimize the MI between two views. Th... | [
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"nips_2021_ioyq7NsR1KJ",
"nips_2021_ioyq7NsR1KJ",
"jJWoPoF_hFT",
"nips_2021_ioyq7NsR1KJ",
"oEVIWt0Dhqq",
"jJWoPoF_hFT",
"jJWoPoF_hFT",
"ZFIBNxdAdNP",
"8JcDZWKodLZ",
"nips_2021_ioyq7NsR1KJ",
"nips_2021_ioyq7NsR1KJ",
"nips_2021_ioyq7NsR1KJ"
] |
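The trainable augmentation at the heart of AD-GCL, per the abstract above, is learnable edge dropping. The sketch below shows one plausible instantiation: per-edge keep logits are sampled with a relaxed Bernoulli so gradients flow, and the augmenter would be trained adversarially against the encoder's contrastive objective. The MLP, feature dimensions, and temperature are assumptions.

```python
# A hedged sketch of a trainable edge-dropping augmenter with a binary
# concrete (relaxed Bernoulli) sample; the surrounding GNN is omitted.
import torch
import torch.nn as nn

class LearnableEdgeDrop(nn.Module):
    def __init__(self, edge_feat_dim=8, temperature=0.5):
        super().__init__()
        self.logit_mlp = nn.Sequential(nn.Linear(edge_feat_dim, 16),
                                       nn.ReLU(), nn.Linear(16, 1))
        self.temperature = temperature

    def forward(self, edge_feats):
        logits = self.logit_mlp(edge_feats).squeeze(-1)   # (n_edges,)
        u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
        gumbel = torch.log(u) - torch.log1p(-u)           # logistic noise
        keep = torch.sigmoid((logits + gumbel) / self.temperature)
        return keep                                       # soft edge weights in (0, 1)

aug = LearnableEdgeDrop()
edge_feats = torch.randn(40, 8)                           # 40 edges
keep = aug(edge_feats)
print(keep.shape, float(keep.mean()))
# `keep` would multiply edge weights during message passing; in AD-GCL the
# augmenter's parameters receive a negated contrastive-loss gradient.
```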
nips_2021_Ao2METZY4n | Online Control of Unknown Time-Varying Dynamical Systems | We study online control of time-varying linear systems with unknown dynamics in the nonstochastic control model. At a high level, we demonstrate that this setting is \emph{qualitatively harder} than that of either unknown time-invariant or known time-varying dynamics, and complement our negative results with algorithmic upper bounds in regimes where sublinear regret is possible. More specifically, we study regret bounds with respect to common classes of policies: Disturbance Action (SLS), Disturbance Response (Youla), and linear feedback policies. While these three classes are essentially equivalent for LTI systems, we demonstrate that these equivalences break down for time-varying systems. We prove a lower bound that no algorithm can obtain sublinear regret with respect to the first two classes unless a certain measure of system variability also scales sublinearly in the horizon. Furthermore, we show that offline planning over the state linear feedback policies is NP-hard, suggesting hardness of the online learning problem. On the positive side, we give an efficient algorithm that attains a sublinear regret bound against the class of Disturbance Response policies up to the aforementioned system variability term. In fact, our algorithm enjoys sublinear \emph{adaptive} regret bounds, which is a strictly stronger metric than standard regret and is more appropriate for time-varying systems. We sketch extensions to Disturbance Action policies and partial observation, and propose an inefficient algorithm for regret against linear state feedback policies.
| accept | Three of the four reviews were quite positive. There was one negative review, and I think the authors response was adequate to address concerns raised. | train | [
"J-N-B39KQjt",
"Smj8PfGB16p",
"D-a898m8mBV",
"Tvr_yBoKAaR",
"Bc4NKDKfvsW",
"u3Aa7_9Pg1B",
"qsr4rTZx_ZD",
"lIM2o0FrXIY",
"FbAwkGIrwl"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies RL policies for linear time-varying systems. Most of the points discussed in the first round of reviewing are not addressed in the rebuttal. Further, the explanations in the rebuttal together with the antagonistic responses of the authors show that the manuscript suffers from serious presentati... | [
3,
-1,
7,
-1,
-1,
-1,
-1,
7,
8
] | [
5,
-1,
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_Ao2METZY4n",
"u3Aa7_9Pg1B",
"nips_2021_Ao2METZY4n",
"FbAwkGIrwl",
"lIM2o0FrXIY",
"D-a898m8mBV",
"J-N-B39KQjt",
"nips_2021_Ao2METZY4n",
"nips_2021_Ao2METZY4n"
] |
nips_2021_ZarM_uLVyGw | Contrastive Reinforcement Learning of Symbolic Reasoning Domains | Abstract symbolic reasoning, as required in domains such as mathematics and logic, is a key component of human intelligence. Solvers for these domains have important applications, especially to computer-assisted education. But learning to solve symbolic problems is challenging for machine learning algorithms. Existing models either learn from human solutions or use hand-engineered features, making them expensive to apply in new domains. In this paper, we instead consider symbolic domains as simple environments where states and actions are given as unstructured text, and binary rewards indicate whether a problem is solved. This flexible setup makes it easy to specify new domains, but search and planning become challenging. We introduce four environments inspired by the Mathematics Common Core Curriculum, and observe that existing Reinforcement Learning baselines perform poorly. We then present a novel learning algorithm, Contrastive Policy Learning (ConPoLe) that explicitly optimizes the InfoNCE loss, which lower bounds the mutual information between the current state and next states that continue on a path to the solution. ConPoLe successfully solves all four domains. Moreover, problem representations learned by ConPoLe enable accurate prediction of the categories of problems in a real mathematics curriculum. Our results suggest new directions for reinforcement learning in symbolic domains, as well as applications to mathematics education.
| accept | I've read through the reviews and responses as well as the paper. It seems like the reviewers found the paper very interesting: the problem domain is compelling and under-explored, and the motivation of the model is clear. In other words, the reviewers generally felt the paper was well-positioned to bring value to the field of abstract reasoning and planning, and the authors did a decent job at demonstrating this through benchmarks introduced here as well as showing some reasonable baselines fall short.
However, there were some concerns, and I was borderline on whether these concerns are addressable through a minor revision in the final draft. It seems like a lot of the issues concern the stated limitations of the work: the authors acknowledge in the discussion that some of these limitations are properties of the domain they are working in.
After looking carefully at the discussion and the paper, I believe the paper still has value, despite these limitations. We shouldn't detract from good research in difficult domains, and the fact that there's more to say about this domain shouldn't make it weaker than a paper that purports to study reasoning in a black-box environment like Atari. However, I strongly urge the authors to make very clear in the final draft the limitations of their benchmarks and of the conclusions drawn from running models on these benchmarks. I cannot conditionally accept the paper, so I am going to trust the authors on this. | test | [
"LlsmRvAXiuZ",
"BVt_pSrhbds",
"vj57nQlRoW",
"iM_nkjpkYNg",
"LeKw_ZtnINu",
"6plWG8HrXZ0",
"U0GLUF_kZt",
"LNpE5yCKp9H",
"wvzRJs0Ey7n",
"nJFVUxMliYR",
"Pi1Lfdl8sI",
"UOcdfSRR1N",
"CnulqTPBVMU",
"eSo47TmrPcV",
"joBBLCWe9Np",
"0yraImcgGuS",
"WUSQitWBdhx",
"VvWavdEdM24"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the last comments and general encouragement! If you don't mind, when you mentioned \"other domains from prior works\", is there any specific domain you had in mind?\n\nWhile several learning domains for formal mathematics have been released in the last years (e.g., Gamepad [1], IsarStep [2], and Miz... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"BVt_pSrhbds",
"LNpE5yCKp9H",
"nips_2021_ZarM_uLVyGw",
"eSo47TmrPcV",
"6plWG8HrXZ0",
"CnulqTPBVMU",
"vj57nQlRoW",
"WUSQitWBdhx",
"nips_2021_ZarM_uLVyGw",
"Pi1Lfdl8sI",
"wvzRJs0Ey7n",
"wvzRJs0Ey7n",
"VvWavdEdM24",
"vj57nQlRoW",
"WUSQitWBdhx",
"nips_2021_ZarM_uLVyGw",
"nips_2021_ZarM_u... |
nips_2021_jR3WPq3WTd | Spatial Ensemble: a Novel Model Smoothing Mechanism for Student-Teacher Framework | Model smoothing is of central importance for obtaining a reliable teacher model in the student-teacher framework, where the teacher generates surrogate supervision signals to train the student. A popular model smoothing method is the Temporal Moving Average (TMA), which continuously averages the teacher parameters with the up-to-date student parameters. In this paper, we propose ''Spatial Ensemble'', a novel model smoothing mechanism in parallel with TMA. Spatial Ensemble randomly picks a small fragment of the student model to directly replace the corresponding fragment of the teacher model. Consequently, it stitches different fragments of historical student models into a unified whole, yielding the ''Spatial Ensemble'' effect. Spatial Ensemble obtains comparable student-teacher learning performance by itself and demonstrates valuable complementarity with the temporal moving average. Their integration, named Spatial-Temporal Smoothing, brings general (sometimes significant) improvement to the student-teacher learning framework on a variety of state-of-the-art methods. For example, based on the self-supervised method BYOL, it yields +0.9% top-1 accuracy improvement on ImageNet, while based on the semi-supervised approach FixMatch, it increases the top-1 accuracy by around +6% on CIFAR-10 when only a few training labels are available. Codes and models are available at: https://github.com/tengteng95/Spatial_Ensemble.
| accept | There is a consensus among the reviewers that this paper presents significant results and is of interest to the conference. | train | [
"lkmzZEAlxk",
"vBj_ozPDMkS",
"omzGgnAPSiw",
"term5cFF25b",
"ftSlU6C_sDY",
"qGeIs5mXEA",
"dXZV1vk3L2N"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I'm glad that the small changes I've suggested have been addressed by the authors. ",
" We thank the reviewer for the constructive comments.\n\n> 1. Using SE alone seems to achieve much lower accuracy than TMA (Table 1 and Table 2), so STS can be viewed as an upgraded version of TMA.\n\nWe beg to differ. We th... | [
-1,
-1,
-1,
-1,
6,
8,
6
] | [
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"omzGgnAPSiw",
"dXZV1vk3L2N",
"qGeIs5mXEA",
"ftSlU6C_sDY",
"nips_2021_jR3WPq3WTd",
"nips_2021_jR3WPq3WTd",
"nips_2021_jR3WPq3WTd"
] |
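The two smoothing updates from the abstract above fit in a few lines: the temporal moving average plus a "spatial ensemble" step that copies a randomly chosen fragment of the student into the teacher verbatim. Treating a fragment as a random subset of tensor entries is our simplification of the paper's fragment granularity.

```python
# Spatial-Temporal Smoothing sketch: EMA update (TMA) followed by verbatim
# copying of a random parameter fragment from student to teacher (SE).
import torch

@torch.no_grad()
def spatial_temporal_smoothing(teacher, student, momentum=0.999, frag_ratio=0.01):
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)   # TMA step
        frag = torch.rand_like(p_t) < frag_ratio             # SE step
        p_t[frag] = p_s[frag]

teacher = torch.nn.Linear(64, 10)
student = torch.nn.Linear(64, 10)
spatial_temporal_smoothing(teacher, student)
print(torch.allclose(teacher.weight, student.weight))        # False: smoothed mix
```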
nips_2021_1bBF5Zq1YHz | Probabilistic Tensor Decomposition of Neural Population Spiking Activity | The firing of neural populations is coordinated across cells, in time, and across experimental conditions or repeated experimental trials; and so a full understanding of the computational significance of neural responses must be based on a separation of these different contributions to structured activity. Tensor decomposition is an approach to untangling the influence of multiple factors in data that is common in many fields. However, despite some recent interest in neuroscience, wider applicability of the approach is hampered by the lack of a full probabilistic treatment allowing principled inference of a decomposition from non-Gaussian spike-count data. Here, we extend the Pólya-Gamma (PG) augmentation, previously used in sampling-based Bayesian inference, to implement scalable variational inference in non-conjugate spike-count models. Using this new approach, we develop techniques related to automatic relevance determination to infer the most appropriate tensor rank, as well as to incorporate priors based on known brain anatomy such as the segregation of cell response properties by brain area. We apply the model to neural recordings taken under conditions of visual-vestibular sensory integration, revealing how the encoding of self- and visual-motion signals is modulated by the sensory information available to the animal.
| accept | The authors develop tensor decompositions for count data, like neural spike trains, as well as a variational inference approach based on the Pólya-gamma augmentation. The reviewers and I agree that the paper is novel, well-presented, and of broad interest to the computational and statistical neuroscience community at NeurIPS. | train | [
"To8Bm6nFK4r",
"iN-Tq1fS48n",
"MplzvfvqKfe",
"kGnL0bcGKoG",
"CDhuMKAccoF",
"ys6uykQBSo",
"vLOrRvptvty",
"-CfKD_dgqU",
"DthFUQ4X7eM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for answering my questions. This response - and the detailed responses to the other reviews - confirm my positive rating.",
"This manuscript details the inference of a probabilistic dimensionality reduction technique that is applicable to non-Gaussian data, for example spike counts. Tensor d... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"ys6uykQBSo",
"nips_2021_1bBF5Zq1YHz",
"vLOrRvptvty",
"CDhuMKAccoF",
"DthFUQ4X7eM",
"-CfKD_dgqU",
"iN-Tq1fS48n",
"nips_2021_1bBF5Zq1YHz",
"nips_2021_1bBF5Zq1YHz"
] |
nips_2021_RQJWn82Xga2 | Recurrent Bayesian Classifier Chains for Exact Multi-Label Classification | Exact multi-label classification is the task of assigning each datapoint a set of class labels such that the assigned set exactly matches the ground truth. Optimizing for exact multi-label classification is important in domains where missing a single label can be especially costly, such as in object detection for autonomous vehicles or symptom classification for disease diagnosis. Recurrent Classifier Chains (RCCs), a recurrent neural network extension of ensemble-based classifier chains, are the state-of-the-art exact multi-label classification method for maximizing subset accuracy. However, RCCs iteratively predict classes with an unprincipled ordering, and therefore indiscriminately condition class probabilities. These disadvantages make RCCs prone to predicting inaccurate label sets. In this work we propose Recurrent Bayesian Classifier Chains (RBCCs), which learn a Bayesian network of class dependencies and leverage this network in order to condition the prediction of child nodes only on their parents. By conditioning predictions in this way, we perform principled and non-noisy class prediction. We demonstrate the effectiveness of our RBCC method on a variety of real-world multi-label datasets, where we routinely outperform the state of the art methods for exact multi-label classification.
| accept | The algorithm introduced by the authors is sound and obtains promising experimental results. The main problems pointed out by reviewers concern the paper's clarity and readability, and are mainly caused by the notation used and by typos in equations. Nevertheless, the authors were very responsive, clarified almost all issues, and shared with reviewers the intended changes to be incorporated into the final version of the paper. Taking this into account, we lean toward acceptance of the paper.
Additional comments:
- Please also correct text and equations in lines 217-218: I suppose it should be $[0, 1]^L$ instead of $\\{0,1\\}^L$; it is better to define a loss function using all its parameters (true labels and predicted probabilities). | train | [
"MzDOSIBTqRK",
"B6pDQUIVwQt",
"EHc5EsaJyT3",
"8fLypehTKSq",
"Xs7PX2cn66x",
"b53M2HmiUtM",
"YnHk55TOiX3",
"OBfX6jCYlLd",
"Qr4lJ-TESp",
"xVCfaYDNf70",
"vHTBlsF63Mo",
"JLLaz2xLLu",
"nrc18J6EP5",
"rAzLzpZ1MiL"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We are disappointed to hear that you are lowering your score, and respectfully but strongly disagree with your decision. We hope you have time to read below and reconsider.\n\nThe AC's comments are primarily discussion of notation, and we have already outlined how we will change the notation to a more standard st... | [
-1,
-1,
5,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
4,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"B6pDQUIVwQt",
"YnHk55TOiX3",
"nips_2021_RQJWn82Xga2",
"nips_2021_RQJWn82Xga2",
"8fLypehTKSq",
"nips_2021_RQJWn82Xga2",
"OBfX6jCYlLd",
"JLLaz2xLLu",
"rAzLzpZ1MiL",
"nips_2021_RQJWn82Xga2",
"b53M2HmiUtM",
"EHc5EsaJyT3",
"8fLypehTKSq",
"nips_2021_RQJWn82Xga2"
] |
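To make the mechanism in the row above concrete: a Bayesian classifier chain predicts each label conditioned only on its parents in a label DAG, sweeping the DAG in topological order. The sketch below is a deliberately simplified stand-in — per-label logistic regressions and greedy decoding on a hypothetical three-label DAG — whereas the paper's RBCC learns the Bayesian network structure, uses a recurrent network, and optimizes exact-match (subset) accuracy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical label DAG over 3 labels: 0 -> 1 and 0 -> 2, with a
# topological order. In RBCC this structure is learned, not given.
parents = {0: [], 1: [0], 2: [0]}
order = [0, 1, 2]

# Synthetic multi-label data whose labels respect the DAG.
X = rng.normal(size=(500, 10))
Y = np.zeros((500, 3), dtype=int)
Y[:, 0] = (X[:, 0] > 0).astype(int)
Y[:, 1] = ((Y[:, 0] == 1) & (X[:, 1] > 0)).astype(int)
Y[:, 2] = ((Y[:, 0] == 0) & (X[:, 2] > 0)).astype(int)

# One classifier per label, conditioned on the features and on the
# label's parents only (rather than on all previously predicted labels).
models = {}
for l in order:
    feats = np.hstack([X] + [Y[:, [p]] for p in parents[l]])
    models[l] = LogisticRegression().fit(feats, Y[:, l])

def predict_one(x):
    """Greedy decoding: sweep the DAG in topological order, feeding each
    classifier its parents' already-predicted labels."""
    y = {}
    for l in order:
        feats = np.concatenate([x] + [[float(y[p])] for p in parents[l]])
        y[l] = int(models[l].predict(feats.reshape(1, -1))[0])
    return [y[l] for l in order]

print(predict_one(X[0]), 'vs ground truth', Y[0].tolist())
```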
nips_2021_9IJLHPuLpvZ | Wasserstein Flow Meets Replicator Dynamics: A Mean-Field Analysis of Representation Learning in Actor-Critic | Actor-critic (AC) algorithms, empowered by neural networks, have had significant empirical success in recent years. However, most of the existing theoretical support for AC algorithms focuses on the case of linear function approximations, or linearized neural networks, where the feature representation is fixed throughout training. Such a limitation fails to capture the key aspect of representation learning in neural AC, which is pivotal in practical problems. In this work, we take a mean-field perspective on the evolution and convergence of feature-based neural AC. Specifically, we consider a version of AC where the actor and critic are represented by overparameterized two-layer neural networks and are updated with two-timescale learning rates. The critic is updated by temporal-difference (TD) learning with a larger stepsize while the actor is updated via proximal policy optimization (PPO) with a smaller stepsize. In the continuous-time and infinite-width limiting regime, when the timescales are properly separated, we prove that neural AC finds the globally optimal policy at a sublinear rate. Additionally, we prove that the feature representation induced by the critic network is allowed to evolve within a neighborhood of the initial one.
| accept | All reviewers thought this paper made useful contributions around strengthening the theoretical foundations of RL through an analysis of an AC algorithm with neural networks. AC is a very popular approach in RL and this will likely be of interest to the large body of researchers interested in RL. The authors are encouraged to address the reviewers' feedback in their camera ready paper. | val | [
"WqpDEncuUQw",
"iUUe2uwVnb",
"HMg3VIQ6uB",
"1n1_61f59Zw",
"bJcygNMZ77u",
"RRgkLvxdUyc",
"3gwi5VnV270",
"K94kgWE3ve",
"Dux-9HWx2US",
"wQkNTIj1xpl"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors analyse a mean-field limit of an actor-critic algorithm for which the critic is parametrized with a single-hidden-layer neural network, and the policy is tabular. The analysis is carried out in continuous time, and in the limit of infinite network width. The analysis of reinforcement learning algorith... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
3,
2,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
"nips_2021_9IJLHPuLpvZ",
"nips_2021_9IJLHPuLpvZ",
"3gwi5VnV270",
"bJcygNMZ77u",
"wQkNTIj1xpl",
"Dux-9HWx2US",
"iUUe2uwVnb",
"WqpDEncuUQw",
"nips_2021_9IJLHPuLpvZ",
"nips_2021_9IJLHPuLpvZ"
] |
nips_2021_myJO35O7Gg | Assessing Fairness in the Presence of Missing Data | Missing data are prevalent and present daunting challenges in real data analysis. While there is a growing body of literature on fairness in analysis of fully observed data, there has been little theoretical work on investigating fairness in analysis of incomplete data. In practice, a popular analytical approach for dealing with missing data is to use only the set of complete cases, i.e., observations with all features fully observed to train a prediction algorithm. However, depending on the missing data mechanism, the distribution of complete cases and the distribution of the complete data may be substantially different. When the goal is to develop a fair algorithm in the complete data domain where there are no missing values, an algorithm that is fair in the complete case domain may show disproportionate bias towards some marginalized groups in the complete data domain. To fill this significant gap, we study the problem of estimating fairness in the complete data domain for an arbitrary model evaluated merely using complete cases. We provide upper and lower bounds on the fairness estimation error and conduct numerical experiments to assess our theoretical results. Our work provides the first known theoretical results on fairness guarantee in analysis of incomplete data.
| accept | The reviewers agreed that this is a well-written paper that addresses a relatively unexplored facet of fair ML: the impact of missing data on fairness. The setup considered in the paper is simple, yet amenable to an interesting theoretical analysis. However, the paper also has limitations, particularly in terms of how the bounds can translate to "fair imputation methods," as noted by reviewer ykYW. I agree with this concern, and wished that the authors had gone further than the results in the paper and discussed methods to *ensure* fairness in the presence of missing data. The reviewers also noted that the experimental results are on datasets where entries are artificially missing, raising concerns about the practical impact of the paper in settings where data missingness may be correlated with group attributes.
I also found the claim "Our work provides the first known results on fairness guarantee in analysis of incomplete data" to be a bit misleading, since there have been a few papers that at least discuss the topic (e.g., "Missing the missing values: The ugly duckling of fairness in machine learning" by Fernando et al. and https://arxiv.org/abs/1911.12587), albeit to a less theoretical extent.
Overall, the merits of this paper outweigh its limitations, and its publication will encourage more discussion on the impact of missing data on fairness.
| test | [
"zo-s4R6VZb_",
"Sbxy1eWRhGr",
"QkanMv14lEL",
"BsVaubo5epP",
"EFEmNyy7YuF",
"GZNkM1_aixe",
"3-as91VwJaD",
"UEUedoVZOyJ",
"SAIuKHgHHg4"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarifications.\n\nI agree with the other reviewers that applicability of the results seem lacking and therefore deserves a more in-depth treatment. However I still think that those results are useful even in a vacuum and therefore make an interesting paper.\n\nGiven this I have decided to keep ... | [
-1,
-1,
-1,
-1,
-1,
7,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"BsVaubo5epP",
"SAIuKHgHHg4",
"UEUedoVZOyJ",
"3-as91VwJaD",
"GZNkM1_aixe",
"nips_2021_myJO35O7Gg",
"nips_2021_myJO35O7Gg",
"nips_2021_myJO35O7Gg",
"nips_2021_myJO35O7Gg"
] |
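A small numerical illustration of the gap the row above is about — the fairness of a fixed classifier measured on complete cases versus on the complete data — can be produced in a few lines. Everything below (the classifier, the feature-dependent missingness) is a made-up toy, not the paper's setup or its bounds; it only shows that when missingness depends on the data, the complete-case demographic-parity gap can differ from the complete-data one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Synthetic population: protected group a, one feature x, and the
# predictions yhat of some fixed classifier we want to audit.
a = rng.integers(0, 2, n)
x = rng.normal(loc=0.5 * a, size=n)
yhat = (x > 0.25).astype(int)

# Feature-dependent missingness: rows with larger x are more likely to be
# fully observed, so the complete cases are not a random subsample.
observed = rng.random(n) < 1.0 / (1.0 + np.exp(-x))

def dp_gap(mask):
    """Demographic-parity gap |P(yhat=1 | a=0) - P(yhat=1 | a=1)| on a subset."""
    p0 = yhat[mask & (a == 0)].mean()
    p1 = yhat[mask & (a == 1)].mean()
    return abs(p0 - p1)

print('gap on complete data :', round(dp_gap(np.ones(n, dtype=bool)), 4))
print('gap on complete cases:', round(dp_gap(observed), 4))
```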
nips_2021_xlNpxfGMTTu | Adversarial Attack Generation Empowered by Min-Max Optimization | The worst-case training principle that minimizes the maximal adversarial loss, also known as adversarial training (AT), has shown to be a state-of-the-art approach for enhancing adversarial robustness. Nevertheless, min-max optimization beyond the purpose of AT has not been rigorously explored in the adversarial context. In this paper, we show how a general notion of min-max optimization over multiple domains can be leveraged to the design of different types of adversarial attacks. In particular, given a set of risk sources, minimizing the worst-case attack loss can be reformulated as a min-max problem by introducing domain weights that are maximized over the probability simplex of the domain set. We showcase this unified framework in three attack generation problems -- attacking model ensembles, devising universal perturbation under multiple inputs, and crafting attacks resilient to data transformations. Extensive experiments demonstrate that our approach leads to substantial attack improvement over the existing heuristic strategies as well as robustness improvement over state-of-the-art defense methods against multiple perturbation types. Furthermore, we find that the self-adjusted domain weights learned from min-max optimization can provide a holistic tool to explain the difficulty level of attack across domains.
| accept | The review results for this paper are borderline. While the reviewers appreciate some of the positive aspects of the paper (such as the formulation and the comprehensive numerical experiments), they have reservations about the novelty and the presentation of parts of the paper. However, regarding the contribution of the paper, the authors' argument about the novelty of 1909.13806 (published) w.r.t. 1906.03563 (unpublished) is reasonable and may justify the contribution. | train | [
"EftARQaU0sq",
"j2FxTlTgGQn",
"YYDsiwmuBZj",
"lLZrtV5CRsa",
"hP6rzr996uD",
"lRz-y3hyGos",
"5qSLNR2kJIq",
"_m22id-pFZD",
"HLrP5S4zbwm",
"GBnG7jplmLW",
"9MlrYsmYirj",
"LtOLp2yZ8Rs",
"_rO9J9IKaNz",
"EFMtHKBYxea",
"MJjlArPtTL_",
"PGarEgKDEaK",
"LJDD91ArQFE",
"ksgH9S4yFX",
"5tlAwQVgEW... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
... | [
" Thank you very much for raising the score. We are very glad to learn that our responses have made the points clearer. We will revise the paper according to your insightful comments. \n\n",
" Thank you very much for the follow-up comment “I thank the authors for their response, but I am still concerned about the... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
4,
7,
5
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
4,
3
] | [
"hP6rzr996uD",
"YYDsiwmuBZj",
"MJjlArPtTL_",
"nips_2021_xlNpxfGMTTu",
"lRz-y3hyGos",
"5qSLNR2kJIq",
"9MlrYsmYirj",
"HLrP5S4zbwm",
"ksgH9S4yFX",
"ksgH9S4yFX",
"LJDD91ArQFE",
"MJjlArPtTL_",
"XQgnqNeBpYq",
"nips_2021_xlNpxfGMTTu",
"pStzwC3P7tL",
"gbCq0Oo_K6D",
"lLZrtV5CRsa",
"XEP-A3nU... |
nips_2021_wfGbrrWgXDm | Safe Pontryagin Differentiable Programming | We propose a Safe Pontryagin Differentiable Programming (Safe PDP) methodology, which establishes a theoretical and algorithmic framework to solve a broad class of safety-critical learning and control tasks---problems that require the guarantee of safety constraint satisfaction at any stage of the learning and control progress. In the spirit of interior-point methods, Safe PDP handles different types of system constraints on states and inputs by incorporating them into the cost or loss through barrier functions. We prove three fundamentals of the proposed Safe PDP: first, both the solution and its gradient in the backward pass can be approximated by solving their more efficient unconstrained counterparts; second, the approximation for both the solution and its gradient can be controlled for arbitrary accuracy by a barrier parameter; and third, importantly, all intermediate results throughout the approximation and optimization strictly respect the constraints, thus guaranteeing safety throughout the entire learning and control process. We demonstrate the capabilities of Safe PDP in solving various safety-critical tasks, including safe policy optimization, safe motion planning, and learning MPCs from demonstrations, on different challenging systems such as 6-DoF maneuvering quadrotor and 6-DoF rocket powered landing.
| accept | The authors' response has addressed the reviewers' concerns, and all reviewers agree on acceptance. I encourage the authors to revise the draft by including the discussion and the additional experiments from the rebuttal. | test | [
"9ZrOLZ1atS",
"qLqGOrQMJSK",
"QlGvcRbxrT",
"-1EMgQ37AT",
"jqyn-2jeJjU",
"MzX3hpG59BO",
"pPD2ln4FoX",
"tjSmoBA9436",
"3sRtmRfrro",
"hSsjwtw0oMr",
"EyrfT6pioUh",
"ibOMp_IxRt",
"Wex7jgw7ceF",
"pezqKlgzyDf",
"1X2jDj8PYSH",
"bX9Cn1yxGSn",
"5VUGFbP9kiP"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We sincerely thank the reviewer for reading through our response. We believe that your valuable comments have improved the paper. Thank you again for your comments.\n\n",
" Thanks the authors for a great elaborated response. I found it very insightful and convincing.",
" We sincerely thank the reviewer for t... | [
-1,
-1,
-1,
-1,
6,
-1,
9,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
-1,
-1,
-1,
4,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"qLqGOrQMJSK",
"1X2jDj8PYSH",
"-1EMgQ37AT",
"ibOMp_IxRt",
"nips_2021_wfGbrrWgXDm",
"tjSmoBA9436",
"nips_2021_wfGbrrWgXDm",
"Wex7jgw7ceF",
"EyrfT6pioUh",
"nips_2021_wfGbrrWgXDm",
"pezqKlgzyDf",
"jqyn-2jeJjU",
"pPD2ln4FoX",
"hSsjwtw0oMr",
"5VUGFbP9kiP",
"nips_2021_wfGbrrWgXDm",
"nips_2... |
nips_2021_jFMzBeLyTc0 | Class-Disentanglement and Applications in Adversarial Detection and Defense | Kaiwen Yang, Tianyi Zhou, Yonggang Zhang, Xinmei Tian, Dacheng Tao | accept | The paper addresses adversarial detection and defense through a model that disentangles task-relevant and -irrelevant information. The model performs favorably against baselines, and the authors provided additional, more recent baselines following feedback from reviewers. On the whole, the paper received a large number of reviews that were positive with engaging and informative discussion, so I believe that this paper has something positive to offer for the venue. There were some noted weaknesses to the paper; the reviewers believe the paper should still be accepted, but they ask that some of their requested changes make it to the final draft. I recommend acceptance as a poster. | train | [
"2yJl264Q773",
"PPWyka-k2ZE",
"yP0aLHs9_AM",
"MjNSmCps6Yn",
"4rzMJ3jkQUy",
"fnVnZty1isp",
"RwVpQwQaArj",
"q5YlaqPL0WE",
"SqKcqGcuSR",
"pKbEE4dRRK",
"-N8IRGp2ENb",
"JKcI-T1XpMP",
"uo44q2cmK0s",
"6BkXbgjhwdS",
"CbpgQIKlsDZ",
"K6v3OVdQTVs",
"e4dYFeZCwvh",
"OR-S9UB-tVT",
"nzEYW9bA8v-... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
... | [
" Dear Reviewer qF3G,\n\nThanks for your feedback! Since **all your concerns are well addressed** with **Five new groups of experiments posted**, and you are positive about the new experimental results and the answers they indicate, would you mind reconsidering your original rating? \n\nWe believe that our method i... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"4rzMJ3jkQUy",
"MjNSmCps6Yn",
"nips_2021_jFMzBeLyTc0",
"bIWw9h2QN3",
"4Ow-qTLQyLe",
"aKLQ9Jm6hy",
"yP0aLHs9_AM",
"pKbEE4dRRK",
"nips_2021_jFMzBeLyTc0",
"OR-S9UB-tVT",
"6BkXbgjhwdS",
"aKLQ9Jm6hy",
"nips_2021_jFMzBeLyTc0",
"K3lHvROhmVo",
"aKLQ9Jm6hy",
"QqNZ7cg7vb8",
"uo44q2cmK0s",
"n... |
nips_2021_zdTW91r2wKO | Active 3D Shape Reconstruction from Vision and Touch | Humans build 3D understandings of the world through active object exploration, using jointly their senses of vision and touch. However, in 3D shape reconstruction, most recent progress has relied on static datasets of limited sensory data such as RGB images, depth maps or haptic readings, leaving the active exploration of the shape largely unexplored. In active touch sensing for 3D reconstruction, the goal is to actively select the tactile readings that maximize the improvement in shape reconstruction accuracy. However, the development of deep learning-based active touch models is largely limited by the lack of frameworks for shape exploration. In this paper, we focus on this problem and introduce a system composed of: 1) a haptic simulator leveraging high spatial resolution vision-based tactile sensors for active touching of 3D objects; 2) a mesh-based 3D shape reconstruction model that relies on tactile or visuotactile signals; and 3) a set of data-driven solutions with either tactile or visuotactile priors to guide the shape exploration. Our framework enables the development of the first fully data-driven solutions to active touch on top of learned models for object understanding. Our experiments show the benefits of such solutions in the task of 3D shape understanding where our models consistently outperform natural baselines. We provide our framework as a tool to foster future research in this direction.
| accept | The paper proposes a learning-based approach to 3D reconstruction using a combination of (optional) visual and tactile measurements. A modified neural network takes as input visual and tactile observations and outputs an estimate of the object's 3D shape. This network is coupled with an active perception policy that chooses the next contact point to facilitate tactile-based reconstruction. The method is evaluated through a newly proposed simulator that provides synthetic tactile and visual measurements.
The reviewers agree that the idea of combining tactile observations with visual measurements in an active fashion is a compelling approach to object-scale 3D reconstruction. The paper is well written and easy to follow, and provides an extensive set of experiments that demonstrate the effectiveness of the approach. However, the experiments are limited to simulated data and thus it is not clear how well the framework would perform when operating with real-world visual and tactile measurements. The authors are encouraged to provide a more compelling discussion of the generalizability of the method and, ideally, include real-world experimental results. | train | [
"OwepdVCO9Lg",
"wjeVP_wqrl",
"OLf3ieQBLlk",
"fOc9Gd1nYq-",
"S176xF-bCyy",
"A7E-tMAbjoJ",
"zoZzaoPsTpS",
"yH1DbMpv5Cs",
"0fKmiNbgsDw"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces and explores the task of active 3D reconstruction using visual and haptic data. Instead of relying on static data to perform reconstruction, the learning process is guided by exploration, from which new data can be accounted for in the reconstruction process by the proposed model. In order to... | [
7,
-1,
-1,
-1,
-1,
-1,
5,
8,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"nips_2021_zdTW91r2wKO",
"OLf3ieQBLlk",
"zoZzaoPsTpS",
"yH1DbMpv5Cs",
"0fKmiNbgsDw",
"OwepdVCO9Lg",
"nips_2021_zdTW91r2wKO",
"nips_2021_zdTW91r2wKO",
"nips_2021_zdTW91r2wKO"
] |
nips_2021_n-FqqWXnWW | CAPE: Encoding Relative Positions with Continuous Augmented Positional Embeddings | Without positional information, attention-based Transformer neural networks are permutation-invariant. Absolute or relative positional embeddings are the most popular ways to feed Transformer models with positional information. Absolute positional embeddings are simple to implement, but suffer from generalization issues when evaluating on sequences longer than seen at training time. Relative positions are more robust to input length change, but are more complex to implement and yield inferior model throughput due to extra computational and memory costs. In this paper, we propose an augmentation-based approach (CAPE) for absolute positional embeddings, which keeps the advantages of both absolute (simplicity and speed) and relative positional embeddings (better generalization). In addition, our empirical evaluation on state-of-the-art models in machine translation, image and speech recognition demonstrates that CAPE leads to better generalization performance as well as increased stability with respect to training hyper-parameters.
| accept | Although there is some variance in the reviewer assessments about the paper, most reviewers are positive and think the work nicely integrates a set of techniques for positional encoding that can capture relative positional relationships. The work evaluates these techniques in several distinct tasks, including vision, language and speech. As Reviewer MLDf commented: "It (the technique) seems to help on speech and vision, but not so much on machine translation." I want to thank the authors for providing extensive additional experimental results in the rebuttal, which strengthen the paper. The reviewers have provided many great points for improving the paper, which I hope the authors can incorporate in the revision. | test | [
"aS7jVv1Ui67",
"IYLsdQuXeWa",
"xlT6PP6Ec53",
"WaJAjaGCugU",
"y3k8Kx7o3PO",
"2g4xpbIVkT2",
"DebovbWo1mU",
"HROxOiFyWCi",
"ABS8v_QXRGp",
"7tvA75c5WN2",
"T2A8D25HGy",
"PqsYA6qcADS",
"TuhxCDFESXc",
"nUt74BFbBFg"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper claimed that the existing positional encoding mechanism (i.g., absolute or relative positional encodings) for the Transformer model has a generalization issue, that is, these position encoding methods have their own advantages and disadvantages, which may prevent the Transformer model from simulating po... | [
4,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_n-FqqWXnWW",
"WaJAjaGCugU",
"nips_2021_n-FqqWXnWW",
"y3k8Kx7o3PO",
"DebovbWo1mU",
"HROxOiFyWCi",
"nips_2021_n-FqqWXnWW",
"aS7jVv1Ui67",
"7tvA75c5WN2",
"TuhxCDFESXc",
"nUt74BFbBFg",
"xlT6PP6Ec53",
"nips_2021_n-FqqWXnWW",
"nips_2021_n-FqqWXnWW"
] |
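A rough sketch of the augmentation idea in the CAPE row above: keep absolute (sinusoidal) positional embeddings, but randomize the positions at training time with a continuous global shift, small per-position jitter, and a global scale, so the model cannot latch onto absolute indices. The function names and augmentation ranges below are invented; the paper's exact recipe and hyperparameters differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def sinusoidal(pos, d_model):
    """Standard sinusoidal embedding, evaluated at possibly non-integer positions."""
    i = np.arange(d_model // 2)
    ang = pos[:, None] / (10000.0 ** (2.0 * i / d_model))
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)

def augmented_positions(seq_len, train=True, max_shift=30.0,
                        max_jitter=0.5, max_log_scale=np.log(1.4)):
    """Continuous positions with a random global shift, per-position jitter,
    and a global scale applied only at training time (invented ranges)."""
    pos = np.arange(seq_len, dtype=float)
    if train:
        pos = pos + rng.uniform(-max_shift, max_shift)                  # global shift
        pos = pos + rng.uniform(-max_jitter, max_jitter, seq_len)       # local jitter
        pos = pos * np.exp(rng.uniform(-max_log_scale, max_log_scale))  # scaling
    return pos

emb = sinusoidal(augmented_positions(128), d_model=64)
print(emb.shape)  # (128, 64): added to token embeddings as usual
```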
nips_2021_iZDMbX1W8AV | Multi-armed Bandit Requiring Monotone Arm Sequences | Ningyuan Chen | accept | All reviewers have agreed that the paper is nicely written and presents a proper solution to the problem examined (given the fixes to the proof agreed upon with the authors during the discussion phase), and hence all of them recommend acceptance. On the other hand, there is also an agreement that the paper lacks proper motivation, as none of the examples mentioned by the authors (including the ones in the discussion) properly fit the framework provided. The paper would be an excellent fit for the conference with a proper motivation, and the authors are highly encouraged to demonstrate the existence of a realistic-looking problem where the framework is indeed applicable (otherwise this will remain a purely mathematical contribution). | test | [
"z3P8IhArTwN",
"0m-BsSq00FX",
"c5JFpId3ssS",
"Fyh89xV3Ki4",
"-VwHr03de4K",
"OpgkSRbZx57",
"G5CKW6VXCgo",
"kw96jVzamRk",
"1XA6ao5Nfnl",
"B1iqlQbovZ",
"QNw2VIJdW9"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your answers. Taking everything into consideration, I keep my rating as-is.",
" Dear reviewer,\n\nWe will follow your suggestion to streamline and tighten up the sequence of inequalities after line 506. In particular, for the first line, we will establish that for any arm sequence $(x_1,x_2,...,x_... | [
-1,
-1,
-1,
6,
7,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
4,
5
] | [
"0m-BsSq00FX",
"c5JFpId3ssS",
"OpgkSRbZx57",
"nips_2021_iZDMbX1W8AV",
"nips_2021_iZDMbX1W8AV",
"QNw2VIJdW9",
"-VwHr03de4K",
"Fyh89xV3Ki4",
"B1iqlQbovZ",
"nips_2021_iZDMbX1W8AV",
"nips_2021_iZDMbX1W8AV"
] |
nips_2021_yRfsADObu18 | Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning | Collaborative machine learning provides a promising framework for different agents to pool their resources (e.g., data) for a common learning task. In realistic settings where agents are self-interested and not altruistic, they may be unwilling to share data or model without adequate rewards. Furthermore, as the data/model shared by the agents may differ in quality, designing rewards which are fair to them is important so that they do not feel exploited nor discouraged from sharing. In this paper, we adopt federated learning as a gradient-based formalization of collaborative machine learning, propose a novel cosine gradient Shapley value to evaluate the agents’ uploaded model parameter updates/gradients, and design theoretically guaranteed fair rewards in the form of better model performance. Compared to existing baselines, our approach is more efficient and does not require a validation dataset. We perform extensive experiments to demonstrate that our proposed approach achieves better fairness and predictive performance.
| accept | This paper develops an interesting measure for user contribution in collaborative machine learning and an associated compensation mechanism. In general, the reviewers are positive about this paper. The discussion has resolved several issues raised by the reviewers. Please update the paper to clarify the issues in the next revision. | train | [
"PKeZLuFCLY7",
"5qy1Gx0LvLQ",
"r4nIaB3B6FK",
"YkVXHcqrDN",
"_otvZaEJFXG",
"qrDOMGZKQxC",
"qxvK2WO8A6",
"XreM_ghyMhL",
"lNRLkWQ300",
"F1CXG9BZQot",
"UxQBXSqIinr",
"cYK6q_zdHe",
"kDGk44qGggy",
"EZvV1T-uUt",
"5FFqukCUeAh",
"4IIGnnO4tJo",
"03Hky-aF_5L",
"xxV4qJ1UPX3",
"1nUc_TwZv5_",
... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"A gradient-based solution to the fair reward allocation problem in gradient-based collaborative ML/FL with theoretical guarantees on fairness. The authors proposed a Cosine Gradient Shapley to formalize the fairness.\n Strengths:\n1. An efficient approximation to the Cosine Gradient Shapley to address the computa... | [
7,
-1,
-1,
-1,
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
-1,
-1,
-1,
-1,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_yRfsADObu18",
"1nUc_TwZv5_",
"0tFgbUGV6o2",
"PKeZLuFCLY7",
"nips_2021_yRfsADObu18",
"nips_2021_yRfsADObu18",
"nips_2021_yRfsADObu18",
"F1CXG9BZQot",
"F1CXG9BZQot",
"EZvV1T-uUt",
"0tFgbUGV6o2",
"qrDOMGZKQxC",
"PKeZLuFCLY7",
"5FFqukCUeAh",
"4IIGnnO4tJo",
"qxvK2WO8A6",
"qrDOM... |
nips_2021_lp9foO8AFoD | Generalizable Imitation Learning from Observation via Inferring Goal Proximity | Task progress is intuitive and readily available task information that can guide an agent closer to the desired goal. Furthermore, a progress estimator can generalize to new situations. From this intuition, we propose a simple yet effective imitation learning from observation method for a goal-directed task using a learned goal proximity function as a task progress estimator, for better generalization to unseen states and goals. We obtain this goal proximity function from expert demonstrations and online agent experience, and then use the learned goal proximity as a dense reward for policy training. We demonstrate that our proposed method can robustly generalize compared to prior imitation learning methods on a set of goal-directed tasks in navigation, locomotion, and robotic manipulation, even with demonstrations that cover only a part of the states.
| accept | The author response convinced all reviewers; they now unanimously vote to accept. | train | [
"kkEas-wlqR",
"eYB1z_9uTx",
"1kl3bBVa99J",
"pIO0AFbcTn",
"0lQQD2OGxG",
"64s5VjRswZz",
"samd5En8oMB",
"Oq8c36NCqET",
"T8ciCAhX5AV",
"CIl66tumYb8",
"1nR0QXl6o53",
"-CMVVVy2wvP",
"RCyQCYTZ3u3",
"u268chxINe2",
"wBJaQRZ2ax5",
"Rkm_pPKpDa",
"lBSaLWzKBs"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for providing valuable feedback and recognizing our responses.",
" Thank you for further clarification! I have updated the score.",
"The paper presents a learning from observation approach that uses a temporal goal proximity estimate as reward function for policy learning. The temporal g... | [
-1,
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"eYB1z_9uTx",
"pIO0AFbcTn",
"nips_2021_lp9foO8AFoD",
"64s5VjRswZz",
"nips_2021_lp9foO8AFoD",
"-CMVVVy2wvP",
"-CMVVVy2wvP",
"RCyQCYTZ3u3",
"CIl66tumYb8",
"u268chxINe2",
"wBJaQRZ2ax5",
"1kl3bBVa99J",
"0lQQD2OGxG",
"Rkm_pPKpDa",
"lBSaLWzKBs",
"nips_2021_lp9foO8AFoD",
"nips_2021_lp9foO8A... |
nips_2021_eQ7Kh-QeWnO | DualNet: Continual Learning, Fast and Slow | According to Complementary Learning Systems (CLS) theory~\cite{mcclelland1995there} in neuroscience, humans do effective \emph{continual learning} through two complementary systems: a fast learning system centered on the hippocampus for rapid learning of the specifics and individual experiences, and a slow learning system located in the neocortex for the gradual acquisition of structured knowledge about the environment. Motivated by this theory, we propose a novel continual learning framework named ``DualNet", which comprises a fast learning system for supervised learning of pattern-separated representation from specific tasks and a slow learning system for unsupervised representation learning of task-agnostic general representation via a Self-Supervised Learning (SSL) technique. The two fast and slow learning systems are complementary and work seamlessly in a holistic continual learning framework. Our extensive experiments on two challenging continual learning benchmarks of CORE50 and miniImageNet show that DualNet outperforms state-of-the-art continual learning methods by a large margin. We further conduct ablation studies of different SSL objectives to validate DualNet's efficacy, robustness, and scalability. Code is publicly available at \url{https://github.com/phquang/DualNet}.
| accept | This paper provides a dual-learning mechanism that uses a novel combination of a separated representation learner with a lean feature adaptation model. The reviewers found their approach convincing and that it performs well against current competitors. They also found this paper to be well written, and after discussion on various technical aspects, quite compelling. The one slightly negative review provided little justification for their points and chose not to engage with the authors after the authors provided a detailed rebuttal against the points raised in the more negative review. | train | [
"gJBbtEyun3",
"gu5-HInezY",
"canbTSZzP9t",
"1V1XnsyjOyc",
"GXqy4cX002",
"j6WMlIjsFN7",
"j0iHYZ_vDHP",
"tPVL84VuJoO",
"Grl2KrTspC",
"ZeVtfkTvpp6",
"8yMjKE_xtMq",
"qOlkm0l0SIE",
"rM-gBFNqm8h",
"LUw0Ak7iJFQ",
"aQzOfzrL7dC"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"Inspired by Complementary Learning Systems (CLS), the paper proposes a framework for performing continual learning which is composed of self-supervised learning stream (aka slow learner) that conducts representation learning and of supervised learning stream (fast learner) that attains knowledge from labeled sampl... | [
6,
-1,
-1,
-1,
-1,
-1,
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"nips_2021_eQ7Kh-QeWnO",
"gJBbtEyun3",
"1V1XnsyjOyc",
"GXqy4cX002",
"gJBbtEyun3",
"tPVL84VuJoO",
"nips_2021_eQ7Kh-QeWnO",
"ZeVtfkTvpp6",
"nips_2021_eQ7Kh-QeWnO",
"8yMjKE_xtMq",
"rM-gBFNqm8h",
"Grl2KrTspC",
"j0iHYZ_vDHP",
"gJBbtEyun3",
"nips_2021_eQ7Kh-QeWnO"
] |
nips_2021_P-if5sUWBn | Deformable Butterfly: A Highly Structured and Sparse Linear Transform | We introduce a new kind of linear transform named Deformable Butterfly (DeBut) that generalizes the conventional butterfly matrices and can be adapted to various input-output dimensions. It inherits the fine-to-coarse-grained learnable hierarchy of traditional butterflies, and when deployed to neural networks, the prominent structures and sparsity in a DeBut layer constitute a new way for network compression. We apply DeBut as a drop-in replacement of standard fully connected and convolutional layers, and demonstrate its superiority in homogenizing a neural network and giving it favorable properties such as light weight and low inference complexity, without compromising accuracy. The natural complexity-accuracy tradeoff arising from the myriad deformations of a DeBut layer also opens up new room for analytical and practical research. The codes and Appendix are publicly available at: https://github.com/ruilin0212/DeBut.
| accept | Congratulations, the paper is accepted to NeurIPS 2021!
Please incorporate the edits and corrections as discussed in the rebuttal and reviews.
Furthermore, tone down the "sensational" language used in the work; try to keep language as factual as possible. | train | [
"IlLLtvbBxy",
"Td2YROzzzF3",
"LyPo00w_qjK",
"MBfzpvtcJsK",
"7KzHqTg2_k7",
"SJznt-bBj-S",
"cAg_S1o79fu",
"XaHoW4OiE84",
"gCS2tA5YUD",
"3S_J8znhfwZ",
"68kIMcqliwy",
"0ox8CpUpC5R"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes \"Deformable Butterfly\" (DeBUT) a generalization of butterfly transforms (the linear operator behind the FFT), which can support non power-of-to inputs/outputs. \nIt studies how to plug such transforms into various architectures (LeNet, VGG, ResNet), from the perspective of filter design, initi... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
4
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"nips_2021_P-if5sUWBn",
"nips_2021_P-if5sUWBn",
"nips_2021_P-if5sUWBn",
"SJznt-bBj-S",
"0ox8CpUpC5R",
"7KzHqTg2_k7",
"IlLLtvbBxy",
"68kIMcqliwy",
"3S_J8znhfwZ",
"nips_2021_P-if5sUWBn",
"nips_2021_P-if5sUWBn",
"nips_2021_P-if5sUWBn"
] |
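For intuition on the row above: a conventional butterfly matrix for n = 2^L is a product of L factors, each with only two nonzeros per row (pairing indices that differ in one bit), so the transform has O(n log n) parameters and can be applied in O(n log n) once the sparsity is exploited. The check below uses dense NumPy arrays purely for verification; DeBut's contribution is generalizing these factors beyond power-of-two shapes, which is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def butterfly_factors(n):
    """Random butterfly factors for n = 2^L: factor l couples index pairs
    that differ in bit l, so every row holds exactly two nonzeros."""
    L = int(np.log2(n))
    factors = []
    for l in range(L):
        B = np.zeros((n, n))
        for i in range(n):
            j = i ^ (1 << l)            # butterfly partner of index i
            B[i, i], B[i, j] = rng.normal(), rng.normal()
        factors.append(B)
    return factors

n = 16
factors = butterfly_factors(n)
W = np.linalg.multi_dot(factors)        # dense equivalent, for checking only

x = rng.normal(size=n)
y = x.copy()
for B in reversed(factors):             # apply factors right-to-left
    y = B @ y                           # O(n) per factor if sparsity is used
print('matches dense product:', np.allclose(W @ x, y))

nnz = sum(int((B != 0).sum()) for B in factors)
print(f'{nnz} nonzeros across factors vs {n * n} entries in the dense matrix')
```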
nips_2021_MDMV2SxCboX | Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning | Pretrained language models have achieved state-of-the-art performance when adapted to a downstream NLP task. However, theoretical analysis of these models is scarce and challenging since the pretraining and downstream tasks can be very different. We propose an analysis framework that links the pretraining and downstream tasks with an underlying latent variable generative model of text -- the downstream classifier must recover a function of the posterior distribution over the latent variables. We analyze head tuning (learning a classifier on top of the frozen pretrained model) and prompt tuning in this setting. The generative model in our analysis is either a Hidden Markov Model (HMM) or an HMM augmented with a latent memory component, motivated by long-term dependencies in natural language. We show that 1) under certain non-degeneracy conditions on the HMM, simple classification heads can solve the downstream task, 2) prompt tuning obtains downstream guarantees with weaker non-degeneracy conditions, and 3) our recovery guarantees for the memory-augmented HMM are stronger than for the vanilla HMM because task-relevant information is easier to recover from the long-term memory. Experiments on synthetically generated data from HMMs back our theoretical findings.
| accept | This paper provides a theoretical analysis of pretrained language models, linking the pretraining and downstream tasks with an underlying latent variable generative model of text. It analyzes head and prompt tuning in this setting, using a memory augmented HMM as the generative model in the analysis. The main findings are that under certain non-degeneracy conditions heads and prompt tuning can solve the downstream task with certain recovery guarantees. The theoretical findings are illustrated with experiments on synthetic data.
All reviewers agree that the theoretical analysis is novel and insightful, and a good first step in understanding the effectiveness of fine-tuning and prompt tuning. They pointed out as a weakness the lack of discussion of the practical implications, which the authors promised to address in the final version. While the use of synthetic data and the lack of experimentation on more realistic tasks are limitations, I believe the theoretical findings are useful and may foster future research considering more realistic scenarios. I urge the authors to take into account the detailed comments made by the reviewers when preparing the final version.
"5ijrR31h0M",
"D56T42rL1y7",
"j2DkS3RrMtD",
"aT-tfPI4zVy",
"gqPPlZCgtFk",
"aGhcVRD7Ad",
"6EvI7UxC_kp",
"o48-9-222K",
"FuVglL8sFGI"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"[UPDATE: Thanks to the authors for engaging with the comments in the review. I hope to see them fully addressed in the revision and look forward to seeing the next version of the paper!]\n\nThe paper investigates how pretrained representations might be expected to help prediction on a downstream task.\n\nIn some ... | [
7,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
2,
3,
2
] | [
"nips_2021_MDMV2SxCboX",
"FuVglL8sFGI",
"o48-9-222K",
"6EvI7UxC_kp",
"aGhcVRD7Ad",
"5ijrR31h0M",
"nips_2021_MDMV2SxCboX",
"nips_2021_MDMV2SxCboX",
"nips_2021_MDMV2SxCboX"
] |
nips_2021_xVs5d5ZSWaa | Learning Diverse Policies in MOBA Games via Macro-Goals | Recently, many researchers have made successful progress in building the AI systems for MOBA-game-playing with deep reinforcement learning, such as on Dota 2 and Honor of Kings. Even though these AI systems have achieved or even exceeded human-level performance, they still suffer from the lack of policy diversity. In this paper, we propose a novel Macro-Goals Guided framework, called MGG, to learn diverse policies in MOBA games. MGG abstracts strategies as macro-goals from human demonstrations and trains a Meta-Controller to predict these macro-goals. To enhance policy diversity, MGG samples macro-goals from the Meta-Controller prediction and guides the training process towards these goals. Experimental results on the typical MOBA game Honor of Kings demonstrate that MGG can execute diverse policies in different matches and lineups, and also outperform the state-of-the-art methods over 102 heroes.
| accept | After reading the reviews, the authors' response, and the discussion, I suggest accepting the paper. The ethical concerns have been answered, and the authors took action to conform to what was required by the ethical review. Questions and concerns about the method were answered by the authors during the rebuttal, and one reviewer upgraded their score from weak reject to weak accept. The authors tackle the problem of learning in a real-time multi-player game, which is extremely difficult. Even if the method is not theoretically sound and relies heavily on human knowledge, it is still a technical feat to achieve good performance on such games. I also suggest the authors revise their claim concerning the analysis of Fig. 6(b), where the difference between RL and their method is extremely minor. | train | [
"dSPMPBxP_bK",
"0CBB19n9tkM",
"oAwxPfwUP0N",
"GDlCwSnNfb0",
"Yu3vLpUoKaa",
"NBgOWjfuYhW",
"e92QncNssrP",
"5G0JB-sdLNv",
"T9a1vXWvxXU",
"JHh100vF7s",
"TawwT-Iifc",
"8odSV9lfWq7",
"ckRSzVIRRBI",
"WZLIGvVz4Co",
"0pxKt-DPQIG",
"A06GTSET5Kh",
"fbu_qADg5Y7"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed response (both now and throughout the discussion period) this significantly improves my view of the paper. I have updated my score to reflect this and encourage you to revise the paper to incorporate this clearer demonstration of the improved performance.",
"This paper proposes improv... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"oAwxPfwUP0N",
"nips_2021_xVs5d5ZSWaa",
"e92QncNssrP",
"T9a1vXWvxXU",
"JHh100vF7s",
"5G0JB-sdLNv",
"TawwT-Iifc",
"8odSV9lfWq7",
"nips_2021_xVs5d5ZSWaa",
"nips_2021_xVs5d5ZSWaa",
"0CBB19n9tkM",
"fbu_qADg5Y7",
"A06GTSET5Kh",
"0pxKt-DPQIG",
"nips_2021_xVs5d5ZSWaa",
"nips_2021_xVs5d5ZSWaa"... |
nips_2021_x_JOyw5CLP | Evaluation of Human-AI Teams for Learned and Rule-Based Agents in Hanabi | Deep reinforcement learning has generated superhuman AI in competitive games such as Go and StarCraft. Can similar learning techniques create a superior AI teammate for human-machine collaborative games? Will humans prefer AI teammates that improve objective team performance or those that improve subjective metrics of trust? In this study, we perform a single-blind evaluation of teams of humans and AI agents in the cooperative card game Hanabi, with both rule-based and learning-based agents. In addition to the game score, used as an objective metric of the human-AI team performance, we also quantify subjective measures of the human's perceived performance, teamwork, interpretability, trust, and overall preference of AI teammate. We find that humans have a clear preference toward a rule-based AI teammate (SmartBot) over a state-of-the-art learning-based AI teammate (Other-Play) across nearly all subjective metrics, and generally view the learning-based agent negatively, despite no statistical difference in the game score. This result has implications for future AI design and reinforcement learning benchmarking, highlighting the need to incorporate subjective metrics of human-AI teaming rather than a singular focus on objective task performance.
| accept | Reviewers were unanimous that the paper addresses an increasingly important area of research (human-AI collaborations), focuses on an understudied but crucial aspect of the research (i.e., how do humans perceive and assess their AI teammates?), and offers insightful results. While they found the paper well-written overall, reviewers suggested several avenues for further clarification and improvement, in particular, they would like the authors to further elaborate on their choice of the learning agent, the number and representativeness of participants in the study, and their arguments around ecological validity. Given the authors’ comprehensive response to the reviews, I am confident they can revise their paper accordingly, so I recommend acceptance. | train | [
"rpq-h97Ad8c",
"cgsDHZA2WQa",
"OKQa1dxlgYh",
"s9JeFSp5E55",
"xQ_wK6YjGwB",
"4vJX3jclYza",
"0_VbxHw0-hE",
"OQhwV-6wL4",
"-Z4ae1cE8as",
"DP-pLuI8tG"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response and the suggestions. We will include the suggested changes if the paper is accepted, either in the main paper or in the supplemental materials as appropriate.",
" Thank you for your detailed response and for clarifying about the statistical tests of the study. I will adjust my score a... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"cgsDHZA2WQa",
"4vJX3jclYza",
"nips_2021_x_JOyw5CLP",
"DP-pLuI8tG",
"-Z4ae1cE8as",
"OKQa1dxlgYh",
"OQhwV-6wL4",
"nips_2021_x_JOyw5CLP",
"nips_2021_x_JOyw5CLP",
"nips_2021_x_JOyw5CLP"
] |
nips_2021_BdKxQp0iBi8 | Counterfactual Invariance to Spurious Correlations in Text Classification | Informally, a 'spurious correlation' is the dependence of a model on some aspect of the input data that an analyst thinks shouldn't matter. In machine learning, these have a know-it-when-you-see-it character; e.g., changing the gender of a sentence's subject changes a sentiment predictor's output. To check for spurious correlations, we can 'stress test' models by perturbing irrelevant parts of input data and seeing if model predictions change. In this paper, we study stress testing using the tools of causal inference. We introduce counterfactual invariance as a formalization of the requirement that changing irrelevant parts of the input shouldn't change model predictions. We connect counterfactual invariance to out-of-domain model performance, and provide practical schemes for learning (approximately) counterfactual invariant predictors (without access to counterfactual examples). It turns out that both the means and implications of counterfactual invariance depend fundamentally on the true underlying causal structure of the data---in particular, whether the label causes the features or the features cause the label. Distinct causal structures require distinct regularization schemes to induce counterfactual invariance. Similarly, counterfactual invariance implies different domain shift guarantees depending on the underlying causal structure. This theory is supported by empirical results on text classification.
| accept | All reviewers have given very positive reviews, with positive score revisions after the discussion.
Briefly, the reviewers appreciated the notion of counterfactual invariance that is discussed, its relation to stability against spurious variation, and the different conditions/regularization schemes needed to enforce it depending on the DAG model. Empirical results on text have been satisfactory as well.
There remain some minor concerns, which the authors have promised to address in their response. I encourage the authors to add the additional material they promised in response to the various reviewer concerns.
| train | [
"Pfv64nlXkB",
"Hs6MJLIr73o",
"WESD0gMSECK",
"rJuHmSfvWib",
"TpxBqHAdC2A",
"tqZDFW1v3Tu",
"aWwHHSTN9F",
"SIvTcWQ87Rr",
"mL537oJ1DXk",
"txz_i1R1bvI",
"h64b1f4zb9",
"I3qbSRwaizC"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. I would appreciate it if you add a discussion regarding comparisons with Robey et al. even if it is in the Supplement due to space constraints. I do not have further questions. \n",
" Do you have any remaining concerns or questions that we can address?",
"This paper presents condition... | [
-1,
-1,
7,
-1,
-1,
8,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
-1,
4,
-1,
-1,
5,
-1,
-1,
-1,
-1,
3,
5
] | [
"Hs6MJLIr73o",
"aWwHHSTN9F",
"nips_2021_BdKxQp0iBi8",
"mL537oJ1DXk",
"SIvTcWQ87Rr",
"nips_2021_BdKxQp0iBi8",
"I3qbSRwaizC",
"tqZDFW1v3Tu",
"WESD0gMSECK",
"h64b1f4zb9",
"nips_2021_BdKxQp0iBi8",
"nips_2021_BdKxQp0iBi8"
] |
nips_2021_I39u89067j | Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training | Lue Tao, Lei Feng, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen | accept | The paper analyzes in a particular context how adversarial training can help prevent delusive adversaries from affecting training. Delusive adversaries are defined to be those that may modify training data (features, not labels) to make the resulting classifier have bad test error. The analysis is specifically in the case of Gaussian noise. There is also experimental evidence provided. This paper prompted a long exchange between reviewers and authors, and the reviewers turned increasingly positive about the paper. The authors must include in the next revision of the paper all of the additional details that came up in the discussion. | train | [
"TjSFp6nlhx8",
"6_X9-Qy3rS",
"81wjFFQI9zT",
"n3gtryJIq_T",
"Q5f0Ubs5YWd",
"4RT370g0tfh",
"rSrH1erIxJG",
"32WslZPzc2K",
"GeQW3UGLdc5",
"Zd1KOPW3wZV",
"DUrDyYNGLmG",
"s-JQHzR-Nhu",
"DFih0Nvm0M",
"8l-IwWU_oif",
"ELl-4EhHF_0",
"wDqDeUrD8LM",
"658xU-DE3Qv",
"qAR42DiYtsb",
"JGEkcXAsstz... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"... | [
" Thank you very much for reading our response and increasing your score. It is great to hear that the concerns have been addressed and that you appreciated our clarifications on our contributions. We like your suggestion about providing a concise summary of our contributions and a more detailed discussion of prev... | [
-1,
-1,
6,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
-1,
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"n3gtryJIq_T",
"4RT370g0tfh",
"nips_2021_I39u89067j",
"ehV87dozi8g",
"nips_2021_I39u89067j",
"32WslZPzc2K",
"Zd1KOPW3wZV",
"GeQW3UGLdc5",
"JGEkcXAsstz",
"DUrDyYNGLmG",
"0Z_rN8pVW6w",
"81wjFFQI9zT",
"ELl-4EhHF_0",
"nips_2021_I39u89067j",
"wDqDeUrD8LM",
"658xU-DE3Qv",
"UT04Y3c-5hV",
... |
nips_2021_QpRufbD4xdn | Determinantal point processes based on orthogonal polynomials for sampling minibatches in SGD | Rémi Bardenet, Subhroshekhar Ghosh, Meixia Lin | accept | This paper presents a convincing theoretical analysis of a technique for minibatch sampling using determinantal point processes, showing that it can lead to variance decaying more rapidly compared with uniform sampling. The result formalizes and refines a commonly felt intuition, and is compelling, significant, and a great fit for NeurIPS. After author feedback, all reviewers favored acceptance. | train | [
"cUT77dHdaym",
"A9MBYwEDr3u",
"V03Odg6b1Eh",
"OUKJQj8Dj3u",
"NtsButRXmuR",
"XeNihfISKN",
"a1yEX52cYXU",
"4dwlflQU8ch",
"ssPLZZ7f40C",
"o3z3jNpkiq",
"SmXeLZNHUA3"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your reply and your answers to my questions (as well as those of the other reviewers)! I am maintaining my score of 7.",
"This paper studies unbiased estimators of SGD minibatch via DPP with orthogonal polynomials. The estimator can be constructed by sampling DPP whose kernel consists of the ortho... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
8
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3
] | [
"XeNihfISKN",
"nips_2021_QpRufbD4xdn",
"a1yEX52cYXU",
"NtsButRXmuR",
"SmXeLZNHUA3",
"ssPLZZ7f40C",
"A9MBYwEDr3u",
"o3z3jNpkiq",
"nips_2021_QpRufbD4xdn",
"nips_2021_QpRufbD4xdn",
"nips_2021_QpRufbD4xdn"
] |
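The sampling primitive behind the row above can be sketched exactly: for a projection DPP with kernel K = ΦΦᵀ (Φ with orthonormal columns), the standard chain-rule sampler draws k diverse indices, and since P(i ∈ batch) = K_ii, reweighting each selected gradient by 1/K_ii yields an unbiased minibatch estimator. The feature choice below (QR-orthonormalized monomials, echoing an orthogonal-polynomial basis) and all sizes are illustrative, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_projection_dpp(Phi):
    """Exact chain-rule sampler for the projection DPP with kernel
    K = Phi @ Phi.T, where Phi (n x k) has orthonormal columns."""
    Phi = Phi.copy()
    n, k = Phi.shape
    batch = []
    for _ in range(k):
        probs = np.clip((Phi ** 2).sum(axis=1), 0.0, None)   # conditional K_ii
        i = rng.choice(n, p=probs / probs.sum())
        batch.append(i)
        # Condition on i being selected: project all feature rows onto the
        # orthogonal complement of row i's direction.
        v = Phi[i] / np.linalg.norm(Phi[i])
        Phi -= np.outer(Phi @ v, v)
    return batch

# Minibatch demo: k points out of n, with features from QR-orthonormalized
# monomials of the data (a discrete orthogonal-polynomial basis).
n, k = 200, 8
xs = np.sort(rng.uniform(-1.0, 1.0, n))
Phi, _ = np.linalg.qr(np.vander(xs, k, increasing=True))

incl = (Phi ** 2).sum(axis=1)     # P(i in batch) = K_ii for a projection DPP
batch = sample_projection_dpp(Phi)
print('batch indices:', sorted(batch))
# Unbiased estimate of a full-data mean: (1/n) * sum over the batch of g_i / incl[i].
```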
nips_2021_j2gshvolULz | Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations | Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection. However, current methods are still primarily applied to curated datasets like ImageNet. In this paper, we first study how biases in the dataset affect existing methods. Our results show that an approach like MoCo works surprisingly well across: (i) object- versus scene-centric, (ii) uniform versus long-tailed and (iii) general versus domain-specific datasets. Second, given the generality of the approach, we try to realize further gains with minor modifications. We show that learning additional invariances - through the use of multi-scale cropping, stronger augmentations and nearest neighbors - improves the representations. Finally, we observe that MoCo learns spatially structured representations when trained with a multi-crop strategy. The representations can be used for semantic segment retrieval and video instance segmentation without finetuning. Moreover, the results are on par with specialized models. We hope this work will serve as a useful study for other researchers.
| accept | This paper had thorough reviews, rebuttals, and discussion. The reviewers all agreed that the work presents a very interesting analysis and set of conclusions, along with some simple methods to improve the results of MoCo-learned representations. While the individual contributions are simple and not necessarily novel from a technical perspective, the reviewers agreed that the set of narratives, experiments, interesting findings, and methods globally provides an interesting novel perspective to the community. The analysis of invariances and especially of dataset types (e.g., object- vs. scene-specific) is indeed interesting; it may exist as common knowledge but has not been thoroughly and rigorously analyzed. In other words, while the paper is largely an empirical investigation, it is well executed in making claims/hypotheses and the resulting analysis.
One of the main concerns expressed by multiple reviewers is the lack of a strong connection and holistic perspective offered across the two sections of the paper. In the rebuttal, the authors provided several arguments and expanded this narrative and connections, which the reviewers were satisfied with. The authors should comprehensively incorporate this (and other great suggestions made by the reviewers) into the final version. While not mandatory, one of the remaining weaknesses that have not been addressed is the applicability of the findings to other self-supervised learning methods. We encourage the authors to add this if at all possible, as it would significantly increase the impact of the paper. | test | [
"j6cIpIL1sw9",
"YyIVX-8VUlC",
"1YWnmK0FVWn",
"e79_tLvB0_",
"G6sRWsLFigK",
"39PjDKFtxP",
"qDwA5v7gh76",
"3KQGrrtOgr",
"3rm890ILurZ",
"6OIkXE9TqQY",
"2nWZbvU3V6_",
"OH8VwROfvHT",
"Sn4dcSGq9jA",
"QaBri6Q6Y73",
"t3rgDT3jKt2",
"jpeedP2Ibst"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the valuable feedback.",
" Thanks for your responses! The rebuttal adequately addresses my concerns. \n\nRegarding some of the other key issues that were raised :\n\n- Novelty : I do agree that individual parts of the proposed approach are not too novel but the combination is. Further,... | [
-1,
-1,
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"YyIVX-8VUlC",
"2nWZbvU3V6_",
"G6sRWsLFigK",
"nips_2021_j2gshvolULz",
"3rm890ILurZ",
"3KQGrrtOgr",
"nips_2021_j2gshvolULz",
"OH8VwROfvHT",
"6OIkXE9TqQY",
"QaBri6Q6Y73",
"t3rgDT3jKt2",
"qDwA5v7gh76",
"jpeedP2Ibst",
"e79_tLvB0_",
"nips_2021_j2gshvolULz",
"nips_2021_j2gshvolULz"
] |
nips_2021_Aw96fN64soV | Neural Analysis and Synthesis: Reconstructing Speech from Self-Supervised Representations | We present a neural analysis and synthesis (NANSY) framework that can manipulate the voice, pitch, and speed of an arbitrary speech signal. Most of the previous works have focused on using information bottleneck to disentangle analysis features for controllable synthesis, which usually results in poor reconstruction quality. We address this issue by proposing a novel training strategy based on information perturbation. The idea is to perturb information in the original input signal (e.g., formant, pitch, and frequency response), thereby letting synthesis networks selectively take essential attributes to reconstruct the input signal. Because NANSY does not need any bottleneck structures, it enjoys both high reconstruction quality and controllability. Furthermore, NANSY does not require any labels associated with speech data such as text and speaker information, but rather uses a new set of analysis features, i.e., wav2vec feature and newly proposed pitch feature, Yingram, which allows for fully self-supervised training. Taking advantage of fully self-supervised training, NANSY can be easily extended to a multilingual setting by simply training it with a multilingual dataset. The experiments show that NANSY can achieve significant improvement in performance in several applications such as zero-shot voice conversion, pitch shift, and time-scale modification.
| accept | UPDATE: The revision has been reviewed and the paper has been accepted. The authors are encouraged to address other potential harms (e.g., bias, privacy) in their final version.
---
It came to the attention of the program chairs and ethics review chairs late in the review process that this paper concerns voice conversion, a technology that can easily be used in applications that deceive people in ways that cause harm (see https://neurips.cc/public/EthicsGuidelines). This concern was mentioned by some of the reviewers. However, the paper had not been flagged for ethics review and therefore did not go through the formal ethics review process.
This paper was subsequently discussed between the ethics review chairs and program chairs. In the end, a decision was made to conditionally accept the paper. Ethics reviewers were brought in to help set conditions. The list of conditions for acceptance is as follows:
1. Meaningful broader impacts statement. Moving beyond superficial discussion of obvious harms into a more detailed and thorough reflection on ethical issues, especially possible broader impact and potential for misuse.
2. Restricted release of the model through some type of licensing or form-restricted access (i.e., private repo accessed via request, model code and data use restricted by licenses, etc.)
3. Discussion of possible theoretical and practical mitigation strategies for minimizing the harm of such technologies in the future. If this is not possible to discuss, include a clear articulation of the limits of such models in the absence of mitigation approaches. It is not necessary to implement the mitigation strategies discussed, though some current work in this area should be highlighted by the authors in the main text.
The original meta-review from the AC follows.
---
This paper proposes a neural analysis and synthesis training framework, with a novel training strategy based on information perturbation and the help of wav2vec 2.0 and Yingram. All reviewers commented positively on the novelty and significance of the work. The authors' detailed responses also addressed the concerns raised in the initial reviews. I recommend accepting this paper.
"NrafU8-T8J",
"ML2MPX2ZEJ4",
"YGskJ-vZcaC",
"cUZopMtW6Cr",
"rV9YussE6k",
"QURqw4Ox7qI",
"bGZYnf-O-z",
"mSdNwd_tE44",
"fjMqn0JeyqB",
"tVjTWHvs8SY",
"i3H0M3zEa9"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank reviewer_C3ov for the constructive and detailed comments. \nBelow are the responses to each of your concerns.\n\n\n\\\nI do not very much agree with the classification between \"text-based\" and \"information bottleneck\" approaches. I find that there are many approaches, specially in voice conversion (V... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4,
4,
4
] | [
"i3H0M3zEa9",
"fjMqn0JeyqB",
"tVjTWHvs8SY",
"rV9YussE6k",
"bGZYnf-O-z",
"mSdNwd_tE44",
"nips_2021_Aw96fN64soV",
"nips_2021_Aw96fN64soV",
"nips_2021_Aw96fN64soV",
"nips_2021_Aw96fN64soV",
"nips_2021_Aw96fN64soV"
] |
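The NANSY record above hinges on information perturbation: the synthesis network's input is a perturbed copy of the waveform while the reconstruction target stays clean, so the network must ignore the perturbed attributes. The sketch below is a minimal numpy illustration of one such perturbation (a crude random-EQ stand-in); the band count, gain range, and use of a plain FFT are illustrative assumptions, not the paper's actual perturbation set.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_frequency_response(wav: np.ndarray, n_bands: int = 8) -> np.ndarray:
    """Randomly rescale coarse frequency bands of the signal."""
    spec = np.fft.rfft(wav)
    edges = np.linspace(0, len(spec), n_bands + 1).astype(int)
    for lo, hi in zip(edges[:-1], edges[1:]):
        spec[lo:hi] *= rng.uniform(0.25, 4.0)  # random per-band gain
    return np.fft.irfft(spec, n=len(wav))

wav = rng.standard_normal(16000)                 # stand-in for 1 s of speech
net_input = perturb_frequency_response(wav)      # what the analysis features see
target = wav                                     # reconstruction target stays clean
```

Because the target is unperturbed, the synthesizer learns to take frequency-response information from its conditioning features rather than from the corrupted input.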
nips_2021_nIL7Q-p7-Sh | Auto-Encoding Knowledge Graph for Unsupervised Medical Report Generation | Medical report generation, which aims to automatically generate a long and coherent report of a given medical image, has been receiving growing research interests. Existing approaches mainly adopt a supervised manner and heavily rely on coupled image-report pairs. However, in the medical domain, building a large-scale image-report paired dataset is both time-consuming and expensive. To relax the dependency on paired data, we propose an unsupervised model Knowledge Graph Auto-Encoder (KGAE) which accepts independent sets of images and reports in training. KGAE consists of a pre-constructed knowledge graph, a knowledge-driven encoder and a knowledge-driven decoder. The knowledge graph works as the shared latent space to bridge the visual and textual domains; The knowledge-driven encoder projects medical images and reports to the corresponding coordinates in this latent space and the knowledge-driven decoder generates a medical report given a coordinate in this space. Since the knowledge-driven encoder and decoder can be trained with independent sets of images and reports, KGAE is unsupervised. The experiments show that the unsupervised KGAE generates desirable medical reports without using any image-report training pairs. Moreover, KGAE can also work in both semi-supervised and supervised settings, and accept paired images and reports in training. By further fine-tuning with image-report pairs, KGAE consistently outperforms the current state-of-the-art models on two datasets.
| accept | Overview:
The paper proposes a novel model for generating medical reports using medical reports and medical images that are *not* paired. The coupling between the two domains is achieved via a shared latent space grounded in a knowledge graph, a reasonable choice since medical reports incorporate a substantial amount of prior knowledge from the medical domain. The authors demonstrate the utility of their model with extensive empirical results and improve performance over the SOTA.
Review:
Overall, the reviewers appreciated the novelty of the unsupervised medical report generation model. The reviewers had minor concerns which were adequately clarified by the authors in their detailed and meticulous responses. A few illustrative examples include:
1) Concerns about sensitivity to the domain on which the KG was trained. The authors provided detailed experimental results to show the impact.
2) Ablation study in Table 4 with higher B. The clarifications are consistent with the results in the paper.
3) Request for the CIDEr metric. Provided.
4) Fairness of training KG on CheXpert. Addressed with KG trained w/o CheXpert.
5) Clarification of the motivation for using KG. Adequately addressed.
6) and so on.
| train | [
"RpzMwJyNKuw",
"5NeI928f67",
"4HoUqsgw3w9",
"bCap9qQ4ayC",
"u9THH7sFRc6",
"zqBDVqvALde",
"Eo9lmpz1cB3",
"EQzo3-uq7lc",
"rZRebqtrKNs",
"DkP9Pju88zO"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a knowledge graph based method for medical report generation which can be trained by the unsupervised、semi-supervised and supervised manner. The overall framework consists of a pre-constructed knowledge graph、a knowledge-driven encoder and a knowledge-driven decoder. The authors first construct... | [
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
5
] | [
5,
-1,
5,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_nIL7Q-p7-Sh",
"zqBDVqvALde",
"nips_2021_nIL7Q-p7-Sh",
"DkP9Pju88zO",
"rZRebqtrKNs",
"RpzMwJyNKuw",
"4HoUqsgw3w9",
"nips_2021_nIL7Q-p7-Sh",
"nips_2021_nIL7Q-p7-Sh",
"nips_2021_nIL7Q-p7-Sh"
] |
nips_2021_x1Lp2bOlVIo | Diffusion Normalizing Flow | We present a novel generative modeling method called diffusion normalizing flow based on stochastic differential equations (SDEs). The algorithm consists of two neural SDEs: a forward SDE that gradually adds noise to the data to transform the data into Gaussian random noise, and a backward SDE that gradually removes the noise to sample from the data distribution. By jointly training the two neural SDEs to minimize a common cost function that quantifies the difference between the two, the backward SDE converges to a diffusion process that starts with a Gaussian distribution and ends with the desired data distribution. Our method is closely related to normalizing flow and diffusion probabilistic models, and can be viewed as a combination of the two. Compared with normalizing flow, diffusion normalizing flow is able to learn distributions with sharp boundaries. Compared with diffusion probabilistic models, diffusion normalizing flow requires fewer discretization steps and thus has better sampling efficiency. Our algorithm demonstrates competitive performance in both high-dimensional data density estimation and image generation tasks.
| accept | There was significant discussion of this paper, with one reviewer increasing their score. The score range was unusually wide, even after discussion.
Reviewers felt that the idea presented was both novel and interesting. I agree with this assessment -- I think this paper draws a very nice and interesting link between normalizing flows and diffusion models. It's impressive to cast them into the same theoretical framework, and I think this will inspire followup work.
However, reviewers also had remaining concerns. Most seriously that:
- the computational tradeoffs during training that this approach required weren't clearly acknowledged or discussed. I believe this is a fair criticism. Please include discussion of this, and the training time tradeoffs you discussed in your rebuttal, in any camera ready.
- that the memory discussion around the adjoint method is misleading. I am unable to fully judge this concern. I do think something like logarithmic checkpointing should also be able to prevent memory overhead from being an issue though, so it's not clear to me that memory usage was ever a critical problem to start with.
- that the performance table left out a number of results with better FID and log likelihood scores than the presented method. This is definitely true, and the additional rows **must** be added for any camera ready. The paper only claimed "competitive performance", not SOTA, so I do not believe this is a fatal flaw.
- there were also some concerns about clarity, and I encourage the authors to address the specific aspects that reviewers found challenging.
Due to these legitimate concerns, I struggled with my recommendation for this paper, and read the paper myself to make a more informed decision. Because of the novelty of the approach, and because of my judgement of the relative importance of novelty to scientific progress, I am recommending acceptance. | train | [
"HLKDCR7vX6M",
"GAqdYsxLoXv",
"Zdw0uDaPVb0",
"McmOp-NXhXt",
"KQZUry37SvZ",
"tVJz4if7_nt",
"gwW4MrbPYcl",
"INQgZEElIDS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper under review introduce the Diffusion Normalizing Flow, which combines the diffusion model method and the normalizing flow method in their algorithm. The idea is clearly presented and the main part of the paper is focused on the experiment implementation of the algorithm. Positive.\n\n1. The idea seems ... | [
6,
8,
-1,
-1,
-1,
-1,
4,
4
] | [
3,
5,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_x1Lp2bOlVIo",
"nips_2021_x1Lp2bOlVIo",
"HLKDCR7vX6M",
"GAqdYsxLoXv",
"gwW4MrbPYcl",
"INQgZEElIDS",
"nips_2021_x1Lp2bOlVIo",
"nips_2021_x1Lp2bOlVIo"
] |
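The DiffFlow abstract above describes a learnable forward (noising) SDE and a learnable backward (denoising) SDE trained jointly. A toy Euler–Maruyama discretization of such a pair is sketched below; the drift networks, step count, and noise scale are placeholder assumptions, and the actual training objective is omitted.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Learnable drifts for the forward (noising) and backward (denoising) SDEs;
# each takes (x, t) as input for 2-D data.
f_drift = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 2))
b_drift = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 2))

def simulate(x0: torch.Tensor, drift: nn.Module, n_steps: int = 20, g: float = 0.5):
    """Euler-Maruyama: x_{k+1} = x_k + drift(x_k, t_k) dt + g sqrt(dt) eps."""
    dt, x, path = 1.0 / n_steps, x0, [x0]
    for k in range(n_steps):
        t = torch.full((x.shape[0], 1), k * dt)
        x = x + drift(torch.cat([x, t], dim=1)) * dt \
              + g * dt**0.5 * torch.randn_like(x)
        path.append(x)
    return path

data = torch.randn(64, 2)            # stand-in for 2-D training data
fwd_path = simulate(data, f_drift)   # data -> (approximately) Gaussian noise
# Training would score b_drift against the forward trajectories, and vice versa.
```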
nips_2021_OBLl2xoDHPw | Introspective Distillation for Robust Question Answering | Question answering (QA) models are well-known to exploit data bias, e.g., the language prior in visual QA and the position bias in reading comprehension. Recent debiasing methods achieve good out-of-distribution (OOD) generalizability with a considerable sacrifice of the in-distribution (ID) performance. Therefore, they are only applicable in domains where the test distribution is known in advance. In this paper, we present a novel debiasing method called Introspective Distillation (IntroD) to make the best of both worlds for QA. Our key technical contribution is to blend the inductive bias of OOD and ID by introspecting whether a training sample fits in the factual ID world or the counterfactual OOD one. Experiments on visual QA datasets VQA v2, VQA-CP, and reading comprehension dataset SQuAD demonstrate that our proposed IntroD maintains the competitive OOD performance compared to other debiasing methods, while sacrificing little or even achieving better ID performance compared to the non-debiasing ones.
| accept | After the author response, the reviewers appreciated it and recommend accepting the paper.
I agree this work provides a solid idea and approach to robust question answering. Adding the proposed approach on top of existing methods shows consistent, significant improvements on both visual question answering and reading comprehension (additional results for NLI are also added in the author response).
I recommend acceptance under the expectation that the authors will address the concerns of the reviewers in the camera-ready version as discussed in the author response, including but not limited to
1) include clarifications provided in author response and improve discussion of novelty
2) ablation study ensembling two teachers
3) additional results on NLI
[if any results don't fit in the main paper, please add them to supplement] | train | [
"90XIPyxkGv6",
"IzpWmrwZvvl",
"b3MLzosYgi",
"w6fw9wjyeUz",
"INLB5vHvb-",
"ZGR30Y8z8-d",
"MFw_zuMf32X",
"j4ocZY-uA-Y",
"wgW-3dEJb4",
"MSAFjgAFlqU",
"HSENmcbwSdo",
"wLYBrEg44Ut",
"xtVXT1rqrP0",
"eOcZjVh1kcq",
"y6xFbzuPFUK"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper propose a novel method on training VQA system that can achieve competitive performance both in-distribution and out-of-distribution. Specifically, the proposed system leverage a recent causality-based QA model to estimate the OOD distribution (via counterfactual reasoning), thus building two sub-modules... | [
6,
-1,
-1,
-1,
8,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
5,
-1,
-1,
-1,
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_OBLl2xoDHPw",
"ZGR30Y8z8-d",
"MFw_zuMf32X",
"wgW-3dEJb4",
"nips_2021_OBLl2xoDHPw",
"HSENmcbwSdo",
"wLYBrEg44Ut",
"nips_2021_OBLl2xoDHPw",
"eOcZjVh1kcq",
"nips_2021_OBLl2xoDHPw",
"INLB5vHvb-",
"j4ocZY-uA-Y",
"90XIPyxkGv6",
"y6xFbzuPFUK",
"nips_2021_OBLl2xoDHPw"
] |
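The IntroD abstract above blends an ID-biased teacher and an OOD (debiased) teacher per training sample before distilling into a student. Below is a hedged numpy sketch of one plausible introspection rule, weighting each teacher by how well it explains the ground-truth answer; the exact weighting function used in the paper may differ.

```python
import numpy as np

def blend_teachers(p_id: np.ndarray, p_ood: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """p_id, p_ood: (batch, n_answers) teacher distributions; gt: (batch,) labels.
    A sample the ID teacher explains well is treated as ID-like, and vice
    versa; the blended soft target is then used for distillation."""
    s_id = p_id[np.arange(len(gt)), gt]    # ID teacher's prob. of the true answer
    s_ood = p_ood[np.arange(len(gt)), gt]  # OOD teacher's prob. of the true answer
    w_id = s_id / (s_id + s_ood + 1e-8)    # introspective per-sample weight
    return w_id[:, None] * p_id + (1 - w_id)[:, None] * p_ood

rng = np.random.default_rng(0)
p_id = rng.dirichlet(np.ones(5), size=4)
p_ood = rng.dirichlet(np.ones(5), size=4)
soft_targets = blend_teachers(p_id, p_ood, gt=np.array([0, 2, 1, 4]))
```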
nips_2021_HL_4vjPTdtp | Rethinking the Pruning Criteria for Convolutional Neural Network | Zhongzhan Huang, Wenqi Shao, Xinjiang Wang, Liang Lin, Ping Luo | accept | The paper provides a study of pruning criteria by evaluating different methods and defining metrics that help understand the different properties of the various criteria out there. The topic of DNN pruning is quite rich today and I personally find it refreshing (as do some of the reviewers) to see a paper that tries to sort out the different methods. The paper is far from a complete solution to the problem, but it provides a good start. Given the importance and magnitude of the problem, I believe the paper is worthy of being published at NeurIPS and will likely motivate follow up papers that will improve the way we currently measure pruning criteria. | train | [
"wwz9Dp2o6K2",
"7fh4sR0zFZT",
"6YNFrf_wCM",
"B4PUoWcgoAC",
"YCa-8yyu-zh",
"P4aEHgjaOKp",
"RGe8cl8X3Kq",
"nhY_ESkHCXC",
"zmK0YkPw79z",
"tkTzwcB-pq",
"UAdLQXeCIgT",
"dqfUwSLk5n",
"8gZiFJA-iTK",
"ikai7yk94Up",
"uTEzQQrKDRo",
"y9cZJWJWjt6",
"GR_oHwXxbNq"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their time and effort in answering all of my questions. I agree that the paper has a novel approach to evaluating various pruning criteria. The authors address most of my concerns although I am still not fully convinced that the applicability is surprising for norm-based prun... | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
6
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"6YNFrf_wCM",
"nips_2021_HL_4vjPTdtp",
"RGe8cl8X3Kq",
"P4aEHgjaOKp",
"tkTzwcB-pq",
"8gZiFJA-iTK",
"nhY_ESkHCXC",
"7fh4sR0zFZT",
"7fh4sR0zFZT",
"GR_oHwXxbNq",
"uTEzQQrKDRo",
"uTEzQQrKDRo",
"uTEzQQrKDRo",
"y9cZJWJWjt6",
"nips_2021_HL_4vjPTdtp",
"nips_2021_HL_4vjPTdtp",
"nips_2021_HL_4v... |
nips_2021_Goz-qsH1F14 | Adaptive Machine Unlearning | Data deletion algorithms aim to remove the influence of deleted data points from trained models at a cheaper computational cost than fully retraining those models. However, for sequences of deletions, most prior work in the non-convex setting gives valid guarantees only for sequences that are chosen independently of the models that are published. If people choose to delete their data as a function of the published models (because they don’t like what the models reveal about them, for example), then the update sequence is adaptive. In this paper, we give a general reduction from deletion guarantees against adaptive sequences to deletion guarantees against non-adaptive sequences, using differential privacy and its connection to max information. Combined with ideas from prior work which give guarantees for non-adaptive deletion sequences, this leads to extremely flexible algorithms able to handle arbitrary model classes and training methodologies, giving strong provable deletion guarantees for adaptive deletion sequences. We show in theory how prior work for non-convex models fails against adaptive deletion sequences, and use this intuition to design a practical attack against the SISA algorithm of Bourtoule et al. [2021] on CIFAR-10, MNIST, Fashion-MNIST.
| accept | This paper identifies a new concern in machine unlearning, the role of adaptivity in the request sequence. Specifically, if removal requests may be adaptive, then the unlearning guarantees may be violated. The authors also give a differential privacy based method to mitigate this issue. This is an interesting new phenomenon, and the paper should be accepted.
The authors are suggested to pay attention to the presentation comments made by the reviewers: as machine unlearning is a relatively new field, it is important to make the early papers as well written as possible.
(As one minor personal comment, I disagree with the authors' response that a drop from 97% to 91% accuracy (MNIST k=6) is relatively small; I would say this is significant when the error rate is so small.)
"UCH5qAo6bv8",
"dqZxuB8l7X-",
"fMDc1wc1xCK",
"NJA-GPkFjHi",
"0S907g8FmO6",
"TQQHM0DM3uY",
"h4iNfnGfWTO",
"VkVqicUH4yk",
"lfMzPqtjlXl",
"eaUfzYDVfjb",
"Qf0pVQ-RhMx",
"jR-ii3-EQf",
"p3VjAZHOFH"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for writing back! To answer your questions:\n\n1) Composition vs group privacy: Note that the hypothesis to our main theorem (Thm 3.1) is that the sequence of publishing functions $f_1,\\ldots,f_t$ together satisfy $(\\epsilon,\\delta)$-differential privacy. The privacy of a sequence of individually differ... | [
-1,
-1,
-1,
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
-1,
-1,
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
4,
3
] | [
"fMDc1wc1xCK",
"NJA-GPkFjHi",
"Qf0pVQ-RhMx",
"TQQHM0DM3uY",
"nips_2021_Goz-qsH1F14",
"VkVqicUH4yk",
"nips_2021_Goz-qsH1F14",
"0S907g8FmO6",
"h4iNfnGfWTO",
"p3VjAZHOFH",
"jR-ii3-EQf",
"nips_2021_Goz-qsH1F14",
"nips_2021_Goz-qsH1F14"
] |
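The reduction in the adaptive-unlearning abstract rests on publishing models through a differentially private channel, so adaptive deletion requests cannot depend too strongly on the data. Below is a minimal numpy sketch of the standard Gaussian mechanism for publishing a bounded-sensitivity statistic; it illustrates only the DP publishing primitive, not the paper's full unlearning pipeline, and the sensitivity/epsilon values are placeholders.

```python
import numpy as np

def gaussian_mechanism(value: np.ndarray, sensitivity: float,
                       eps: float, delta: float,
                       rng: np.random.Generator) -> np.ndarray:
    """Publish value + N(0, sigma^2) with the classic calibration
    sigma = sensitivity * sqrt(2 ln(1.25/delta)) / eps, which yields
    (eps, delta)-differential privacy for eps in (0, 1)."""
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return value + rng.normal(0.0, sigma, size=np.shape(value))

rng = np.random.default_rng(0)
stat = np.array([0.82])   # e.g., a clipped accuracy published each round
noisy = gaussian_mechanism(stat, sensitivity=1.0 / 1000,
                           eps=0.5, delta=1e-5, rng=rng)
```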
nips_2021_ppv5yqhpNyE | EditGAN: High-Precision Semantic Image Editing | Generative adversarial networks (GANs) have recently found applications in image editing. However, most GAN-based image editing methods often require large-scale datasets with semantic segmentation annotations for training, only provide high-level control, or merely interpolate between different images. Here, we propose EditGAN, a novel method for high-quality, high-precision semantic image editing, allowing users to edit images by modifying their highly detailed part segmentation masks, e.g., drawing a new mask for the headlight of a car. EditGAN builds on a GAN framework that jointly models images and their semantic segmentation, requiring only a handful of labeled examples – making it a scalable tool for editing. Specifically, we embed an image into the GAN’s latent space and perform conditional latent code optimization according to the segmentation edit, which effectively also modifies the image. To amortize optimization, we find “editing vectors” in latent space that realize the edits. The framework allows us to learn an arbitrary number of editing vectors, which can then be directly applied on other images at interactive rates. We experimentally show that EditGAN can manipulate images with an unprecedented level of detail and freedom while preserving full image quality. We can also easily combine multiple edits and perform plausible edits beyond EditGAN’s training data. We demonstrate EditGAN on a wide variety of image types and quantitatively outperform several previous editing methods on standard editing benchmark tasks.
| accept | This submission tackles the problem of semantic editing of images leveraging a generative model that modulates a joint distribution of images with segmentation masks. Thanks to this joint generation, the editing approach is based on the segmentation masks manipulation.
This allows highly localized edits that were difficult to obtain until now.
Some issues and/or limitations were raised by reviewers. Many clarifications were provided by the authors during the rebuttal and discussion period. Scores were increased.
The task, the novel methodology, and the quality of the experiments with direct, useful practical applications lead me to propose accepting this paper.
"iJ7HBfP1rAh",
"Zb_Rzxpg65H",
"hEzUO8-4l65",
"sFnGNJIgoup",
"ZVFfQ05mrk3",
"9-wodODRpb2",
"b8_QwkK2OBK",
"F70yd3hv9xE",
"0En4G77o9lr",
"uenONwHVR9G"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors propose a novel approach to identify meaningful and well-localized images transformations based on a generative model that modulates a joint distribution of images with segmentation masks. The results look promising though there are concerns about the experimental section and baselines. The approach p... | [
6,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
5,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"nips_2021_ppv5yqhpNyE",
"iJ7HBfP1rAh",
"nips_2021_ppv5yqhpNyE",
"ZVFfQ05mrk3",
"0En4G77o9lr",
"b8_QwkK2OBK",
"iJ7HBfP1rAh",
"uenONwHVR9G",
"hEzUO8-4l65",
"nips_2021_ppv5yqhpNyE"
] |
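The EditGAN abstract above is concrete about its core step: optimize a latent offset so the generator's segmentation branch matches the user-edited mask while the image stays fixed outside the edit region. The sketch below uses tiny linear layers as stand-ins for the joint image/segmentation generator branches; the loss weights, resolution, and optimizer settings are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Tiny stand-ins for the generator's joint image / segmentation branches.
G_img = nn.Linear(64, 3 * 8 * 8)
G_seg = nn.Linear(64, 2 * 8 * 8)

w = torch.randn(1, 64)                           # embedded latent of the image
target_seg = torch.randint(0, 2, (1, 8, 8))      # user-edited part mask
edit_region = torch.zeros(8, 8)
edit_region[2:5, 2:5] = 1.0                      # where the edit is allowed

delta = torch.zeros_like(w, requires_grad=True)  # the "editing vector"
opt = torch.optim.Adam([delta], lr=0.05)
img0 = G_img(w).view(1, 3, 8, 8).detach()
for _ in range(100):
    img = G_img(w + delta).view(1, 3, 8, 8)
    seg = G_seg(w + delta).view(1, 2, 8, 8)
    seg_loss = nn.functional.cross_entropy(seg, target_seg)
    keep_loss = ((img - img0) * (1 - edit_region)).pow(2).mean()
    loss = seg_loss + 10.0 * keep_loss           # realize edit, preserve the rest
    opt.zero_grad(); loss.backward(); opt.step()
# delta can now be reused as an editing vector on other embedded images.
```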
nips_2021_Uxi7X1EqywV | Deep Molecular Representation Learning via Fusing Physical and Chemical Information | Molecular representation learning is the first yet vital step in combining deep learning and molecular science. To push the boundaries of molecular representation learning, we present PhysChem, a novel neural architecture that learns molecular representations via fusing physical and chemical information of molecules. PhysChem is composed of a physicist network (PhysNet) and a chemist network (ChemNet). PhysNet is a neural physical engine that learns molecular conformations through simulating molecular dynamics with parameterized forces; ChemNet implements geometry-aware deep message-passing to learn chemical / biomedical properties of molecules. Two networks specialize in their own tasks and cooperate by providing expertise to each other. By fusing physical and chemical information, PhysChem achieved state-of-the-art performances on MoleculeNet, a standard molecular machine learning benchmark. The effectiveness of PhysChem was further corroborated on cutting-edge datasets of SARS-CoV-2.
| accept | In this paper, the authors proposed a new neural network model called PhysChem for molecule representation. In PhysChem, a PhysNet and a ChemNet interact with each other to facilitate the learning process. The authors conduct a series of experiments to verify the practical effectiveness of the proposed algorithm. Overall, the reviewers are positive about this paper. They like the intention of the authors to integrate physical and chemical knowledge into the design of the neural network. On the other hand, the reviewers also raise some concerns, including the experimental setting, the role of PhysNet, etc. The authors did a good job of addressing most of the concerns, and the consensus among the reviewers leans towards the positive side. Therefore my recommendation is ACCEPT as a poster.
"ATW1KTO0f8Y",
"HZCTAcN0nc9",
"GP2qz2JqfpX",
"cRJ_cDLs_-u",
"sDsxCqB_gp",
"aan5iAddsKj",
"EjcseOyhuxF",
"MZHHDJeiNY_",
"mdprm5c_bEs",
"dAm_N70ka4M",
"ISCcMdC_hul",
"qGVBqAjFHG",
"SwMNj0vZu9O",
"Y7_zI6B7BGm",
"or5azUnHGnV",
"c1Kt9-GFskZ",
"AeprDxeiqFp",
"jZgMsaU1NBD",
"Rjn8gVlwNhi... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"a... | [
" Thank you for your further suggestions. We are sorry if we caused any confusion on the results and will improve the explanation on the experimental setup. About the limitations of testing mostly on small molecules and with single confs instead of conformer ensembles, we will add a section to our paper to describe... | [
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"EjcseOyhuxF",
"dAm_N70ka4M",
"cRJ_cDLs_-u",
"MZHHDJeiNY_",
"Rjn8gVlwNhi",
"nips_2021_Uxi7X1EqywV",
"jZgMsaU1NBD",
"ISCcMdC_hul",
"nips_2021_Uxi7X1EqywV",
"qGVBqAjFHG",
"SwMNj0vZu9O",
"or5azUnHGnV",
"Y7_zI6B7BGm",
"AeprDxeiqFp",
"c1Kt9-GFskZ",
"mdprm5c_bEs",
"WA2nWkN6AjS",
"9-7Ukom... |
nips_2021_hzioAx8g9x | Neural optimal feedback control with local learning rules | A major problem in motor control is understanding how the brain plans and executes proper movements in the face of delayed and noisy stimuli. A prominent framework for addressing such control problems is Optimal Feedback Control (OFC). OFC generates control actions that optimize behaviorally relevant criteria by integrating noisy sensory stimuli and the predictions of an internal model using the Kalman filter or its extensions. However, a satisfactory neural model of Kalman filtering and control is lacking because existing proposals have the following limitations: not considering the delay of sensory feedback, training in alternating phases, requiring knowledge of the noise covariance matrices, as well as that of systems dynamics. Moreover, the majority of these studies considered Kalman filtering in isolation, and not jointly with control. To address these shortcomings, we introduce a novel online algorithm which combines adaptive Kalman filtering with a model free control approach (i.e., policy gradient algorithm). We implement this algorithm in a biologically plausible neural network with local synaptic plasticity rules. This network performs system identification, Kalman filtering, and control with delayed noisy sensory feedback, without the need for multiple phases with distinct update rules or knowledge of the noise covariances. It can perform state estimation with delayed sensory feedback, with the help of an internal model. It learns the control policy without requiring any knowledge of the dynamics, thus avoiding the need for weight transport. In this way, our implementation of OFC solves the credit assignment problem needed to produce the appropriate sensory-motor control in the presence of stimulus delay.
| accept | This paper had uniform support from the reviewers, and is a clear accept. To improve the work, I highly suggest you:
1. Add the additional experiment requested by a reviewer
2. More clearly outline the derivation of the update equations (we went through them in our discussion and concluded that they were correct, but it was not immediately obvious just from the text)
3. Incorporate other suggestions by the reviewers. | train | [
"64PD5oLgP3m",
"xmX1595CBm",
"dGTjkbaGH8q",
"jOEqjjb76lO",
"L-d-fzv59pd",
"IzzYkAyOZTu",
"3cr_gf-i1h",
"4pwvb-U22P",
"E75EbSojN53",
"UrAey-MJM6Q",
"eoz-aIYQYFJ",
"DyP-4mvTIbB"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for considering this aspect in the revised manuscript. After reading the comments of the other reviewers and the authors' responses, the reviewer will not change his evaluation. The approach is very interesting and the evidence provided supports the authors' claims. Is there room for improvement? Yes, b... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"IzzYkAyOZTu",
"nips_2021_hzioAx8g9x",
"jOEqjjb76lO",
"xmX1595CBm",
"DyP-4mvTIbB",
"eoz-aIYQYFJ",
"xmX1595CBm",
"UrAey-MJM6Q",
"nips_2021_hzioAx8g9x",
"nips_2021_hzioAx8g9x",
"nips_2021_hzioAx8g9x",
"nips_2021_hzioAx8g9x"
] |
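The record above combines adaptive Kalman filtering with policy-gradient control. For reference, the textbook Kalman predict/update step that the proposed network learns to implement with local rules is sketched below; the system matrices are for an arbitrary toy linear system, not anything from the paper.

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict + update step of the standard Kalman filter."""
    x_pred = A @ x                       # state prediction
    P_pred = A @ P @ A.T + Q             # covariance prediction
    S = C @ P_pred @ C.T + R             # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy position/velocity dynamics
C = np.array([[1.0, 0.0]])               # noisy position observations
Q, R = 0.01 * np.eye(2), 0.1 * np.eye(1)
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, y=np.array([0.3]), A=A, C=C, Q=Q, R=R)
```

The paper's contribution is learning the equivalent of A, C, and K online with local plasticity rules instead of assuming them known.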
nips_2021_bGVZ6_u08Jy | Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection | We study the role of the representation of state-action value functions in regret minimization in finite-horizon Markov Decision Processes (MDPs) with linear structure. We first derive a necessary condition on the representation, called universally spanning optimal features (UNISOFT), to achieve constant regret in any MDP with linear reward function. This result encompasses the well-known settings of low-rank MDPs and, more generally, zero inherent Bellman error (also known as the Bellman closure assumption). We then demonstrate that this condition is also sufficient for these classes of problems by deriving a constant regret bound for two optimistic algorithms (LSVI-UCB and ELEANOR). Finally, we propose an algorithm for representation selection and we prove that it achieves constant regret when one of the given representations, or a suitable combination of them, satisfies the UNISOFT condition.
| accept | In this work, the authors theoretically study representation learning on MDPs with low Bellman error and low-rank MDPs. The main concerns from the reviewers are about the polynomial dependence on $N$ in the main results and the lack of detailed comparisons with existing works. We suggest that the authors further improve their theoretical analysis and add more detailed discussions in the next version.
"nUh4YRgjQE7",
"MhuIgpZ9uPv",
"S_2r6Nelp2p",
"4ZvgU4YFvRw",
"-SK50qPjljN",
"L3KAng7YZWd",
"9yf_c5mQoF",
"Yg8dFegJByV",
"tenT6lfwJry",
"6iUZK3O4eIJ",
"ogb7u-jw9pF",
"qEGoMtldwVN",
"xEpCmZhIuEq",
"fLcftVofxu",
"wbqr6MCTBJ9",
"Rv6-xQs4d4F"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors study the constant regret bound in low-rank MDP setting [Jin et al., 2020] and zero inherent Bellman error setting [Zanette et al., 2020b]. On the upper bound side, this paper shows that ELEANOR and LSVI-UCB have constant regret bounds under the gap assumption (assumption 3) and UNISOFT assumption (ass... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_bGVZ6_u08Jy",
"Rv6-xQs4d4F",
"xEpCmZhIuEq",
"L3KAng7YZWd",
"9yf_c5mQoF",
"S_2r6Nelp2p",
"Yg8dFegJByV",
"qEGoMtldwVN",
"ogb7u-jw9pF",
"nips_2021_bGVZ6_u08Jy",
"fLcftVofxu",
"nUh4YRgjQE7",
"Rv6-xQs4d4F",
"wbqr6MCTBJ9",
"nips_2021_bGVZ6_u08Jy",
"nips_2021_bGVZ6_u08Jy"
] |
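The regret results in this record build on optimistic algorithms such as LSVI-UCB, whose exploration bonus is the elliptical-potential term beta * sqrt(phi^T Lambda^{-1} phi). A numpy sketch of that bonus with the usual rank-one covariance update follows; beta, the regularizer, and the feature dimension are placeholder values.

```python
import numpy as np

d, beta, lam = 8, 1.0, 1.0
Lambda = lam * np.eye(d)                 # regularized feature covariance

def ucb_bonus(phi: np.ndarray) -> float:
    """Optimism bonus beta * sqrt(phi^T Lambda^{-1} phi) used by LSVI-UCB."""
    return beta * float(np.sqrt(phi @ np.linalg.solve(Lambda, phi)))

rng = np.random.default_rng(0)
for _ in range(100):                     # episodes of data collection
    phi = rng.standard_normal(d) / np.sqrt(d)
    _ = ucb_bonus(phi)                   # added to the regressed Q-value
    Lambda += np.outer(phi, phi)         # rank-one update after the visit
# As Lambda grows along visited directions, the bonus shrinks; vanishing
# bonuses on optimal features are the mechanism behind constant regret
# under conditions like UNISOFT.
```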
nips_2021__NOwVKCmSo | Noether Networks: meta-learning useful conserved quantities | Progress in machine learning (ML) stems from a combination of data availability, computational resources, and an appropriate encoding of inductive biases. Useful biases often exploit symmetries in the prediction problem, such as convolutional networks relying on translation equivariance. Automatically discovering these useful symmetries holds the potential to greatly improve the performance of ML systems, but still remains a challenge. In this work, we focus on sequential prediction problems and take inspiration from Noether's theorem to reduce the problem of finding inductive biases to meta-learning useful conserved quantities. We propose Noether Networks: a new type of architecture where a meta-learned conservation loss is optimized inside the prediction function. We show, theoretically and experimentally, that Noether Networks improve prediction quality, providing a general framework for discovering inductive biases in sequential problems.
| accept | All reviewers were happy in the end to accept this paper: I vote to accept. The biggest changes the reviewers wanted to see in the camera ready include: (i) improving the clarity, (ii) additional visualizations/experiments. Making these changes as well as others suggested by the reviewers will make a good paper a great one and a welcome addition to the conference. | train | [
"DpnwxB0h3Ej",
"RcO8kNTuNjL",
"VEQMb3-prkH",
"AoUoydV0n6",
"Z1s3lyn3BDr",
"H0rIyxwc-AU",
"NvGBhNwzFUQ",
"JHrDCHpfRI2",
"_mT66S22whP",
"jhUkW74MM6W",
"fb-wCZCEoHJ",
"OrJ35o6cM1x"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for providing the explanation and extended discussion. Based on these answers, I think their choices are sensible and justified. I will therefore stand by my original rating.",
" I thank the reviewers for their detailed comments.\nIn particular for the references regarding 'recent' discoveries of conserv... | [
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
7,
5
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"_mT66S22whP",
"H0rIyxwc-AU",
"NvGBhNwzFUQ",
"nips_2021__NOwVKCmSo",
"JHrDCHpfRI2",
"OrJ35o6cM1x",
"jhUkW74MM6W",
"AoUoydV0n6",
"fb-wCZCEoHJ",
"nips_2021__NOwVKCmSo",
"nips_2021__NOwVKCmSo",
"nips_2021__NOwVKCmSo"
] |
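Noether Networks optimize a meta-learned conservation loss inside the prediction function: the predicted rollout is nudged so a learned quantity g stays constant across time. A toy PyTorch sketch of that inner loop follows; the g network, rollout, and step counts are tiny placeholders, and the outer meta-learning of g is omitted.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
g = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 1))  # learned quantity

rollout = torch.randn(10, 4)           # stand-in predicted sequence (T, dim)
z = rollout.clone().requires_grad_(True)

inner_opt = torch.optim.SGD([z], lr=0.1)
for _ in range(5):                     # inner "tailoring" steps
    q = g(z)                           # quantity evaluated along the rollout
    conservation_loss = (q - q[0].detach()).pow(2).mean()
    inner_opt.zero_grad()
    conservation_loss.backward()
    inner_opt.step()
# z is now a rollout adjusted to (approximately) conserve g over time; the
# outer loop meta-trains g so this adjustment improves prediction quality.
```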
nips_2021_MXmmuhJYPdU | Uncertainty-Driven Loss for Single Image Super-Resolution | In low-level vision such as single image super-resolution (SISR), traditional MSE or L1 loss function treats every pixel equally with the assumption that the importance of all pixels is the same. However, it has been long recognized that texture and edge areas carry more important visual information than smooth areas in photographic images. How to achieve such spatial adaptation in a principled manner has been an open problem in both traditional model-based and modern learning-based approaches toward SISR. In this paper, we propose a new adaptive weighted loss for SISR to train deep networks focusing on challenging situations such as textured and edge pixels with high uncertainty. Specifically, we introduce variance estimation characterizing the uncertainty on a pixel-by-pixel basis into SISR solutions so the targeted pixels in a high-resolution image (mean) and their corresponding uncertainty (variance) can be learned simultaneously. Moreover, uncertainty estimation allows us to leverage conventional wisdom such as sparsity prior for regularizing SISR solutions. Ultimately, pixels with large uncertainty (e.g., texture and edge pixels) will be prioritized for SISR according to their importance to visual quality. For the first time, we demonstrate that such uncertainty-driven loss can achieve better results than MSE or L1 loss for a wide range of network architectures. Experimental results on three popular SISR networks show that our proposed uncertainty-driven loss has achieved better PSNR performance than traditional loss functions without any increased computation during testing.
| accept | This work presents a method for single image super-resolution with emphasis on challenging cases such as textured and edge pixels with high uncertainty. The proposed uncertainty-driven loss improves results over other common losses (MSE and L1). All reviewers liked the paper. They asked some questions, such as the connection to other works, the computational complexity at training and inference time, the reason why linear scaling would be a natural option, and the meaning of uncertainty in the proposed scheme. The authors provided a strong rebuttal, and at the end all reviewers leaned toward accepting the paper. I concur with them and recommend acceptance. Please make sure to include the items suggested by reviewers in the final revision (e.g., a comparison with other types of weighted loss functions).
"Z__mrO9SoLA",
"0Pgc8lmfcwu",
"fM7lqvRokJZ",
"5tJVNOXPXLK",
"jLwSnNbXNSC",
"wk61oG7OPnZ",
"Q4n9q3_WJtN",
"dMF2nqYvfyj",
"mcExPWGa2ds"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your careful review and constructive suggestions. We found them helpful and will include them in the final version of this paper.",
" The rebuttal addresses all my concerns regarding this paper. I will improve my score. It will be good if this paper considers the following concerns in the final ve... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
4,
4
] | [
"0Pgc8lmfcwu",
"jLwSnNbXNSC",
"nips_2021_MXmmuhJYPdU",
"wk61oG7OPnZ",
"fM7lqvRokJZ",
"mcExPWGa2ds",
"dMF2nqYvfyj",
"nips_2021_MXmmuhJYPdU",
"nips_2021_MXmmuhJYPdU"
] |
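The abstract above learns a per-pixel mean and variance and weights the reconstruction loss by the estimated uncertainty. The PyTorch sketch below shows the two ingredients in a simple form: the classic heteroscedastic estimating loss, and an uncertainty-driven weighting that up-weights uncertain (texture/edge) pixels. The exact weighting and sparsity terms in the paper may differ; this is a plausible reading, not the paper's implementation.

```python
import torch

def estimation_loss(mu: torch.Tensor, s: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Heteroscedastic loss; s = predicted per-pixel log-variance."""
    return (torch.exp(-s) * (y - mu).abs() + s).mean()

def uncertainty_driven_loss(mu: torch.Tensor, s: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Up-weight pixels the model is uncertain about (edges, textures)."""
    w = torch.exp(s.detach())      # larger variance -> larger weight
    w = w / w.mean()               # normalize so the loss scale stays stable
    return (w * (y - mu).abs()).mean()

mu, s = torch.randn(1, 3, 8, 8), torch.randn(1, 3, 8, 8)
y = torch.randn(1, 3, 8, 8)
loss = estimation_loss(mu, s, y) + uncertainty_driven_loss(mu, s, y)
```

Note the inversion relative to the standard aleatoric loss: the first term down-weights uncertain pixels, while the second deliberately emphasizes them, which is the paper's central idea.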
nips_2021_eXlxB3aLOe | GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training | Innovations in neural architectures have fostered significant breakthroughs in language modeling and computer vision. Unfortunately, novel architectures often result in challenging hyper-parameter choices and training instability if the network parameters are not properly initialized. A number of architecture-specific initialization schemes have been proposed, but these schemes are not always portable to new architectures. This paper presents GradInit, an automated and architecture agnostic method for initializing neural networks. GradInit is based on a simple heuristic; the norm of each network layer is adjusted so that a single step of SGD or Adam with prescribed hyperparameters results in the smallest possible loss value. This adjustment is done by introducing a scalar multiplier variable in front of each parameter block, and then optimizing these variables using a simple numerical scheme. GradInit accelerates the convergence and test performance of many convolutional architectures, both with or without skip connections, and even without normalization layers. It also improves the stability of the original Transformer architecture for machine translation, enabling training it without learning rate warmup using either Adam or SGD under a wide range of learning rates and momentum coefficients. Code is available at https://github.com/zhuchen03/gradinit.
| accept | The reviewers found that the paper proposes a new initialization method that works better on a few datasets than prior works (MetaInit, Fixup, etc.). The AC agrees with these assessments, and encourages the authors to incorporate the comments of the reviewers upon acceptance.
"iQGxV87ozUw",
"5XD_0AERVlS",
"00q5S1ylmR",
"8Gc8jDO4x0u",
"bXy-ZYqrGn",
"qod_vvMvD3q",
"Hj1PbDRLP71",
"2g0VIB19tQe",
"HJv4u7pv09U",
"QEOE7E6bxe2"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for the response that has addressed my concerns. I would like to keep my positive review. ",
" Thanks for the reply. I raise my score accordingly.",
"The paper proposes an initialization that can be applied on multiple architecture (ResNet, VGG, w/ or w/o BN, etc) and optimizers. \nTo get ... | [
-1,
-1,
6,
6,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
-1,
3,
4,
-1,
-1,
-1,
-1,
4,
3
] | [
"Hj1PbDRLP71",
"qod_vvMvD3q",
"nips_2021_eXlxB3aLOe",
"nips_2021_eXlxB3aLOe",
"8Gc8jDO4x0u",
"00q5S1ylmR",
"QEOE7E6bxe2",
"HJv4u7pv09U",
"nips_2021_eXlxB3aLOe",
"nips_2021_eXlxB3aLOe"
] |
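The GradInit abstract is concrete enough to sketch directly: scale each parameter block by a learnable scalar and optimize the scalars so the loss after one prescribed SGD step is minimal. A toy two-layer version in PyTorch follows; the paper additionally constrains the gradient norm and supports Adam steps, both omitted here, and the learning rates are placeholders.

```python
import torch

torch.manual_seed(0)
W1 = torch.randn(16, 8) * 0.5        # fixed base initialization, block 1
W2 = torch.randn(1, 16) * 0.5        # fixed base initialization, block 2
a1 = torch.ones((), requires_grad=True)  # learnable per-block scale multipliers
a2 = torch.ones((), requires_grad=True)

x, y = torch.randn(32, 8), torch.randn(32, 1)
eta = 0.1                            # the prescribed SGD learning rate

def loss_fn(w1, w2):
    return ((torch.tanh(x @ w1.t()) @ w2.t()) - y).pow(2).mean()

opt = torch.optim.Adam([a1, a2], lr=1e-2)
for _ in range(50):
    w1, w2 = a1 * W1, a2 * W2
    l0 = loss_fn(w1, w2)
    g1, g2 = torch.autograd.grad(l0, (w1, w2), create_graph=True)
    l1 = loss_fn(w1 - eta * g1, w2 - eta * g2)  # loss after one SGD step
    opt.zero_grad(); l1.backward(); opt.step()  # optimize the scales only
    with torch.no_grad():
        a1.clamp_(min=0.01); a2.clamp_(min=0.01)
```

Because only the scalar multipliers are trained, the procedure is architecture agnostic: any network whose parameters can be grouped into blocks admits the same rescaling.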
nips_2021_0IqTX6FcZWv | Capacity and Bias of Learned Geometric Embeddings for Directed Graphs | A wide variety of machine learning tasks such as knowledge base completion, ontology alignment, and multi-label classification can benefit from incorporating into learning differentiable representations of graphs or taxonomies. While vectors in Euclidean space can theoretically represent any graph, much recent work shows that alternatives such as complex, hyperbolic, order, or box embeddings have geometric properties better suited to modeling real-world graphs. Experimentally these gains are seen only in lower dimensions, however, with performance benefits diminishing in higher dimensions. In this work, we introduce a novel variant of box embeddings that uses a learned smoothing parameter to achieve better representational capacity than vector models in low dimensions, while also avoiding performance saturation common to other geometric models in high dimensions. Further, we present theoretical results that prove box embeddings can represent any DAG. We perform rigorous empirical evaluations of vector, hyperbolic, and region-based geometric representations on several families of synthetic and real-world directed graphs. Analysis of these results exposes correlations between different families of graphs, graph characteristics, model size, and embedding geometry, providing useful insights into the inductive biases of various differentiable graph representations.
| accept | This paper has reached a consensus: the authors are offering several useful items to the study of embeddings, including a new method that balances between the flavors of two popular classes of existing embedding methods. This is a relatively practical area with many different proposed methods, so the authors' approach, to bridge known gaps and provide useful theoretical and empirical insights, is welcome. The reviewers have very little variance between them; after the authors' clarifications, almost all the scores are identical. I also largely agree with the paper's strengths. I'm positive about the contributions offered here.
Perhaps the main argument against acceptance, as also noted by several reviewers, is that the paper is a bit of a grab-bag in a sense, in that there’s some theoretical analysis of existing methods, the introduction of a new method, along with a large empirical study (not necessarily focused on just evaluating the proposed method). This is not a major problem, since all of these parts are useful, provide interesting insights that are likely to be helpful to practitioners, and the authors’ writing quality in weaving together these concepts is pretty good. The additional discussion period helped clarify a number of things here as well.
I think the authors made a wise choice to study the basics and paint a fairly complete picture (e.g., the focus on representation quality rather than generalization, attempting to control all the factors involved in quality in the empirical results). This is a more well-grounded and scientific approach than is typically found in papers in this area. | val | [
"9o_bIr7Dhxl",
"P342lVuUihg",
"HyoMyNW6a06",
"hGwtCli--y2",
"KwWT8aoJDEr",
"rr3-zITuTm6",
"W1pj24nApeI",
"oI_RrlBM944",
"kpT546mn62x",
"nLpcwJ7_pu4",
"tm1tkhhtqQe",
"HbzW4asDzJ",
"xApzH1xmAPw",
"DDBaxJvdQa4",
"_kYZR_1z-lo",
"-cUk8ER8bCd",
"Ep_Crl9gtmM",
"r83kG34Yk9R"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors study geometric embeddings for directed graphs and propose a continuous relaxation of probabilistic box embeddings. They also analyze geometric embedding methods for modeling directed graphs by theoretical analysis and experiments. Update After Rebuttal\n---\nI have carefully read the authors' respons... | [
6,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_0IqTX6FcZWv",
"kpT546mn62x",
"tm1tkhhtqQe",
"-cUk8ER8bCd",
"rr3-zITuTm6",
"oI_RrlBM944",
"nips_2021_0IqTX6FcZWv",
"HbzW4asDzJ",
"nLpcwJ7_pu4",
"xApzH1xmAPw",
"_kYZR_1z-lo",
"DDBaxJvdQa4",
"9o_bIr7Dhxl",
"W1pj24nApeI",
"r83kG34Yk9R",
"Ep_Crl9gtmM",
"nips_2021_0IqTX6FcZWv",
... |
nips_2021_nJUDGEc69a5 | Online Learning Of Neural Computations From Sparse Temporal Feedback | Neuronal computations depend on synaptic connectivity and intrinsic electrophysiological properties. Synaptic connectivity determines which inputs from presynaptic neurons are integrated, while cellular properties determine how inputs are filtered over time. Unlike their biological counterparts, most computational approaches to learning in simulated neural networks are limited to changes in synaptic connectivity. However, if intrinsic parameters change, neural computations are altered drastically. Here, we include the parameters that determine the intrinsic properties, e.g., time constants and reset potential, into the learning paradigm. Using sparse feedback signals that indicate target spike times, and gradient-based parameter updates, we show that the intrinsic parameters can be learned along with the synaptic weights to produce specific input-output functions. Specifically, we use a teacher-student paradigm in which a randomly initialised leaky integrate-and-fire or resonate-and-fire neuron must recover the parameters of a teacher neuron. We show that complex temporal functions can be learned online and without backpropagation through time, relying on event-based updates only. Our results are a step towards online learning of neural computations from ungraded and unsigned sparse feedback signals with a biologically inspired learning mechanism.
| accept | This paper presents an approach for optimizing the intrinsic parameters of spiking neurons, and tests it within a teacher-student paradigm. The initial reviews were generally positive, and reviewers agreed the paper was technically sound and interesting, though there was some concern about the paradigm used for evaluation. However, after the authors' rebuttals, and some discussion amongst reviewers, a consensus was reached that this paper merits acceptance at NeurIPS. | test | [
"6EVv5XfYR8D",
"Zvd6dUqiBxv",
"zvfTRzw8Da0",
"NIJeED_In0J",
"-jeKVtGqwEp",
"1MUKEcqn6z",
"Ea5qXx1ahD6"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I was originally concerned about the lack of generalization capability of the teacher-student paradigm, and this is what my marginal reject was based on. However, seeing that other reviewers are less concerned and, crucially, that this explanation of the teacher-student paradigm adds a lot to my knowledge and is ... | [
-1,
7,
-1,
-1,
-1,
7,
7
] | [
-1,
3,
-1,
-1,
-1,
2,
3
] | [
"zvfTRzw8Da0",
"nips_2021_nJUDGEc69a5",
"Zvd6dUqiBxv",
"1MUKEcqn6z",
"Ea5qXx1ahD6",
"nips_2021_nJUDGEc69a5",
"nips_2021_nJUDGEc69a5"
] |
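The record above learns intrinsic neuron parameters (membrane time constant, reset potential) alongside synaptic weights. A minimal leaky integrate-and-fire simulation in numpy that makes those intrinsic parameters explicit is sketched below; the parameter values and input statistics are illustrative, and the gradient-based, event-driven updates from the paper are not shown.

```python
import numpy as np

def simulate_lif(inputs, w, tau=20.0, v_reset=0.0, v_th=1.0, dt=1.0):
    """Leaky integrate-and-fire neuron; tau and v_reset are the intrinsic
    parameters that the paper learns jointly with the weights w."""
    v, spikes = 0.0, []
    for x in inputs:                         # x: presynaptic activity vector
        v += dt / tau * (-v) + float(w @ x)  # leak + synaptic input
        if v >= v_th:
            spikes.append(True)
            v = v_reset                      # intrinsic reset
        else:
            spikes.append(False)
    return np.array(spikes)

rng = np.random.default_rng(0)
inputs = rng.random((100, 5)) * 0.1          # 100 time steps, 5 inputs
spikes = simulate_lif(inputs, w=rng.random(5))
```

Changing tau or v_reset reshapes the neuron's temporal filtering, which is why including them in learning expands the set of input-output functions the neuron can realize.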
nips_2021_4pf_pOo0Dt | Self-Supervised Learning with Data Augmentations Provably Isolates Content from Style | Self-supervised representation learning has shown remarkable success in a number of domains. A common practice is to perform data augmentation via hand-crafted transformations intended to leave the semantics of the data invariant. We seek to understand the empirical success of this approach from a theoretical perspective. We formulate the augmentation process as a latent variable model by postulating a partition of the latent representation into a content component, which is assumed invariant to augmentation, and a style component, which is allowed to change. Unlike prior work on disentanglement and independent component analysis, we allow for both nontrivial statistical and causal dependencies in the latent space. We study the identifiability of the latent representation based on pairs of views of the observations and prove sufficient conditions that allow us to identify the invariant content partition up to an invertible mapping in both generative and discriminative settings. We find numerical simulations with dependent latent variables are consistent with our theory. Lastly, we introduce Causal3DIdent, a dataset of high-dimensional, visually complex images with rich causal dependencies, which we use to study the effect of data augmentations performed in practice.
| accept | The reviewers have reached a consensus that the paper is a good addition to NeurIPS. Please see the reviewers' discussion of the pros and cons of the paper.
"-oRDZ_VAtrX",
"u7wIAaAqIOJ",
"fym-iLj2dCD",
"7m9wk-k_HNe",
"qM6WvMsU2Fh",
"1fC3SD1dgZv",
"m7vAYZfELQT",
"p3QWsTE_Gxn",
"V-qPjwHi5-b",
"oATDQ_XV8jj",
"QpQTFLjR6QH",
"rjcUX48Qmf8",
"6NqD56wAa0W",
"dGyDWMfFLwi",
"ETSs3xNzwfB",
"-_27znWTVo8",
"kTfsPIF5pul",
"zUUu4pGOMI_",
"kqPeoihem... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"a... | [
" Thanks for the continued discussion. The reviewer is correct that it is indeed not expensive to evaluate the self-supervised model on shape and size. Our reasoning for only evaluating the self-supervised model on factors we could reliably decode was to avoid having additional uninformative columns which prevent t... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
6,
-1,
8,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"fym-iLj2dCD",
"nips_2021_4pf_pOo0Dt",
"7m9wk-k_HNe",
"qM6WvMsU2Fh",
"m7vAYZfELQT",
"pMuzrwhGy2M",
"V-qPjwHi5-b",
"nips_2021_4pf_pOo0Dt",
"rjcUX48Qmf8",
"nips_2021_4pf_pOo0Dt",
"kTfsPIF5pul",
"6NqD56wAa0W",
"-_27znWTVo8",
"nips_2021_4pf_pOo0Dt",
"kqPeoihem9q",
"u7wIAaAqIOJ",
"oATDQ_X... |
nips_2021_k7aeAz4Vbb | Instance-Conditional Knowledge Distillation for Object Detection | Zijian Kang, Peizhen Zhang, Xiangyu Zhang, Jian Sun, Nanning Zheng | accept | Although the paper originally received slightly mixed ratings, with one reviewer recommending rejection and three recommending acceptance, the authors' feedback eventually convinced the most negative reviewer to update their score. Altogether, the reviewers acknowledge the novelty of the proposed method and that the empirical results are convincing. The authors' feedback also provided additional results and clarifications, which we strongly encourage the authors to incorporate in the final version.
"dJpBchUxR0",
"4bI6W4-VjC",
"Dujqhk_a_Y",
"CH3kyTeF3v",
"iAcZqtJUcXc",
"vg7AXvPod9b",
"klcgYO6F4wy",
"tfEpBfr8yF",
"SplIBFwaSOr",
"Oc6Xz4D4CvZ",
"bLpMOFiXr8j",
"RRKBwSjyOQt",
"Z_cWRQbld3",
"xxYb9Oo9-zB",
"p4FzFmik9CX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a target-oriented solution towards knowledge distillation for object detection. Unlike prior works which focus on distilling the global representations or foreground-region representations, the paper aims to find relevant regions for distillation based on the instance-conditional information. Th... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
7,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
5,
5
] | [
"nips_2021_k7aeAz4Vbb",
"Oc6Xz4D4CvZ",
"iAcZqtJUcXc",
"Z_cWRQbld3",
"vg7AXvPod9b",
"RRKBwSjyOQt",
"tfEpBfr8yF",
"bLpMOFiXr8j",
"nips_2021_k7aeAz4Vbb",
"dJpBchUxR0",
"SplIBFwaSOr",
"p4FzFmik9CX",
"xxYb9Oo9-zB",
"nips_2021_k7aeAz4Vbb",
"nips_2021_k7aeAz4Vbb"
] |
nips_2021_F1D8buayXQT | Self-Supervised Representation Learning on Neural Network Weights for Model Characteristic Prediction | Self-Supervised Learning (SSL) has been shown to learn useful and information-preserving representations. Neural Networks (NNs) are widely applied, yet their weight space is still not fully understood. Therefore, we propose to use SSL to learn neural representations of the weights of populations of NNs. To that end, we introduce domain specific data augmentations and an adapted attention architecture. Our empirical evaluation demonstrates that self-supervised representation learning in this domain is able to recover diverse NN model characteristics. Further, we show that the proposed learned representations outperform prior work for predicting hyper-parameters, test accuracy, and generalization gap as well as transfer to out-of-distribution settings.
| accept | There was a robust discussion between the reviewers and the authors and some discussion amongst the reviewers themselves. The post-rebuttal consensus is that we should accept this work. From the discussion with reviewers, some of the constructive feedback for a final version would be to show more evidence of usefulness in real-world scenarios. It would also be useful to explore the directions suggested by reviewer 6PDp in their conversation with the authors.
"N-1ftFgP3I",
"ui9zeJzKAzA",
"XkJ9led0yax",
"fZQ4Hj_0qob",
"qJ7O5z3Tsy",
"VV4Lu-m1Pco",
"kA1-6_9qq6s",
"8Y5Vk5jW9mv",
"gEcLHmpNh1K",
"ZHS0xuNtF_M",
"AKlBUxFIw-z",
"5osROluIZF0",
"GtGbWDIibgS",
"LluJFL6UhMQ",
"FCqXFGerQJW",
"CwGqJtODUa"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I like to thank the authors for putting in a lot of hard work to produce this wonderful piece of work. also thank for authors for carefully response to my reviews.",
"The authors propose three data augmentations and one attention module to learn a representation of the network weights to predict the network acc... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"XkJ9led0yax",
"nips_2021_F1D8buayXQT",
"fZQ4Hj_0qob",
"GtGbWDIibgS",
"kA1-6_9qq6s",
"8Y5Vk5jW9mv",
"5osROluIZF0",
"AKlBUxFIw-z",
"nips_2021_F1D8buayXQT",
"LluJFL6UhMQ",
"gEcLHmpNh1K",
"ui9zeJzKAzA",
"CwGqJtODUa",
"FCqXFGerQJW",
"nips_2021_F1D8buayXQT",
"nips_2021_F1D8buayXQT"
] |
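A natural domain-specific augmentation for populations of NN weights is hidden-neuron permutation, which changes the flattened weight vector but not the network function. The numpy sketch below verifies this equivalence for a two-layer MLP; whether this matches the paper's exact augmentation set is an assumption, but it illustrates the kind of function-preserving transformation such representations must respect.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((16, 8)), rng.standard_normal(16)
W2 = rng.standard_normal((1, 16))

def mlp(x, W1, b1, W2):
    return W2 @ np.tanh(W1 @ x + b1)

perm = rng.permutation(16)                 # shuffle the hidden units
W1_p, b1_p, W2_p = W1[perm], b1[perm], W2[:, perm]

x = rng.standard_normal(8)
assert np.allclose(mlp(x, W1, b1, W2), mlp(x, W1_p, b1_p, W2_p))
# Same function, different weight vector: a label-preserving augmentation
# for self-supervised learning on weight spaces.
```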
nips_2021_8xoN9ZdSW8 | Multimodal Virtual Point 3D Detection | Tianwei Yin, Xingyi Zhou, Philipp Krähenbühl | accept | This paper approaches the problem of 3D object detection on multimodal sparse LiDAR + dense RGB data by suggesting the use of 2D object and region detectors on RGB images, dense 3D “virtual” point generation by picking 2D points from detected instances near sparse 3D points, assuming that the depth of these sampled neighbors is the same as the reference, and then building “dense” 3D “virtual” point clouds on which detection is performed. The technique is evaluated on nuScenes, where it achieves state-of-the-art 3D detection.
Reviewers praised the simplicity of the method and clarity of the paper, as well as the good experimental results. Reviewers had many issues with the paper: computational requirements of the method (FiPr), generalisability to KITTI (FiPr, PYBU, r8dG), flexibility with other detectors (FiPr), performance on small objects (FiPr), literature on point cloud augmentation (r8dG). All these questions were addressed in the rebuttal with additional experimental results and explanations.
One reviewer gave a score of 7 and another one of 6. One reviewer (FiPr) gave a score of 4 but did not update their review after a considerable rebuttal and additional experimental work by the authors. While waiting on reviewer FiPr, I am therefore willing to challenge this reviewer and promote this paper to acceptance.
| train | [
"U6kdV2EXKUr",
"wfFWnV_Jb-B",
"jZbSvEb0xwi",
"k-FEqy_miAU",
"BCPlUm70im7",
"ryBmtvDmas",
"qUAIStNq6Ge"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes a new method (MVP) to use multimodal (RGB+LiDAR) sensor data in 3D perception. MVP is simple in principle: compute the 2D segmentation mask, project LiDAR points on the mask, and generate the virtual points by sampling the points in the mask and attaching them with the closest LiDAR depth. Thou... | [
7,
6,
-1,
-1,
-1,
-1,
4
] | [
4,
5,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_8xoN9ZdSW8",
"nips_2021_8xoN9ZdSW8",
"k-FEqy_miAU",
"wfFWnV_Jb-B",
"qUAIStNq6Ge",
"U6kdV2EXKUr",
"nips_2021_8xoN9ZdSW8"
] |
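The virtual-point recipe summarized in the record above is compact enough to sketch. The following NumPy snippet is a minimal illustrative rendition, not the authors' released implementation; the function name, the pinhole intrinsics `K`, and the sample count are assumptions made for the example. It samples pixels from a 2D instance mask, borrows the depth of the nearest projected LiDAR point, and unprojects the samples to 3D.

```python
import numpy as np

def generate_virtual_points(mask_uv, lidar_uv, lidar_depth, K, n_samples=50, rng=None):
    """Sketch of MVP-style virtual point generation for one 2D instance.

    mask_uv:     (M, 2) pixel coordinates inside the instance's 2D mask
    lidar_uv:    (L, 2) LiDAR points projected into the same image
    lidar_depth: (L,)   depth of each projected LiDAR point
    K:           (3, 3) pinhole camera intrinsics (assumed known)
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # 1) Sample candidate 2D locations uniformly from the instance mask.
    samples = mask_uv[rng.choice(len(mask_uv), size=n_samples)].astype(float)
    # 2) Give each sample the depth of its nearest projected LiDAR point,
    #    i.e. the assumption that nearby pixels of one instance share depth.
    d2 = ((samples[:, None, :] - lidar_uv[None, :, :]) ** 2).sum(axis=-1)
    depth = lidar_depth[d2.argmin(axis=1)]
    # 3) Unproject depth * (u, v, 1) back to 3D camera coordinates.
    uv1 = np.concatenate([samples, np.ones((n_samples, 1))], axis=1)
    return (np.linalg.inv(K) @ uv1.T).T * depth[:, None]
```

The resulting virtual points would then be merged with the real LiDAR sweep before running a standard 3D detector, which is where the reported nuScenes gains come from.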
nips_2021_rsd-9hCIit3 | On Joint Learning for Solving Placement and Routing in Chip Design | For its advantages in GPU acceleration and its lower dependency on human experts, machine learning has become an emerging tool for solving the placement and routing problems, two critical steps in the modern chip design flow. The field being still in its early stage, several fundamental issues remain unresolved: scalability, reward design, and the end-to-end learning paradigm, etc. To achieve end-to-end placement learning, we first propose a joint learning method for the placement of macros and standard cells, by integrating reinforcement learning with a gradient-based optimization scheme. To further bridge the placement with the subsequent routing task, we also develop a joint learning approach via reinforcement learning. One key design in our (reinforcement) learning paradigm involves a multi-view embedding model to encode both global graph-level and local node-level information of the input macros. Moreover, random network distillation is devised to encourage exploration. Experiments on public chip design benchmarks show that our method can effectively learn from experience and also provide high-quality intermediate placements for the subsequent standard cell placement, within a few hours of training.
| accept | There was a consensus among reviewers that this paper should be accepted. It is the first to propose an RL agent for the combined task of placement and routing in chip design. The main critique after the initial reviews was insufficient experiments. However, this was addressed to the reviewers' satisfaction by extensive additional experiments in the rebuttal. Hence, I recommend acceptance of this paper.
"viV-tx6nq9W",
"vafLTe4zGLY",
"rYDc8rFuC3l",
"bmbsUIM36_I",
"dD1w2kfRuix",
"rMsu0oNMHN",
"SNl89BWPTVG",
"00s0XlXNfGF",
"4JSuTLm8yZF",
"0dtIuh3Fnkl",
"viOUlda16q1"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a joint learning method to solve the placement and routing problems together. Such problems used to be studied separately. For the placement, this paper is also solving the placement of macro and standard cells together. Moreover, both CNN and GNN are adopted to provide different embedding mod... | [
6,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
5,
3
] | [
"nips_2021_rsd-9hCIit3",
"nips_2021_rsd-9hCIit3",
"viOUlda16q1",
"SNl89BWPTVG",
"nips_2021_rsd-9hCIit3",
"nips_2021_rsd-9hCIit3",
"dD1w2kfRuix",
"0dtIuh3Fnkl",
"viV-tx6nq9W",
"nips_2021_rsd-9hCIit3",
"nips_2021_rsd-9hCIit3"
] |
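Of the components named in the abstract above, the random network distillation (RND) exploration bonus is the most self-contained. The PyTorch sketch below shows the generic RND recipe; it is a plausible reading of what the paper uses rather than the authors' code, and the network sizes and the `state_dim` interface are invented for illustration.

```python
import torch
import torch.nn as nn

class RNDBonus(nn.Module):
    """Generic random network distillation bonus (a sketch, not the paper's code)."""

    def __init__(self, state_dim: int, feat_dim: int = 64):
        super().__init__()
        # Fixed, randomly initialized target network: never trained.
        self.target = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
        for p in self.target.parameters():
            p.requires_grad_(False)
        # Predictor network: trained to imitate the target on visited states.
        self.predictor = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # High prediction error <=> rarely visited state <=> large intrinsic reward.
        return (self.predictor(state) - self.target(state)).pow(2).mean(dim=-1)
```

During training, a scaled bonus would be added to the extrinsic placement reward, and the same quantity minimized with respect to the predictor's parameters only, so that novel placement states keep yielding an exploration signal.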
nips_2021_w0ZNeU5S-l | Learning with Algorithmic Supervision via Continuous Relaxations | The integration of algorithmic components into neural architectures has gained increased attention recently, as it allows training neural networks with new forms of supervision such as ordering constraints or silhouettes instead of using ground truth labels. Many approaches in the field focus on the continuous relaxation of a specific task and show promising results in this context. But the focus on single tasks also limits the applicability of the proposed concepts to a narrow range of applications. In this work, we build on those ideas to propose an approach that allows algorithms to be integrated into end-to-end trainable neural network architectures, based on a general approximation of discrete conditions. To this end, we relax these conditions in control structures such as conditional statements, loops, and indexing, so that the resulting algorithms are smoothly differentiable. To obtain meaningful gradients, each relevant variable is perturbed via logistic distributions and the expectation value under this perturbation is approximated. We evaluate the proposed continuous relaxation model on four challenging tasks and show that it can keep up with relaxations specifically designed for each individual task.
| accept | In this paper, the authors propose an approach for smoothing algorithms. While existing works were typically specialised for specific tasks, this work attempts to be more general by smoothing low-level operations such as if-else branches, while loops, and array indexing. The authors successfully demonstrate their approach on four diverse algorithms: bubble sort, shortest path, rendering, and Levenshtein distance.
The paper received an average score of 6.25, which is slightly above the acceptance threshold, and one of the reviewers is ready to back the paper.
While the paper would benefit from more polishing, it is overall well written and we would therefore like to recommend acceptance, assuming that the authors will comply with the following requests to further improve the paper:
- While the authors did a pretty good job of citing the relevant literature, one reviewer mentioned additional references on stick-breaking processes and array indexing. It is important to add these references.
- The authors should discuss more the pros and cons of Monte-Carlo approaches vs. the proposed approach. Monte-Carlo approaches, such as that of Berthet et al., can run the original program unmodified, which is a key benefit.
- Limitations: most importantly, the authors should add a section on the limitations of the approach. Indeed, while the approach is general at first glance, it has restrictions. First, the authors should discuss computational cost in more depth. Because of the local averaging, structures like trees would take exponential time to evaluate since they need to be fully expanded. Second, it is not clear for what class of programs the proposed approach is going to work. All implemented applications (bubble sort, shortest path, rendering, and Levenshtein distance) seem to essentially have a fixed computational graph. Algorithms like Dijkstra's or Quicksort would not work with the proposed approach because these algorithms have inherently boolean decision steps, and making these decisions probabilistic would break them. The authors should try to delineate which program characteristics are needed for the proposed approach to work. This will help people who want to use the proposed framework know which operations they can and cannot do. Lastly, when a program includes smoothed while loops, it is not clear whether the program is even guaranteed to halt. Some further clarifications are needed. | val | [
"JVOyUrlxARd",
"fYNJzuQzGWQ",
"s78UoLRAUD",
"LhuvaM9TFyn",
"CSQXmVnPB_j",
"5hgoOB6blUR",
"gso3S7Qbba5",
"TA6RJysJQY5",
"E4EBlIsmyrG",
"DNL2xToiF91",
"c2ivQNQ4cW2",
"RomgUBuZaT",
"1ypD3JrTLRx",
"mysEcdDup3N",
"2FG8d3IbF0X"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I will however maintain the score I gave.",
" Thank you, this does answer all of my points satisfactorily (especially the revised text), and I'm convinced that it is reasonable to leave out timing experiments and probabilistic programming. I will maintain my recommendation to accept the paper.",
" We thank yo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"s78UoLRAUD",
"CSQXmVnPB_j",
"TA6RJysJQY5",
"E4EBlIsmyrG",
"gso3S7Qbba5",
"c2ivQNQ4cW2",
"DNL2xToiF91",
"2FG8d3IbF0X",
"mysEcdDup3N",
"1ypD3JrTLRx",
"RomgUBuZaT",
"nips_2021_w0ZNeU5S-l",
"nips_2021_w0ZNeU5S-l",
"nips_2021_w0ZNeU5S-l",
"nips_2021_w0ZNeU5S-l"
] |
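The mechanism this meta-review summarizes, perturbing condition variables with logistic noise and taking the expectation over branches, reduces for a single if-else to a sigmoid-weighted convex combination. The sketch below illustrates that reduction on a relaxed compare-and-swap, the building block of the paper's bubble-sort experiment; it is a simplified caricature of the general framework rather than the authors' estimator, with `beta` standing in for the logistic scale.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_if(lhs, rhs, then_val, else_val, beta=1.0):
    """Relaxation of `then_val if lhs < rhs else else_val`.

    Perturbing lhs - rhs with logistic noise of scale beta gives
    P(branch taken) = sigmoid((rhs - lhs) / beta); the relaxed output
    is the expected value over the two branches.
    """
    p = sigmoid((rhs - lhs) / beta)
    return p * then_val + (1.0 - p) * else_val

def soft_compare_and_swap(a, b, beta=1.0):
    """One relaxed bubble-sort step: softly order the pair (a, b)."""
    lo = soft_if(a, b, a, b, beta)  # smooth stand-in for min(a, b)
    hi = soft_if(a, b, b, a, beta)  # smooth stand-in for max(a, b)
    return lo, hi

# As beta -> 0 the relaxation recovers the hard branch:
print(soft_compare_and_swap(3.0, 1.0, beta=0.01))  # ~(1.0, 3.0)
```

Chaining such soft swaps keeps the whole sort differentiable, which is exactly what makes ordering constraints usable as supervision.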
nips_2021_NKNjbKb5dK | Differentiable Multiple Shooting Layers | We detail a novel class of implicit neural models. Leveraging time-parallel methods for differential equations, Multiple Shooting Layers (MSLs) seek solutions of initial value problems via parallelizable root-finding algorithms. MSLs broadly serve as drop-in replacements for neural ordinary differential equations (Neural ODEs) with improved efficiency in number of function evaluations (NFEs) and wall-clock inference time. We develop the algorithmic framework of MSLs, analyzing the different choices of solution methods from a theoretical and computational perspective. MSLs are showcased in long horizon optimal control of ODEs and PDEs and as latent models for sequence generation. Finally, we investigate the speedups obtained through application of MSL inference in neural controlled differential equations (Neural CDEs) for time series classification of medical data.
| accept | Bringing multiple shooting, a time-parallel method for ODEs, to bear on improving the inference and training of deep architectures based on Neural ODEs is a technically strong and conceptually novel contribution. As such, the paper can be expected to generate interest in both the ML and the numerical methods (differential equations) communities. The reviews do ask for improved clarity in the presentation and problem formulation in the final version.
"A8NRyVlqL3z",
"A8xuC1DAMAV",
"laTCYYm0t51",
"TW7rycvge8-",
"vK0Lx9iDH-r",
"fylHQAcQl_c",
"DPrQtYXLFaR",
"w8Cqw5oNPuV",
"Ro_RFDBEkS5"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This work proposes a novel neural model (Multiple Shooting Layer) that is a significant improvement over Neural ODEs. MSLs seek solutions of initial value problems via parallelizable root-finding algorithms which lead to - \n* significant speedups in wall-clock inference time;\n* significantly less Number of Funct... | [
7,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
2,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"nips_2021_NKNjbKb5dK",
"nips_2021_NKNjbKb5dK",
"nips_2021_NKNjbKb5dK",
"vK0Lx9iDH-r",
"fylHQAcQl_c",
"laTCYYm0t51",
"fylHQAcQl_c",
"A8xuC1DAMAV",
"A8NRyVlqL3z"
] |
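To make the "multiple shooting, time-parallel" phrase in the meta-review concrete: split [0, T] into segments with unknown boundary states b_i and solve the matching conditions b_{i+1} = Phi(b_i), where Phi integrates the ODE across one segment. The NumPy sketch below deliberately uses the simplest choices, forward Euler for Phi and Picard (fixed-point) iteration as the root-finder; the paper analyzes stronger root-finding schemes. The key point is that each matching sweep advances all segments in one batched call, which is where the parallelism and NFE savings come from.

```python
import numpy as np

def flow(f, b, dt, n_euler=10):
    """Integrate dz/dt = f(z) over one subinterval; b is batched with shape (N, d)."""
    h = dt / n_euler
    for _ in range(n_euler):
        b = b + h * f(b)
    return b

def multiple_shooting(f, z0, T, n_segments=8, n_iters=20):
    """Picard-style multiple shooting: enforce b[i+1] = flow(f, b[i], dt)."""
    b = np.tile(z0, (n_segments, 1))      # crude initial guess for all boundaries
    dt = T / n_segments
    for _ in range(n_iters):
        shifted = flow(f, b, dt)          # ONE batched sweep over every segment
        b = np.concatenate([z0[None], shifted[:-1]], axis=0)
    return flow(f, b[-1:], dt)[0]         # state at time T

# Tiny sanity check on dz/dt = -z, whose exact solution at T is exp(-T) * z0:
print(multiple_shooting(lambda z: -z, np.array([1.0]), T=2.0))  # close to exp(-2)
```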
nips_2021_NvdzzasFiGr | Global-aware Beam Search for Neural Abstractive Summarization | This study develops a calibrated beam-based algorithm with awareness of the global attention distribution for neural abstractive summarization, aiming to address the local optimality problem of the original beam search in a rigorous way. Specifically, a novel global protocol is proposed based on the attention distribution to stipulate how a globally optimal hypothesis should attend to the source. A global scoring mechanism is then developed to regulate beam search to generate summaries in a near-globally optimal fashion. This novel design enjoys a distinctive property, i.e., the global attention distribution can be predicted before inference, enabling step-wise improvements on the beam search through the global scoring mechanism. Extensive experiments on nine datasets show that the global (attention)-aware inference significantly improves state-of-the-art summarization models even using empirical hyper-parameters. The algorithm is also proven robust, as it continues to generate meaningful texts with corrupted attention distributions. The code and a comprehensive set of examples are available.
| accept | The paper seeks to improve beam search for neural abstractive summarization. It deals with the local optimality problem of the original beam search: at each local step, the assumption is that the optimal hypothesis is within a top-k kept by the beam, but this is often not true. To alleviate this problem, the authors present a global attention mechanism that keeps track of cumulative local attention on the source tokens during decoding while penalizing deviation from the predicted global attention. Experiments on 9 summarization datasets show consistent gains.
The reviewers found many strong points, including a good motivation, a model that is principled and quite novel, and strong empirical results. The main negative point is that the work was not applied to other generation tasks, but extending the approach to, e.g., MT would require several changes to the model (as explained in the author response) and new experiments that would probably not fit in a single paper. Some of the reviewers found some aspects of the presentation confusing, but this could be fixed in the camera-ready version (I agree with the reviewer who suggested adding a running example). Related work is a bit short and could mention attempts to mitigate exposure bias (which is not a completely unrelated problem, and its relatedness came up in the discussion) and other ways to get around limitations caused by token-level objectives.
Minor note: The paper makes it sound like beam search was invented in 2016, but it is a fundamental AI algorithm that goes back much earlier than that. If they want to put a citation, the authors might be better off citing an AI textbook such as Russell & Norvig (or finding its earliest use there). If the authors wanted to refer to the first use of beam search in a neural seq2seq setting, I think [Graves, 2012] would be a more appropriate citation (it is used with RNNs, but the core algorithm remains the same). Prior to seq2seq, beam search was used extensively in NLG, including machine translation [Koehn, 2004].
Koehn, 2004: http://homepages.inf.ed.ac.uk/pkoehn/publications/pharaoh-amta2004.pdf
Graves, 2012: https://arxiv.org/abs/1211.3711
| train | [
"-kD6XuA5ys",
"Yp78YjtrMbB",
"MSruIesc6Ea",
"0PMd_-aQpYx",
"MiUJOSnizdx",
"JJcq8kZ6_X9",
"i4YjMjNYITY",
"AThYB8r2ACu",
"uRFOy6CIY5M",
"Zw_YKZmVWJ",
"cQEIsnpe68-",
"XL36HMQs11h",
"mol7hbKF5_v",
"d8ZQzQd5B4E",
"zNTf-MLwrlf",
"vEIYC4Aq_W"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a generalization of the attention coverage penalty in seq2seq models (Wu et al 2016), where instead of encouraging a uniform coverage of the source, this work first predicts the expected coverage of each source word. During beam search, beam hypotheses that violate the expected attention covera... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3,
3
] | [
"nips_2021_NvdzzasFiGr",
"MSruIesc6Ea",
"0PMd_-aQpYx",
"Zw_YKZmVWJ",
"JJcq8kZ6_X9",
"XL36HMQs11h",
"-kD6XuA5ys",
"zNTf-MLwrlf",
"mol7hbKF5_v",
"-kD6XuA5ys",
"vEIYC4Aq_W",
"d8ZQzQd5B4E",
"nips_2021_NvdzzasFiGr",
"nips_2021_NvdzzasFiGr",
"nips_2021_NvdzzasFiGr",
"nips_2021_NvdzzasFiGr"
] |
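The calibrated scoring rule described in this record is easy to state in code. The sketch below is an illustrative simplification, not the authors' released implementation: the L1 deviation and the dictionary interface for hypotheses are choices made for the example, standing in for the paper's global scoring mechanism.

```python
import numpy as np

def global_aware_score(log_prob, cum_attn, pred_global_attn, lam=1.0):
    """Rescore one partial hypothesis during beam search.

    log_prob:         length-normalized log-probability of the partial summary
    cum_attn:         (S,) attention mass accumulated on each source token so far
    pred_global_attn: (S,) global attention distribution predicted before inference
    lam:              penalty weight, treated here as an empirical hyper-parameter
    """
    attn_dist = cum_attn / max(cum_attn.sum(), 1e-9)       # normalize to a distribution
    penalty = np.abs(attn_dist - pred_global_attn).sum()   # deviation from the target
    return log_prob - lam * penalty

def select_beam(hypotheses, pred_global_attn, beam_size=4, lam=1.0):
    """Keep the top-k candidates under the calibrated score, not log-prob alone."""
    scored = sorted(hypotheses,
                    key=lambda h: global_aware_score(h["log_prob"], h["cum_attn"],
                                                     pred_global_attn, lam),
                    reverse=True)
    return scored[:beam_size]
```

Because the global attention target is available before decoding starts, this penalty can steer every beam step, which is the step-wise improvement property the abstract highlights.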