| paper_id | paper_title | paper_abstract | paper_acceptance | meta_review | label | review_ids | review_writers | review_contents | review_ratings | review_confidences | review_reply_tos |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2021_KvjtYlrmAj | Stateful ODE-Nets using Basis Function Expansions | The recently-introduced class of ordinary differential equation networks (ODE-Nets) establishes a fruitful connection between deep learning and dynamical systems. In this work, we reconsider formulations of the weights as continuous-in-depth functions using linear combinations of basis functions which enables us to leverage parameter transformations such as function projections. In turn, this view allows us to formulate a novel stateful ODE-Block that handles stateful layers. The benefits of this new ODE-Block are twofold: first, it enables incorporating meaningful continuous-in-depth batch normalization layers to achieve state-of-the-art performance; second, it enables compressing the weights through a change of basis, without retraining, while maintaining near state-of-the-art performance and reducing both inference time and memory footprint. Performance is demonstrated by applying our stateful ODE-Block to (a) image classification tasks using convolutional units and (b) sentence-tagging tasks using transformer encoder units.
| accept | This paper presents a compressed representation of ODE-based neural networks, and analyzes its computational and implicit regularization properties. Experiments cover a good range of architectures, including both convolutional and transformer-based.
The reviewers believe there are worthwhile ideas in the paper and haven't identified anything that looks like a critical flaw. They raised some concerns about clarity and identified numerous ways in which the work could be extended. Reviewers also expressed skepticism of the usefulness, either because compression might not translate into practical gains, or because this architecture might lose various benefits associated with neural ODEs.
My own impression is that the submission is pretty well written by the standards of a NeurIPS paper, and the methods are described clearly even if the motivation could sometimes be made more explicit. The reviewers' suggestions seem to me (as a non-expert on this topic) to be sensible ones which would indeed improve the paper, but any paper that tried to do all of them would be unreasonably expansive. Regarding significance, the proposed architecture seems like a natural one to explore, and a paper which does it carefully (as this one does) is making a worthwhile contribution. Therefore, I recommend acceptance despite the slightly-low scores.
| train | [
"Jjl6YS7ghY",
"BNMr6aFFG6",
"RGjE9uioxGM",
"AHKrNE8wInL",
"HFgWcCutkjR",
"BUuYuCxVant",
"OJ8vW1S0d8",
"SF9HasEIdAi",
"hr0abQydkeQ",
"g9RGSmXdT-S",
"GE7mgsnrfZ5",
"KNwtzDfU60O",
"p5mvUh8PSp",
"5nXLzErdp-f",
"t_dJ4aP6PfO",
"EFSMk_Kgku2",
"lpr1i-p_KIy",
"32Gv2IDN3LK"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
" We would like to thank you for the great discussion and for your consideration to change your score to 6.\n\nWe agree with your suggestions for further exploring basis functions -- our only concern is that we are not able to include so many directions, while also addressing the more important concerns of devoting... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"RGjE9uioxGM",
"nips_2021_KvjtYlrmAj",
"BNMr6aFFG6",
"OJ8vW1S0d8",
"KNwtzDfU60O",
"hr0abQydkeQ",
"p5mvUh8PSp",
"nips_2021_KvjtYlrmAj",
"lpr1i-p_KIy",
"SF9HasEIdAi",
"p5mvUh8PSp",
"nips_2021_KvjtYlrmAj",
"BNMr6aFFG6",
"BNMr6aFFG6",
"nips_2021_KvjtYlrmAj",
"5nXLzErdp-f",
"SF9HasEIdAi",... |
nips_2021_rkA36z2plsI | Beyond the Signs: Nonparametric Tensor Completion via Sign Series | We consider the problem of tensor estimation from noisy observations with possibly missing entries. A nonparametric approach to tensor completion is developed based on a new model which we coin as sign representable tensors. The model represents the signal tensor of interest using a series of structured sign tensors. Unlike earlier methods, the sign series representation effectively addresses both low- and high-rank signals, while encompassing many existing tensor models---including CP models, Tucker models, single index models, structured tensors with repeating entries---as special cases. We provably reduce the tensor estimation problem to a series of structured classification tasks, and we develop a learning reduction machinery to empower existing low-rank tensor algorithms for more challenging high-rank estimation. Excess risk bounds, estimation errors, and sample complexities are established. We demonstrate the outperformance of our approach over previous methods on two datasets, one on human brain connectivity networks and the other on topic data mining.
| accept | This paper presents an interesting and innovative new notion of tensor rank that can model high-rank tensors with low complexity, and shows how to use this idea to solve tensor completion problems. While the reviewers differ in their beliefs about the impact this method will have, none dispute that the paper is clear, interesting, correct, and even (mostly) complete after the new experiments the authors added during the discussion. This paper is a novel and solid contribution that deserves to be published. | train | [
"TfOnPg8mUv",
"Kgmfhob6A6f",
"6PFfVE13SL",
"IZpm4B6ABeq",
"05EiyL7b-Cn",
"A5f84IvFhL",
"fvV8qsG7mLY",
"3cDnykTlGY",
"iz2Cs-mPjvQ",
"dJA10QyEJP3"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" > Hypergraphons: Please note that your model is indeed a special case of general hypergraphons. Indeed, as it depends only on $K$ coordinates, it is referred to as simple hypergraphons (see e.g., https://link.springer.com/article/10.1023/A:1021692202530). And you require uniform (and non-uniform) sampling for obs... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
3
] | [
"Kgmfhob6A6f",
"IZpm4B6ABeq",
"iz2Cs-mPjvQ",
"3cDnykTlGY",
"fvV8qsG7mLY",
"dJA10QyEJP3",
"nips_2021_rkA36z2plsI",
"nips_2021_rkA36z2plsI",
"nips_2021_rkA36z2plsI",
"nips_2021_rkA36z2plsI"
] |
nips_2021_KLILoGYuOfw | Functional Variational Inference based on Stochastic Process Generators | Bayesian inference in the space of functions has been an important topic for Bayesian modeling in the past. In this paper, we propose a new solution to this problem called Functional Variational Inference (FVI). In FVI, we minimize a divergence in function space between the variational distribution and the posterior process. This is done by using as functional variational family a new class of flexible distributions called Stochastic Process Generators (SPGs), which are cleverly designed so that the functional ELBO can be estimated efficiently using analytic solutions and mini-batch sampling. FVI can be applied to stochastic process priors when random function samples from those priors are available. Our experiments show that FVI consistently outperforms weight-space and function space VI methods on several tasks, which validates the effectiveness of our approach.
| accept | This paper proposes a method for approximate Bayesian inference in function space using a novel form of the KL divergence between stochastic processes, the grid-functional KL divergence, where the number of measurement points is a random variable. This divergence has better properties than those used in previous work. All the reviewers recommend acceptance and I agree. | train | [
"17p2s7lxcyg",
"S5Iqa6UW5LP",
"O63uEUhOkbh",
"R9Awk-G9GIy",
"AqHL7rjIg5E",
"XxSaODxqw8X",
"xVoEP6w_73",
"Qm6xUKqnFJF",
"CMzuKCt466y"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your answers. My concerns are addressed. My recommendation remains the same.",
"The authors propose \"Functional Variational Inference\" (FVI), a novel variational approach to approximate Bayesian inference in function space. The approach is based on the \"grid-functional KL divergence\", a novel ... | [
-1,
6,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
-1,
3,
-1,
-1,
-1,
-1,
5,
4,
2
] | [
"AqHL7rjIg5E",
"nips_2021_KLILoGYuOfw",
"xVoEP6w_73",
"S5Iqa6UW5LP",
"CMzuKCt466y",
"Qm6xUKqnFJF",
"nips_2021_KLILoGYuOfw",
"nips_2021_KLILoGYuOfw",
"nips_2021_KLILoGYuOfw"
] |
nips_2021_86NHK__yFDl | TTT++: When Does Self-Supervised Test-Time Training Fail or Thrive? | Test-time training (TTT) through self-supervised learning (SSL) is an emerging paradigm to tackle distributional shifts. Despite encouraging results, it remains unclear when this approach thrives or fails. In this work, we first provide an in-depth look at its limitations and show that TTT can possibly deteriorate, instead of improving, the test-time performance in the presence of severe distribution shifts. To address this issue, we introduce a test-time feature alignment strategy utilizing offline feature summarization and online moment matching, which regularizes adaptation without revisiting training data. We further scale this strategy in the online setting through batch-queue decoupling to enable robust moment estimates even with limited batch size. Given aligned feature distributions, we shed light on the strong potential of TTT by theoretically analyzing its performance post adaptation. This analysis motivates our use of more informative self-supervision in the form of contrastive learning. We empirically demonstrate that our modified version of test-time training, termed TTT++, outperforms state-of-the-art methods by a significant margin on multiple vision benchmarks. Our result indicates that exploiting extra information stored in a compact form, such as related SSL tasks and feature distribution moments, can be critical to the design of test-time algorithms.
| accept | The work proposes a test-time training method to address the distribution shift between training and testing using online feature alignment and contrastive-loss-based SSL. Reviewers overall find the paper well written and are content with the empirical evaluation. The rebuttal was able to convince some reviewers to raise their scores. I do agree with reviewer qJr7 that the contrastive-loss-based SSL task does not specifically address test-time training, but rather is a generic approach to improving representation learning/generalization, which is well established. Focusing more of the discussion on the online feature alignment might shed more light on the new contribution of the work. As suggested by another reviewer, on-the-fly adaptation might be the more interesting setup to further distinguish the TTT setup from other setups facing distribution shift, such as domain adaptation. | test | [
"iGrWb_iVHH1",
"wxJLlHX9Mg",
"b8mafoMaK5M",
"dH-LBBTUey1",
"LAAU08QVx4g",
"HOBtZcsriX",
"ScLmmICKZes",
"UKsaTmClzhR",
"gSa7zdnVJeB",
"OSdzoJ-97Vw",
"E0ugjKGyWzC",
"tFAcQwpWHLK",
"U_LzjP6COcF",
"H9HVlD-BsqP",
"HOMxEsOQSGQ",
"g1sgasY7W-j",
"dbxU2Nq6J_",
"4WbJ7FpwQO",
"fIGRxlsdS83",... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"... | [
" Thanks for the comments! We will be glad to look more into the online setting in our future work. ",
" Thanks for the comments! We appreciate the feedback on the limitations of our paper and would like to share that the `decoupling` trick mentioned in our earlier response can help address these limitations, as ... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1
] | [
"ScLmmICKZes",
"E0ugjKGyWzC",
"HOBtZcsriX",
"nips_2021_86NHK__yFDl",
"nips_2021_86NHK__yFDl",
"4WbJ7FpwQO",
"UKsaTmClzhR",
"gSa7zdnVJeB",
"QmGuaXO3aGC",
"U_LzjP6COcF",
"fIGRxlsdS83",
"nips_2021_86NHK__yFDl",
"H9HVlD-BsqP",
"HOMxEsOQSGQ",
"dbxU2Nq6J_",
"nips_2021_86NHK__yFDl",
"nips_2... |
nips_2021_mSuBvrUJFsF | Double Machine Learning Density Estimation for Local Treatment Effects with Instruments | It is common to quantify causal effects with mean values, which, however, may fail to capture significant distribution differences of the outcome under different treatments. We study the problem of estimating the density of the causal effect of a binary treatment on a continuous outcome given a binary instrumental variable in the presence of covariates. Specifically, we consider the local treatment effect, which measures the effect of treatment among those who comply with the assignment under the assumption of monotonicity (only the ones who were offered the treatment take it). We develop two families of methods for this task, kernel-smoothing and model-based approximations -- the former smoothes the density by convoluting with a smooth kernel function; the latter projects the density onto a finite-dimensional density class. For both approaches, we derive double/debiased machine learning (DML) based estimators. We study the asymptotic convergence rates of the estimators and show that they are robust to the biases in nuisance function estimation. We illustrate the proposed methods on synthetic data and a real dataset called 401(k).
| accept | The reviewers unanimously found your paper to be compelling. I expect you will make the changes discussed during the review process. | train | [
"_i35h9ejNBY",
"1tO9dvFE5YC",
"I8r8WIVxqus",
"jV6GTIKtXMu",
"NHAfM5yPDpB",
"SAWkzEOC0TW",
"aFB8BcM-yQX",
"wzOdiBakPOa"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes two methods for estimating the density of the outcome variable for the compliers in the potential outcomes framework. The first is a kernel-smoothing based approach, while the second is a model-based approach, and in both cases, the double machine learning approach based on data splitting is em... | [
7,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
4,
-1,
-1,
-1,
-1,
3,
2,
3
] | [
"nips_2021_mSuBvrUJFsF",
"_i35h9ejNBY",
"aFB8BcM-yQX",
"wzOdiBakPOa",
"SAWkzEOC0TW",
"nips_2021_mSuBvrUJFsF",
"nips_2021_mSuBvrUJFsF",
"nips_2021_mSuBvrUJFsF"
] |
nips_2021_6YL_BntJrz6 | Dirichlet Energy Constrained Learning for Deep Graph Neural Networks | Graph neural networks (GNNs) integrate deep architectures and topological structure modeling in an effective way. However, the performance of existing GNNs would decrease significantly when they stack many layers, because of the over-smoothing issue. Node embeddings tend to converge to similar vectors when GNNs keep recursively aggregating the representations of neighbors. To enable deep GNNs, several methods have been explored recently. But they are developed from either techniques in convolutional neural networks or heuristic strategies. There is no generalizable and theoretical principle to guide the design of deep GNNs. To this end, we analyze the bottleneck of deep GNNs by leveraging the Dirichlet energy of node embeddings, and propose a generalizable principle to guide the training of deep GNNs. Based on it, a novel deep GNN framework -- Energetic Graph Neural Networks (EGNN) is designed. It could provide lower and upper constraints in terms of Dirichlet energy at each layer to avoid over-smoothing. Experimental results demonstrate that EGNN achieves state-of-the-art performance by using deep layers.
| accept | Congratulations! Your paper is accepted to NeurIPS 2021.
Please incorporate the edits and corrections as discussed in the rebuttal. | train | [
"LwVPji8GRLK",
"ENF2l_0LGTU",
"F07vZdTmxqf",
"d-E-MgzvC_8",
"xB-TFLoKxBh",
"yJj4pDxESw0",
"DIUARR-nKlf",
"Q2Nk6vE9f7A",
"F60Hrys_KqY",
"acl6pJG1QR",
"b5G8ZXTLgmg",
"T4RipLnVaj4",
"vfuHRzgcUC9",
"R_c_r-nIZSK",
"-C5EiCUOW8"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you so much for your clear response. It is more clear about the effect of the regularization and weight initialization. ",
"This paper proposes Dirichlet energy constrained learning as a generalizable principle to guide the training of deep GNNs through regularizing Dirichlet energy at each layer. Follow... | [
-1,
7,
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"vfuHRzgcUC9",
"nips_2021_6YL_BntJrz6",
"d-E-MgzvC_8",
"T4RipLnVaj4",
"nips_2021_6YL_BntJrz6",
"T4RipLnVaj4",
"nips_2021_6YL_BntJrz6",
"b5G8ZXTLgmg",
"T4RipLnVaj4",
"nips_2021_6YL_BntJrz6",
"DIUARR-nKlf",
"xB-TFLoKxBh",
"-C5EiCUOW8",
"ENF2l_0LGTU",
"nips_2021_6YL_BntJrz6"
] |
nips_2021_48uzkHOKMfz | Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives | Despite the potential of reinforcement learning (RL) for building general-purpose robotic systems, training RL agents to solve robotics tasks still remains challenging due to the difficulty of exploration in purely continuous action spaces. Addressing this problem is an active area of research with the majority of focus on improving RL methods via better optimization or more efficient exploration. An alternate but important component to consider improving is the interface of the RL algorithm with the robot. In this work, we manually specify a library of robot action primitives (RAPS), parameterized with arguments that are learned by an RL policy. These parameterized primitives are expressive, simple to implement, enable efficient exploration and can be transferred across robots, tasks and environments. We perform a thorough empirical study across challenging tasks in three distinct domains with image input and a sparse terminal reward. We find that our simple change to the action interface substantially improves both the learning efficiency and task performance irrespective of the underlying RL algorithm, significantly outperforming prior methods which learn skills from offline expert data.
| accept | The reviewers had some concerns, all of which were addressed by the authors. In particular, the authors ran new experiments and provided many new results. All these new results are extremely useful for supporting the authors' claims and *should be included* in the final version of the paper. For this reason, the paper will have to be significantly revised, but, on the basis of the discussion, the reviewers are confident that this final version will describe a strong enough contribution. | train | [
"GLckDBRi5lR",
"QdfN8tgPQ1U",
"yOqJMh7Gxqx",
"9pqMXK-d1e7",
"nWL36fgAeIF",
"FPu3XzRqjuz",
"VcPAn1V8Tvf",
"rl727xQKRg",
"A-5LfiVCo91",
"f8LQ2nePhS2",
"E4vX7HOLaRl",
"z1-EupQyXo0",
"92cY0LzfnIn",
"QHKND8mq4R"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarification and additional experiments. It addresses all my concerns and I have no more questions. I think the paper will be much more convincing with new results and anlaysis, and the clarification.",
" We thank the reviewer for their detailed response. We have run the additional experiment... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"QdfN8tgPQ1U",
"nWL36fgAeIF",
"QHKND8mq4R",
"VcPAn1V8Tvf",
"yOqJMh7Gxqx",
"nips_2021_48uzkHOKMfz",
"rl727xQKRg",
"A-5LfiVCo91",
"z1-EupQyXo0",
"92cY0LzfnIn",
"nips_2021_48uzkHOKMfz",
"FPu3XzRqjuz",
"nips_2021_48uzkHOKMfz",
"nips_2021_48uzkHOKMfz"
] |
nips_2021_INsYqFjBWnF | Boosted CVaR Classification | Runtian Zhai, Chen Dan, Arun Suggala, J. Zico Kolter, Pradeep Ravikumar | accept | The paper showed in Proposition 1 that optimizing the CVaR 0/1-loss is equivalent to minimizing the average 0/1 loss when one is restricted to using deterministic classifiers, which motivates the usage of randomized classifiers. While the observation is interesting, it is not entirely new in light of the related work [HNSS18] and other results in learning theory; however, the proof given here is simpler and easier to follow. Motivated by this, the authors proposed an efficient LP boost algorithm for optimizing CVaR with theoretical convergence analysis.
Overall, the paper presented an interesting framework based on LP-boost algorithms to build fairer classifiers with theoretical support, although the key observation on the superior performance of randomized classifiers over deterministic ones is not surprising. The revised version should take into account a sensitivity analysis of the hyper-parameters, the precise derivation of the dual formulation, a detailed discussion of deriving generalization bounds, and the other comments made by the reviewers. | train | [
"RZefl4Tb_Y8",
"xE1NN2QQ4SS",
"8gnP9P5mnh",
"PG2UvrprZfi",
"rlLyvalGOsk",
"wrtcESQW2vf",
"-ELW-U-AaeG",
"0ANkPGbVO-3",
"zC31QPtuVvW",
"Q3ZFSYEDuoP",
"1j7MIJv1G5A",
"Pa_HryIKon",
"arlNL1o-of9",
"amt4gBwrGSd",
"6hGG8Pdjyb",
"P6ZayYqzqrq"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper first shows that for deterministic models, optimizing for tail performance (in terms of 0/1 loss) is in fact equivalent to optimizing for average performance, and thus doesn't actually improve fairness of a model. As a mitigation, it is proposed to switch to non-deterministic classifiers, for which this ... | [
6,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
5
] | [
3,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"nips_2021_INsYqFjBWnF",
"8gnP9P5mnh",
"PG2UvrprZfi",
"1j7MIJv1G5A",
"-ELW-U-AaeG",
"nips_2021_INsYqFjBWnF",
"zC31QPtuVvW",
"Pa_HryIKon",
"P6ZayYqzqrq",
"6hGG8Pdjyb",
"RZefl4Tb_Y8",
"amt4gBwrGSd",
"wrtcESQW2vf",
"nips_2021_INsYqFjBWnF",
"nips_2021_INsYqFjBWnF",
"nips_2021_INsYqFjBWnF"
... |
nips_2021_rJq1SdaNPX4 | Disentangled Contrastive Learning on Graphs | Recently, self-supervised learning for graph neural networks (GNNs) has attracted considerable attention because of its notable successes in learning representations of graph-structured data. However, the formation of a real-world graph typically arises from the highly complex interaction of many latent factors. The existing self-supervised learning methods for GNNs are inherently holistic and neglect the entanglement of the latent factors, resulting in learned representations that are suboptimal for downstream tasks and difficult to interpret. Learning disentangled graph representations with self-supervised learning poses great challenges and remains largely ignored by the existing literature. In this paper, we introduce the Disentangled Graph Contrastive Learning (DGCL) method, which is able to learn disentangled graph-level representations with self-supervision. In particular, we first identify the latent factors of the input graph and derive its factorized representations. Each of the factorized representations describes a latent and disentangled aspect pertinent to a specific latent factor of the graph. Then we propose a novel factor-wise discrimination objective in a contrastive learning manner, which can force the factorized representations to independently reflect the expressive information from different latent factors. Extensive experiments on both synthetic and real-world datasets demonstrate the superiority of our method against several state-of-the-art baselines.
| accept | The manuscript has been reviewed by four experienced reviewers, among whom three voted for acceptance and one reviewer (vaM6) voted for borderline reject. The borderline reviewer mainly complained about novelty, some missing experiments, as well as the discussion on scalability and efficiency. Regarding the novelty, in fact, other reviewers indeed rated the proposed approach as being novel; regarding the missing experiments, the authors claimed that the requested experiments were already in the manuscript; regarding the scalability and efficiency, the authors provided a discussion in the rebuttal.
Since reviewer vaM6 only mildly rejects the submission while all other reviewers tend to accept, the AC will follow the majority vote and accept the submission. Still, the authors are strongly recommended to account for the comments from all reviewers in the revised version.
| train | [
"3vKbnnUflcg",
"OJpyRfrjRP",
"1TrXcLyD8TT",
"oT2DItMuGt",
"89-qbemAka8",
"5C5x-lCxWhz",
"wGeWwcg-Ahy",
"-d-EQrO4P--",
"BJQZNxAAuaO",
"KErXq2qWvy",
"kUsf14C9GjH",
"fc0D1R26i6s",
"JHiXPSDRh4_"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer vaM6,\n\nWe thank you again for the valuable and detailed comments. \n\nWe hope that we have adequately addressed your concerns on the clarification of novelty, unsupervised ablation of GIN, and the efficiency and scalability of the proposed method. We deeply appreciate your feedbacks that help to f... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"KErXq2qWvy",
"JHiXPSDRh4_",
"fc0D1R26i6s",
"kUsf14C9GjH",
"KErXq2qWvy",
"KErXq2qWvy",
"JHiXPSDRh4_",
"fc0D1R26i6s",
"kUsf14C9GjH",
"nips_2021_rJq1SdaNPX4",
"nips_2021_rJq1SdaNPX4",
"nips_2021_rJq1SdaNPX4",
"nips_2021_rJq1SdaNPX4"
] |
nips_2021_SEz-FQltAYN | Widening the Pipeline in Human-Guided Reinforcement Learning with Explanation and Context-Aware Data Augmentation | Human explanation (e.g., in terms of feature importance) has been recently used to extend the communication channel between human and agent in interactive machine learning. Under this setting, human trainers provide not only the ground truth but also some form of explanation. However, this kind of human guidance was only investigated in supervised learning tasks, and it remains unclear how to best incorporate this type of human knowledge into deep reinforcement learning. In this paper, we present the first study of using human visual explanations in human-in-the-loop reinforcement learning (HIRL). We focus on the task of learning from feedback, in which the human trainer not only gives binary evaluative "good" or "bad" feedback for queried state-action pairs, but also provides a visual explanation by annotating relevant features in images. We propose EXPAND (EXPlanation AugmeNted feeDback) to encourage the model to encode task-relevant features through a context-aware data augmentation that only perturbs irrelevant features in human salient information. We choose five tasks, namely Pixel-Taxi and four Atari games, to evaluate the performance and sample efficiency of this approach. We show that our method significantly outperforms methods leveraging human explanation that are adapted from supervised learning, and Human-in-the-loop RL baselines that only utilize evaluative feedback.
| accept | The paper presents a method for incorporating richer feedback than a reward signal from humans for training RL agents. Humans provide feedback by (i) indicating which actions are good/bad and (ii) highlighting salient areas in visual observations. The reviewers unanimously agree that the paper should be accepted. The experiments provided by the authors during the rebuttal phase further strengthen the paper. At the same time, I recommend that the authors improve the clarity of the paper and the explanation of prior work (e.g., DQN-TAMER), and address the other minor concerns raised by the reviewers in the camera-ready version.
If I were to simply go by the technical contributions of the paper, I would recommend it as a Poster. However, the line of research on providing human feedback beyond reward functions is worth highlighting to the research community. I therefore recommend that the paper be presented as a spotlight presentation.
| train | [
"oj-sHXfojFo",
"ClEDlMAR8eP",
"ly8KsbhQhC_",
"F67pnVQssIO",
"I9DKpHw10Ih",
"bAJEQgxU63",
"J6i-PVeFKCk",
"59fTDXV_n3c"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes EXPAND - a method for allowing human input and semantic understanding to guide policy learning for RL agents. Humans are asked to provide binary feedback (good/bad) on actions taken by the agent, as well as highlight visual features which helped guide their decision (e.g. stop sign tells human ... | [
7,
7,
-1,
-1,
-1,
-1,
-1,
7
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_SEz-FQltAYN",
"nips_2021_SEz-FQltAYN",
"ClEDlMAR8eP",
"oj-sHXfojFo",
"59fTDXV_n3c",
"nips_2021_SEz-FQltAYN",
"ly8KsbhQhC_",
"nips_2021_SEz-FQltAYN"
] |
nips_2021_78GFU9e56Dq | SOLQ: Segmenting Objects by Learning Queries | In this paper, we propose an end-to-end framework for instance segmentation. Based on the recently introduced DETR, our method, termed SOLQ, segments objects by learning unified queries. In SOLQ, each query represents one object and has multiple representations: class, location and mask. The learned object queries perform classification, box regression and mask encoding simultaneously in a unified vector form. During the training phase, the encoded mask vectors are supervised by the compression coding of raw spatial masks. At inference time, the produced mask vectors can be directly transformed into spatial masks by the inverse process of compression coding. Experimental results show that SOLQ can achieve state-of-the-art performance, surpassing most existing approaches. Moreover, the joint learning of a unified query representation can greatly improve the detection performance of DETR. We hope our SOLQ can serve as a strong baseline for Transformer-based instance segmentation.
| accept | The paper presents a direct way to do instance segmentation in DETR: instead of producing segmentation masks with an FPN, it regresses segmentation masks from compressed representations as queries. This allows training DETR for segmentation end-to-end (while the original DETR and follow-ups have segmentation trained as a second step to detection).
This paper is roughly the result of the combination of (DF-)DETR with mask encoding (DCT), which is criticized by reviewers 94Kv and f92m. I believe there is nonetheless originality in making it work so well that the approach reaches very strong numbers in segmentation on COCO (39.7 APseg with a ResNet-50 backbone and 45.9 with a Swin-L).
Reviewer f92m constructively points out that the gap between APbox and APseg is bigger for this approach than for others, which could mean that it requires a stronger detector. Another interpretation can be that training the DF-DETR model with this additional segmentation loss end-to-end boosts the APbox.
Overall, I believe the paper presents a significant contribution, and the authors answered some of the concerns of the reviewers. This is suitable for publication at NeurIPS. | val | [
"tVDy42JAYKW",
"ez6KW50r45",
"hS6cCLTdkiC",
"KZ-sWs0s8n4",
"tLLrl7mvlXI",
"VfHcZy5BW52",
"JiP2TQ2Qs5",
"dofnSrlpkG1",
"8ds3pFkGdwy",
"gmnSlICweRf",
"mOAGxkLeDi",
"EsYHNOtPBRg",
"qs19pKKZUnH",
"_PXbQUaPSPQ",
"rUm4nadQsnf",
"UT9jrzx10ck",
"kt7I-rQ71Fl"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a different approach to instance segmentation using DETR-like\narchitecture. Instead of the FPN-based approach adopted in the original DETR, they opt for\nregressing directly an encoded version of the mask.\n\nThis approach allows to perform segmentation in a cheaper manner, and also allows to u... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
9,
5,
3
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
"nips_2021_78GFU9e56Dq",
"gmnSlICweRf",
"JiP2TQ2Qs5",
"JiP2TQ2Qs5",
"VfHcZy5BW52",
"mOAGxkLeDi",
"dofnSrlpkG1",
"kt7I-rQ71Fl",
"rUm4nadQsnf",
"tVDy42JAYKW",
"EsYHNOtPBRg",
"qs19pKKZUnH",
"UT9jrzx10ck",
"nips_2021_78GFU9e56Dq",
"nips_2021_78GFU9e56Dq",
"nips_2021_78GFU9e56Dq",
"nips_2... |
nips_2021_pZQrKCkbas | Extending Lagrangian and Hamiltonian Neural Networks with Differentiable Contact Models | The incorporation of appropriate inductive bias plays a critical role in learning dynamics from data. A growing body of work has been exploring ways to enforce energy conservation in the learned dynamics by encoding Lagrangian or Hamiltonian dynamics into the neural network architecture. These existing approaches are based on differential equations, which do not allow discontinuity in the states and thereby limit the class of systems one can learn. However, in reality, most physical systems, such as legged robots and robotic manipulators, involve contacts and collisions, which introduce discontinuities in the states. In this paper, we introduce a differentiable contact model, which can capture contact mechanics: frictionless/frictional, as well as elastic/inelastic. This model can also accommodate inequality constraints, such as limits on the joint angles. The proposed contact model extends the scope of Lagrangian and Hamiltonian neural networks by allowing simultaneous learning of contact and system properties. We demonstrate this framework on a series of challenging 2D and 3D physical systems with different coefficients of restitution and friction. The learned dynamics can be used as a differentiable physics simulator for downstream gradient-based optimization tasks, such as planning and control.
| accept | This paper proposes a differentiable contact model that can be used together with previously proposed Lagrangian and Hamiltonian neural networks, enabling gradient-based system identification and model-based control for hybrid dynamical systems. All reviewers recommend acceptance and by the end of the rebuttal process became satisfied with the additional experiments and clarifications by the authors. Some limitations on the scalability of the method in contact-rich (and high-dimensional state) scenarios were also illustrated through additional experiments, and reviewers were satisfied. I am recommending this paper for acceptance. | train | [
"2S0xlhcp8s",
"bfNtrzao1tW",
"ONCp93fAn-G",
"_4ppTvwqueV",
"QDwxFD-iGc",
"OjG0EL9I-Te",
"hRdS6h61S-t",
"zObjXb1alMH",
"wWsBRJgrANf",
"fiyXhkWuDCr",
"ZtLYTr7XeBU",
"-2qV10x4DGh",
"nNioDz_YPwR",
"nlxu-U-aD9X",
"JeOQQuq14ej",
"S2BAOX5l1L",
"kO0HSPTcAa4",
"bS2Yrkur8S",
"UOXrpZETZs",
... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_rev... | [
" Thank you very much for your update. After reading your rebuttal, I maintain my score to 7, conditioned on that the following requests can be fulfilled:\n\n- I am not very familiar with NeuralSim codebase. The pseudo-code you showed seems to be an inefficient implementation of backpropagation. If so, please state... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
8
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"fiyXhkWuDCr",
"nips_2021_pZQrKCkbas",
"QDwxFD-iGc",
"hRdS6h61S-t",
"OjG0EL9I-Te",
"wWsBRJgrANf",
"-2qV10x4DGh",
"wWsBRJgrANf",
"nNioDz_YPwR",
"S2BAOX5l1L",
"JeOQQuq14ej",
"kO0HSPTcAa4",
"bS2Yrkur8S",
"nips_2021_pZQrKCkbas",
"sndj_L6P88f",
"YO2oX6UE71g",
"PWtVZBBo6Pa",
"3mpf9mu6-9"... |
nips_2021_uZJJFpFl60W | Best-case lower bounds in online learning | Much of the work in online learning focuses on the study of sublinear upper bounds on the regret. In this work, we initiate the study of best-case lower bounds in online convex optimization, wherein we bound the largest \emph{improvement} an algorithm can obtain relative to the single best action in hindsight. This problem is motivated by the goal of better understanding the adaptivity of a learning algorithm. Another motivation comes from fairness: it is known that best-case lower bounds are instrumental in obtaining algorithms for decision-theoretic online learning (DTOL) that satisfy a notion of group fairness. Our contributions are a general method to provide best-case lower bounds in Follow The Regularized Leader (FTRL) algorithms with time-varying regularizers, which we use to show that best-case lower bounds are of the same order as existing upper regret bounds: this includes situations with a fixed learning rate, decreasing learning rates, timeless methods, and adaptive gradient methods. In stark contrast, we show that the linearized version of FTRL can attain negative linear regret. Finally, in DTOL with two experts and binary losses, we fully characterize the best-case sequences, which provides a finer understanding of the best-case lower bounds.
| accept | Reviewers all liked the paper, which seems to provide an interesting new take on analysis of adaptive online learning.
| val | [
"7kW4sqXDXvy",
"NnsvBrxkqMY",
"NINhENmwq31",
"lCzJfrCklAW",
"jzeUCOGn4G3",
"TcLXchXM48",
"asXscjt3Xi",
"cjYX3sTpXnS",
"jsJEchchsnV"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the comments. I think this is an interesting paper that should be published at this venue, my score of 7 is unchanged.",
"This paper studies the best-case lower bound, which characterizes the minimum regret that a specific algorithm can achieve in the most benign environments. The authors provide the... | [
-1,
7,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
3,
-1,
-1,
-1,
-1,
4,
2,
4
] | [
"jzeUCOGn4G3",
"nips_2021_uZJJFpFl60W",
"jsJEchchsnV",
"NnsvBrxkqMY",
"asXscjt3Xi",
"cjYX3sTpXnS",
"nips_2021_uZJJFpFl60W",
"nips_2021_uZJJFpFl60W",
"nips_2021_uZJJFpFl60W"
] |
nips_2021_1W2WuCYbz_C | A Comprehensively Tight Analysis of Gradient Descent for PCA | Zhiqiang Xu, Ping Li | accept | This paper studies an important problem and improves upon existing quantitative results. The reviewers agree that this is a solid contribution and in particular appreciate that both gap-dependent and gap-free bounds are obtained.
The reviewers also recognize, as a major limitation of this work, that the results only apply to finding the top eigenvector. It is unclear how the current techniques can be generalized to the top-k eigenvectors.
The paper can be made stronger if the authors can add (1) a table for quantitative comparison with existing work, (2) discussion of generalization to top-k, (3) discussion on the choice of initial vectors and the use of random initialization, and (4) discussion on practical implementation. These discussions will be useful to those who want to apply the results in this paper. | train | [
"fNym8atOiVy",
"B2zhunshzfl",
"1XHoxJtZ97g",
"IfDfZ0oo5On",
"bjVoKrz5S2M",
"x97vmc31yB0",
"OUIT_pDRrVa",
"OdyDpUYOSey",
"9tj5x6hs7nx",
"SQqyo97d31",
"Cab9rUfTN7f",
"5q8E7IhCFx"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors considered Riemanninan gradient descent for solving the top eigenvector/value problem for real-symmetric matrices. Rates in two regimes are given, depending on the gap of the top eigenvalues compared to the desired accuracy. The analysis is based on polynomial recursion derived from the ... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
2,
2
] | [
"nips_2021_1W2WuCYbz_C",
"1XHoxJtZ97g",
"x97vmc31yB0",
"9tj5x6hs7nx",
"fNym8atOiVy",
"SQqyo97d31",
"5q8E7IhCFx",
"Cab9rUfTN7f",
"nips_2021_1W2WuCYbz_C",
"nips_2021_1W2WuCYbz_C",
"nips_2021_1W2WuCYbz_C",
"nips_2021_1W2WuCYbz_C"
] |
nips_2021_xRLT28nnlFV | On Robust Optimal Transport: Computational Complexity and Barycenter Computation | Khang Le, Huy Nguyen, Quang Nguyen, Tung Pham, Hung Bui, Nhat Ho | accept | Most of the reviewers and the AC agree that the submission makes a worthwhile theoretical contribution in designing and analyzing new algorithms for RSOT and RIBP.
The reviewers highlight the following main concerns:
- The algorithm appears to be a simple normalized version of the Sinkhorn algorithm for UOT.
- The technical novelty appears to be somewhat incremental, as the necessary techniques already appear in the OT and UOT literature.
In the rebuttal, the authors explain the obstacles to overcome in the analysis of the new algorithms. They also stress the fact that no simple, very efficient algorithms were known for RSOT and RIBP before this work. The AC believes that the simplicity of the algorithm and the lack of other algorithms besides standard convex solvers make this paper close to the acceptance threshold for NeurIPS.
A number of issues spotted by the reviewers and the AC could be addressed by the authors in a final version:
- The discussion of the significance and novelty of the convergence analysis of RSOT should appear in the paper.
- The experimental section would ideally include more varied experiments. In particular, larger examples would be nice for testing the scalability of the method, even if it may not be possible to run cvxpy on them to obtain the ground-truth optimum. | test | [
"ozrKt8Zbr3Z",
"qAVty2P4si_",
"eE0ZXKG4qkI",
"_LNkpHvLBW",
"IkI8fHumTi",
"FfLTxqpoACt",
"35vV8RM_BWz",
"W7bK33oplDx",
"ByVaLotEk_R",
"Cc5QofqGh-Y",
"pLPMbHHHz18",
"L0M4lGmOQES",
"pP6zQKyvcUE",
"0edIeV2WMbH",
"yJySXhXg1xe",
"BiNbSfR_TAD",
"GUVQhkLu8fO",
"03ZHTX4tN0N"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. I will keep my score unchanged.",
" Dear Reviewer aBfM,\n\nWe would like to thank you again for spending your time evaluating our paper.\n\nAs the discussion period is expected to conclude early this week, we look forward to hearing your feedback about whether we have addressed your ... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
4,
6
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
4
] | [
"0edIeV2WMbH",
"GUVQhkLu8fO",
"BiNbSfR_TAD",
"pP6zQKyvcUE",
"nips_2021_xRLT28nnlFV",
"W7bK33oplDx",
"L0M4lGmOQES",
"ByVaLotEk_R",
"IkI8fHumTi",
"GUVQhkLu8fO",
"nips_2021_xRLT28nnlFV",
"03ZHTX4tN0N",
"yJySXhXg1xe",
"BiNbSfR_TAD",
"nips_2021_xRLT28nnlFV",
"nips_2021_xRLT28nnlFV",
"nips... |
nips_2021_zqo2sqixxbE | Asymptotically Best Causal Effect Identification with Multi-Armed Bandits | This paper considers the problem of selecting a formula for identifying a causal quantity of interest among a set of available formulas. We assume an online setting in which the investigator may alter the data collection mechanism in a data-dependent way with the aim of identifying the formula with lowest asymptotic variance in as few samples as possible. We formalize this setting by using the best-arm-identification bandit framework where the standard goal of learning the arm with the lowest loss is replaced with the goal of learning the arm that will produce the best estimate. We introduce new tools for constructing finite-sample confidence bounds on estimates of the asymptotic variance that account for the estimation of potentially complex nuisance functions, and adapt the best-arm-identification algorithms of LUCB and Successive Elimination to use these bounds. We validate our method by providing upper bounds on the sample complexity and an empirical study on artificially generated data.
| accept | This paper uses best-arm identification to select the best estimator for identifying a causal effect. While the techniques are not new, the reviewers and the AC believe the result in this paper is an important first step.
The AC also recommends that the authors add discussions on recent theoretical results (e.g., finite-sample regret analysis) of causal bandits. | train | [
"CVK4M7nFuoe",
"cvrL3nQKqtc",
"RtCvfInzFLC",
"7Y42bj8rLnj",
"q8iPammPicB",
"6258goLnisn",
"azEkAO_yigB",
"FfHUkBwC5rS",
"QNKG3ZChB6Z",
"DbhfkDYt4Tk",
"zPbg8WDgHYE",
"iT1pQzuvT5",
"a2duEo6D1U",
"5re91jztOBm",
"gklQ1iJ0ka",
"nls2Ktw69Qo"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposed a novel BAI framework. In the proposed framework, the goal is to identify an estimator with the lowest asymptotic variance. ## Concerns:\n### Asymptotic optimality\nThis study insists that \"Asymptotically Best Casual Effect Identification,\" but there is not proof the asymptotic optimality. ... | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_zqo2sqixxbE",
"RtCvfInzFLC",
"7Y42bj8rLnj",
"6258goLnisn",
"5re91jztOBm",
"azEkAO_yigB",
"q8iPammPicB",
"DbhfkDYt4Tk",
"nips_2021_zqo2sqixxbE",
"zPbg8WDgHYE",
"iT1pQzuvT5",
"gklQ1iJ0ka",
"nls2Ktw69Qo",
"CVK4M7nFuoe",
"QNKG3ZChB6Z",
"nips_2021_zqo2sqixxbE"
] |
nips_2021_Bix4uw5GcbE | Learning rule influences recurrent network representations but not attractor structure in decision-making tasks | Recurrent neural networks (RNNs) are popular tools for studying computational dynamics in neurobiological circuits. However, due to the dizzying array of design choices, it is unclear if computational dynamics unearthed from RNNs provide reliable neurobiological inferences. Understanding the effects of design choices on RNN computation is valuable in two ways. First, invariant properties that persist in RNNs across a wide range of design choices are more likely to be candidate neurobiological mechanisms. Second, understanding what design choices lead to similar dynamical solutions reduces the burden of imposing that all design choices be totally faithful replications of biology. We focus our investigation on how RNN learning rule and task design affect RNN computation. We trained large populations of RNNs with different, but commonly used, learning rules on decision-making tasks inspired by neuroscience literature. For relatively complex tasks, we find that attractor topology is invariant to the choice of learning rule, but representational geometry is not. For simple tasks, we find that attractor topology depends on task input noise. However, when a task becomes increasingly complex, RNN attractor topology becomes invariant to input noise. Together, our results suggest that RNN dynamics are robust across learning rules but can be sensitive to the training task design, especially for simpler tasks.
| accept | This paper is an empirical study of the impact of learning rule and input noise on the RNN solutions in a small family of tasks. The reviewers agree that the writing is clear and expressed excitement about the results, but were concerned about the narrow range of tasks. Reviewers made excellent constructive suggestions, which I strongly encourage the authors to incorporate into the final, stronger manuscript. | train | [
"OEaEB5AU51s",
"bNBS_6vYoKq",
"IZ0e-plmR3K",
"OsF7D_CSGjt",
"32rqsn6_bPW",
"gMfl5_B-MEf",
"tDu_jenzO6m",
"FXmR_e9NuN8",
"0nfTJW2sCE1",
"5LrU8zpN2mY",
"tiJUAKsE36B",
"357yMgDmVwO",
"SCwruRPgCdU",
"zDkoySoXV9E",
"ewbxwzsqh8p"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you again for your feedback. \n\nYes, we have more results in progress using the delayed non-match to sample (DNMS) task as described in [1]. We are now able to successfully train RNNs with all four learning rules (BPTT, GA, H, and FF) on the DNMS task. We are currently training more and plan to have at le... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
5
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
"IZ0e-plmR3K",
"nips_2021_Bix4uw5GcbE",
"0nfTJW2sCE1",
"32rqsn6_bPW",
"tiJUAKsE36B",
"357yMgDmVwO",
"FXmR_e9NuN8",
"SCwruRPgCdU",
"bNBS_6vYoKq",
"ewbxwzsqh8p",
"5LrU8zpN2mY",
"zDkoySoXV9E",
"nips_2021_Bix4uw5GcbE",
"nips_2021_Bix4uw5GcbE",
"nips_2021_Bix4uw5GcbE"
] |
nips_2021_LWH-C1HoQG_ | Few-Shot Segmentation via Cycle-Consistent Transformer | Few-shot segmentation aims to train a segmentation model that can fast adapt to novel classes with few exemplars. The conventional training paradigm is to learn to make predictions on query images conditioned on the features from support images. Previous methods only utilized the semantic-level prototypes of support images as the conditional information. These methods cannot utilize all pixel-wise support information for the query predictions, which is however critical for the segmentation task. In this paper, we focus on utilizing pixel-wise relationships between support and target images to facilitate the few-shot semantic segmentation task. We design a novel Cycle-Consistent Transformer (CyCTR) module to aggregate pixel-wise support features into query ones. CyCTR performs cross-attention between features from different images, i.e. support and query images. We observe that there may exist unexpected irrelevant pixel-level support features. Directly performing cross-attention may aggregate these features from support to query and bias the query features. Thus, we propose using a novel cycle-consistent attention mechanism to filter out possible harmful support features and encourage query features to attend to the most informative pixels from support images. Experiments on all few-shot segmentation benchmarks demonstrate that our proposed CyCTR leads to remarkable improvement compared to previous state-of-the-art methods. Specifically, on Pascal-5^i and COCO-20^i datasets, we achieve 66.6% and 45.6% mIoU for 5-shot segmentation, outperforming previous state-of-the-art by 4.6% and 7.1% respectively.
| accept | This paper proposes to do few-shot segmentation by applying a transformer to the few-shot support examples. The transformer’s attention is constrained so that only pixels that meet the proposed “cycle-consistency” constraint may be used for the prediction task. While the submission isn’t particularly groundbreaking methodologically, the approach seems to be novel overall and the quantitative results are quite strong across two standard benchmarks: for example on COCO, the proposed approach outperforms the previous SotA with 40% vs. 32% for 1-shot, and 45% vs. 37% for 5-shot. The method is ablated to demonstrate the benefit of various parts of the model, e.g. the use of the cycle-consistency constraint vs. a baseline transformer with no cycle-consistency masking.
Reviewers pointed out a number of clarity and presentation issues with the submission. Please take these suggestions into account and revise the text accordingly for the camera-ready. Overall the text can be a bit difficult to parse at many points, and this caused confusion among reviewers and myself -- please try to revise and clarify wherever possible to phrase things more straightforwardly, especially in the core method section.
Other few-shot segmentation papers reported similar results to those in the submission [1, 2], but as they were roughly concurrent with this submission, the submission can’t be penalized for them. However, the authors are strongly encouraged to include comparisons with these recent works in the camera-ready version, and provide context comparing their proposed approach to these methods.
Given the novelty of the approach and the strength of the results, I recommend accepting the paper to NeurIPS.
[1] https://arxiv.org/abs/2104.01538
[2] https://arxiv.org/abs/2103.15402 | train | [
"-qNDvhvfbcn",
"_qJ4LNZwAH",
"dwMgCIKaicH",
"MZQd-d21Fdh",
"JxWIs80v74L",
"Yzz6YOA6_C",
"EQ9K2oI4ugz",
"5Yag4aYld9K",
"xFX6jSdjL1",
"XmmDE3WA3lX",
"jOEJnNZZVbr"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the authors for the response. After careful review of feedback from all reviewers and answers by the authors, I would like to keep the initial rating.",
" We sincerely thank for the reviewer's valuable feedback. There may exist a misunderstanding of our attention mechanism and we clarify this as follo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
3
] | [
"JxWIs80v74L",
"dwMgCIKaicH",
"MZQd-d21Fdh",
"XmmDE3WA3lX",
"5Yag4aYld9K",
"xFX6jSdjL1",
"jOEJnNZZVbr",
"nips_2021_LWH-C1HoQG_",
"nips_2021_LWH-C1HoQG_",
"nips_2021_LWH-C1HoQG_",
"nips_2021_LWH-C1HoQG_"
] |
nips_2021_fpQojkIV5q8 | DropGNN: Random Dropouts Increase the Expressiveness of Graph Neural Networks | This paper studies Dropout Graph Neural Networks (DropGNNs), a new approach that aims to overcome the limitations of standard GNN frameworks. In DropGNNs, we execute multiple runs of a GNN on the input graph, with some of the nodes randomly and independently dropped in each of these runs. Then, we combine the results of these runs to obtain the final result. We prove that DropGNNs can distinguish various graph neighborhoods that cannot be separated by message passing GNNs. We derive theoretical bounds for the number of runs required to ensure a reliable distribution of dropouts, and we prove several properties regarding the expressive capabilities and limits of DropGNNs. We experimentally validate our theoretical findings on expressiveness. Furthermore, we show that DropGNNs perform competitively on established GNN benchmarks.
| accept | The paper proposes a simple idea for improving the expressive power of standard graph neural networks at the expense of speed/memory.
* The reviewers agreed that the paper is well-written and that the proposed idea is elegant and simple to implement.
* The evaluation also shows that DropGNN can improve upon baselines relying upon feature augmentation.
Overall, this is a very good work that meaningfully contributes to the growing GNN literature. | train | [
"t2xnEJiB3u",
"meUY2AaOVF8",
"nk5uxYo_2j",
"X79HiM-dZAt",
"PaXfMVMHoY6",
"Bk4UFM1ZGUD",
"UmRxlQMomv-",
"PcNRu9VCy2Q",
"kJiEG7d6Zny",
"Y5AXjmevKxR",
"1mvPUSYE4ii",
"ELWXQUHSN94"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" I think it could be useful to incorporate parts of the author responses regarding the experimental results either in the main text or in the supplement, since they can help the reader understand in what tasks this method could be expected to perform well. \n\nOverall, this is a good submission so I have updated m... | [
-1,
7,
-1,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
3,
-1,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"Y5AXjmevKxR",
"nips_2021_fpQojkIV5q8",
"UmRxlQMomv-",
"nips_2021_fpQojkIV5q8",
"nips_2021_fpQojkIV5q8",
"PcNRu9VCy2Q",
"ELWXQUHSN94",
"PaXfMVMHoY6",
"X79HiM-dZAt",
"meUY2AaOVF8",
"nips_2021_fpQojkIV5q8",
"nips_2021_fpQojkIV5q8"
] |
nips_2021_OgtWS4bkNO8 | Photonic Differential Privacy with Direct Feedback Alignment | Optical Processing Units (OPUs) -- low-power photonic chips dedicated to large scale random projections -- have been used in previous work to train deep neural networks using Direct Feedback Alignment (DFA), an effective alternative to backpropagation. Here, we demonstrate how to leverage the intrinsic noise of optical random projections to build a differentially private DFA mechanism, making OPUs a solution of choice to provide a \emph{private-by-design} training. We provide a theoretical analysis of our adaptive privacy mechanism, carefully measuring how the noise of optical random projections propagates in the process and gives rise to provable Differential Privacy. Finally, we conduct experiments demonstrating the ability of our learning procedure to achieve solid end-task performance.
| accept | This paper draws a connection between differentially private machine learning and noise injected by specialty analog hardware - specifically, optical processing units. This noise is normally a downside, but this submission shows that it may, in principle, be useful for privacy.
The reviewers agree that this is an interesting connection. There are some reservations about how well-developed this approach is - e.g. limited experiments, which are simulations rather than actual OPU hardware. However, the results seem acceptable as an initial proof-of-concept for this connection. | train | [
"zlB1w79LkWV",
"fTa8377Cj6d",
"8Sakxhfr42",
"MGW2nCmjvMf",
"4l298zueoy8",
"akWiMvk_T2P",
"0IGhs1zaXJ_",
"exhMuLYyhQ3",
"nfJ37-lv4Az",
"DoMVbpNOmn",
"c-C2RGbNrTm",
"p621af-gN1r",
"FDa4j76lFu",
"5oIxz_6fbZ",
"aBze5l6E7i",
"fnRySEM6O0N",
"6t7LsY0ipk1",
"fxhwTyFH6V0"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_rev... | [
" The $\\varepsilon$ values should still be reported for the experiments, even if they end up being large, as it helps compare against prior (and future) work. ",
" > The experimental results state the noise and clipping parameters ($\\sigma$ and $\\tau$), but not the privacy parameters ( $\\varepsilon$ and $\\de... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"fTa8377Cj6d",
"MGW2nCmjvMf",
"4l298zueoy8",
"akWiMvk_T2P",
"akWiMvk_T2P",
"0IGhs1zaXJ_",
"nips_2021_OgtWS4bkNO8",
"aBze5l6E7i",
"c-C2RGbNrTm",
"FDa4j76lFu",
"nips_2021_OgtWS4bkNO8",
"5oIxz_6fbZ",
"fnRySEM6O0N",
"fxhwTyFH6V0",
"c-C2RGbNrTm",
"6t7LsY0ipk1",
"nips_2021_OgtWS4bkNO8",
... |
nips_2021_hLTZCN7f3M- | Searching Parameterized AP Loss for Object Detection | Loss functions play an important role in training deep-network-based object detectors. The most widely used evaluation metric for object detection is Average Precision (AP), which captures the performance of localization and classification sub-tasks simultaneously. However, due to the non-differentiable nature of the AP metric, traditional object detectors adopt separate differentiable losses for the two sub-tasks. Such a mis-alignment issue may well lead to performance degradation. To address this, existing works seek to design surrogate losses for the AP metric manually, which requires expertise and may still be sub-optimal. In this paper, we propose Parameterized AP Loss, where parameterized functions are introduced to substitute the non-differentiable components in the AP calculation. Different AP approximations are thus represented by a family of parameterized functions in a unified formula. Automatic parameter search algorithm is then employed to search for the optimal parameters. Extensive experiments on the COCO benchmark with three different object detectors (i.e., RetinaNet, Faster R-CNN, and Deformable DETR) demonstrate that the proposed Parameterized AP Loss consistently outperforms existing handcrafted losses. Code shall be released.
| accept | All the reviewers have supported the acceptance of this paper, so I recommend accepting as well. | train | [
"xELgG7rFWE",
"6hr_-07mgeJ",
"8fCjFDrUqe6",
"7sVRjCkP7sp",
"A2C8qUznv8k",
"-HxVwgHjfQb",
"tbmLrz4_-2H",
"zjizazBJPL9",
"XNQYQxwQxo"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper describes a technique for estimating a loss for object detection that is well-aligned with the AP loss used at evaluation time. The work builds on a prior formulation of the AP loss as a function on classification scores, but converts the formulation to account for per-bounding box results, and also rep... | [
7,
6,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"nips_2021_hLTZCN7f3M-",
"nips_2021_hLTZCN7f3M-",
"7sVRjCkP7sp",
"XNQYQxwQxo",
"zjizazBJPL9",
"6hr_-07mgeJ",
"xELgG7rFWE",
"nips_2021_hLTZCN7f3M-",
"nips_2021_hLTZCN7f3M-"
] |
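As a worked illustration of the substitution idea in the abstract above: AP can be written through pairwise score comparisons involving the non-differentiable Heaviside step H, and the paper's approach amounts to replacing such components with parameterized differentiable functions. The sigmoid surrogate below is one illustrative choice of f_theta, not the paper's exact parameterization (here the s_i are detection scores and P is the set of positives).

```latex
% AP via pairwise comparisons, with the step H swapped for a parameterized
% surrogate f_\theta (illustrative choice shown on the right).
\mathrm{rank}(i) = 1 + \sum_{j \neq i} H(s_j - s_i), \qquad
\mathrm{AP} = \frac{1}{|\mathcal{P}|} \sum_{i \in \mathcal{P}}
  \frac{1 + \sum_{j \in \mathcal{P},\, j \neq i} H(s_j - s_i)}{\mathrm{rank}(i)},
\qquad H(x) \;\longrightarrow\; f_{\theta}(x) = \frac{1}{1 + e^{-x/\theta}} .
```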
nips_2021_GEKTIKvslP | Fair Exploration via Axiomatic Bargaining | Motivated by the consideration of fairly sharing the cost of exploration between multiple groups in learning problems, we develop the Nash bargaining solution in the context of multi-armed bandits. Specifically, the 'grouped' bandit associated with any multi-armed bandit problem associates, with each time step, a single group from some finite set of groups. The utility gained by a given group under some learning policy is naturally viewed as the reduction in that group's regret relative to the regret that group would have incurred 'on its own'. We derive policies that yield the Nash bargaining solution relative to the set of incremental utilities possible under any policy. We show that on the one hand, the 'price of fairness' under such policies is limited, while on the other hand, regret optimal policies are arbitrarily unfair under generic conditions. Our theoretical development is complemented by a case study on contextual bandits for warfarin dosing where we are concerned with the cost of exploration across multiple races and age groups.
| accept | Reviewers were unanimous in their assessment of the paper: they all thought the paper addresses a practically well-motivated question (i.e., fairness considerations in allocating the costs of exploration in grouped MAB settings); it offers a compelling formulation using ideas from the axiomatic bargaining literature; and finally, the subsequent analysis leads to intriguing insights. Reviewers offered several suggestions for further improvement, including expanding the discussion of the asymptotic analysis of regret and contrasting it with the finite-horizon setting. I am confident that the authors can incorporate these suggestions in their next revision of the work, and for those reasons, I suggest acceptance. | test | [
"WcBS0jVEdT",
"yWR9EkHoxL",
"xgJzWjQKfCA",
"Rad6D4upot1",
"sG_bkg6k39s",
"0hGSpO4uDsv",
"Ywx5Rb-0mp0",
"QaR7kAD3fM",
"NVJl3JdghOH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper considers what the burden of exploration looks across groups in the bandit setting. With tools from axiomatic bargaining work, they show that regret optimal policies actually explore arms in a way that the utility of some specific group from collaborating with other groups is 0; utility is measured in te... | [
8,
-1,
7,
-1,
-1,
-1,
-1,
7,
7
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_GEKTIKvslP",
"sG_bkg6k39s",
"nips_2021_GEKTIKvslP",
"xgJzWjQKfCA",
"WcBS0jVEdT",
"QaR7kAD3fM",
"NVJl3JdghOH",
"nips_2021_GEKTIKvslP",
"nips_2021_GEKTIKvslP"
] |
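A one-line sketch of the formulation implied by the abstract above, with illustrative notation: group g's incremental utility is its regret reduction relative to learning on its own, and the Nash bargaining solution maximizes the sum of log utilities over achievable policies.

```latex
% Incremental utility per group and the Nash bargaining objective.
U_g(\pi) = R_g^{\mathrm{solo}} - R_g(\pi) \;\ge\; 0, \qquad
\pi^{\mathrm{NBS}} \in \arg\max_{\pi} \; \sum_{g} \log U_g(\pi) .
```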
nips_2021_DfWL8kIb0eF | Unifying lower bounds on prediction dimension of convex surrogates | Jessica Finocchiaro, Rafael Frongillo, Bo Waggoner | accept | This paper considers the problem of constructing convex surrogates for supervised learning tasks, and provides lower bounds for the convex consistency dimension, which is the lowest prediction dimension with a consistent convex surrogate. Reviewers felt that paper was well-written, and that the techniques are novel and generally interesting. In particular, the authors use their framework to resolve open questions of Frongillo and Kash (2015) and Dearborn and Frongillo (2020). However, given that the topic is somewhat niche for the NeurIPS community, the authors are encouraged to incorporate the reviewers' suggestions to better convey the significance of their results to the broader community. | test | [
"fN8_b9p1G3I",
"GJNBxB7xKVW",
"PKknTUNEHce",
"F1bvVQzfyiq",
"1H6RpqItZR3",
"HGsNWuQDVJ",
"p1ahFYL3j5U",
"RmR7tzeE-nH",
"DxYs6MjHGYL",
"hriczxVWcSU",
"2a3jO56ADWT"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. I still think the proofs of this paper's Theorem 1, [33, Theorem 16] and [2, Theorem 9] are quite similar; they all say that the level set of a property (for 33, it is the set of probability distributions whose optimal prediction is t, but I think it is not hard to abstract \"optimal pred... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
1,
3,
2
] | [
"GJNBxB7xKVW",
"PKknTUNEHce",
"F1bvVQzfyiq",
"hriczxVWcSU",
"2a3jO56ADWT",
"DxYs6MjHGYL",
"RmR7tzeE-nH",
"nips_2021_DfWL8kIb0eF",
"nips_2021_DfWL8kIb0eF",
"nips_2021_DfWL8kIb0eF",
"nips_2021_DfWL8kIb0eF"
] |
nips_2021_sf2BxJNXC3K | Ultrahyperbolic Neural Networks | Riemannian space forms, such as the Euclidean space, sphere and hyperbolic space, are popular and powerful representation spaces in machine learning. For instance, hyperbolic geometry is appropriate to represent graphs without cycles and has been used to extend Graph Neural Networks. Recently, some pseudo-Riemannian space forms that generalize both hyperbolic and spherical geometries have been exploited to learn a specific type of nonparametric embedding called ultrahyperbolic. The lack of geodesic between every pair of ultrahyperbolic points makes the task of learning parametric models (e.g., neural networks) difficult. This paper introduces a method to learn parametric models in ultrahyperbolic space. We experimentally show the relevance of our approach in the tasks of graph and node classification.
| accept | There is a lot of support for this paper, which proposes a mathematical elegant and convincing method for embedding graphs in non-Euclidean spaces, namely ultrahyperbolic spaces. The experiments - despite some caveats mentioned - provide sufficient evidence for the validity of the approach. | train | [
"aYo0GQwPylz",
"wt2HxADm_Nz",
"6PiJC9kHl6g",
"9kN3rvw_REW",
"sZNBLmclE3t",
"GRilTUgAz8M",
"psxG5vIOt2",
"SQBTdXBBuWu",
"3eGxAD8sd2F",
"ILUKQ8Aksvm",
"S7-QxPibF4r",
"foNRqKeFH76",
"KyWXanIoCKR",
"Da_odlL8kRB",
"WJJmW6oTd_"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper presents a (graph) neural network in ultrahyperbolic (semi-Riemannian) space that can be used to model hierarchical graphs with cycles. The motivation is that ultrahyperbolic manifold generalizes hyperbolic and spherical manifolds, thus providing inductive bias to those geometries. In order to avoid bro... | [
5,
-1,
7,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
-1,
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1
] | [
"nips_2021_sf2BxJNXC3K",
"Da_odlL8kRB",
"nips_2021_sf2BxJNXC3K",
"3eGxAD8sd2F",
"Da_odlL8kRB",
"nips_2021_sf2BxJNXC3K",
"KyWXanIoCKR",
"aYo0GQwPylz",
"aYo0GQwPylz",
"nips_2021_sf2BxJNXC3K",
"WJJmW6oTd_",
"WJJmW6oTd_",
"GRilTUgAz8M",
"6PiJC9kHl6g",
"nips_2021_sf2BxJNXC3K"
] |
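For readers unfamiliar with the geometry referenced above, one common way to write the ambient structure (notation illustrative and possibly differing from the paper's conventions): a pseudo-Riemannian scalar product with q negative directions and the induced pseudo-hyperboloid; q = 1 recovers the hyperbolic case, and flipping the sign of the constraint gives the spherical one.

```latex
% Pseudo-Riemannian scalar product and the induced pseudo-hyperboloid.
\langle x, y \rangle_q = -\sum_{i=1}^{q} x_i y_i + \sum_{i=q+1}^{d+1} x_i y_i,
\qquad
\mathcal{U}^{d}_{q} = \big\{ x \in \mathbb{R}^{d+1} \;:\; \langle x, x \rangle_q = -1 \big\} .
```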
nips_2021_Sl0WX9H6ZJg | NeuroMLR: Robust & Reliable Route Recommendation on Road Networks | Predicting the most likely route from a source location to a destination is a core functionality in mapping services. Although the problem has been studied in the literature, two key limitations remain to be addressed. First, our study reveals that a significant portion of the routes recommended by existing methods fail to reach the destination. Second, existing techniques are transductive in nature; hence, they fail to recommend routes if unseen roads are encountered at inference time. In this paper, we address these limitations through an inductive algorithm called NeuroMLR. NeuroMLR learns a generative model from historical trajectories by conditioning on three explanatory factors: the current location, the destination, and real-time traffic conditions. The conditional distributions are learned through a novel combination of Lipschitz embedding with Graph Convolutional Networks (GCN) using historical trajectory data. Through in-depth experiments on real-world datasets, we establish that NeuroMLR imparts significant improvement in accuracy over the state of the art. More importantly, NeuroMLR generalizes dramatically better to unseen data and the recommended routes reach the destination with much higher likelihood than existing techniques.
| accept | This paper proposes a method for route recommendation using graph neural networks and Lipschitz embedding. The idea of the proposed method is interesting, and the proposed architecture is reasonable. The paper is well written, and the experiments are well-designed. As commented by the reviewers, the novelty needs to be clearly explained in the revised paper. The experimental results shown in the author response would strengthen the paper. | train | [
"6dsajNa0GSf",
"RIrd-lRJIX",
"iKge-hC6rp",
"w6w4XhGy9Yt",
"4LwQpeLikg3",
"inUaXHwxsmz",
"wcbfJKP4H0"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the thorough response --- they resolved all my concerns. This is a great work. I would like to raise my rating. ",
"This paper presents a new approach for route recommendation. The key idea of the approach is to model the mobility pattern as a Markov process and learn the local transition probability... | [
-1,
8,
-1,
-1,
-1,
6,
6
] | [
-1,
3,
-1,
-1,
-1,
4,
2
] | [
"4LwQpeLikg3",
"nips_2021_Sl0WX9H6ZJg",
"wcbfJKP4H0",
"inUaXHwxsmz",
"RIrd-lRJIX",
"nips_2021_Sl0WX9H6ZJg",
"nips_2021_Sl0WX9H6ZJg"
] |
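A minimal sketch of the inference loop suggested by the abstract above: the route is decoded greedily, one road segment at a time, from a learned transition model conditioned on the current location and the destination. Both `graph` and `transition_prob` are hypothetical stand-ins (the paper learns the latter with Lipschitz embeddings and GCNs).

```python
def recommend_route(graph, transition_prob, source, dest, max_hops=1000):
    """graph: dict mapping a node to its neighbor list;
    transition_prob(current, nxt, dest): learned P(next | current, dest)."""
    route, current = [source], source
    for _ in range(max_hops):
        if current == dest:
            return route
        # Greedily follow the most likely next segment toward the destination.
        nxt = max(graph[current], key=lambda n: transition_prob(current, n, dest))
        route.append(nxt)
        current = nxt
    return route  # hop budget exhausted without reaching the destination
```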
nips_2021_pSitk34qYit | Risk Bounds and Calibration for a Smart Predict-then-Optimize Method | Heyuan Liu, Paul Grigas | accept | This paper builds upon prior work on the Smart Predict-then-Optimize (SPO) framework. Many practical problems can be described as two-step procedures: (1) predict some quantity; (2) solve an optimization problem with the predictions from step 1 as inputs. Elmachtoub and Grigas earlier proposed SPO, where step 2 is taken into account during step 1 by setting up an appropriate surrogate loss. Prior work has only established asymptotic guarantees for this learning problem, and this paper presents the first finite-sample guarantees. The reviewers agree that this paper is well written and presents an important contribution that is of interest to the broader NeurIPS community. | train | [
"ofi2DwWjT33",
"sxxwQNuSbSg",
"umwHRYCcM4S",
"T0LTKd85kQt",
"wbB7LNbrq2Q",
"9GkfNWeI8n",
"wLC7RjI-RNO"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper studies the excess SPO risk of the empirical SPO+ risk minimizers. When the feasible region of the downstream planning task is a bounded polyhedron, the excess risk is on the order of O(n^{-1/4}). When the feasible region is the level set of a strongly convex and smooth function, the excess risk is on th... | [
7,
-1,
6,
-1,
-1,
-1,
7
] | [
4,
-1,
4,
-1,
-1,
-1,
2
] | [
"nips_2021_pSitk34qYit",
"9GkfNWeI8n",
"nips_2021_pSitk34qYit",
"umwHRYCcM4S",
"ofi2DwWjT33",
"wLC7RjI-RNO",
"nips_2021_pSitk34qYit"
] |
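For context, the SPO loss and its SPO+ convex surrogate from Elmachtoub and Grigas, recalled here with the downstream problem written as z*(c) = min over w in S of c^T w, minimizer w*(c), and predicted cost vector c-hat; see the original papers for the precise conditions.

```latex
% SPO loss (excess decision cost) and its SPO+ convex surrogate.
\ell_{\mathrm{SPO}}(\hat{c}, c) = c^{\top} w^{*}(\hat{c}) - z^{*}(c), \qquad
\ell_{\mathrm{SPO}+}(\hat{c}, c) =
\max_{w \in S} \big\{ (c - 2\hat{c})^{\top} w \big\}
+ 2\hat{c}^{\top} w^{*}(c) - z^{*}(c) .
```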
nips_2021_ohfi44BZPC4 | Three-dimensional spike localization and improved motion correction for Neuropixels recordings | Neuropixels (NP) probes are dense linear multi-electrode arrays that have rapidly become essential tools for studying the electrophysiology of large neural populations. Unfortunately, a number of challenges remain in analyzing the large datasets output by these probes. Here we introduce several new methods for extracting useful spiking information from NP probes. First, we use a simple point neuron model, together with a neural-network denoiser, to efficiently map spikes detected on the probe into three-dimensional localizations. Previous methods localized spikes in two dimensions only; we show that the new localization approach is significantly more robust and provides an improved feature set for clustering spikes according to neural identity (``spike sorting"). Next, we apply a Poisson denoising method to the resulting three-dimensional point-cloud representation of the data, and show that the resulting 3D images can be accurately registered over time, leading to improved tracking of time-varying neural activity over the probe, and in turn, crisper estimates of neural clusters over time. The code to reproduce our results and an example neuropixels dataset is provided in the supplementary material.
| accept | This work brings together a number of algorithms to provide a method to detect the 3D location of cells from data collected with new high-density electrode arrays: Neuropixels. All reviewers appreciated the analysis developed by the authors in this work. The main discussion centered around the scope of NeurIPS with respect to engineered solutions to neural imaging problems. The authors, as well as some reviewers, pointed out the historical presence of such papers at NeurIPS, as well as the stated scope. Thus I am happy to recommend this paper for acceptance at NeurIPS. | test | [
"PlWOJOnTys2",
"rbCJHmJCwch",
"1wqAYVmv__7",
"e8ydo3nINti",
"N7rKiCR0Mv9",
"JYmZBTg2sNS",
"nIK-DWn0mUV",
"fhfeFtJjCh",
"njHSkKiQzj",
"fK9pIkBvr0x"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors describe a pipeline for processing recordings from neuropixel probes which are the current state of the art in in-vivo recordings from single neurons in the brain. The authors describe a pipeline for processing recordings from neuropixel probes which are the current state of the art in in-vivo record... | [
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
10,
8
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_ohfi44BZPC4",
"nips_2021_ohfi44BZPC4",
"fhfeFtJjCh",
"nips_2021_ohfi44BZPC4",
"fK9pIkBvr0x",
"PlWOJOnTys2",
"njHSkKiQzj",
"rbCJHmJCwch",
"nips_2021_ohfi44BZPC4",
"nips_2021_ohfi44BZPC4"
] |
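A toy version of the localization step described in the abstract above: fit a point-source model in which each channel's peak-to-peak amplitude decays with the inverse distance to the source. The probe geometry and amplitudes below are made up, and the sketch omits the paper's neural-network denoiser.

```python
import numpy as np
from scipy.optimize import least_squares

channels = np.array([[0., 0., 0.], [20., 0., 0.],
                     [0., 20., 0.], [20., 20., 0.]])  # channel xyz (microns)
amplitudes = np.array([50., 30., 28., 20.])           # observed ptp per channel

def residuals(params):
    x, y, z, alpha = params
    dist = np.linalg.norm(channels - np.array([x, y, z]), axis=1)
    return alpha / dist - amplitudes                  # point-source model

fit = least_squares(residuals, x0=[10., 10., 20., 1000.])
print(fit.x)  # estimated 3D source position and amplitude scale
```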
nips_2021_5CGPY2VeEGb | Semi-Supervised Semantic Segmentation via Adaptive Equalization Learning | Due to the limited and even imbalanced data, semi-supervised semantic segmentation tends to have poor performance on some certain categories, e.g., tailed categories in Cityscapes dataset which exhibits a long-tailed label distribution. Existing approaches almost all neglect this problem, and treat categories equally. Some popular approaches such as consistency regularization or pseudo-labeling may even harm the learning of under-performing categories, that the predictions or pseudo labels of these categories could be too inaccurate to guide the learning on the unlabeled data. In this paper, we look into this problem, and propose a novel framework for semi-supervised semantic segmentation, named adaptive equalization learning (AEL). AEL adaptively balances the training of well and badly performed categories, with a confidence bank to dynamically track category-wise performance during training. The confidence bank is leveraged as an indicator to tilt training towards under-performing categories, instantiated in three strategies: 1) adaptive Copy-Paste and CutMix data augmentation approaches which give more chance for under-performing categories to be copied or cut; 2) an adaptive data sampling approach to encourage pixels from under-performing category to be sampled; 3) a simple yet effective re-weighting method to alleviate the training noise raised by pseudo-labeling. Experimentally, AEL outperforms the state-of-the-art methods by a large margin on the Cityscapes and Pascal VOC benchmarks under various data partition protocols. Code is available at https://github.com/hzhupku/SemiSeg-AEL.
| accept | This paper offers an adaptive equalization strategy for semantic segmentation to improve the performance of underperforming or underrepresented categories. All reviewers found strengths in the paper and recommended acceptance. The authors' rebuttal provided additional information that was compelling and confirmed that this paper is acceptable, preferably as a spotlight. | test | [
"k74DHwIRip-",
"1vKckNQ8_HL",
"HyWSR0DvNzP",
"1MfYzqnniDb",
"gbrGSyV3yFM",
"YoucbIN78Bc",
"X_Oo7HeuF2a",
"VoA-st9_o5U"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the rebuttal. I am satisfied with the new experiments. And thus I am willing to increase my score.",
"This paper tackles the challenging semi-supervised semantic segmentation task, with a focus on dealing with the imbalanced/long-tailed distribution. The authors propose two adaptive data augmentation... | [
-1,
8,
-1,
-1,
-1,
-1,
7,
8
] | [
-1,
4,
-1,
-1,
-1,
-1,
5,
5
] | [
"1MfYzqnniDb",
"nips_2021_5CGPY2VeEGb",
"gbrGSyV3yFM",
"1vKckNQ8_HL",
"VoA-st9_o5U",
"X_Oo7HeuF2a",
"nips_2021_5CGPY2VeEGb",
"nips_2021_5CGPY2VeEGb"
] |
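A minimal sketch of how a confidence bank can be turned into category sampling weights, as the abstract above suggests: under-performing (low-confidence) categories get proportionally more chance of being copied, cut, or sampled. The exact weighting used in the paper may differ.

```python
import numpy as np

def category_sampling_probs(confidence_bank, temperature=1.0):
    conf = np.asarray(confidence_bank, dtype=float)
    weights = (1.0 - conf) ** temperature  # lower confidence -> higher weight
    return weights / weights.sum()

bank = [0.95, 0.90, 0.40, 0.55]   # per-category confidence estimates
print(category_sampling_probs(bank))  # tail categories dominate the sampling
```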
nips_2021_I5f4e3udn2 | On the Bias-Variance-Cost Tradeoff of Stochastic Optimization | We consider stochastic optimization when one only has access to biased stochastic oracles of the objective, and obtaining stochastic gradients with low biases comes at high costs. This setting captures a variety of optimization paradigms widely used in machine learning, such as conditional stochastic optimization, bilevel optimization, and distributionally robust optimization. We examine a family of multi-level Monte Carlo (MLMC) gradient methods that exploit a delicate trade-off among the bias, the variance, and the oracle cost. We provide a systematic study of their convergences and total computation complexities for strongly convex, convex, and nonconvex objectives, and demonstrate their superiority over the naive biased stochastic gradient method. Moreover, when applied to conditional stochastic optimization, the MLMC gradient methods significantly improve the best-known sample complexity in the literature.
| accept | The paper provides a systematic description and analysis of multilevel Monte-Carlo (MLMC) gradient estimators for stochastic convex optimization and describes the application of these estimators to conditional stochastic optimization. Despite some reservations regarding the novelty of some of the developments in the paper, the reviewers appreciated its generality and agreed it could be of value to the NeurIPS community. Furthermore, the reviewers agreed that the experimental results outlined in the author response would be a useful addition to the paper.
In light of this, I recommend that the paper be accepted to NeurIPS and encourage the authors to carefully revise the paper to address the issues brought up during the review. In particular, the authors should include a comprehensive report of their experimental results, consulting the NeurIPS submission checklist to ensure that all the pertinent information is included. Moreover, in cases where the experiments indicate a negative finding (namely, that MLMC does not outperform standard biased SGD in practice), I encourage the authors to clearly and transparently highlight this conclusion, as it is as valuable to researchers as a positive finding.
| train | [
"HvL4JnrpNd",
"auXAR4ZmT3f",
"zYjUJ9LNB76",
"auKdX9yPCt-",
"V_CsNPgh_x5",
"_83RelMOKE5",
"IeRneWxKy_",
"ItP0_EoVjTx",
"950GwxKD8_",
"STTiWgOUG-g"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Reference:\n\nChen, Tianyi, Yuejiao Sun, and Wotao Yin. \"Tighter Analysis of Alternating Stochastic Gradient Method for Stochastic Nested Problems.\" arXiv preprint arXiv:2106.13781 (2021).\n\nYossi Arjevani, Yair Carmon, John C Duchi, Dylan J Foster, Nathan Srebro, and Blake Wood-\n354 worth. Lower bounds for n... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
2
] | [
"zYjUJ9LNB76",
"nips_2021_I5f4e3udn2",
"STTiWgOUG-g",
"IeRneWxKy_",
"950GwxKD8_",
"ItP0_EoVjTx",
"nips_2021_I5f4e3udn2",
"nips_2021_I5f4e3udn2",
"nips_2021_I5f4e3udn2",
"nips_2021_I5f4e3udn2"
] |
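The textbook randomized multilevel Monte Carlo gradient estimator, sketched for context (the paper's schemes and analysis are more refined): `grad_at_level(j)` is assumed to return a biased gradient whose bias shrinks, and whose cost grows, with the level j; sampling a random level and rescaling the correction keeps the small bias of the highest level while paying its cost only rarely.

```python
import numpy as np

def mlmc_gradient(grad_at_level, max_level, rng):
    probs = 2.0 ** -np.arange(max_level + 1)  # geometric level distribution
    probs /= probs.sum()
    j = rng.choice(max_level + 1, p=probs)
    g = grad_at_level(0)
    if j > 0:
        # The rescaled correction makes the estimator match
        # grad_at_level(max_level) in expectation.
        g = g + (grad_at_level(j) - grad_at_level(j - 1)) / probs[j]
    return g

rng = np.random.default_rng(0)
print(mlmc_gradient(lambda j: 1.0 - 2.0 ** -(j + 1), max_level=5, rng=rng))
```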
nips_2021_YV3uoawS5KK | Averaging on the Bures-Wasserstein manifold: dimension-free convergence of gradient descent | We study first-order optimization algorithms for computing the barycenter of Gaussian distributions with respect to the optimal transport metric. Although the objective is geodesically non-convex, Riemannian gradient descent empirically converges rapidly, in fact faster than off-the-shelf methods such as Euclidean gradient descent and SDP solvers. This stands in stark contrast to the best-known theoretical results, which depend exponentially on the dimension. In this work, we prove new geodesic convexity results which provide stronger control of the iterates, yielding a dimension-free convergence rate. Our techniques also enable the analysis of two related notions of averaging, the entropically-regularized barycenter and the geometric median, providing the first convergence guarantees for these problems.
| accept | The paper proves dimension-free convergence rates for the gradient descent algorithm for Bures-Wasserstein barycenter computation. It is a significantly novel result that provides a theoretical basis for behavior that is observed in practice, and as such it should be of interest to many.
| train | [
"qEttPg8YzcY",
"3LuwG3WqSVA",
"XJx_hRasFDf",
"SCuKuoKd-Uo",
"RyZlnIT1Xc6",
"EW_5mMXgxyJ",
"nwc28EMnEhc",
"Eyx6T72bINK",
"yE_xiN2GmJh",
"oWn5sHsj9p1"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" After reading the author's rebuttal, I am completely satisfied by the answers given and pretty sure the authors will make the necessary job to implement them. Therefore, I maintain my score.",
" Thanks to the authors for answering my comments. I have read all reviews and their responses. I think this is a great... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
3,
8,
8
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
1,
5,
3
] | [
"RyZlnIT1Xc6",
"SCuKuoKd-Uo",
"nips_2021_YV3uoawS5KK",
"oWn5sHsj9p1",
"yE_xiN2GmJh",
"Eyx6T72bINK",
"XJx_hRasFDf",
"nips_2021_YV3uoawS5KK",
"nips_2021_YV3uoawS5KK",
"nips_2021_YV3uoawS5KK"
] |
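For concreteness, the classical fixed-point iteration for the Bures-Wasserstein barycenter of Gaussian covariances, which coincides with a Riemannian gradient-descent step of unit step size; this sketch is illustrative and ignores the step-size and convergence considerations that are the paper's actual subject.

```python
import numpy as np
from scipy.linalg import sqrtm

def bw_barycenter(covs, weights, iters=50):
    sigma = np.mean(covs, axis=0)           # any positive-definite init
    for _ in range(iters):
        root = np.real(sqrtm(sigma))
        inv_root = np.linalg.inv(root)
        mid = sum(w * np.real(sqrtm(root @ c @ root))
                  for w, c in zip(weights, covs))
        sigma = inv_root @ mid @ mid @ inv_root  # fixed-point update
    return sigma

covs = [np.diag([1.0, 2.0]), np.diag([3.0, 0.5])]
print(bw_barycenter(covs, [0.5, 0.5]))
```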
nips_2021_cx2q4cOBnne | Reinforcement Learning in Newcomblike Environments | Newcomblike decision problems have been studied extensively in the decision theory literature, but they have so far been largely absent in the reinforcement learning literature. In this paper we study value-based reinforcement learning algorithms in the Newcomblike setting, and answer some of the fundamental theoretical questions about the behaviour of such algorithms in these environments. We show that a value-based reinforcement learning agent cannot converge to a policy that is not \emph{ratifiable}, i.e., does not only choose actions that are optimal given that policy. This gives us a powerful tool for reasoning about the limit behaviour of agents -- for example, it lets us show that there are Newcomblike environments in which a reinforcement learning agent cannot converge to any optimal policy. We show that a ratifiable policy always exists in our setting, but that there are cases in which a reinforcement learning agent normally cannot converge to it (and hence cannot converge at all). We also prove several results about the possible limit behaviours of agents in cases where they do not converge to any policy.
| accept | The reviewers agree that the formalism introduced in the paper is both interesting and potentially useful. Newcomblike decision processes (NDPs) are a generalization of Markov decision processes (MDP). As such, they augment the space of environments we can model and can potentially lead to agents that are better equipped to tackle real-world problems. Since reinforcement learning algorithms used out-of-the-box do not always yield the most desirable behavior, the NDP formalism also invites future research on the design of new algorithms.
We encourage the authors to elaborate on the connections between NDPs and both game theory and multi-agent reinforcement learning, since this point has come up in multiple reviews and was one of the main themes of the discussion phase. A discussion on the societal impact of the proposed approach also seems particularly pertinent. Finally, we also ask the authors to carefully consider the reviewers’ suggestions to improve the presentation.
| train | [
"Hpopxxp_9Pz",
"cyU7zxVtwX5",
"lO_8UQiB1dt",
"oJBO9Wl1bkx",
"YTQblVps4lu",
"pUSyf5GkOY",
"AgDZwU-MyK_",
"lW6kpZFTV3u",
"Er3t8ZDR7cS",
"scQmOVksLK1",
"u5_AGaf4_PB",
"lqRhZ6RFsN",
"eaZLvrvz2ii",
"PdjiBRElKcP",
"19vm5dz1DiM",
"I_L8qHOPjyc",
"Ap_sgVBe1L"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer"
] | [
" Thanks a lot for this suggestion! We agree that this would be helpful. We will add a brief discussion to the introduction, with a forward reference to the planned section on related MARL work.",
"The paper formalizes Newcomblike decision processes (NDPs), a generalization of Markov decision processes in which r... | [
-1,
7,
-1,
7,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
3,
-1,
3,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"lO_8UQiB1dt",
"nips_2021_cx2q4cOBnne",
"lqRhZ6RFsN",
"nips_2021_cx2q4cOBnne",
"pUSyf5GkOY",
"scQmOVksLK1",
"PdjiBRElKcP",
"nips_2021_cx2q4cOBnne",
"I_L8qHOPjyc",
"u5_AGaf4_PB",
"Ap_sgVBe1L",
"oJBO9Wl1bkx",
"cyU7zxVtwX5",
"lW6kpZFTV3u",
"I_L8qHOPjyc",
"nips_2021_cx2q4cOBnne",
"nips_2... |
nips_2021_ch9qlCdrHD7 | Comprehensive Knowledge Distillation with Causal Intervention | Knowledge distillation (KD) addresses model compression by distilling knowledge from a large model (teacher) to a smaller one (student). The existing distillation approaches mainly focus on using different criteria to align the sample representations learned by the student and the teacher, while they fail to transfer the class representations. Good class representations can benefit the sample representation learning by shaping the sample representation distribution. On the other hand, the existing approaches enforce the student to fully imitate the teacher while ignoring the fact that the teacher is typically not perfect. Although the teacher has learned rich and powerful representations, it also contains unignorable bias knowledge which is usually induced by the context prior (e.g., background) in the training data. To address these two issues, in this paper, we propose comprehensive, interventional distillation (CID) that captures both sample and class representations from the teacher while removing the bias with causal intervention. Different from the existing literature that uses the softened logits of the teacher as the training targets, CID considers the softened logits as the context information of an image, which is further used to remove the biased knowledge based on causal inference. Keeping the good representations while removing the bad bias enables CID to have a better generalization ability on test data and a better transferability across different datasets against the existing state-of-the-art approaches, which is demonstrated by extensive experiments on several benchmark datasets.
| accept | The paper is concerned with the distillation/compression of a large teacher model. Besides the transfer of the sample representation, the originality of the approach lies in the transfer of the class representation, and in the use of a causal intervention (CID) aimed at preventing the transfer of the teacher's biases to the student (which can indeed be harmful even when the training and test distributions are the same).
Formally, CID proceeds by aligning $P(outcome | do (input))$ for the teacher and student, using the backdoor adjustment formula to account for the confounders (context of the different classes).
The merits of the approach are supported by extensive empirical comparisons with the state of the art, considering various architectures and datasets, and the relative importance of the three aspects (transferring the sample representation, transferring the class representation, and using interventions to prevent the transfer of the biases) is shown using lesion studies. It is a bit disappointing that the interventional term, which is the most innovative and theoretically grounded one, is found to be the least important one in the lesion studies.
The authors did a very good job of answering the reviews, justifying the choice of the considered architectures, examining the impact of noise, clarifying the difference w.r.t. related works (e.g., Improving Knowledge Distillation via Category Structure) and explaining the discrepancies with the results reported for the baselines in previous papers, depending on the use of pre-trained models.
In summary, this paper tackles a relevant issue in an innovative way, and it is viewed as above the acceptance threshold by all reviewers. | train | [
"oHqDdPgxxKl",
"PXgd7sJdOYd",
"65CkFbUoVWo",
"Pejr-mg6LhR",
"_g3HXOOEayq",
"eWFQA18LiCf",
"ACM_GAqVPbF",
"c0QypZ6wOA3",
"X0qxvJYexGa",
"YLhpS2ueuXm",
"aBeaFxw7yvt"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper focuses on comprehensive knowledge distillation to transfer the class representation. The proposed method aims to improve the performance of the student model by considering the transferring of class representation. It is a novel view, especially with a causal intervention approach. The experiments are ... | [
6,
6,
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
6
] | [
5,
3,
-1,
3,
4,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_ch9qlCdrHD7",
"nips_2021_ch9qlCdrHD7",
"PXgd7sJdOYd",
"nips_2021_ch9qlCdrHD7",
"nips_2021_ch9qlCdrHD7",
"_g3HXOOEayq",
"PXgd7sJdOYd",
"aBeaFxw7yvt",
"Pejr-mg6LhR",
"oHqDdPgxxKl",
"nips_2021_ch9qlCdrHD7"
] |
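The backdoor adjustment that the meta-review above refers to, written out: aligning the interventional distribution for teacher and student requires marginalizing over the confounding context Z (instantiated in the paper via the teacher's softened logits).

```latex
% Backdoor adjustment over the confounding context Z.
P(Y \mid do(X)) \;=\; \sum_{z} P(Y \mid X, Z = z)\, P(Z = z) .
```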
nips_2021_YadmOcMC9aa | Reinforcement Learning with Latent Flow | Temporal information is essential to learning effective policies with Reinforcement Learning (RL). However, current state-of-the-art RL algorithms either assume that such information is given as part of the state space or, when learning from pixels, use the simple heuristic of frame-stacking to implicitly capture temporal information present in the image observations. This heuristic is in contrast to the current paradigm in video classification architectures, which utilize explicit encodings of temporal information through methods such as optical flow and two-stream architectures to achieve state-of-the-art performance. Inspired by leading video classification architectures, we introduce the Flow of Latents for Reinforcement Learning (Flare), a network architecture for RL that explicitly encodes temporal information through latent vector differences. We show that Flare recovers optimal performance in state-based RL without explicit access to the state velocity, solely with positional state information. Flare is the most sample efficient model-free pixel-based RL algorithm on the DeepMind Control suite when evaluated on the 500k and 1M step benchmarks across 5 challenging control tasks, and, when used with Rainbow DQN, outperforms the competitive baseline on Atari games at 100M time step benchmark across 8 challenging games.
| accept | The paper proposes a simple but effective method to increase the sample efficiency of RL in environments with pixel-based observations, by capturing temporal information via differences of consecutive image observation encodings. While, like frame-stacking, the method is expected to be needed and effective only in environments where capturing a non-trivial observation history is necessary for good performance, and despite the performance advantage over alternatives being moderate, the experiments contain an informative ablation study that makes this work useful to other researchers in this field. | train | [
"VYx2C58q7v0",
"TllFIqXx6j",
"4sgciRUDcHB",
"FcIzr7huxOe",
"esQ2aWLhFQF",
"xrLfel5kMOj",
"Bnh8624Hq0t",
"9Tipp1_cfUw",
"8nV2hmAtiS",
"GkpaYsWaPUq"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The rebuttal addresses my minor concerns. After having also read the other reviews and all answers from the authors, I keep my score of 6 (marginally above the acceptance threshold) and recommend acceptance: even though the contribution might be considered as \"simple\", I think that the paper does not have major... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
5,
7,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"Bnh8624Hq0t",
"nips_2021_YadmOcMC9aa",
"FcIzr7huxOe",
"TllFIqXx6j",
"8nV2hmAtiS",
"9Tipp1_cfUw",
"GkpaYsWaPUq",
"nips_2021_YadmOcMC9aa",
"nips_2021_YadmOcMC9aa",
"nips_2021_YadmOcMC9aa"
] |
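The core Flare construction from the abstract above in two lines of NumPy: augment each latent with an explicit temporal signal via latent differences. This is a sketch of the idea only, not the paper's network.

```python
import numpy as np

def flare_features(latents):
    """latents: (T, d) array, one encoded observation per time step."""
    diffs = np.diff(latents, axis=0)                     # z_t - z_{t-1}
    return np.concatenate([latents[1:], diffs], axis=1)  # (T-1, 2d)

z = np.random.default_rng(0).normal(size=(4, 8))
print(flare_features(z).shape)  # (3, 16)
```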
nips_2021_503UwCYEe5 | Understanding How Encoder-Decoder Architectures Attend | Encoder-decoder networks with attention have proven to be a powerful way to solve many sequence-to-sequence tasks. In these networks, attention aligns encoder and decoder states and is often used for visualizing network behavior. However, the mechanisms used by networks to generate appropriate attention matrices are still mysterious. Moreover, how these mechanisms vary depending on the particular architecture used for the encoder and decoder (recurrent, feed-forward, etc.) are also not well understood. In this work, we investigate how encoder-decoder networks solve different sequence-to-sequence tasks. We introduce a way of decomposing hidden states over a sequence into temporal (independent of input) and input-driven (independent of sequence position) components. This reveals how attention matrices are formed: depending on the task requirements, networks rely more heavily on either the temporal or input-driven components. These findings hold across both recurrent and feed-forward architectures despite their differences in forming the temporal components. Overall, our results provide new insight into the inner workings of attention-based encoder-decoder networks.
| accept | Three of the four reviewers rated this paper a 6, while one reviewer still assesses it as below the acceptance threshold.
The concerns that remain relate to the general applicability of the method, but I think the work is in good enough shape for a poster.
The AC recommends acceptance as a poster.
| train | [
"tmWdIMSatkI",
"5jOeoj3-Vt1",
"148xU4BVS2",
"x8Dvcue7b-Z",
"xVPlbbgEq1o",
"-vGWELr_d0",
"gC3CYXwgr4r",
"wt1Lb7Ivcuq",
"ielxfAAFvRK",
"ZYDU7W8eV4t",
"i_Ib_OeXqfl",
"djXnYkp2VTi",
"RXnFUKNEYWz",
"RcZPoqersL2",
"PXQTsAoFARs",
"qEWev2_hdw0"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your responses, I have no further questions!",
" Dear Authors and AC,\n\nI would like to keep my score (5) unchanged. Overall, I think the proposed decomposition method is interesting but the response does not provide enough evidence for alleviating my concern about the real-world applicability. I... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"x8Dvcue7b-Z",
"148xU4BVS2",
"xVPlbbgEq1o",
"ielxfAAFvRK",
"ZYDU7W8eV4t",
"nips_2021_503UwCYEe5",
"RXnFUKNEYWz",
"nips_2021_503UwCYEe5",
"i_Ib_OeXqfl",
"djXnYkp2VTi",
"wt1Lb7Ivcuq",
"qEWev2_hdw0",
"-vGWELr_d0",
"PXQTsAoFARs",
"nips_2021_503UwCYEe5",
"nips_2021_503UwCYEe5"
] |
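A simple estimator in the spirit of the decomposition described in the abstract above: averaging hidden states over many input sequences at each position isolates a temporal (input-independent) component, and the residual carries the input-driven part. The paper's exact estimators may differ.

```python
import numpy as np

def decompose(hidden):
    """hidden: (num_sequences, T, d) array of per-position hidden states."""
    temporal = hidden.mean(axis=0, keepdims=True)  # depends on position only
    input_driven = hidden - temporal               # input-dependent residual
    return temporal.squeeze(0), input_driven

h = np.random.default_rng(0).normal(size=(100, 12, 32))
temporal, inp = decompose(h)
print(temporal.shape, inp.shape)  # (12, 32) (100, 12, 32)
```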
nips_2021__nRSyha2SP | Latent Execution for Neural Program Synthesis Beyond Domain-Specific Languages | Program synthesis from input-output (IO) examples has been a long-standing challenge. While recent works demonstrated limited success on domain-specific languages (DSL), it remains highly challenging to apply them to real-world programming languages, such as C. Due to complicated syntax and token variation, there are three major challenges: (1) unlike many DSLs, programs in languages like C need to compile first and are not executed via interpreters; (2) the program search space grows exponentially when the syntax and semantics of the programming language become more complex; and (3) collecting a large-scale dataset of real-world programs is non-trivial. As a first step to address these challenges, we propose LaSynth and show its efficacy in a restricted-C domain (i.e., C code with tens of tokens, with sequential, branching, loop and simple arithmetic operations but no library call). More specifically, LaSynth learns the latent representation to approximate the execution of partially generated programs, even if they are incomplete in syntax (addressing (1)). The learned execution significantly improves the performance of next token prediction over existing approaches, facilitating search (addressing (2)). Finally, once trained with randomly generated ground-truth programs and their IO pairs, LaSynth can synthesize more concise programs that resemble human-written code. Furthermore, retraining our model with these synthesized programs yields better performance with fewer samples for both Karel and C program synthesis, indicating the promise of leveraging the learned program synthesizer to improve the dataset quality for input-output program synthesis (addressing (3)). When evaluating on whether the program execution outputs match the IO pairs, LaSynth achieves 55.2% accuracy on generating simple C code with tens of tokens including loops and branches, outperforming existing approaches without executors by around 20%.
| accept | OK, the situation with this paper is a bit unusual, owing to the fact that NeurIPS doesn't allow revisions during the reviewing period.
Basically all of the reviewers agreed that the core idea is good, but many of them had serious concerns with scoping and clarity.
I am going to recommend acceptance here, but I want to really strongly urge the authors to address the issues brought up by the reviewers in the final version of the paper. I don't actually have any authority to compel that outcome, so I'm relying on trust here - please do make these changes! | train | [
"GfotAYtGKai",
"WMIMyxIEdHK",
"eADfCTnZarN",
"s7__2-NkO5",
"uVIcB6euBzH",
"um2PBEToZON",
"nnzUqzHNbmS",
"xkTtnNNzrhs",
"OP_UD0qqPCz",
"_p95WeAiRAb",
"3mc1pvaA5Go",
"3mzkPQBeqf",
"KrrydTD2iM",
"GVJo-9gwnhT",
"90B-r9Z3GLy",
"wZE8vUKyiyX"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the follow-up! For Karel, the analogous sum is over all possible numbers of markers $m$ in each grid location, i.e., $m \\in [0, 10]$. While the number of markers in each grid is independent from other grids, the 16-dim vector representations of grid locations are not entirely independent from others. ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"WMIMyxIEdHK",
"3mzkPQBeqf",
"um2PBEToZON",
"OP_UD0qqPCz",
"eADfCTnZarN",
"nnzUqzHNbmS",
"KrrydTD2iM",
"GVJo-9gwnhT",
"90B-r9Z3GLy",
"3mc1pvaA5Go",
"nips_2021__nRSyha2SP",
"wZE8vUKyiyX",
"nips_2021__nRSyha2SP",
"nips_2021__nRSyha2SP",
"nips_2021__nRSyha2SP",
"nips_2021__nRSyha2SP"
] |
nips_2021_HnLDt9v6Q-j | Two steps to risk sensitivity | Distributional reinforcement learning (RL) – in which agents learn about all the possible long-term consequences of their actions, and not just the expected value – is of great recent interest. One of the most important affordances of a distributional view is facilitating a modern, measured, approach to risk when outcomes are not completely certain. By contrast, psychological and neuroscientific investigations into decision making under risk have utilized a variety of more venerable theoretical models such as prospect theory that lack axiomatically desirable properties such as coherence. Here, we consider a particularly relevant risk measure for modeling human and animal planning, called conditional value-at-risk (CVaR), which quantifies worst-case outcomes (e.g., vehicle accidents or predation). We first adopt a conventional distributional approach to CVaR in a sequential setting and reanalyze the choices of human decision-makers in the well-known two-step task, revealing substantial risk aversion that had been lurking under stickiness and perseveration. We then consider a further critical property of risk sensitivity, namely time consistency, showing alternatives to this form of CVaR that enjoy this desirable characteristic. We use simulations to examine settings in which the various forms differ in ways that have implications for human and animal planning and behavior.
| accept | There was extensive discussion on this paper. Perhaps the largest concern among reviewers was the significance of the model fit, though I believe the author response on this issue is satisfactory. While this paper ended up as a split decision among reviewers, some strong support and comments like the following particularly stood out to me: "[the paper shows] how a different kind of model can provide a substantially different interpretation than numerous other studies" and "[the paper could] encourage other researchers to study how the brain implements distributional RL to make everyday decisions". Given that this paper proposes and defends a novel perspective in an interdisciplinary area highly germane to current NeurIPS discourse (also with high potential impact), I believe the paper has passed the bar for NeurIPS acceptance. | test | [
"XpOyzOT_soD",
"kb6pmCdteWK",
"DdGueuy2t-g",
"U-4m533X0WQ",
"OH5KmqJusr",
"T3_RomFvpIs",
"rTWfY-6ipGd",
"9PDRuAqz7xb",
"y2XKyMCXhTe",
"S6WOWuJVnqM"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their efforts to improve the paper in line with my suggestions.\nI would also suggest them to focus a bit more on interpretation of parameter values, especially with regard to differences from learning rate-based interpretation of the experimental results. I see that, rather than the techn... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5,
2
] | [
"OH5KmqJusr",
"DdGueuy2t-g",
"9PDRuAqz7xb",
"S6WOWuJVnqM",
"y2XKyMCXhTe",
"rTWfY-6ipGd",
"nips_2021_HnLDt9v6Q-j",
"nips_2021_HnLDt9v6Q-j",
"nips_2021_HnLDt9v6Q-j",
"nips_2021_HnLDt9v6Q-j"
] |
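For reference, the risk measure at the center of the abstract above in its standard Rockafellar-Uryasev variational form, for a return Z and level alpha in (0, 1]; it averages over the worst alpha-fraction of outcomes.

```latex
% Conditional value-at-risk of a return Z at level \alpha.
\mathrm{CVaR}_{\alpha}(Z) \;=\; \max_{b \in \mathbb{R}}
\Big\{ \, b - \tfrac{1}{\alpha}\, \mathbb{E}\big[(b - Z)_{+}\big] \Big\} .
```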
nips_2021_XN1M27T6uux | DECAF: Generating Fair Synthetic Data Using Causally-Aware Generative Networks | Machine learning models have been criticized for reflecting unfair biases in the training data. Instead of solving for this by introducing fair learning algorithms directly, we focus on generating fair synthetic data, such that any downstream learner is fair. Generating fair synthetic data from unfair data - while remaining truthful to the underlying data-generating process (DGP) - is non-trivial. In this paper, we introduce DECAF: a GAN-based fair synthetic data generator for tabular data. With DECAF we embed the DGP explicitly as a structural causal model in the input layers of the generator, allowing each variable to be reconstructed conditioned on its causal parents. This procedure enables inference time debiasing, where biased edges can be strategically removed for satisfying user-defined fairness requirements. The DECAF framework is versatile and compatible with several popular definitions of fairness. In our experiments, we show that DECAF successfully removes undesired bias and - in contrast to existing methods - is capable of generating high-quality synthetic data. Furthermore, we provide theoretical guarantees on the generator's convergence and the fairness of downstream models.
| accept | While the paper initially had some disagreement in scores, it has now arrived at a unanimous consensus after the authors properly addressed all of the main concerns, in particular regarding the limitations of the underlying assumption made in the paper, and provided more experimental results w.r.t. the suggested baselines. I believe the paper is worth publishing as a poster. | train | [
"DACyQUcHxuC",
"2PNfR-NG_F",
"BfchUhiZtB8",
"eCR2Ifs4TOJ",
"jL_JaGbMEWT",
"cXhMU7f54HP",
"-P94ttUla-0",
"qe6hCMj9vf",
"m72DU1T3adU",
"KvP-sRtVHfo",
"TkjaQZu40vb",
"b_nnTzJdFs",
"nK4Yurkgs8",
"kbLIUPsFK6f",
"-pH63z4_Oxs",
"esXnm-Hv4Xb",
"zAhI919YY6",
"5tEQi16gYNt",
"qKJKhg1gSoK",
... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes DECAF, a fair data generation method using causal DAG structures and GANs. The DAG is used to remove any causal dependencies that may have a negative effect on the fairness and is general enough to model various fairness measures: FTU, DP, and CF. Then synthetic data is generated according to t... | [
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
2
] | [
"nips_2021_XN1M27T6uux",
"nips_2021_XN1M27T6uux",
"eCR2Ifs4TOJ",
"jL_JaGbMEWT",
"nips_2021_XN1M27T6uux",
"DACyQUcHxuC",
"tkO6bortmUF",
"jL_JaGbMEWT",
"DACyQUcHxuC",
"tkO6bortmUF",
"jL_JaGbMEWT",
"nK4Yurkgs8",
"5tEQi16gYNt",
"nips_2021_XN1M27T6uux",
"nips_2021_XN1M27T6uux",
"DACyQUcHxuC... |
nips_2021_YT_hOa02tqO | EvoGrad: Efficient Gradient-Based Meta-Learning and Hyperparameter Optimization | Gradient-based meta-learning and hyperparameter optimization have seen significant progress recently, enabling practical end-to-end training of neural networks together with many hyperparameters. Nevertheless, existing approaches are relatively expensive as they need to compute second-order derivatives and store a longer computational graph. This cost prevents scaling them to larger network architectures. We present EvoGrad, a new approach to meta-learning that draws upon evolutionary techniques to more efficiently compute hypergradients. EvoGrad estimates hypergradient with respect to hyperparameters without calculating second-order gradients, or storing a longer computational graph, leading to significant improvements in efficiency. We evaluate EvoGrad on three substantial recent meta-learning applications, namely cross-domain few-shot learning with feature-wise transformations, noisy label learning with Meta-Weight-Net and low-resource cross-lingual learning with meta representation transformation. The results show that EvoGrad significantly improves efficiency and enables scaling meta-learning to bigger architectures such as from ResNet10 to ResNet34.
| accept | All the reviewers agree on acceptance. This is a clear decision. The authors should take into account the reviewers' feedback to improve the paper for the camera-ready submission. | train | [
"bPYqPnPUM4Y",
"45ht6c6oJ3l",
"zjG_u3yErY8",
"dwPschLPmS",
"rT9HPhGC8DF",
"nOZC7-rqIIn",
"3NEmEDFtbpb",
"ZEv93GQYfif",
"XjKzJX3vB1y",
"zrGwzN3Qwp",
"lP1tZFi0qzL",
"bnxTnH3NHXo",
"auTj9KiZGqL",
"Orsg2_w5e6r",
"fU_TyB_d4Lx",
"7x_B3aRj02",
"ARZI0x0OIXo",
"6GEG1S6mO4D"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the further comments, we respond to them below.\n\nQ5: We agree that each method makes some assumptions, especially the following: \n* IFT: Converged inner loop (definitely violated in practice in real applications, which use it in a short-horizon way), and acceptable accuracy of the Neumann Series/... | [
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"45ht6c6oJ3l",
"7x_B3aRj02",
"Orsg2_w5e6r",
"nips_2021_YT_hOa02tqO",
"3NEmEDFtbpb",
"6GEG1S6mO4D",
"dwPschLPmS",
"nips_2021_YT_hOa02tqO",
"nips_2021_YT_hOa02tqO",
"lP1tZFi0qzL",
"bnxTnH3NHXo",
"auTj9KiZGqL",
"XjKzJX3vB1y",
"6GEG1S6mO4D",
"dwPschLPmS",
"ARZI0x0OIXo",
"nips_2021_YT_hOa... |
nips_2021_sMRdrUIrZbT | Biological key-value memory networks | In neuroscience, classical Hopfield networks are the standard biologically plausible model of long-term memory, relying on Hebbian plasticity for storage and attractor dynamics for recall. In contrast, memory-augmented neural networks in machine learning commonly use a key-value mechanism to store and read out memories in a single step. Such augmented networks achieve impressive feats of memory compared to traditional variants, yet their biological relevance is unclear. We propose an implementation of basic key-value memory that stores inputs using a combination of biologically plausible three-factor plasticity rules. The same rules are recovered when network parameters are meta-learned. Our network performs on par with classical Hopfield networks on autoassociative memory tasks and can be naturally extended to continual recall, heteroassociative memory, and sequence learning. Our results suggest a compelling alternative to the classical Hopfield network as a model of biological long-term memory.
| accept | The significance of the proposal was not satisfactorily established for acceptance. While the committee accepts that the authors' burden is to prove their proposal is an interesting and valid model for biological memory (rather than being competitive with state-of-the-art non-biological memory networks), the experiments were still deemed inadequate in a few different dimensions (see reviewer comments). The metrics for a proposal like this are a bit unclear, but some on the committee would have liked to see scientific merit as evidenced by the proposal being more consistent with actual biological experimental data than past proposals. That was not made clear, and the issue was complicated by the incomplete analytical comparisons to prior work. For example, it would have helped to place this in better context relative to the sparse distributed memory models of Kanerva et al. from twenty years ago, which seemed very similar, though this proposal uses a more biologically complex soft-max than a step function. Further, the sequential KEVIN method was not compelling biologically. Overall, this proposal seemed to fall in the middle ground between biological plausibility and computational efficiency, making it unclear how to judge its overall merits and significance, especially with the lack of standard experimental evidence. | train | [
"jAFAGeAF_J0",
"uKv93is8mAo",
"87l1pAflhML",
"WzUABtARkN",
"vx4jmkcd0uK",
"WM-BuYTMURa",
"bsnFeCu1GV_",
"bzRyY6D6qEc",
"tOXhH-ColmF",
"f1UcxZi9vaY",
"JHBenUZf_Nm",
"xSBnifyLMwt",
"HsHRheqOUmo"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their detailed response.\n\nThe reframing suggested by the authors appears clearer to me and would dispel my concerns about the placement of the proposed method. While not mentioned in the author response explicitly, I still believe that it would be helpful for the reader to also mention t... | [
-1,
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
-1,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"f1UcxZi9vaY",
"nips_2021_sMRdrUIrZbT",
"nips_2021_sMRdrUIrZbT",
"vx4jmkcd0uK",
"bsnFeCu1GV_",
"87l1pAflhML",
"WM-BuYTMURa",
"JHBenUZf_Nm",
"HsHRheqOUmo",
"uKv93is8mAo",
"xSBnifyLMwt",
"nips_2021_sMRdrUIrZbT",
"nips_2021_sMRdrUIrZbT"
] |
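A minimal sketch of the machine-learning-style key-value memory that the abstract above takes as its starting point: writing stores a key-value pair in a slot (the paper instead realizes the write with biologically plausible three-factor plasticity rules), and reading recalls in a single step via softmax attention over keys.

```python
import numpy as np

def softmax(x, beta=8.0):
    e = np.exp(beta * (x - x.max()))
    return e / e.sum()

class KeyValueMemory:
    def __init__(self, n_slots, d):
        self.K = np.zeros((n_slots, d))  # key matrix
        self.V = np.zeros((n_slots, d))  # value matrix
        self.ptr = 0

    def write(self, key, value):
        self.K[self.ptr], self.V[self.ptr] = key, value
        self.ptr = (self.ptr + 1) % len(self.K)

    def read(self, query):
        return softmax(self.K @ query) @ self.V  # one-step recall

rng = np.random.default_rng(0)
mem = KeyValueMemory(16, 8)
k, v = rng.normal(size=8), rng.normal(size=8)
mem.write(k, v)
print(np.allclose(mem.read(k), v, atol=1e-2))  # True: near-exact recall
```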
nips_2021_hg0s8od-jd | Correlated Stochastic Block Models: Exact Graph Matching with Applications to Recovering Communities | We consider the task of learning latent community structure from multiple correlated networks. First, we study the problem of learning the latent vertex correspondence between two edge-correlated stochastic block models, focusing on the regime where the average degree is logarithmic in the number of vertices. We derive the precise information-theoretic threshold for exact recovery: above the threshold there exists an estimator that outputs the true correspondence with probability close to 1, while below it no estimator can recover the true correspondence with probability bounded away from 0. As an application of our results, we show how one can exactly recover the latent communities using \emph{multiple} correlated graphs in parameter regimes where it is information-theoretically impossible to do so using just a single graph.
| accept | This paper presents a fine theoretical contribution showing how an initial step of graph matching of correlated graphs yields improved results when subsequently performing community detection, assuming graphs sampled from a correlated stochastic block model. The reviews are quite positive. Indeed, joint consideration of multiple graphs to perform tasks such as community detection is a very natural strategy, and does not seem to have received much attention in the past. This makes the paper's contribution all the more valuable, hence the recommendation to accept. | train | [
"iqqIZdh-WxL",
"L4C5Rm70_0z",
"qn19UH3Tg3F",
"96qFm3gbEwQ",
"-4sZRYWpmKS",
"ut8HpkyykX",
"xDjFeRyKil-",
"wiMFJoDNCkX"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your review!",
" Thank you for your review!",
" Thank you for your review! The question of approximate community recovery is an interesting one. A related question of possible interest to you is whether approximate recovery of the vertex correspondence is possible. Our results imply that in the ... | [
-1,
-1,
-1,
-1,
7,
8,
8,
7
] | [
-1,
-1,
-1,
-1,
3,
3,
5,
4
] | [
"xDjFeRyKil-",
"ut8HpkyykX",
"wiMFJoDNCkX",
"-4sZRYWpmKS",
"nips_2021_hg0s8od-jd",
"nips_2021_hg0s8od-jd",
"nips_2021_hg0s8od-jd",
"nips_2021_hg0s8od-jd"
] |
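For readers of the record above, the exact-recovery guarantee stated in the abstract can be formalized as follows. This LaTeX fragment only restates the definition in standard notation (with $\pi^*$ the true correspondence and $\hat{\pi}$ an estimator built from the two graphs $G_1, G_2$ on $n$ vertices); the paper's actual threshold quantity is not contained in this record and is not reproduced here.

```latex
% Above the information-theoretic threshold there exists an estimator with
\lim_{n \to \infty} \Pr\big[\hat{\pi}(G_1, G_2) = \pi^*\big] = 1,
% while below the threshold, for every estimator \hat{\pi},
\lim_{n \to \infty} \Pr\big[\hat{\pi}(G_1, G_2) = \pi^*\big] = 0.
```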
nips_2021_x00mCNwbH8Q | Twice regularized MDPs and the equivalence between robustness and regularization | Esther Derman, Matthieu Geist, Shie Mannor | accept | All but one reviewer recommend publication. The one who does not is concerned about the lack of empirical results, but I think this is a topic of interest to the community and it's reasonable to have a purely theoretical paper on this. | train | [
"YbmNk0Kssj",
"f0pTWd64-Yl",
"uZC2TpeS_u1",
"m0NRseDOsBO",
"_I9zp4J2yhE",
"S2k7Yxpuqa0",
"niPv9rG06SA",
"ucqq1cNZYf",
"-HASe4NUgf_",
"rhsyrDy5Wvk",
"KKtcVKvvG1K",
"syz8GOUDlJz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies the connection between robust and regularized MDPs. The authors first show that solving robust MDPs with uncertain rewards is equivalent to solving regularized MDPs with a suitable policy-dependent regularization. Then, they extend this result to MDPs with uncertain transitions, obtaining the equ... | [
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5
] | [
3,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3
] | [
"nips_2021_x00mCNwbH8Q",
"niPv9rG06SA",
"nips_2021_x00mCNwbH8Q",
"_I9zp4J2yhE",
"S2k7Yxpuqa0",
"-HASe4NUgf_",
"YbmNk0Kssj",
"syz8GOUDlJz",
"uZC2TpeS_u1",
"KKtcVKvvG1K",
"nips_2021_x00mCNwbH8Q",
"nips_2021_x00mCNwbH8Q"
] |
nips_2021_9lwprXiGdR4 | Nearly Minimax Optimal Reinforcement Learning for Discounted MDPs | jiafan he, Dongruo Zhou, Quanquan Gu | accept | This paper studies the sample complexity of tabular (finite state/action) reinforcement learning in the infinite-horizon discounted setting, which is a fundamental problem in reinforcement learning theory. The main contribution is to provide improved upper and lower bounds on the sample complexity (w.r.t. the dependence on the horizon parameter (1-gamma)^(-1)) which resolve the optimal sample complexity. The reviewers found these results to be interesting and technically non-trivial and recommend acceptance, but the authors are encouraged to take their suggestions into account and update the paper to better emphasize what the key analysis ideas are and why they are novel. | train | [
"bIZ-hyK7x6t",
"gebPzvGSDM",
"06iCoOTz88s",
"nSIY3tXrUBv",
"1B96KY7uaYB",
"M-Jq202HBXE",
"4XjfBy3aq7w",
"L52vEcGtr5U",
"Ttnw-OrU4h",
"Qyvk5l5u9q4",
"kSlL6EoFeJT",
"6GjInPm6fJF"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you for your positive feedback and for increasing the score. We will prepare the final version by incorporating all the revisions following your comments and suggestions.",
"The authors study the problem of learning a discounted tabular-MDP. They adopt a notion of regret that is closely related to policy ... | [
-1,
7,
-1,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
3,
-1,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"06iCoOTz88s",
"nips_2021_9lwprXiGdR4",
"L52vEcGtr5U",
"nips_2021_9lwprXiGdR4",
"nips_2021_9lwprXiGdR4",
"4XjfBy3aq7w",
"Ttnw-OrU4h",
"gebPzvGSDM",
"nSIY3tXrUBv",
"1B96KY7uaYB",
"6GjInPm6fJF",
"nips_2021_9lwprXiGdR4"
] |
nips_2021_VMAfyuC3uXP | Sparse Deep Learning: A New Framework Immune to Local Traps and Miscalibration | Deep learning has powered recent successes of artificial intelligence (AI). However, the deep neural network, as the basic model of deep learning, has suffered from issues such as local traps and miscalibration. In this paper, we provide a new framework for sparse deep learning, which addresses the above issues in a coherent way. In particular, we lay down a theoretical foundation for sparse deep learning and propose prior annealing algorithms for learning sparse neural networks. The former has successfully tamed the sparse deep neural network into the framework of statistical modeling, enabling prediction uncertainty to be correctly quantified. The latter can be asymptotically guaranteed to converge to the global optimum, enabling the validity of the downstream statistical inference. Numerical results indicate the superiority of the proposed method compared to the existing ones.
| accept | This paper proposed a framework for sparse deep learning, with the primary goal of addressing miscalibration problems. The paper contains both theoretical analysis as well as some experiments to demonstrate the theoretical claims on Bayesian ResNets.
This paper sits at the intersection of network pruning and uncertainty calibration so it might benefit both communities. However, the paper heavily assumes prior knowledge presented in Sun et al. (2021) which is a very recent JASA paper. I would expect this paper to be very difficult to read for deep learning practitioners interested in network pruning.
The paper stated that "This work, together with Sun et al. (2021), has built a solid theoretical foundation for sparse deep learning". Therefore in my view, acceptance decision critically relies on how significant the contribution of this paper is on top of Sun et al. (2021).
- From my understanding after a brief read, the key theoretical result in this paper is the asymptotic consistency/normality estimates for the defined "ground truth" parameters.
- While Sun et al. (2021) established posterior consistency and structure identification consistency, those results concern estimation of the data distribution rather than the underlying parameters.
- Therefore I think the two papers' theoretical results are different aspects of the same framework. This makes me think that in some sense this is a project split into two publications?
Experimentally, the proposed approach and the approach of Sun et al. (2021) perform quite close in many of the experiments.
- This also raises a practical question in that, to what extent it is important to ensure "consistency in parameter estimation"?
- It is even more confusing that the "ground truth" parameters are defined using "universal approximation of neural network", so I would think "consistency in function estimation" might be more important here, which is already discussed in Sun et al. (2021) to certain extent. | train | [
"5DZY5WFVV5F",
"62YRRcyj7fB",
"lb5n7Xrgzlb",
"a7hQF8qM6ef",
"ZTgNgLTa7sc",
"olagynU4zmX",
"3Js9EhDoEK",
"PmCS8gcjiFw",
"ySQab0mD1yY",
"yIZ7Hgt8FXy",
"aNrYGaSI71",
"PZSXdzzAhf"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your reply.\n\nOur intention is to emphasis that we can make uncertainty quantification for deep learning under the sparse network and provide theoretical justification for constructing faithful confidence region, while the theoretical property of some methods aiming at improving calibration(e.g. tempe... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
2,
3
] | [
"a7hQF8qM6ef",
"lb5n7Xrgzlb",
"aNrYGaSI71",
"olagynU4zmX",
"yIZ7Hgt8FXy",
"aNrYGaSI71",
"PZSXdzzAhf",
"ySQab0mD1yY",
"nips_2021_VMAfyuC3uXP",
"nips_2021_VMAfyuC3uXP",
"nips_2021_VMAfyuC3uXP",
"nips_2021_VMAfyuC3uXP"
] |
nips_2021_iFF-zKCgzS | Calibrating Predictions to Decisions: A Novel Approach to Multi-Class Calibration | When facing uncertainty, decision-makers want predictions they can trust. A machine learning provider can convey confidence to decision-makers by guaranteeing their predictions are distribution calibrated---amongst the inputs that receive a predicted vector of class probabilities q, the actual distribution over classes is given by q. For multi-class prediction problems, however, directly optimizing predictions under distribution calibration tends to be infeasible, requiring sample complexity that grows exponentially in the number of classes C. In this work, we introduce a new notion---decision calibration---that requires the predicted distribution and true distribution over classes to be "indistinguishable" to downstream decision-makers. This perspective gives a new characterization of distribution calibration: a predictor is distribution calibrated if and only if it is decision calibrated with respect to all decision-makers. Our main result shows that under a mild restriction, unlike distribution calibration, decision calibration is actually feasible. We design a recalibration algorithm that provably achieves decision calibration efficiently, provided that the decision-makers have a bounded number of actions (e.g., polynomial in C). We validate our recalibration algorithm empirically: compared to existing methods, decision calibration improves decision-making on skin lesion and ImageNet classification with modern neural network predictors.
| accept | While there was not complete consensus on this paper, two reviewers found the work to be valuable -- especially after the author responses, which included clarifications and additional empirical results to address key concerns raised around ablations and robustness. Additionally, I found that the most negative review was also one that provided almost no detail or reasoning, which leads me to discount this review significantly.
I will quote reviewer TRQr's private response within discussion amongst reviewers as a useful summary of the paper's strengths:
"I think that the authors have done a very good job answering all concerns raised by all reviewers and have even run very relevant additional experiments. I am definitely supporting acceptance of this paper because it fills the conceptual gap between confidence calibration and classwise calibration on one side and distribution calibration on the other side. It is not just some arbitrary mathematical generalization, it is exactly the right one in my opinion - taking the perspective of the decision maker is the best thing one can do, because that is what calibration is usually meant to serve."
I do expect that the authors will include the additional results in the final paper, and that they will use the reviewer comments to further strengthen the paper. | train | [
"mg-4-b9wXET",
"_jDUiAjaQB",
"ZS1OP373y7Q",
"Tnb6IRQ5o8c",
"HKKjTmGU7lG",
"WMsZ2I1CjWn",
"zz9irpgBNQh",
"ZvFhwWJHD2N",
"zXqRPX57C2a"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper defines a new notion of decision calibration and demonstrates that approximate $\\mathcal{L}^K$ decision calibration is verifiable and achievable in practice, providing a practical algorithm for it. This is a result with practical calibration guarantees in multi-class calibration, which to my knowledge i... | [
8,
-1,
7,
-1,
-1,
-1,
-1,
-1,
4
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
2
] | [
"nips_2021_iFF-zKCgzS",
"Tnb6IRQ5o8c",
"nips_2021_iFF-zKCgzS",
"HKKjTmGU7lG",
"ZS1OP373y7Q",
"zXqRPX57C2a",
"mg-4-b9wXET",
"ZS1OP373y7Q",
"nips_2021_iFF-zKCgzS"
] |
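As a gloss on the record above: decision calibration asks that, for each downstream decision-maker, the loss a predictor leads them to expect matches the loss they actually incur. The following is a naive empirical check of that gap for decision-makers given as loss matrices; it is an illustrative sketch, not the paper's recalibration algorithm, and the array shapes and function name are assumptions.

```python
import numpy as np

def decision_calibration_gap(q, y_onehot, loss_matrices):
    # q: (n, C) predicted class probabilities; y_onehot: (n, C) true labels.
    # Each loss matrix L has shape (num_actions, C).
    gaps = []
    for L in loss_matrices:
        exp_loss = q @ L.T                           # (n, A) expected loss under q
        a = exp_loss.argmin(axis=1)                  # best-response action per input
        predicted = exp_loss[np.arange(len(q)), a].mean()
        realized = (y_onehot * L[a]).sum(axis=1).mean()
        gaps.append(abs(predicted - realized))
    return max(gaps)
```

A small gap for every loss matrix in the collection means no decision-maker in that collection can distinguish the predicted distribution from the true one, which is the sense of "indistinguishable" used in the abstract.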
nips_2021_L8-54wkift | Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks | Dmitry Kovalev, Elnur Gasanov, Alexander Gasnikov, Peter Richtarik | accept | The paper presents a significant contribution to distributed optimization over time-varying networks. In particular, it establishes a lower bound on complexity by identifying a specific pattern of time-varying networks, and proposes two schemes (one, ADOM+, requiring access to dual gradients, and another one avoiding the need to access such dual gradients) that are proven to match this lower bound. This thus gives a rather complete treatment of the problem of distributed convex optimization for time-varying networks such as those considered in the paper.
While the paper's analysis is made for specific assumptions (e.g., on the type of time-varying networks) and the lower bound is of a worst-case nature, the authors clarified these points, and proposed to develop discussions of these and other points in the final version, which should put the paper in the correct perspective. | val | [
"a_XzIhxVGy",
"-dFWG5lbYBP",
"D9_kRV-1_G6",
"dvlXvDPxlWg",
"rlVuczilo7",
"P6UT3ZB2Cix",
"_NU6XniDrix",
"75hn6wn-TSQ",
"JEefI7Aou_a",
"pPGWTyh9NCA",
"kMofUgublZT",
"Q1BRqSEUXa9",
"JiNMVdH0qtO",
"NqNl0Qot3Bd",
"DJWewDenVzG",
"d0guZGmkMFb",
"JsRILbz6F8k",
"98_Z4BMC7Qf",
"9zq02plMsGq... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_... | [
" Dear reviewers,\n\nWe did not receive any feedback on our author response yet from any reviewer.\n\nPlease can you let us know whether you've read our rebuttal and whether we addressed your concerns? \n\nIf we did not, please let us know what we failed to address appropriately.\n\nThanks!!\n\nAuthors of Paper 975... | [
-1,
5,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8
] | [
-1,
1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_L8-54wkift",
"nips_2021_L8-54wkift",
"dvlXvDPxlWg",
"P6UT3ZB2Cix",
"nips_2021_L8-54wkift",
"JEefI7Aou_a",
"iQDtwQr2Q1o",
"rlVuczilo7",
"-dFWG5lbYBP",
"kMofUgublZT",
"9zq02plMsGq",
"JiNMVdH0qtO",
"nips_2021_L8-54wkift",
"-dFWG5lbYBP",
"iQDtwQr2Q1o",
"rlVuczilo7",
"rlVuczilo... |
nips_2021_sHu8-ux9VH | Testing Probabilistic Circuits | Probabilistic circuits (PCs) are a powerful modeling framework for representing tractable probability distributions over combinatorial spaces. In machine learning and probabilistic programming, one is often interested in understanding whether the distributions learned using PCs are close to the desired distribution. Thus, given two probabilistic circuits, a fundamental problem of interest is to determine whether their distributions are close to each other. The primary contribution of this paper is a closeness test for PCs with respect to the total variation distance metric. Our algorithm utilizes two common PC queries, counting and sampling. In particular, we provide a poly-time probabilistic algorithm to check the closeness of two PCs, when the PCs support tractable approximate counting and sampling. We demonstrate the practical efficiency of our algorithmic framework via a detailed experimental evaluation of a prototype implementation against a set of 375 PC benchmarks. We find that our test correctly decides the closeness of all 375 PCs within 3600 seconds.
| accept | All reviewers recognized the value of having an approximate testing scheme to compute the total variation distance between two weighted d-DNNF formulas.
During the discussion, a number of suggestions to strengthen the work before publication have been highlighted. Namely, these include
- fixing the presentation by properly comparing general probabilistic circuits (PCs, as defined in Choi et al.) and the weighted d-DNNF formulas used in this work, e.g., by discussing how the proposed methods can be extended from such weighted boolean circuits to general PCs
- adding synthetic experiments with small-scale d-DNNF to compare against a ground truth
- adding real-world experiments with PCs learned from data to show generality
- performing a grid search over the various hyperparameters
- refactoring the proofs so as to provide a more high-level introduction
As the authors promised some content/results but did not provide them during the rebuttal phase, the paper is accepted subject to incorporating this content. | train | [
"Q6HUsih7asg",
"mlk5L19Ordo",
"EhsAHq0jvo",
"PS3WjNqzIl8",
"AgdNfFGuLlL",
"a6kFyt9dla7",
"RuawZeGAYUH",
"jx1r5r8Qqsf",
"8Gg_HBjsk1T",
"VXCtSkBfVtH"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer,\n \n Please let us know if there are any remaining concerns that we can address. ",
" Yes, it's a good idea to have a bit more on applications.",
" Thank you for your careful review. We are glad that you appreciated the technical strengths of the work. We will provide a high-level sketch of ... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
3
] | [
"a6kFyt9dla7",
"PS3WjNqzIl8",
"8Gg_HBjsk1T",
"VXCtSkBfVtH",
"RuawZeGAYUH",
"jx1r5r8Qqsf",
"nips_2021_sHu8-ux9VH",
"nips_2021_sHu8-ux9VH",
"nips_2021_sHu8-ux9VH",
"nips_2021_sHu8-ux9VH"
] |
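For intuition about the record above: the two oracle queries the abstract emphasizes, sampling and probability computation ("counting"), are exactly what a Monte-Carlo total-variation estimator needs, via the identity d_TV(P, Q) = E_{x~P}[max(0, 1 - Q(x)/P(x))]. The sketch below is that textbook estimator, not the paper's closeness test with its formal guarantees; the three callables are assumed interfaces.

```python
def tv_estimate(sample_p, prob_p, prob_q, n_samples=10_000):
    # sample_p() draws x ~ P; prob_p(x) and prob_q(x) return (approximate)
    # probabilities of x under P and Q respectively.
    acc = 0.0
    for _ in range(n_samples):
        x = sample_p()
        acc += max(0.0, 1.0 - prob_q(x) / prob_p(x))
    return acc / n_samples
```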
nips_2021_8qa6hkGYDJk | Pseudo-Spherical Contrastive Divergence | Lantao Yu, Jiaming Song, Yang Song, Stefano Ermon | accept | This paper introduced a class of divergences for learning energy-based models. The proposed divergence class has an auxiliary loss parameter that can balance the trade-off between diversity and quality. The method was applied to image benchmark datasets. The reviewers agree that this is an interesting paper with nice contributions. We conclude that the paper is worthy of acceptance. | train | [
"vmuvA9m2Edt",
"DF-gY5uPA5-",
"54jIpH_Olsn",
"4-wv1cwaqc",
"hp0Rt0Ux_Yt",
"R9tHZwx4N-J",
"C78HoX0jxoW",
"7Ru5kEQeqpu",
"2SeWur0hUsm",
"OsgzYpL1n7G",
"ic8FJ5FAUG5",
"O1pwKy9eMjM",
"kQob708U08g",
"8YUNLwpZAz",
"LhFsqnBgSiw"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the feedback and we really appreciate your support for our work!\n\nAbout the observations on the added experimental results:\n- Yes, in the synthetic data (and real data) experiments, we found that traditional CD/MLE method is vulnerable to even a small amount of data contamination (e.g. a ratio of 0.... | [
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"54jIpH_Olsn",
"C78HoX0jxoW",
"ic8FJ5FAUG5",
"hp0Rt0Ux_Yt",
"7Ru5kEQeqpu",
"nips_2021_8qa6hkGYDJk",
"2SeWur0hUsm",
"LhFsqnBgSiw",
"R9tHZwx4N-J",
"8YUNLwpZAz",
"kQob708U08g",
"nips_2021_8qa6hkGYDJk",
"nips_2021_8qa6hkGYDJk",
"nips_2021_8qa6hkGYDJk",
"nips_2021_8qa6hkGYDJk"
] |
nips_2021_RwASmRpLp- | NORESQA: A Framework for Speech Quality Assessment using Non-Matching References | The perceptual task of speech quality assessment (SQA) is challenging for machines. Objective SQA methods that rely on the availability of the corresponding clean reference have been the primary go-to approaches for SQA. Clearly, these methods fail in real-world scenarios where the ground truth clean references are not available. In recent years, non-intrusive methods that train neural networks to predict ratings or scores have attracted much attention, but they suffer from several shortcomings such as lack of robustness, reliance on labeled data for training, and so on. In this work, we propose a new direction for speech quality assessment. Inspired by humans' innate ability to compare and assess the quality of speech signals even when they have non-matching contents, we propose a novel framework that predicts a subjective relative quality score for the given speech signal with respect to any provided reference without using any subjective data. We show that neural networks trained using our framework produce scores that correlate well with subjective mean opinion scores (MOS) and are also competitive with methods such as DNSMOS, which explicitly relies on MOS from humans for training networks. Moreover, our method also provides a natural way to embed quality-related information in neural networks, which we show is helpful for downstream tasks such as speech enhancement.
| accept | Overview: The paper presents a novel and simple method to measure the quality of speech. They compare the input against a small set of examples. The comparison is performed using measures such as SNR and SI-SDR which do not require matching references. This is a big advantage for this sort of measurement. The experimental results bolster the claims in the paper.
Reviews: The reviewers were consistent in appreciating the novelty and simplicity of the proposed approach. The few minor nits and reservations that reviewers shared were adequately addressed by the authors in their response. I would urge the authors to incorporate them into the revised version of the paper. | val | [
"2lidhWhJ1Gz",
"t3Rd6xO46T1",
"6UMpgWfDPhE",
"O382WHI7QPN",
"FueQujnUxD8",
"8lUJPsWk4LM",
"9Lt0Gs0AALM",
"mQpsnm6Fa_o",
"G9HDykf0C-i"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the response. Below, we would like to reply to each point:\n\n1. ***Regarding DNSMOS Dataset***: Please note that the dataset of MOS ratings used in the DNSMOS paper is not open source and we don’t know if it will be released at all. To the best of our knowledge, there is no large-scale ... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"t3Rd6xO46T1",
"O382WHI7QPN",
"O382WHI7QPN",
"G9HDykf0C-i",
"mQpsnm6Fa_o",
"9Lt0Gs0AALM",
"nips_2021_RwASmRpLp-",
"nips_2021_RwASmRpLp-",
"nips_2021_RwASmRpLp-"
] |
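The SI-SDR measure mentioned in this record's meta-review has a standard closed form, reproduced below as a small sketch; it assumes zero-mean 1-D numpy signals of equal length and is independent of the paper's NORESQA framework itself.

```python
import numpy as np

def si_sdr(estimate, reference, eps=1e-8):
    # Scale-invariant signal-to-distortion ratio in dB.
    estimate = estimate - estimate.mean()
    reference = reference - reference.mean()
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference           # scaled projection onto the reference
    noise = estimate - target
    return 10 * np.log10((target @ target) / (noise @ noise + eps))
```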
nips_2021_SI-vB7AYS_c | AFEC: Active Forgetting of Negative Transfer in Continual Learning | Continual learning aims to learn a sequence of tasks from dynamic data distributions. Without access to the old training samples, knowledge transfer from the old tasks to each new task is difficult to determine, which might be either positive or negative. If the old knowledge interferes with the learning of a new task, i.e., the forward knowledge transfer is negative, then precisely remembering the old tasks will further aggravate the interference, thus decreasing the performance of continual learning. By contrast, biological neural networks can actively forget the old knowledge that conflicts with the learning of a new experience, through regulating the learning-triggered synaptic expansion and synaptic convergence. Inspired by biological active forgetting, we propose to actively forget the old knowledge that limits the learning of new tasks to benefit continual learning. Under the framework of Bayesian continual learning, we develop a novel approach named Active Forgetting with synaptic Expansion-Convergence (AFEC). Our method dynamically expands parameters to learn each new task and then selectively combines them, which is formally consistent with the underlying mechanism of biological active forgetting. We extensively evaluate AFEC on a variety of continual learning benchmarks, including CIFAR-10 regression tasks, visual classification tasks and Atari reinforcement tasks, where AFEC effectively improves the learning of new tasks and achieves state-of-the-art performance in a plug-and-play way.
| accept | This work touches on an important problem in Continual Learning that has not been as thoroughly investigated as catastrophic forgetting, namely graceful forgetting (or an active and controlled mode of forgetting) that would allow forward transfer.
While the need for graceful forgetting has been discussed in the past, there are not many methods that manage to do so successfully, and this work proposes a practical way of doing just that.
Given the reviews, and the active participation of the authors in the discussion (including new experiments) I believe the current manuscript is of interest to the community and matches the expectations from the conference. | train | [
"T7H6ye1yK6",
"PcaMPzHMrX",
"hjPtmgUFJqK",
"7LxnUXpYGKp",
"CVwaGhCJ-Uk",
"03c-LNGb-xK",
"Uwnmv2dwNR8",
"TXgAZr5N6n6",
"EXzJ-FiF-SJ",
"GZwsPU7h_2O",
"lLhqI4TcRIj",
"qk1qLL01xfC",
"U7pwRxs7ZkI",
"osAn3hhTTJs",
"aPtDo_WNn4Y",
"T2oE18uxfhA",
"S-yQ89DFX6s",
"Ucknb1eicIK",
"uphNNRHK6bC... | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
... | [
" Thank you so much for your positive feedback! We will incorporate the additional information and improve the paper in the final version.",
"This paper proposes a new approach for continual learning that actively forgets the old knowledge when learning the new tasks. The approach uses Bayesian continual learning... | [
-1,
6,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
-1,
4,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"hjPtmgUFJqK",
"nips_2021_SI-vB7AYS_c",
"CVwaGhCJ-Uk",
"nips_2021_SI-vB7AYS_c",
"p35tC-ZWat",
"nips_2021_SI-vB7AYS_c",
"TXgAZr5N6n6",
"nips_2021_SI-vB7AYS_c",
"aPtDo_WNn4Y",
"qk1qLL01xfC",
"U7pwRxs7ZkI",
"uphNNRHK6bC",
"BHf2DBrux_H",
"aPtDo_WNn4Y",
"Ucknb1eicIK",
"nips_2021_SI-vB7AYS_c... |
nips_2021_q4Dln9kWFA0 | Heterogeneous Multi-player Multi-armed Bandits: Closing the Gap and Generalization | Despite significant interest and much progress in decentralized multi-player multi-armed bandit (MP-MAB) problems in recent years, the regret gap to the natural centralized lower bound in the heterogeneous MP-MAB setting remains open. In this paper, we propose BEACON -- Batched Exploration with Adaptive COmmunicatioN -- that closes this gap. BEACON accomplishes this goal with novel contributions in implicit communication and efficient exploration. For the former, we propose a novel adaptive differential communication (ADC) design that significantly improves the implicit communication efficiency. For the latter, a carefully crafted batched exploration scheme is developed to enable incorporation of the combinatorial upper confidence bound (CUCB) principle. We then generalize the existing linear-reward MP-MAB problems, where the system reward is always the sum of individually collected rewards, to a new MP-MAB problem where the system reward is a general (nonlinear) function of individual rewards. We extend BEACON to solve this problem and prove a logarithmic regret. BEACON bridges the algorithm design and regret analysis of combinatorial MAB (CMAB) and MP-MAB, two largely disjoint areas of MAB, and the results in this paper suggest that this previously ignored connection is worth further investigation.
| accept | This paper looks at the multi-player multi-armed bandit problem with heterogeneous rewards. This specific problem of matching the centralized case with decentralized protocols was still open, whereas it had been settled for homogeneous rewards.
This result requires combining additional, non-trivial techniques from combinatorial bandits, among others, hence meeting the acceptance bar.
| test | [
"GeA7qIBxaMD",
"6yG5ojsI9Q",
"CssXzxzmSn8",
"I1SPCICOj6L",
"C52aldeY4nO",
"3GeMrdIJgnu",
"Io7maU3KZZQ",
"VJE0kJV8m1Z",
"JkWtqRN0E93",
"mLBouzjqAA"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for kindly updating comments. We summarize the reviewer's comments into the following points and would like to make some corresponding clarifications.\n\n- **Point 1 (mixing settings)**: The game is fundamentally changed with implicit communications via collisions and this work mixed the set... | [
-1,
5,
7,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
3,
5,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"6yG5ojsI9Q",
"nips_2021_q4Dln9kWFA0",
"nips_2021_q4Dln9kWFA0",
"nips_2021_q4Dln9kWFA0",
"mLBouzjqAA",
"CssXzxzmSn8",
"JkWtqRN0E93",
"6yG5ojsI9Q",
"nips_2021_q4Dln9kWFA0",
"nips_2021_q4Dln9kWFA0"
] |
nips_2021_zkHlu_3sJYU | SWAD: Domain Generalization by Seeking Flat Minima | Domain generalization (DG) methods aim to achieve generalizability to an unseen target domain by using only training data from the source domains. Although a variety of DG methods have been proposed, a recent study shows that under a fair evaluation protocol, called DomainBed, the simple empirical risk minimization (ERM) approach works comparably to or even outperforms previous methods. Unfortunately, simply solving ERM on a complex, non-convex loss function can easily lead to sub-optimal generalizability by seeking sharp minima. In this paper, we theoretically show that finding flat minima results in a smaller domain generalization gap. We also propose a simple yet effective method, named Stochastic Weight Averaging Densely (SWAD), to find flat minima. SWAD finds flatter minima and suffers less from overfitting than vanilla SWA does, thanks to a dense and overfit-aware stochastic weight sampling strategy. SWAD shows state-of-the-art performance on five DG benchmarks, namely PACS, VLCS, OfficeHome, TerraIncognita, and DomainNet, with consistent and large margins of +1.6% on average in out-of-domain accuracy. We also compare SWAD with conventional generalization methods, such as data augmentation and consistency regularization methods, to verify that the remarkable performance improvements originate from seeking flat minima, not from better in-domain generalizability. Last but not least, SWAD is readily adaptable to existing DG methods without modification; the combination of SWAD and an existing DG method further improves DG performance. Source code is available at https://github.com/khanrc/swad.
| accept | This paper proposes Stochastic Weight Averaging Densely (SWAD) to improve domain generalization. As the name suggests, it is a modification of Stochastic Weight Averaging (SWA) by averaging more densely (every iteration).
Reviewers mostly agree on the strengths and weaknesses of this work. They all believe that this work is a solid empirical contribution. However, there are two major shortcomings: 1- The novelty is very limited given the minimal changes applied to SWA to get SWAD. 2- The theoretical results seem very unrelated to the reasons for the success of SWAD (in particular, as opposed to SWA). I highly recommend rewriting the paper with less emphasis on the theoretical results (maybe moving them to the appendix?).
Overall, given the empirical contributions, I'm slightly inclined to accept the paper. I hope the authors will take the reviewers' suggestions into account to improve the paper. | train | [
"gBOnttx_wS1",
"OEUIkBI_712",
"1JR08lFJZ7n",
"pNqgyHyDEPl",
"DLbZGTdUfe",
"h7pu5gErCqT",
"oQWsJeUJaSu",
"CTSM_Sk_EdA",
"yH0ABhM02ii",
"LdgdXgOYePp",
"OnbbaShrSjk",
"5tGXR4V_Dhq",
"NhxsG3hDWyT",
"MyrN1OtfRX1",
"EYutK7NSLvO",
"NLMEtuQjgt",
"5c5YRIFv-xD",
"3q3xgRZkqZ"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Dear Reviewer aAQf, \n\nWe attached additional experiments on ImageNet robustness benchmarks (ImageNet-C, ImageNet-R, and background challenge).\nWe believe that as the reviewer suggested, SWAD also shows effectiveness on other applications, such as ImageNet tasks.\n\nWe will update our paper to include the resul... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3
] | [
"NhxsG3hDWyT",
"nips_2021_zkHlu_3sJYU",
"h7pu5gErCqT",
"nips_2021_zkHlu_3sJYU",
"nips_2021_zkHlu_3sJYU",
"OnbbaShrSjk",
"NhxsG3hDWyT",
"yH0ABhM02ii",
"NLMEtuQjgt",
"nips_2021_zkHlu_3sJYU",
"MyrN1OtfRX1",
"nips_2021_zkHlu_3sJYU",
"5c5YRIFv-xD",
"OEUIkBI_712",
"3q3xgRZkqZ",
"LdgdXgOYePp"... |
nips_2021_J4gRj6d5Qm | Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting | Extending the forecasting time is a critical demand for real applications, such as extreme weather early warning and long-term energy consumption planning. This paper studies the long-term forecasting problem of time series. Prior Transformer-based models adopt various self-attention mechanisms to discover the long-range dependencies. However, intricate temporal patterns of the long-term future prohibit the model from finding reliable dependencies. Also, Transformers have to adopt sparse versions of point-wise self-attention for efficiency on long series, resulting in an information utilization bottleneck. Going beyond Transformers, we design Autoformer as a novel decomposition architecture with an Auto-Correlation mechanism. We break with the pre-processing convention of series decomposition and renovate it as a basic inner block of deep models. This design empowers Autoformer with progressive decomposition capacities for complex time series. Further, inspired by stochastic process theory, we design the Auto-Correlation mechanism based on the series periodicity, which conducts dependency discovery and representation aggregation at the sub-series level. Auto-Correlation outperforms self-attention in both efficiency and accuracy. In long-term forecasting, Autoformer yields state-of-the-art accuracy, with a 38% relative improvement on six benchmarks, covering five practical applications: energy, traffic, economics, weather and disease. Code is available at this repository: https://github.com/thuml/Autoformer.
| accept | This paper investigates the problem of long-term forecasting for time series models. It proposes a transformer-based architecture with an attention mechanism based on auto-correlation and combines it with a periodicity-based time series decomposition approach. The problem studied is relevant and important. The proposed architecture is novel although the basic concepts it builds on are well known in time series analysis. The proposed approach is technically sound and the empirical evaluation of the approach provides adequate evidence that it has the potential to provide improved performance on the long-term time series forecasting problem relative to existing approaches. Following the author response and discussion, most initial reviewer concerns were addressed and the consensus is that the paper should be accepted. One important point for the authors to clarify is the limitations of the approach with respect to time series with different structural properties (e.g., complex seasonality, quasiperiodicity, absence of periodicity, etc.). | val | [
"-kV4efUnAA",
"hPTB_qYqkUv",
"yz2pD9m3hl1",
"20pSe6PIulL",
"22HUYiYRSy",
"FMC5dBztcJ9",
"3V7EIRdBmiR",
"-lJURSyiJIK",
"0nBrZhBno2q",
"zy65OgIxJNe",
"bv3ywfQTiU",
"ELhweqfCxyR",
"XNHFla7NWl",
"B7USyr-bkud",
"9jEC88KacpK",
"DEIVpnwPpZU"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents Autoformer to perform long-term time series forecasting. The key idea is to leverage an auto-correlation mechanism to discover the sub-series similarity based on the series periodicity and aggregate similar sub-series from underlying periods. The experiment results on several datasets showed th... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
7,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_J4gRj6d5Qm",
"yz2pD9m3hl1",
"20pSe6PIulL",
"3V7EIRdBmiR",
"XNHFla7NWl",
"nips_2021_J4gRj6d5Qm",
"B7USyr-bkud",
"0nBrZhBno2q",
"ELhweqfCxyR",
"nips_2021_J4gRj6d5Qm",
"DEIVpnwPpZU",
"zy65OgIxJNe",
"9jEC88KacpK",
"-kV4efUnAA",
"nips_2021_J4gRj6d5Qm",
"nips_2021_J4gRj6d5Qm"
] |
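Background for the record above: computing autocorrelation efficiently via the FFT (the Wiener-Khinchin route) is the classical building block behind periodicity-based dependency discovery. The sketch below shows only that generic building block for a 1-D, non-constant numpy series, not Autoformer's full attention replacement; the function names and the choice of top-k lags are illustrative.

```python
import numpy as np

def autocorrelation(x):
    # Autocorrelation via FFT; zero-padding avoids circular wrap-around.
    n = len(x)
    f = np.fft.rfft(x - x.mean(), n=2 * n)
    r = np.fft.irfft(f * np.conj(f))[:n]
    return r / r[0]                      # normalize so lag 0 equals 1

def top_periods(x, k=3):
    # Lags with the largest autocorrelation, as candidate periods.
    r = autocorrelation(x)
    return np.argsort(r[1:])[::-1][:k] + 1
```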
nips_2021_luWTh5Q63e | Predicting Event Memorability from Contextual Visual Semantics | Episodic event memory is a key component of human cognition. Predicting event memorability, i.e., to what extent an event is recalled, is a tough challenge in memory research and has profound implications for artificial intelligence. In this study, we investigate factors that affect event memorability according to a cued recall process. Specifically, we explore whether event memorability is contingent on the event context, as well as the intrinsic visual attributes of image cues. We design a novel experiment protocol and conduct a large-scale experiment with 47 elderly subjects over 3 months. Subjects' memory of life events is tested in a cued recall process. Using advanced visual analytics methods, we build a first-of-its-kind event memorability dataset (called R3) with rich information about event context and visual semantic features. Furthermore, we propose a contextual event memory network (CEMNet) that tackles multi-modal input to predict item-wise event memorability, which outperforms competitive benchmarks. The findings inform a deeper understanding of episodic event memory and open up a new avenue for the prediction of human episodic memory. Source code is available at https://github.com/ffzzy840304/Predicting-Event-Memorability.
| accept | The paper studies episodic memory in humans and presents a novel event memorability dataset, along with a memory network (CEMNet) to enable computational studies. All the reviewers are positive about the paper and the authors' responses addressed most of their major concerns. I strongly encourage the authors to make the changes suggested by the reviewers — better description and analysis of the data collected and the process used (ZmrY, 9z5U), writing clarifications to sections 3 and 4 (9z5U, ZmrY), adding ablation studies (wyti, ZmrY) and updating the references (wyti, ZmrY). | train | [
"Nmi49EnXj9R",
"PCcWv767RFfz",
"VJ4PlxqQ9ao",
"IPkqqoZbEqd",
"kHBGRfGTuy",
"nTq9fDF9Q-0",
"zFL_Tsha9CA",
"ZQo66gDCDcD",
"PrDNPCu0Ym",
"hTfnbdqP-j"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thanks for your response. This clarifies most of my concerns. The ordinal approach for scores makes sense, and I am more convinced about the workaround regarding individual differences. I would love to see how this model performs with increased hyperparameter search in a follow-up. Adding the missing references a... | [
-1,
7,
-1,
8,
7,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
-1,
3,
4,
-1,
-1,
-1,
-1,
3
] | [
"PrDNPCu0Ym",
"nips_2021_luWTh5Q63e",
"ZQo66gDCDcD",
"nips_2021_luWTh5Q63e",
"nips_2021_luWTh5Q63e",
"hTfnbdqP-j",
"IPkqqoZbEqd",
"kHBGRfGTuy",
"PCcWv767RFfz",
"nips_2021_luWTh5Q63e"
] |
nips_2021_RJ7XFI15Q8f | Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning | Continual learning (CL) learns a sequence of tasks incrementally with the goal of achieving two main objectives: overcoming catastrophic forgetting (CF) and encouraging knowledge transfer (KT) across tasks. However, most existing techniques focus only on overcoming CF and have no mechanism to encourage KT, and thus do not do well in KT. Although several papers have tried to deal with both CF and KT, our experiments show that they suffer from serious CF when the tasks do not have much shared knowledge. Another observation is that most current CL methods do not use pre-trained models, but it has been shown that such models can significantly improve the end-task performance. For example, in natural language processing, fine-tuning a BERT-like pre-trained language model is one of the most effective approaches. However, for CL, this approach suffers from serious CF. An interesting question is how to make the best use of pre-trained models for CL. This paper proposes a novel model called CTR to solve these problems. Our experimental results demonstrate the effectiveness of CTR.
| accept | The paper studies continual learning of a sequence of natural language processing (NLP) tasks. The authors claim that state-of-the-art approaches that facilitate knowledge transfer (e.g. fine-tuning a BERT-like language model) suffer from serious catastrophic forgetting in the continual learning setting and present a novel model called AFK to achieve knowledge transfer while avoiding catastrophic forgetting in NLP tasks. AFK makes use of a combination of Capsule networks to model each task, a transfer routing algorithm to identify and transfer knowledge across tasks to improve accuracy, and task masks to prevent catastrophic forgetting. The effectiveness of the proposed method is demonstrated in empirical evaluations.
The reviewers raised some points that were satisfactorily addressed in the rebuttal, including additional experiments. The rebuttal helped convince the reviewers of the merits of the paper.
| train | [
"bJRK-1pDbme",
"JxIEWKlMyRa",
"Q50RtLZcDSk",
"3-y5knGgOLi",
"pJLTOzK3TH",
"iM5X35hTUQ",
"nZRFF9zy3Q1",
"azLcPY6zXx5",
"DvITlcK4rD",
"8yB-GciMPe",
"ye6ew-fNR-A",
"cztJmgkgPX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for clarifying your definition of the forward transfer and how it differs from the literature. Also, regarding the backward transfer, I expect authors to explicitly state them in the next version of the paper. Furthermore, comparisons with AdapterFusion should definitely improve this work.",
" Thanks for... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
4,
2
] | [
"DvITlcK4rD",
"azLcPY6zXx5",
"nZRFF9zy3Q1",
"nips_2021_RJ7XFI15Q8f",
"iM5X35hTUQ",
"cztJmgkgPX",
"3-y5knGgOLi",
"ye6ew-fNR-A",
"8yB-GciMPe",
"nips_2021_RJ7XFI15Q8f",
"nips_2021_RJ7XFI15Q8f",
"nips_2021_RJ7XFI15Q8f"
] |
nips_2021_ERzpLwEDOY | Bandits with many optimal arms | Rianne de Heide, James Cheshire, Pierre Ménard, Alexandra Carpentier | accept | All reviewers are positive about this paper. The reviewers agree that the paper shows interesting new ideas in the results (especially ones for best arm identification). The reviewers also raise concerns about practical motivations and the presentation structure of the paper. I think the paper can benefit from improvements in these directions. | train | [
"fjcRB1ey2qo",
"EQ_tsvKtDC8",
"z6rRLp0XNN",
"8XE18fms_7",
"D97vX_tdmOM",
"gKVTk1ud3O2"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper considers infinitely many armed bandit problems where the significant portion is optimal. In particular, p* fraction of the arms are optimal. \nThis paper considered two objectives (total reward optimization and best arm identification). For the first objective, the paper proposes a version of UCB algor... | [
6,
8,
-1,
-1,
-1,
6
] | [
3,
4,
-1,
-1,
-1,
3
] | [
"nips_2021_ERzpLwEDOY",
"nips_2021_ERzpLwEDOY",
"gKVTk1ud3O2",
"fjcRB1ey2qo",
"EQ_tsvKtDC8",
"nips_2021_ERzpLwEDOY"
] |
nips_2021_MQQeeDiO5vv | Combiner: Full Attention Transformer with Sparse Computation Cost | Hongyu Ren, Hanjun Dai, Zihang Dai, Mengjiao Yang, Jure Leskovec, Dale Schuurmans, Bo Dai | accept | This paper proposes a novel efficient Transformer variant, called Combiner, which is a drop-in replacement for attention, achieving full attention with sub-quadratic cost using structured factorization. The reviewers had several concerns regarding the experimental analysis in their initial reviews. During the rebuttal period, the authors provided intensive analysis and addressed most of the concerns from the reviewers.
All reviewers recommend acceptance, and I hope the authors could include the new experimental results and ablation studies in the camera-ready version. | train | [
"NlF61GhoQ0u",
"OrVhIAV4OFE",
"e_HRNTbn6Re",
"BYmjbcMYW82",
"ojyUuNimdNm",
"05G4lMnOksD",
"6xOX-caAFw7",
"kqfbFawEk5T",
"qUlNDkyGwYf",
"S2ciB2JlTn",
"yvTMXyQrsxC",
"NUs5dMp_v_",
"eBDk1GdmSJn",
"XneKEittCld",
"4QcQbXghwjY",
"JYC4YFmgaYd",
"gxVY4EYIxYB",
"-h5HnwArwV-",
"Xrc2SWoKap"... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"a... | [
" Thanks for your clarification. ",
"This paper focuses on the problem of reducing the computational cost of attention in transformers. In particular, it addresses the lack of expressiveness in existing attention approaches that leverage sparsity. To this end, the authors propose to view attention as a conditiona... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
9
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
4
] | [
"JYC4YFmgaYd",
"nips_2021_MQQeeDiO5vv",
"BYmjbcMYW82",
"ojyUuNimdNm",
"05G4lMnOksD",
"6xOX-caAFw7",
"kqfbFawEk5T",
"qUlNDkyGwYf",
"NUs5dMp_v_",
"nips_2021_MQQeeDiO5vv",
"XneKEittCld",
"OrVhIAV4OFE",
"gxVY4EYIxYB",
"S2ciB2JlTn",
"nips_2021_MQQeeDiO5vv",
"Xrc2SWoKap",
"4QcQbXghwjY",
... |
nips_2021_JG-SlCAx5_K | Geometry Processing with Neural Fields | Most existing geometry processing algorithms use meshes as the default shape representation. Manipulating meshes, however, requires one to maintain high quality in the surface discretization. For example, changing the topology of a mesh usually requires additional procedures such as remeshing. This paper instead proposes the use of neural fields for geometry processing. Neural fields can compactly store complicated shapes without spatial discretization. Moreover, neural fields are infinitely differentiable, which allows them to be optimized for objectives that involve higher-order derivatives. This raises the question: can geometry processing be done entirely using neural fields? We introduce loss functions and architectures to show that some of the most challenging geometry processing tasks, such as deformation and filtering, can be done with neural fields. Experimental results show that our methods are on par with the well-established mesh-based methods without committing to a particular surface discretization. Code is available at https://github.com/stevenygd/NFGP.
| accept | This paper produced a substantial amount of discussion before and after the rebuttal, but in the end the decision was to accept this work. Some readers (notably reviewer 7RWH) were excited about the vision presented in this work and possibilities for extension, while some others had concerns about efficiency/practicality.
In the camera-ready, please be more explicit in discussing the limitations of the work (e.g. highlighting that it takes 5 hours for a task that can typically be done in real-time), as well as provide a fair comparison to the baseline to avoid misleading the readers (e.g. running ARAP on essentially degraded meshes). | train | [
"ftFO9-VBN3D",
"PSnEwb1jcq",
"XMn4qqlF8xR",
"isv9cQRr04p",
"6jYfNqIBra",
"Mk300Lxoleu",
"eJa1uPjhtD-",
"29yR4fwmYK",
"zTsbc5ADhfk",
"Wb8R9LKKLQn",
"V8getCjD_QT",
"zZTvgZ8MQa8"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We really appreciate the feedback! We understand your concern and we will include ARAP output as a baseline in the future version.",
"This work introduces a framework for geometry processing on surfaces represented as neural networks (neural implicit fields). \nNamely, authors design two algorithms: one for sur... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
9
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"isv9cQRr04p",
"nips_2021_JG-SlCAx5_K",
"29yR4fwmYK",
"eJa1uPjhtD-",
"zZTvgZ8MQa8",
"Wb8R9LKKLQn",
"V8getCjD_QT",
"PSnEwb1jcq",
"PSnEwb1jcq",
"nips_2021_JG-SlCAx5_K",
"nips_2021_JG-SlCAx5_K",
"nips_2021_JG-SlCAx5_K"
] |
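On the record above: the claim that neural fields are infinitely differentiable is what lets geometry-processing objectives use higher-order derivatives directly. Below is a minimal PyTorch sketch of one such primitive, unit surface normals of a signed-distance network obtained by automatic differentiation; the sdf callable is an assumed interface, and this is an illustration rather than the paper's method.

```python
import torch

def sdf_normals(sdf, points):
    # points: (N, 3) query locations; sdf(points) -> (N, 1) signed distances.
    points = points.detach().requires_grad_(True)
    values = sdf(points)
    grad, = torch.autograd.grad(values.sum(), points, create_graph=True)
    # create_graph=True keeps the graph, so losses on normals (or curvature,
    # via a further differentiation) remain differentiable themselves.
    return grad / grad.norm(dim=-1, keepdim=True)
```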
nips_2021_45GfBQYtYlp | Contextual Recommendations and Low-Regret Cutting-Plane Algorithms | Sreenivas Gollapudi, Guru Guruganesh, Kostas Kollias, Pasin Manurangsi, Renato Leme, Jon Schneider | accept | This paper introduces the problem of contextual recommendation, a variant of the linear contextual bandit problem in which a learner selects actions based on context in order to maximize reward, but rather than observing the reward, observes the identity of the optimal action instead. The reviewers found that the results are novel and technically interesting, and in particular thought that formalizing the problem of regret for cutting-plane algorithms and connecting this to contextual recommendation was valuable, and is likely to find further use. Overall, this is a solid contribution (albeit slightly niche for the NeurIPS community). The authors are encouraged to incorporate the reviewers' suggestions and spend more time motivating the problem, as well as discussing practical aspects of their algorithms (e.g., implementation). | test | [
"P9V3mXcYdCZ",
"pPfYS9QUZe",
"yiZIidLNSYw",
"C8deX7pBFR",
"rpe5hgMcPJK",
"0lwZ0nG-eWC",
"bUawknCoC9y",
"KOgbKq2TfmH",
"7W-GyJNo_2"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Author responses are satisfactory, I keep my score unchanged and vote for accepting the submission.",
"\nThis work studies a variant of contextual linear bandits, where for each episode the learner is presented with a set of possible actions, selects an action, incurs the loss, and observes the best action of t... | [
-1,
7,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
2,
-1,
-1,
-1,
-1,
4,
3,
1
] | [
"0lwZ0nG-eWC",
"nips_2021_45GfBQYtYlp",
"pPfYS9QUZe",
"7W-GyJNo_2",
"KOgbKq2TfmH",
"bUawknCoC9y",
"nips_2021_45GfBQYtYlp",
"nips_2021_45GfBQYtYlp",
"nips_2021_45GfBQYtYlp"
] |
nips_2021_SlxH2AbBBC2 | Speech Separation Using an Asynchronous Fully Recurrent Convolutional Neural Network | Recent advances in the design of neural network architectures, in particular those specialized in modeling sequences, have provided significant improvements in speech separation performance. In this work, we propose to use a bio-inspired architecture called Fully Recurrent Convolutional Neural Network (FRCNN) to solve the separation task. This model contains bottom-up, top-down and lateral connections to fuse information processed at various time-scales represented by stages. In contrast to the traditional approach of updating stages in parallel, we propose to first update the stages one by one in the bottom-up direction, then fuse information from adjacent stages simultaneously and finally fuse information from all stages to the bottom stage together. Experiments showed that this asynchronous updating scheme achieved significantly better results with far fewer parameters than the traditional synchronous updating scheme on speech separation. In addition, the proposed model achieved competitive or better results with high efficiency as compared to other state-of-the-art approaches on two benchmark datasets.
| accept | This paper proposes to use a bio-inspired asynchronous fully recurrent convolutional neural network (A-FRCNN) for speech separation. In contrast to the conventional synchronous update, the authors argue that with asynchronous updates via the bottom-up, top-down and lateral connections in the network, the model can fuse information flows at various time scales for improved learning. Experiments on WHAM! and Librimix demonstrate the effectiveness of the proposed approach. The authors show that A-FRCNN can deliver high-performance results on these two datasets compared to some of the existing best-performing models under the synchronous update mechanism. While some of the results obtained by A-FRCNN are state of the art and some slightly underperform, A-FRCNN also offers high efficiency with fewer parameters. The work is well motivated. In the rebuttal, the authors also addressed numerous concerns raised by the reviewers. Overall, this bio-inspired approach is novel and interesting. I would recommend acceptance. The authors should revise the paper accordingly based on suggestions in the reviews and discussion. | train | [
"D9RERDqNle4",
"Saeu5HrFEW",
"-syvyQjUjl_",
"GRjnbAw1kLA",
"zvAc6Kyzlmr",
"0gAi6-t9lPy",
"DR12djeIwa0",
"Ao97RX9FTtc",
"DBtQN35YwKp",
"mKIIulLhF-x",
"aJ3T1paVPEE",
"yoidmaV4WK9",
"Fn-KgEB4hv",
"WClfDq6_tCa",
"sBUvEoY2lA2",
"XFIuGa72IKa",
"X58pw9tux6f",
"lctK0bwH-r",
"9UJcc5bIs7w"... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
... | [
" Thanks a lot for your help. The latest tables posted in this thread will be added to the main text of the paper, together with discussion on the trade-off between separation performance, model size and computational efficiency. All other comments and suggestions from all reviewers will also be taken into account ... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"Saeu5HrFEW",
"GRjnbAw1kLA",
"nips_2021_SlxH2AbBBC2",
"Ao97RX9FTtc",
"0gAi6-t9lPy",
"DR12djeIwa0",
"mKIIulLhF-x",
"DBtQN35YwKp",
"WClfDq6_tCa",
"aJ3T1paVPEE",
"Fn-KgEB4hv",
"nips_2021_SlxH2AbBBC2",
"lctK0bwH-r",
"sBUvEoY2lA2",
"XFIuGa72IKa",
"-syvyQjUjl_",
"IezYisrA2Ac",
"yoidmaV4W... |
nips_2021_nUtLCcV24hL | Reinforcement Learning Enhanced Explainer for Graph Neural Networks | Graph neural networks (GNNs) have recently emerged as revolutionary technologies for machine learning tasks on graphs. In GNNs, the graph structure is generally incorporated with node representation via the message passing scheme, making the explanation much more challenging. Given a trained GNN model, a GNN explainer aims to identify a most influential subgraph to interpret the prediction of an instance (e.g., a node or a graph), which is essentially a combinatorial optimization problem over the graph. Existing works solve this problem by continuous relaxation or search-based heuristics, but they suffer from key issues such as violating message passing and relying on hand-crafted heuristics, leading to inferior interpretability. To address these issues, we propose an RL-enhanced GNN explainer, RG-Explainer, which consists of three main components: starting point selection, iterative graph generation and stopping criteria learning. RG-Explainer constructs a connected explanatory subgraph by sequentially adding nodes from the boundary of the currently generated graph, which is consistent with the message passing scheme. Further, we design an effective seed locator to select the starting point, and learn stopping criteria to generate superior explanations. Extensive experiments on both synthetic and real datasets show that RG-Explainer outperforms state-of-the-art GNN explainers. Moreover, RG-Explainer can be applied in the inductive setting, demonstrating its better generalization ability.
| accept | The paper addresses the issue of explainability in GNNs, using RL to find explanatory subgraphs. The authors provided an extensive rebuttal including new experimental results requested by the reviewers. The AC believes the authors' responses satisfactorily address the reviewers' comments and recommends acceptance. | train | [
"D0w1jnb4JX5",
"tOWpnw15Htd",
"V8AuHkTvU0K",
"-wwZWpy8n_",
"6x_QzEgVHZ-",
"1O62TY9OyuB",
"ssqUy6QCC5A"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes an RL-based GNN explainer to address the neglected interactions among nodes and edges in the generated subgraph for model interpretation in the existing works. Specifically, the subgraph generation process is modeled as a Markov decision process (MDP) where the state is the currently selected n... | [
6,
-1,
-1,
-1,
-1,
6,
5
] | [
4,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_nUtLCcV24hL",
"V8AuHkTvU0K",
"D0w1jnb4JX5",
"ssqUy6QCC5A",
"1O62TY9OyuB",
"nips_2021_nUtLCcV24hL",
"nips_2021_nUtLCcV24hL"
] |
nips_2021_V8PcLz1NoQ0 | NAS-Bench-x11 and the Power of Learning Curves | While early research in neural architecture search (NAS) required extreme computational resources, the recent releases of tabular and surrogate benchmarks have greatly increased the speed and reproducibility of NAS research. However, two of the most popular benchmarks do not provide the full training information for each architecture. As a result, on these benchmarks it is not possible to evaluate many types of multi-fidelity algorithms, such as learning curve extrapolation, that require evaluating architectures at arbitrary epochs. In this work, we present a method using singular value decomposition and noise modeling to create surrogate benchmarks, NAS-Bench-111, NAS-Bench-311, and NAS-Bench-NLP11, that output the full training information for each architecture, rather than just the final validation accuracy. We demonstrate the power of using the full training information by introducing a learning curve extrapolation framework to modify single-fidelity algorithms, showing that it leads to improvements over popular single-fidelity algorithms which were claimed to be state-of-the-art upon release.
| accept | This paper adds learning curve information to popular NAS benchmarks via a surrogate model. This is valuable to the community since it opens up current NAS benchmarks to multi-fidelity search approaches. Furthermore, it demonstrates a compelling 'template' that can be used for creating new NAS benchmarks on new search spaces where it is not feasible to create tabular benchmarks due to the sheer size of the search space.
During the review process, a number of improvements came about through engagement with the reviewers, especially with NYyv with respect to top-architecture predictions. It is strongly recommended that the authors include this new information in the paper. In this vein, the paper should clearly mark the limitations of this benchmark, how NAS researchers should consume it, and how they should interpret the outcomes of their search algorithms.
This benchmark is going to be widely used by the community, so the quality of the software release and ease of use is quite critical. Please pay a lot of attention to making sure that the package can be installed and run easily via standard Python package distribution channels like pip/conda, that the associated data/surrogate models can be installed in one click, and that there are plenty of examples to showcase how to use the benchmark.
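As a rough illustration of the surrogate construction described in this row's abstract (SVD plus noise modeling): learning curves are compressed to a few SVD coefficients, a regressor maps architecture encodings to those coefficients, and full curves are reconstructed from the predictions. The ridge regressor, random encodings, and synthetic curves below are placeholder assumptions; the released benchmarks use stronger predictors plus an explicit noise model on top.

```python
import numpy as np
from sklearn.linear_model import Ridge

# curves: (n_archs, n_epochs) validation-accuracy curves; X: architecture encodings.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))
curves = 0.9 - 0.5 * np.exp(-np.linspace(0, 5, 100))[None, :] * (1 + 0.1 * X[:, :1])

k = 5                                     # number of singular components kept
U, S, Vt = np.linalg.svd(curves - curves.mean(0), full_matrices=False)
coeffs = U[:, :k] * S[:k]                 # per-architecture curve coefficients

reg = Ridge().fit(X, coeffs)              # predict coefficients from encodings
pred_curves = reg.predict(X) @ Vt[:k] + curves.mean(0)   # reconstruct full curves
```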
| test | [
"x5Ai7ylxcdU",
"lnAxdznK-ek",
"hhqnXtA0thR",
"rNf6QEIG6GR",
"eT9ZZs1FPou",
"AAmkgad38p6",
"VsIzGqVaywZ",
"gn64rXTQmxG",
"wqoqIWJEeFi",
"cNtroj2h8Fb",
"yTxVdlTMNyl",
"QHcyDfec-wO",
"zEQX4S6yKYf",
"MZ-nyMgoI8F",
"ZRica_F5P6e",
"85pv73Y-xV",
"VR9hBdu8bGQ",
"Us800Q_124q",
"hobGmWmv8G... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_r... | [
" We would like to thank all reviewers once again for the initial reviews and lively discussions during the rebuttal period, which has helped to further improve and refine our paper.",
" Thank you, now we understand your concern.\n\n*What is your definition of the best architecture?* In our last two comments, we ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
4
] | [
"VR9hBdu8bGQ",
"hhqnXtA0thR",
"gn64rXTQmxG",
"85pv73Y-xV",
"zEQX4S6yKYf",
"Us800Q_124q",
"hobGmWmv8G-",
"wqoqIWJEeFi",
"yTxVdlTMNyl",
"yTxVdlTMNyl",
"QHcyDfec-wO",
"MZ-nyMgoI8F",
"BF1Ppe8n_Q6",
"0r08DKCXCen",
"hobGmWmv8G-",
"Us800Q_124q",
"nips_2021_V8PcLz1NoQ0",
"nips_2021_V8PcLz1... |
nips_2021_8gmBGNeOfT | Observation-Free Attacks on Stochastic Bandits | Yinglun Xu, Bhuvesh Kumar, Jacob D. Abernethy | accept | This paper is the first one to provide an observation-free attack against some of the well-known bandit algorithms. While there is much existing work on adversarial attacks against bandit algorithms, those attacks assume a stronger attacker model which can observe the behaviour of the algorithm and then adapt the future attacks accordingly. This paper relaxes this attacker model by not allowing such observation, thus making the attack much harder. The main research question is then whether all the bandit algorithms can be attacked successfully under this attack scheme. The paper provides a partial answer to this question by showing that mean-based bandit algorithms such as UCB and TS are still vulnerable against this attack.
As someone quite familiar with the topic of adversarial attacks on bandits, I find this paper to be very interesting. In fact, it introduces a 3rd attacker model (the other 2 are the strong attacker model, where the manipulation happens after observing the action of the algorithm, and the weak attacker model, where the attack has to happen before the action, but the action is still observed). By introducing this new model, the authors managed to show that there are indeed differences in the behaviour of different bandit algorithms. In fact, some algorithms are vulnerable against this attack, while others are not. Note that with the other 2 attacker models all the algorithms would be vulnerable in the same way. This finding provides new insights into the way bandit algorithms work.
In more detail, for the other 2 (existing) attacker models, there are universal and successful attack strategies. Thus, in those attacker models it's hard to really understand whether there are differences between the behaviour of each bandit algorithm against the attack. On the other hand, in this new attacker model of this paper, there is a clear difference (i.e., mean-based vs. non mean-based). I believe this new insight makes the paper's contribution even more interesting. As such, I believe this work is quite interesting and is worth being presented at the conference.
Regarding the comments and criticisms from the other reviewers: I don't agree with the comments regarding the technical issues. I went through the proofs myself and I found the authors' responses to be helpful in clarifying the concerns. I also disagree that this model is very similar to the adversarial bandit setting, as the underlying regret notion is not exactly the same (in adversarial bandits we measure the regret based on the observed feedback, assuming that it is correct), while in attacks on bandit problems the regret is measured on the true (and unobservable) rewards. It is true that if the contamination budget is small (e.g., O(logT)) then Exp3 and other adversarial bandit algorithms would recover well. But again, they are not mean-based methods.
I have one criticism though: From the authors' responses and comments, it seems that it is not too difficult to show that non mean-based methods such as Exp3 are still robust against this type of attacker model. If the authors can add this to the paper, that would make the story complete. Therefore my recommendation of acceptance would rely on the assumption that this argument will be added to the final version of the paper. | train | [
"uIVAmux7bnk",
"DfF_xsr0E6I",
"gSAVElSQvH7",
"xXPpnEUJwem",
"zKLT8JAdYMe",
"LeM1zxjqkfL",
"cdqn5REqQs"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes a corruption-based attacking framework for a general kind of MAB learning policies (which is called mean-based algorithms). They show that their framework can enforce all the mean-based algorithms to pull a target arm (which is not the optimal one) for $\\Theta(T)$ number of times with only $o(... | [
6,
5,
-1,
-1,
-1,
-1,
4
] | [
5,
4,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_8gmBGNeOfT",
"nips_2021_8gmBGNeOfT",
"xXPpnEUJwem",
"DfF_xsr0E6I",
"cdqn5REqQs",
"uIVAmux7bnk",
"nips_2021_8gmBGNeOfT"
] |
nips_2021_ThbM9_6DNU | Learning Disentangled Behavior Embeddings | To understand the relationship between behavior and neural activity, experiments in neuroscience often include an animal performing a repeated behavior such as a motor task. Recent progress in computer vision and deep learning has shown great potential in the automated analysis of behavior by leveraging large and high-quality video datasets. In this paper, we design Disentangled Behavior Embedding (DBE) to learn robust behavioral embeddings from unlabeled, multi-view, high-resolution behavioral videos across different animals and multiple sessions. We further combine DBE with a stochastic temporal model to propose Variational Disentangled Behavior Embedding (VDBE), an end-to-end approach that learns meaningful discrete behavior representations and generates interpretable behavioral videos. Our models learn consistent behavior representations by explicitly disentangling the dynamic behavioral factors (pose) from time-invariant, non-behavioral nuisance factors (context) in a deep autoencoder, and exploit the temporal structures of pose dynamics. Compared to competing approaches, DBE and VDBE enjoy superior performance on downstream tasks such as fine-grained behavioral motif generation and behavior decoding.
| accept | This paper proposes to learn disentangled embeddings of behavioural video, separating time-dependent dynamic factors (called pose) from other factors (called context), in an unsupervised manner.
The motivation, model setup, objective, and experimental setup are all very well described, evaluated, and analysed. The model in particular is motivated very well given the specific application in consideration (behavioural video), and the ablations performed serve to highlight the advantages of the main components. The analyses also serve to highlight the effects of the disentanglement provided by the model.
I would strongly encourage the authors to carefully consider the reviewers' comments about edits and corrections, and ensure that these are incorporated in the updated manuscript.
Overall, the reviewers agree that this is a very good piece of work and based on these merits, the paper should be accepted for publication.
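As a heavily simplified illustration of the pose/context factorization the reviewers describe: one encoder emits a per-frame "pose" code, while a "context" code is computed once per video (here, by averaging frame features) and broadcast to all frames before decoding. This is a sketch of the disentanglement idea only, using linear layers on precomputed frame features; it is not the DBE/VDBE architecture.

```python
import torch, torch.nn as nn

class PoseContextAE(nn.Module):
    def __init__(self, d_in=128, d_pose=8, d_ctx=16):
        super().__init__()
        self.enc_pose = nn.Linear(d_in, d_pose)            # time-varying "pose"
        self.enc_ctx = nn.Linear(d_in, d_ctx)              # time-invariant "context"
        self.dec = nn.Linear(d_pose + d_ctx, d_in)

    def forward(self, frames):                             # frames: (T, d_in), one video
        pose = self.enc_pose(frames)                       # one code per frame
        ctx = self.enc_ctx(frames).mean(0, keepdim=True)   # one code per video
        z = torch.cat([pose, ctx.expand(len(frames), -1)], dim=-1)
        return self.dec(z), pose, ctx

frames = torch.randn(30, 128)                              # 30 frames of features
recon, pose, ctx = PoseContextAE()(frames)
loss = ((recon - frames) ** 2).mean()                      # reconstruction objective
```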
| train | [
"PH1UTH_EZM",
"jVEZ3-mspwG",
"CTSTZYfdFcB",
"QzHU4aL58C1",
"ch0NivzIg38",
"fZmbEFlN62r",
"hkf9UHCQod",
"ytv92r7P9G4",
"ttIHsxaMSt",
"szL4Ur3tid_"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces a method named Disentangled Autoencoder (DisAE) to learn disentangled pose and context representations from videos. By keeping the pose component static based on a key frame and by extracting temporal context components, the proposed DBE model starts to generate both continuous and discrete r... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
8
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"nips_2021_ThbM9_6DNU",
"CTSTZYfdFcB",
"ttIHsxaMSt",
"szL4Ur3tid_",
"ytv92r7P9G4",
"PH1UTH_EZM",
"nips_2021_ThbM9_6DNU",
"nips_2021_ThbM9_6DNU",
"nips_2021_ThbM9_6DNU",
"nips_2021_ThbM9_6DNU"
] |
nips_2021_wtLW-Amuds | The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning | In complex systems, we often observe complex global behavior emerge from a collection of agents interacting with each other in their environment, with each individual agent acting only on locally available information, without knowing the full picture. Such systems have inspired development of artificial intelligence algorithms in areas such as swarm optimization and cellular automata. Motivated by the emergence of collective behavior from complex cellular systems, we build systems that feed each sensory input from the environment into distinct, but identical neural networks, each with no fixed relationship with one another. We show that these sensory networks can be trained to integrate information received locally, and through communication via an attention mechanism, can collectively produce a globally coherent policy. Moreover, the system can still perform its task even if the ordering of its inputs is randomly permuted several times during an episode. These permutation invariant systems also display useful robustness and generalization properties that are broadly applicable. Interactive demo and videos of our results: https://attentionneuron.github.io
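The permutation invariance described in this abstract can be reproduced with a small module: every sensory channel passes through the same shared embedding, and a set of learned, input-independent queries attends over the resulting set, so shuffling the channels permutes keys and values together and leaves the pooled output unchanged. A minimal PyTorch sketch (not the paper's exact AttentionNeuron module):

```python
import torch, torch.nn as nn

class PermutationInvariantEncoder(nn.Module):
    def __init__(self, d_obs=1, d_model=32, n_queries=16):
        super().__init__()
        self.embed = nn.Linear(d_obs, d_model)                   # shared across sensors
        self.q = nn.Parameter(torch.randn(n_queries, d_model))   # input-independent queries

    def forward(self, obs):                   # obs: (n_sensors, d_obs), in any order
        kv = self.embed(obs)                  # (n_sensors, d_model)
        attn = torch.softmax(self.q @ kv.T / kv.shape[-1] ** 0.5, dim=-1)
        return attn @ kv                      # (n_queries, d_model), order-invariant

# Numerical check: shuffling the sensor channels leaves the output unchanged.
enc = PermutationInvariantEncoder()
x = torch.randn(64, 1)
assert torch.allclose(enc(x), enc(x[torch.randperm(64)]), atol=1e-5)
```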
| accept | I thank the authors for their submission and active participation in the discussions. Reviewer dfwx has pledged that with the revisions by the authors, they would raise their score. Since reviewer dfwx has not done that yet and did not actively participate in the discussion with the other reviewers, I assume a higher rating than the one currently recorded. Taking this into account, it seems reviewers unanimously agree that this paper is worthy of publication. In particular, reviewers appreciated the diverse set of environments used for evaluation [rWCR,TUZP], that it is well executed [Bz2K] and that it presents an original synergy between existing techniques [Bz2K,TUZP]. During rebuttal and discussions, reviewer rWCR's concerns regarding baselines were addressed and reviewers TUZP and Bz2K agreed that the paper is improved based on the author rebuttal. Thus, I recommend acceptance and encourage the authors to further improve the clarity of their paper based on the reviewer feedback. | train | [
"cuElRNecla",
"qWr-ItHE46w",
"0v893gvvGtP",
"KsDZzoHY_nX",
"iRhoFxh1Lgn",
"Xl11HssHDa",
"aad55Zk5WNO",
"l0Oh_6KtQ7e",
"5sGiM4T2_9Y",
"j82ZbnJy39",
"NlsRc5iZoQw",
"huLzum2r9ye",
"dEGlwUrGWrl",
"2pNNjutTx95",
"Doh4a3Akftw",
"rX6mmPiA1G3",
"qRcecezV-yA"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper extends the idea of set transformer to motor control, so that an agent can perform well even when its sensory input channels are shuffled and partly presented.\nThe performance is verified in cart-pole, ant, pong, and car racing tasks by using evolutionary optimization and behavioral closing of pre-trai... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
8
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_wtLW-Amuds",
"0v893gvvGtP",
"KsDZzoHY_nX",
"iRhoFxh1Lgn",
"Xl11HssHDa",
"dEGlwUrGWrl",
"NlsRc5iZoQw",
"5sGiM4T2_9Y",
"2pNNjutTx95",
"nips_2021_wtLW-Amuds",
"qRcecezV-yA",
"rX6mmPiA1G3",
"cuElRNecla",
"j82ZbnJy39",
"nips_2021_wtLW-Amuds",
"nips_2021_wtLW-Amuds",
"nips_2021_... |
nips_2021_U7vVeHydyR | Fast Extra Gradient Methods for Smooth Structured Nonconvex-Nonconcave Minimax Problems | Sucheol Lee, Donghwan Kim | accept | This paper proposes a new algorithm that achieves a 1/k^2 convergence rate for a class of structured smooth nonconvex-nonconcave minimax optimization problems, in terms of the squared gradient norm, improving over the currently known 1/k rate algorithm for the same problem. While the paper does not provide any practical example that falls into the class being analyzed here, the reviewers felt that the paper makes interesting theoretical progress on the hard and challenging problem of nonconvex-nonconcave minimax optimization. Furthermore, the techniques developed in this paper can potentially lead to further work in this area. Consequently, I am recommending the paper for acceptance. | train | [
"Jm_Ms5NjhZN",
"_Y1-jSUhXpR",
"ob-CBlrUrXp",
"mB1-Hj3S8m",
"31UxgcFiwqr",
"jJV_h8PLKsO",
"TOtn_ZX1iva",
"lB9VNrTl4pT",
"yayhmKtT3lN",
"m6vlwTEvVOV",
"dJtf-oZ3Fex",
"dnTb1iAjXTJ"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are happy to hear that you found our response satisfactory. We realized that our statement was a bit strong to make a point, and we agree to your statement that the regime of $\\rho\\le -1/L$ should be of interest in future. We will make sure to revise the paper so that such limitation is clearly acknowledged ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
3
] | [
"ob-CBlrUrXp",
"mB1-Hj3S8m",
"TOtn_ZX1iva",
"lB9VNrTl4pT",
"dnTb1iAjXTJ",
"m6vlwTEvVOV",
"dJtf-oZ3Fex",
"yayhmKtT3lN",
"nips_2021_U7vVeHydyR",
"nips_2021_U7vVeHydyR",
"nips_2021_U7vVeHydyR",
"nips_2021_U7vVeHydyR"
] |
nips_2021_5-iRjd9FreV | Analysis of Sensing Spectral for Signal Recovery under a Generalized Linear Model | Junjie Ma, Ji Xu, Arian Maleki | accept | In this paper the authors study the impact of the choice of the sensing matrix spectrum in nonlinear inverse problems. In the first three sections, the authors present the information theoretical limits to perfect recovery, the expectation propagation (EP) message-passing type algorithm to solve such nonlinear inverse problems, and its MSE performance predicted by the state evolution formalism. The main result is presented in the last section, where the impact of the spectrum is investigated, both in terms of the MSE performance and the measurement threshold for perfect recovery. This impact is shown to depend on the type of nonlinearity used in the inverse problem.
The consensus among the reviewers was that the paper was overall well-written, and above the acceptance threshold of NeurIPS 2021. It was judged to be of interest to the community of information theory and statistical-physics approaches to learning. A number of comments and criticisms were expressed during the rebuttal, especially concerning the discussion of the results. For instance, the title was judged misleading, as the emphasis on the spectrum should be clearer.
During the rebuttal the authors seem to have answered most of the criticisms in a satisfactory way to a majority of reviewers. They also shared additional results from their numerical experiments. We believe these new results should definitely be a part of the paper. The reviewers believe that these experiments and their discussion can substantially improve the paper, and should not only be included in the supplement, but should also be fairly discussed in the new version of the main paper, as they answer an important criticism.
Overall, there is agreement among the reviewers (that is, all but one) to accept the paper for publication at NeurIPS. Given the good grades, and the fact that the authors successfully answered most criticisms from the referees, I therefore recommend acceptance to NeurIPS, and that the authors proceed with the promised changes to the paper.
| train | [
"KHCr7dyZFCR",
"d0yFk28mDO",
"TpI7RW0sp2k",
"aG7smo33DAf",
"0un6G6KagsC",
"rUK72IihWSA",
"qfdh7PcxTs",
"xtxTTTCkEw",
"FpyVcQxVxsL",
"OdiJcWp4NN6",
"-dTefgxffDV",
"FCBgFgqUwGV",
"FyJzgPf-QZ1",
"pO82k2fVr2q",
"Pn360X-Gh6_",
"zyDE7frg5Xr",
"708aYSZruuB",
"whj7K7mUaVI",
"vowi07yxvGT"... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"au... | [
"This paper studies the influence of the sensing matrix on the recovery of signals in noiseless generalized linear models The authors define a mathematical property of a given spectrum, that they call “spikiness”. Given a specific activation function, the authors characterize the behaviour of the MSE and of perfect... | [
7,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"nips_2021_5-iRjd9FreV",
"rUK72IihWSA",
"0un6G6KagsC",
"nips_2021_5-iRjd9FreV",
"FpyVcQxVxsL",
"FCBgFgqUwGV",
"OdiJcWp4NN6",
"pO82k2fVr2q",
"Pn360X-Gh6_",
"V8k1yrLXcT",
"nips_2021_5-iRjd9FreV",
"708aYSZruuB",
"nips_2021_5-iRjd9FreV",
"vowi07yxvGT",
"uDnC8y1f-aT",
"whj7K7mUaVI",
"KHCr... |
nips_2021_dsmxf7FKiaY | Revisiting ResNets: Improved Training and Scaling Strategies | Novel computer vision architectures monopolize the spotlight, but the impact of the model architecture is often conflated with simultaneous changes to training methodology and scaling strategies. Our work revisits the canonical ResNet and studies these three aspects in an effort to disentangle them. Perhaps surprisingly, we find that training and scaling strategies may matter more than architectural changes, and further, that the resulting ResNets match recent state-of-the-art models. We show that the best performing scaling strategy depends on the training regime and offer two new scaling strategies: (1) scale model depth in regimes where overfitting can occur (width scaling is preferable otherwise); (2) increase image resolution more slowly than previously recommended. Using improved training and scaling strategies, we design a family of ResNet architectures, ResNet-RS, which are 1.7x - 2.7x faster than EfficientNets on TPUs, while achieving similar accuracies on ImageNet. In a large-scale semi-supervised learning setup, ResNet-RS achieves 86.2% top-1 ImageNet accuracy, while being 4.7x faster than EfficientNet-NoisyStudent. The training techniques improve transfer performance on a suite of downstream tasks (rivaling state-of-the-art self-supervised algorithms) and extend to video classification on Kinetics-400. We recommend practitioners use these simple revised ResNets as baselines for future research.
| accept | All reviewers unanimously recommend acceptance of the paper. The paper is well written and instructive about the importance of training and scaling strategies. It may deserve some elevated attention as a spotlight contribution. | train | [
"CauKZwHWMix",
"nQX6cGI0_eb",
"ap5k2duZXtJ",
"GR6u1NCnbdf",
"8ItLxydzFHK",
"Lksj3lURgIk",
"Q0Ph57xIc6D",
"xw8FSfjz7HF",
"vx1o6D0zd9N",
"pQ6f7A9Sge",
"mATkrdOrU0q"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response. The rebuttal have answered most of my concerns. So I will keep my initial score which is leaning towards accepting the paper.",
"This paper revisits the performance of ResNet from three aspects, architecture, training scheme, and scaling strategy. The author shows that a better trainin... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"vx1o6D0zd9N",
"nips_2021_dsmxf7FKiaY",
"GR6u1NCnbdf",
"8ItLxydzFHK",
"xw8FSfjz7HF",
"Q0Ph57xIc6D",
"pQ6f7A9Sge",
"nQX6cGI0_eb",
"mATkrdOrU0q",
"nips_2021_dsmxf7FKiaY",
"nips_2021_dsmxf7FKiaY"
] |
nips_2021__1VZo_-aUiT | Sparse Flows: Pruning Continuous-depth Models | Continuous deep learning architectures enable learning of flexible probabilistic models for predictive modeling as neural ordinary differential equations (ODEs), and for generative modeling as continuous normalizing flows. In this work, we design a framework to decipher the internal dynamics of these continuous depth models by pruning their network architectures. Our empirical results suggest that pruning improves generalization for neural ODEs in generative modeling. We empirically show that the improvement is because pruning helps avoid mode collapse and flattens the loss surface. Moreover, pruning finds efficient neural ODE representations with up to 98% fewer parameters compared to the original network, without loss of accuracy. We hope our results will invigorate further research into the performance-size trade-offs of modern continuous-depth models.
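A minimal sketch of the kind of pruning studied here: global L1 (magnitude) pruning applied to the weights of an ODE vector field via PyTorch's built-in pruning utilities. The two-layer network is a stand-in for the networks inside FFJORD-style flows; the iterative prune/retrain schedule and the CNF training loop are omitted.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Stand-in vector field f(t, x) for dx/dt = f(t, x).
odefunc = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))

params = [(m, "weight") for m in odefunc if isinstance(m, nn.Linear)]
prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=0.9)

kept = sum(int(m.weight_mask.sum()) for m, _ in params)
total = sum(m.weight_mask.numel() for m, _ in params)
print(f"remaining weights: {kept}/{total}")   # ~10% left after 90% pruning
```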
| accept | The following is a summary of the pros and cons raised by the reviewers:
pros:
* complete work (nNvs)
* Method and background sections are clearly written. (nNvs, AFJg)
* application of pruning to neural ODEs leads to several practical benefits, such as improved generalization and a reduced parameter count, and its simplicity is a plus (nNvs, uTzg, LH4D)
cons:
* Requires additional experimental results:
- wall-clock time and number of function evaluations during training and inference should be reported to see how pruning influences
this. --> preliminary results described in rebuttal, no reply from reviewer. (nNvs)
- More evidence is needed to support claims such as flattening the loss surface. --> addressed during discussion period. (LH4D)
- missing ablation studies that could help improve understanding of why pruning has benefits --> addressed in rebuttal.(uTzg)
- structured vs unstructured pruning should be investigated on larger scale experiments, not only toy datasets --> addressed during discussion period. (LH4D)
- pruning should be applied to other continuous normalizing flows, not just FFJORD. --> authors have provided results on other CNFs on a toy dataset. (AFJg, LH4D)
* clarifications are required with respect to the pruning strategy and optimization choices. --> addressed in rebuttal (nNvs, uTzg)
* findings are unsurprising (AFJg).
The authors and 3 out of 4 reviewers engaged in active and fruitful discussions during the discussion period. The authors included many additional results that were requested by the reviewers, which led to the main concerns being addressed and 3 out of 4 scores being raised. The recommended decision for this submission is accept. | train | [
"hzkEn9ArSXq",
"D89plo_Um_F",
"OEctH-qbXek",
"mSVwb_ucy-m",
"mRgl5e-zKc",
"Ki3qsKlIaUb",
"lseDW0_hC9K",
"a5SWGHk6JjY",
"bUmecP3I2rI",
"IZadfd5TUmK",
"GIW0SugTD7",
"SPlUZCiyYzJ",
"KuVcPUkrVED",
"xhgQncR8qCC",
"birEshdyvSu",
"ApwN6goJKXq",
"o3tJrn1ICFR",
"OXF_ftaMAih",
"QxqTErgG6Af... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"o... | [
" \nWe would like to thank you very much for engaging with us during this fruitful discussion and for your fair and constructive review of our manuscript. \n\nThank you, \nAuthors",
"The paper presents a finding that pruning improves generalisation for neural ODEs in generative modelling alongside with reducing t... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"OEctH-qbXek",
"nips_2021__1VZo_-aUiT",
"mSVwb_ucy-m",
"mRgl5e-zKc",
"Ki3qsKlIaUb",
"lseDW0_hC9K",
"a5SWGHk6JjY",
"bUmecP3I2rI",
"A0WxpQCa5-",
"nips_2021__1VZo_-aUiT",
"QxqTErgG6Af",
"xhgQncR8qCC",
"nips_2021__1VZo_-aUiT",
"ApwN6goJKXq",
"o3tJrn1ICFR",
"OXF_ftaMAih",
"P6peVG7wTuK",
... |
nips_2021_94Sj1CcC_Jl | Spectrum-to-Kernel Translation for Accurate Blind Image Super-Resolution | Guangpin Tao, Xiaozhong Ji, Wenzhuo Wang, Shuo Chen, Chuming Lin, Yun Cao, Tong Lu, Donghao Luo, Ying Tai | accept | This paper tackles the problem of blind super-resolution when images are degraded by arbitrary blur kernels (assumed to be sparse both in frequency and space domains). The method proposes a new blur kernel estimation technique in the frequency domain as an end-to-end deep learning approach. The paper provides theoretical and experimental evidence to back the idea.
The core idea of the work is predicting degradation kernels in the frequency domain using deep neural networks. The proposed method is simple and effective, which was highlighted by all reviewers. The empirical evaluation seems convincing (and was further improved during the response period). The manuscript includes thorough ablation studies which clarify the benefits of each of the proposed components. This was highlighted by Reviewers wMvH and bKSR.
The authors provided a high quality rebuttal including new experiments and clarifying several points raised by the reviewers. In particular, the authors incorporated experiments on (i) mixed blur kernels, (ii) comparison to a more recent blind SR baselines, and (iii) quantitative and qualitative results on real historic images. All reviewers and the AC appreciate the authors response.
After the authors’ response, Reviewer wMvH increased the score to 6. The clarifications provided by the authors were considered sufficient. However, she/he expressed concerns regarding the paper's presentation. Similarly, Reviewer bKSR increased the score to 7. In the original review, the main concern was clarity.
The AC appreciates the authors' response and trusts that the authors will incorporate the clarifications provided in their response. All reviewers provide an accepting score. The AC agrees and recommends accepting the paper.
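For readers unfamiliar with the frequency-domain view used in this row's abstract, the underlying identity is the convolution theorem: if y = k * x (downsampling ignored), the kernel's spectrum is the pointwise ratio FFT(y)/FFT(x). The naive, Wiener-style division below only illustrates that relationship; it is noise-sensitive, which is precisely why the paper learns a spectrum-to-kernel network instead of dividing directly.

```python
import numpy as np

def naive_kernel_spectrum(x, y, eps=1e-3):
    """Recover a blur kernel from a sharp/blurred pair (illustration only)."""
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    K = Y * np.conj(X) / (np.abs(X) ** 2 + eps)   # regularized FFT(y)/FFT(x)
    return np.fft.fftshift(np.real(np.fft.ifft2(K)))

x = np.random.rand(32, 32)
k = np.zeros((32, 32)); k[0, 0], k[0, 1] = 0.6, 0.4          # toy blur kernel
y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(k)))   # circular blur
est_kernel = naive_kernel_spectrum(x, y)
```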
| train | [
"WgYkr_T-MyA",
"1f1924u1yYE",
"wlAsgrNjQkE",
"NKIh63XUHu",
"w0lMskkj04",
"-DVVXTwpZs8",
"BhGThzDzvlu",
"3RfFBZJA6Tu",
"XAhu890qWZj",
"RpvptgO0G6",
"Sx8qWw1HwhT"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer,\n\nWe sincerely thank you for your high appreciation on our response. \n\nSure, during the past several weeks, we have already dedicated ourselves to improving the quality of our paper, such as correcting the typos and revising the manuscript regarding the insightful comments from all reviewers. Fo... | [
-1,
-1,
6,
7,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"1f1924u1yYE",
"BhGThzDzvlu",
"nips_2021_94Sj1CcC_Jl",
"nips_2021_94Sj1CcC_Jl",
"NKIh63XUHu",
"NKIh63XUHu",
"wlAsgrNjQkE",
"RpvptgO0G6",
"Sx8qWw1HwhT",
"nips_2021_94Sj1CcC_Jl",
"nips_2021_94Sj1CcC_Jl"
] |
nips_2021_8IiakjcUHFH | On the Rate of Convergence of Regularized Learning in Games: From Bandits and Uncertainty to Optimism and Beyond | In this paper, we examine the convergence rate of a wide range of regularized methods for learning in games. To that end, we propose a unified algorithmic template that we call “follow the generalized leader” (FTGL), and which includes as special cases the canonical “follow the regularized leader” algorithm, its optimistic variants, extra-gradient schemes, and many others. The proposed framework is also sufficiently flexible to account for several different feedback models – from full information to bandit feedback. In this general setting, we show that FTGL algorithms converge locally to strict Nash equilibria at a rate which does not depend on the level of uncertainty faced by the players, but only on the geometry of the regularizer near the equilibrium. In particular, we show that algorithms based on entropic regularization – like the exponential weights algorithm – enjoy a linear convergence rate, while Euclidean projection methods converge to equilibrium in a finite number of iterations, even with bandit feedback.
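For reference, the entropic-regularizer instance of this template, i.e. the exponential weights algorithm for which the paper shows a linear local convergence rate, reduces to a softmax over aggregated payoffs. A minimal sketch, with the payoff vector standing in for whatever feedback model (full information, bandit estimates, optimistic corrections) is plugged in:

```python
import numpy as np

def exp_weights_step(cum_payoffs, payoff, eta=0.1):
    """One FTRL step with entropic regularizer (exponential weights).

    cum_payoffs: running sum of (estimated) payoff vectors, shape (n_actions,)
    payoff:      this round's payoff feedback for each action
    """
    cum_payoffs = cum_payoffs + payoff
    logits = eta * cum_payoffs
    probs = np.exp(logits - logits.max())   # softmax of the aggregated scores
    return cum_payoffs, probs / probs.sum()

cum = np.zeros(3)
for payoff in np.random.randn(100, 3):      # stream of payoff feedback
    cum, probs = exp_weights_step(cum, payoff)
```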
| accept | After discussions, the reviewers agree that this paper makes a solid contribution toward
advancing the understanding of the convergence rate for regularized learning in games.
Please do incorporate all the suggestions/discussions from the reviews into the final version. | train | [
"yumGhPrnlhS",
"QPs9n5u9-Z5",
"OJopcYcZTzN",
"j0Sr0RWGHs",
"U0bMfOtLVig",
"dfZr87By96o",
"bupSidhYRM",
"LMfXN8OIlw"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes an abstraction of the online Follow-the-(Blank)-Leader type of methods they call Follow-the-Generalized-Leader (FTGL) that not only incorporates various regularization functions but also different feedback models. The authors provide a general convergence analysis for the family of methods over... | [
6,
-1,
-1,
6,
-1,
-1,
-1,
7
] | [
3,
-1,
-1,
3,
-1,
-1,
-1,
3
] | [
"nips_2021_8IiakjcUHFH",
"yumGhPrnlhS",
"U0bMfOtLVig",
"nips_2021_8IiakjcUHFH",
"j0Sr0RWGHs",
"LMfXN8OIlw",
"yumGhPrnlhS",
"nips_2021_8IiakjcUHFH"
] |
nips_2021_JWRRBHFPKTJ | SLAPS: Self-Supervision Improves Structure Learning for Graph Neural Networks | Graph neural networks (GNNs) work well when the graph structure is provided. However, this structure may not always be available in real-world applications. One solution to this problem is to infer a task-specific latent structure and then apply a GNN to the inferred graph. Unfortunately, the space of possible graph structures grows super-exponentially with the number of nodes and so the task-specific supervision may be insufficient for learning both the structure and the GNN parameters. In this work, we propose the Simultaneous Learning of Adjacency and GNN Parameters with Self-supervision, or SLAPS, a method that provides more supervision for inferring a graph structure through self-supervision. A comprehensive experimental study demonstrates that SLAPS scales to large graphs with hundreds of thousands of nodes and outperforms several models that have been proposed to learn a task-specific graph structure on established benchmarks.
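A compact sketch of the self-supervision idea in this abstract: alongside the task loss, a denoising GNN reconstructs masked node features over the same learned adjacency, so the adjacency receives gradients even where task labels provide none. The one-layer GNN and entry-wise masking below are simplified placeholders rather than the SLAPS architecture; in training one would minimize task_loss + lambda * denoising_loss.

```python
import torch, torch.nn as nn

class OneLayerGNN(nn.Module):                  # minimal stand-in denoising GNN
    def __init__(self, d):
        super().__init__()
        self.lin = nn.Linear(d, d)

    def forward(self, X, A):
        return A @ self.lin(X)                 # propagate over the learned graph

def denoising_loss(X, A_hat, dae_gnn, mask_rate=0.2):
    """Self-supervised term: reconstruct masked node features through A_hat."""
    mask = (torch.rand_like(X) < mask_rate).float()
    X_recon = dae_gnn(X * (1 - mask), A_hat)   # corrupt, then denoise via the graph
    return ((X_recon - X) ** 2 * mask).sum() / mask.sum().clamp(min=1)

X = torch.randn(50, 16)
A_hat = torch.softmax(X @ X.T, dim=-1)         # toy learned (dense) adjacency
loss = denoising_loss(X, A_hat, OneLayerGNN(16))
```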
| accept | This paper proposed an approach for simultaneously inferring graph structure and GNN parameters. In particular, the authors pointed out the problem of edge starvation when learning on graphs with a limited number of labels and proposed a self-supervised approach to alleviate this problem. Experimental results demonstrate the effectiveness of the proposed approach on existing latent graph inference benchmarks.
Overall, the reviewers like the simplicity of the proposed approach with good intuitions by pointing out the edge starvation problem. The authors addressed most of the concerns raised by the reviewers. | train | [
"EVM4tRjuWdv",
"jebVt6-NeH",
"-lGp0OH-f2c",
"Wf0IvjZkAsY",
"D4jRV_Ic7OD",
"VAVCDiYbrgf",
"kvmeDIQdcFE",
"xYLmPC1Apef",
"Zs8laCTF3L",
"o4wvixN_i9d",
"9ken-SANQ2h",
"OLvWD3ypcQ",
"2vDTH8LbYyu",
"HOQCQXqA3jw",
"GV-JSDIWVSF"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper first identifies a starved edge problem in latent graph learning. That is, nodes that are far from labeled nodes may receive insufficient supervision signal during learning. To address this issue, the paper then proposes to leverage a self-supervision signal, which comes from denoising the node features ... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_JWRRBHFPKTJ",
"o4wvixN_i9d",
"Zs8laCTF3L",
"GV-JSDIWVSF",
"EVM4tRjuWdv",
"HOQCQXqA3jw",
"OLvWD3ypcQ",
"nips_2021_JWRRBHFPKTJ",
"GV-JSDIWVSF",
"EVM4tRjuWdv",
"OLvWD3ypcQ",
"xYLmPC1Apef",
"HOQCQXqA3jw",
"nips_2021_JWRRBHFPKTJ",
"nips_2021_JWRRBHFPKTJ"
] |
nips_2021_8PA2nX9v_r2 | Aligning Pretraining for Detection via Object-Level Contrastive Learning | Image-level contrastive representation learning has proven to be highly effective as a generic model for transfer learning. Such generality for transfer learning, however, sacrifices specificity if we are interested in a certain downstream task. We argue that this could be sub-optimal and thus advocate a design principle which encourages alignment between the self-supervised pretext task and the downstream task. In this paper, we follow this principle with a pretraining method specifically designed for the task of object detection. We attain alignment in the following three aspects: 1) object-level representations are introduced via selective search bounding boxes as object proposals; 2) the pretraining network architecture incorporates the same dedicated modules used in the detection pipeline (e.g. FPN); 3) the pretraining is equipped with object detection properties such as object-level translation invariance and scale invariance. Our method, called Selective Object COntrastive learning (SoCo), achieves state-of-the-art results for transfer performance on COCO detection using a Mask R-CNN framework. Code is available at https://github.com/hologerry/SoCo.
| accept | All reviewers agree this is a solid paper with a good novel contribution, great experimental results, and detailed analysis/ablation. The authors did a good job addressing the reviewers' concerns in their responses too. Clear accept. | val | [
"gOnqhASV6uA",
"vYqP10qaFB",
"BdiJQ3QampS",
"d5ekVGHKdVF",
"OoA7UcdcnR-",
"SG4j7Nlo6iP",
"Z-SbHzQhsvV",
"jCDyLhOQqa",
"dSV-zu2osaf",
"wuoCMBQ8F-D",
"GA-XM55UpS"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
" Thanks for the authors' responses. The results address my questions. Thus, I would raise my rating to clear accept. Thanks for the efforts in rebuttal!",
"This submission addresses the problem of constructing a good initialized pretrained network for the object detection task. Compared to previous work, the aut... | [
-1,
7,
7,
-1,
-1,
9,
-1,
8,
-1,
-1,
-1
] | [
-1,
4,
4,
-1,
-1,
5,
-1,
4,
-1,
-1,
-1
] | [
"GA-XM55UpS",
"nips_2021_8PA2nX9v_r2",
"nips_2021_8PA2nX9v_r2",
"OoA7UcdcnR-",
"BdiJQ3QampS",
"nips_2021_8PA2nX9v_r2",
"wuoCMBQ8F-D",
"nips_2021_8PA2nX9v_r2",
"jCDyLhOQqa",
"SG4j7Nlo6iP",
"vYqP10qaFB"
] |
nips_2021_StKuQ0-dltN | Double/Debiased Machine Learning for Dynamic Treatment Effects | We consider the estimation of treatment effects in settings when multiple treatments are assigned over time and treatments can have a causal effect on future outcomes. We propose an extension of the double/debiased machine learning framework to estimate the dynamic effects of treatments and apply it to a concrete linear Markovian high-dimensional state space model and to general structural nested mean models. Our method allows the use of arbitrary machine learning methods to control for the high dimensional state, subject to a mean square error guarantee, while still allowing parametric estimation and construction of confidence intervals for the dynamic treatment effect parameters of interest. Our method is based on a sequential regression peeling process, which we show can be equivalently interpreted as a Neyman orthogonal moment estimator. This allows us to show root-n asymptotic normality of the estimated causal effects.
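For context on the framework being extended in this abstract, here is the standard single-period double/debiased ML recipe: cross-fit the two nuisance regressions, then regress outcome residuals on treatment residuals (a Neyman-orthogonal moment). The random forests are placeholder learners; the paper's contribution is the sequential peeling that chains such orthogonal steps across treatment periods, which this one-period sketch does not show.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def dml_effect(X, T, Y, n_splits=2):
    """Cross-fitted partialling-out estimate of a scalar treatment effect."""
    rT, rY = np.zeros_like(T, dtype=float), np.zeros_like(Y, dtype=float)
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(X):
        rT[test] = T[test] - RandomForestRegressor().fit(X[train], T[train]).predict(X[test])
        rY[test] = Y[test] - RandomForestRegressor().fit(X[train], Y[train]).predict(X[test])
    return (rT @ rY) / (rT @ rT)     # OLS of residualized Y on residualized T

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
T = X[:, 0] + rng.normal(size=400)
Y = 2.0 * T + X[:, 0] + rng.normal(size=400)
print(dml_effect(X, T, Y))           # close to the true effect 2.0
```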
| accept | There is a consensus among reviewers that the submission provides a solid
contribution for the challenging problem of estimating dynamic treatment
effects, i.e., estimating at time t+m the causal effect of a treatment
given at time t. Although based on known DML/Neyman orthogonal scores concepts,
the methodology leverages a novel peeling argument which may have further applications in sequential causal inference problems.
Finally no major issues with the theory were raised in the reviews.
For all these reasons, I recommend to accept the paper.
The authors are encouraged to include in the camera-ready version of the paper
a semi-synthetic experiment that use characteristics from real world datasets,
and to implement the improvements that emerged during the discussion (e.g.,
avoiding d for the policy, expanding on the relationship with [8, 9, 39, 12] in
the related works section)
I also encourage the authors to discuss briefly
the practicability of verifying the various rate conditions appearing
in the assumptions, e.g., when it is possible to determine from the data
whether these rate conditions are satisfied for a given dataset. | train | [
"sRqMEdy4AG",
"p4-dosz-rv",
"6f1kMHVM6Vv",
"KF0aOpVCzH8",
"92duJXTBNxs",
"vY0M92HQtE",
"xARZZhOXXCw",
"_y7msaeKkpR",
"Zf9RRo_qNW0",
"4hOtthPgEf9"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Here is a more elaborate description of the connection. Potentially the closest to our work is that of Deshpande et al 2019 (the relationship and comparison to the rest of the papers in this line of work is similar in spirit so we focus on Deshpande et al to highlight concrete differences). The work of Deshpande ... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
1,
3
] | [
"p4-dosz-rv",
"nips_2021_StKuQ0-dltN",
"_y7msaeKkpR",
"4hOtthPgEf9",
"Zf9RRo_qNW0",
"xARZZhOXXCw",
"nips_2021_StKuQ0-dltN",
"nips_2021_StKuQ0-dltN",
"nips_2021_StKuQ0-dltN",
"nips_2021_StKuQ0-dltN"
] |
nips_2021_8xyNqPvFZwC | Local Disentanglement in Variational Auto-Encoders Using Jacobian $L_1$ Regularization | Travers Rhodes, Daniel Lee | accept | This paper introduces a simple and effective way to disentangle factors of variation in VAE representations by penalizing the Jacobian of the decoder with an L1 regularizer. The reviewers unanimously agree that this is a compelling approach, and their concerns were addressed during the discussion period. There are a number of clarifications that should be added to the final draft based on these discussions, but perhaps the main one is to clarify the relationship between the approach and ICA/sparse coding based on learning localized features. Please read through the reviews carefully and ensure that the paper is updated accordingly, as requested by the reviewers. | train | [
"l1VwKPQGrHh",
"-JQ-LENq6e9",
"M2_eDojLvoF",
"q524sfocvWi",
"vcLvCMPV9R",
"hTg4heaX4T_",
"xCeKVnPyZI",
"X1eLTQC5qUy",
"ii0q2R0Tn4p",
"xCr0ZJc4xFw"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thanks again for your review.\n\nOur evaluation metric is the standard MIG metric (\"Isolating Sources of Disentanglement in VAEs\", Chen et al. 2018) repeatedly applied to random epsilon-ball subsamples of the data with size parameterized by $\\rho$. With $\\rho=1$ our local MIG is equivalent to the standard (gl... | [
-1,
-1,
7,
-1,
7,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
4
] | [
"-JQ-LENq6e9",
"X1eLTQC5qUy",
"nips_2021_8xyNqPvFZwC",
"ii0q2R0Tn4p",
"nips_2021_8xyNqPvFZwC",
"xCeKVnPyZI",
"vcLvCMPV9R",
"xCr0ZJc4xFw",
"M2_eDojLvoF",
"nips_2021_8xyNqPvFZwC"
] |
nips_2021_KsfuvGB3vco | Design of Experiments for Stochastic Contextual Linear Bandits | In the stochastic linear contextual bandit setting there exist several minimax procedures for exploration with policies that are reactive to the data being acquired. In practice, there can be a significant engineering overhead to deploy these algorithms, especially when the dataset is collected in a distributed fashion or when a human in the loop is needed to implement a different policy. Exploring with a single non-reactive policy is beneficial in such cases. Assuming some batch contexts are available, we design a single stochastic policy to collect a good dataset from which a near-optimal policy can be extracted. We present a theoretical analysis as well as numerical experiments on both synthetic and real-world datasets.
| accept | This paper considers the following variant of the linear contextual bandit problem: Given historical data, design a non-reactive policy that can be used to gather future data from which a near-optimal policy can be learned. The reviewers felt that the problem is well-motivated, and that the problem setting and techniques both have some novelty. However, concerns were raised regarding the overlap between the algorithmic ideas used here and in previous work on reward-free exploration in RL; this should be addressed in the final version of the paper.
"6xs_Lxw12yw",
"4vEI_CJkli",
"qBWTumINbw",
"CVSjALuOZdj",
"EG0iCGEz16m",
"9xEA1y0mt5x",
"Q05Yi9PYVkc",
"RGbtuMSOgTf",
"zc9uiVhjZv6",
"AmanXkIzL8I",
"T1h6R0g4Xo2"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors propose a new setting for the stochastic contextual linear bandit problem: using past contexts only, design a non-reactive policy for future online data. They argue that this setting arises in practice since often it may be logistically too complicated to deploy an online machine learning algorithm. Th... | [
6,
6,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2
] | [
"nips_2021_KsfuvGB3vco",
"nips_2021_KsfuvGB3vco",
"nips_2021_KsfuvGB3vco",
"EG0iCGEz16m",
"zc9uiVhjZv6",
"AmanXkIzL8I",
"4vEI_CJkli",
"6xs_Lxw12yw",
"qBWTumINbw",
"T1h6R0g4Xo2",
"nips_2021_KsfuvGB3vco"
] |
nips_2021_KnN6mh23cSX | Encoding Spatial Distribution of Convolutional Features for Texture Representation | Existing convolutional neural networks (CNNs) often use global average pooling (GAP) to aggregate feature maps into a single representation. However, GAP cannot well characterize complex distributive patterns of spatial features, while such patterns play an important role in texture-oriented applications, e.g., material recognition and ground terrain classification. In the context of texture representation, this paper addresses the issue by proposing Fractal Encoding (FE), a feature encoding module grounded in multi-fractal geometry. Considering a CNN feature map as a union of level sets of points lying in the 2D space, FE characterizes their spatial layout via a local-global hierarchical fractal analysis which examines the multi-scale power behavior on each level set. This enables a CNN to encode the regularity of the spatial arrangement of image features, leading to a robust yet discriminative spectrum descriptor. In addition, FE has trainable parameters for data adaptivity and can be easily incorporated into existing CNNs for end-to-end training. We applied FE to ResNet-based texture classification and retrieval, and demonstrated its effectiveness on several benchmark datasets.
| accept | The paper addresses the important problem of texture representation in neural network models. The approach is interesting and well received by the reviewers, and I personally find that many industrial applications could benefit from it.
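The "multi-scale power behavior on each level set" in this row's abstract is, at its core, the classical box-counting estimate of fractal dimension: count occupied boxes at several scales and read off the slope in log-log space. A minimal NumPy version for a single binary level set of a feature map (the paper's trainable, end-to-end FE module is considerably more elaborate):

```python
import numpy as np

def box_counting_dimension(level_set, scales=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a binary 2D point set."""
    counts = []
    for s in scales:
        h, w = level_set.shape
        # Count boxes of side s containing at least one point of the level set.
        boxes = level_set[: h // s * s, : w // s * s].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    # Slope of log(count) vs. log(1/scale) is the box-counting dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)
    return slope

fmap_level_set = np.random.rand(64, 64) > 0.8    # toy binary level set
print(box_counting_dimension(fmap_level_set))
```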
| train | [
"c2k1lTHAUoQ",
"Uep0DA0TO53",
"G---bgokZuj",
"Oq6F1oTD2W",
"0_JIJj9vHCV",
"y0QOhGSKzLW",
"omI7ArCdDXi",
"4Ne8ZFM74wv",
"ZjA85ECREE1",
"InXdPljLJh",
"QLXPhoEZiuc",
"h7KqiL7ILnH"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thanks for the reviewer taking account into our responses. We will revise the paper as suggested.",
"This paper proposes a new \"global\" pooling approach that aims at preserving detailed/fine-grained information from the representation. The proposed approach is based on Fractal Encoding that are implemented su... | [
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"G---bgokZuj",
"nips_2021_KnN6mh23cSX",
"Oq6F1oTD2W",
"omI7ArCdDXi",
"nips_2021_KnN6mh23cSX",
"ZjA85ECREE1",
"InXdPljLJh",
"QLXPhoEZiuc",
"0_JIJj9vHCV",
"Uep0DA0TO53",
"h7KqiL7ILnH",
"nips_2021_KnN6mh23cSX"
] |
nips_2021_FTt28RYj5Pc | Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds | Certified robustness is a desirable property for deep neural networks in safety-critical applications, and popular training algorithms can certify robustness of a neural network by computing a global bound on its Lipschitz constant. However, such a bound is often loose: it tends to over-regularize the neural network and degrade its natural accuracy. A tighter Lipschitz bound may provide a better tradeoff between natural and certified accuracy, but is generally hard to compute exactly due to non-convexity of the network. In this work, we propose an efficient and trainable \emph{local} Lipschitz upper bound by considering the interactions between activation functions (e.g. ReLU) and weight matrices. Specifically, when computing the induced norm of a weight matrix, we eliminate the corresponding rows and columns where the activation function is guaranteed to be a constant in the neighborhood of each given data point, which provides a provably tighter bound than the global Lipschitz constant of the neural network. Our method can be used as a plug-in module to tighten the Lipschitz bound in many certifiable training algorithms. Furthermore, we propose to clip activation functions (e.g., ReLU and MaxMin) with a learnable upper threshold and a sparsity loss to assist the network to achieve an even tighter local Lipschitz bound. Experimentally, we show that our method consistently outperforms state-of-the-art methods in both clean and certified accuracy on MNIST, CIFAR-10 and TinyImageNet datasets with various network architectures.
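The central trick in this abstract is easy to state in code: given pre-activation bounds on a neighborhood of an input, ReLU units whose upper bound is non-positive are constantly zero there, so the corresponding columns of the following weight matrix can be dropped before computing the induced 2-norm, and removing columns can only shrink that norm. A minimal NumPy illustration, with bound propagation (e.g. interval arithmetic) assumed as given:

```python
import numpy as np

def local_lipschitz_factor(W_next, lower, upper):
    """Layer's spectral-norm contribution, tightened on a local input region.

    W_next:       weight matrix applied after the ReLU, shape (m, n)
    lower, upper: pre-activation bounds on that region, shape (n,)
    (lower is unused in this simplified version; the full method also
    exploits always-active units.)
    """
    active = upper > 0                   # units not provably stuck at ReLU(.) = 0
    return np.linalg.norm(W_next[:, active], 2)   # induced 2-norm, reduced matrix

W = np.random.randn(64, 128)
l, u = -np.abs(np.random.randn(128)) - 0.5, np.random.randn(128)
# The local bound never exceeds the global spectral norm of the layer.
assert local_lipschitz_factor(W, l, u) <= np.linalg.norm(W, 2) + 1e-9
```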
| accept | All the reviewers agreed that the paper provides interesting results in bounding the (local) Lipschitz constant of NNs with piece-wise linear activation functions. The reviewers had a number of concerns which were mostly resolved after the authors' responses. I would strongly recommend that the authors incorporate all the comments mentioned in the reviews in their updated version (e.g. the additional experimental results, comparison with prior work, etc). | train | [
"J3QQCzywQ9q",
"633v30M3zQG",
"95qVz_nJyxY",
"7Am39Xlprxv",
"4gPADszt2K3",
"aEBuPN-BMsu",
"wmnfVBrCjYn",
"qiTRuEcb4Hc",
"Y52f8xW-CM4",
"VuUBwq2StBl",
"VFZZMsXAe88",
"s7JEfMMUopf",
"350m2FPNsoJ",
"TBLJeGAHzHk",
"YQmfzK5-2xS",
"WWoTSFUdT5R",
"ixQGnB-YW98",
"xyJTJP5uJjN",
"mqxCTFR6_... | [
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer again for the helpful followup comments regarding our response. Since the discussion period is ending soon, we hope the reviewer can take a look at our new response. We respectfully point out that there seems to be a **misunderstanding on the main contribution** of our paper.\n\nIn summary, ... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3
] | [
"Y52f8xW-CM4",
"Y52f8xW-CM4",
"7Am39Xlprxv",
"4gPADszt2K3",
"xyJTJP5uJjN",
"nips_2021_FTt28RYj5Pc",
"nips_2021_FTt28RYj5Pc",
"nips_2021_FTt28RYj5Pc",
"WWoTSFUdT5R",
"nips_2021_FTt28RYj5Pc",
"aEBuPN-BMsu",
"xyJTJP5uJjN",
"mqxCTFR6_3E",
"ixQGnB-YW98",
"wmnfVBrCjYn",
"qiTRuEcb4Hc",
"nip... |
nips_2021_guAXBsPR4tY | Average-Reward Learning and Planning with Options | We extend the options framework for temporal abstraction in reinforcement learning from discounted Markov decision processes (MDPs) to average-reward MDPs. Our contributions include general convergent off-policy inter-option learning algorithms, intra-option algorithms for learning values and models, as well as sample-based planning variants of our learning algorithms. Our algorithms and convergence proofs extend those recently developed by Wan, Naik, and Sutton. We also extend the notion of option-interrupting behaviour from the discounted to the average-reward formulation. We show the efficacy of the proposed algorithms with experiments on a continuing version of the Four-Room domain.
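The base algorithm being extended here is differential (average-reward) Q-learning of Wan, Naik, and Sutton, whose update replaces discounting with a learned reward-rate estimate. A one-step sketch is below; the paper's SMDP-level option variants additionally track cumulative option rewards and durations, which this sketch omits.

```python
import numpy as np

def differential_q_step(Q, r_bar, s, a, r, s_next, alpha=0.1, eta=1.0):
    """One step of differential Q-learning (average reward, no discounting)."""
    delta = r - r_bar + Q[s_next].max() - Q[s, a]   # TD error w.r.t. reward rate
    Q[s, a] += alpha * delta
    r_bar += eta * alpha * delta                    # track the average reward
    return Q, r_bar

Q, r_bar = np.zeros((5, 2)), 0.0
Q, r_bar = differential_q_step(Q, r_bar, s=0, a=1, r=1.0, s_next=2)
```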
| accept | After a lot of discussion amongst the reviewers as well as with the authors, this paper was judged to be strictly borderline. Extending the options framework to the average reward setting is a fine line of inquiry, but as such the setup of this paper is rather weakly motivated, as observed by multiple reviewers, and the algorithms and analysis are relatively incremental in their novelty. Even for toy experiments, we would have at least liked to see the authors illustrate some capability afforded by the average reward formulation which the discounted formulation lacks, for instance. Perhaps the paper is more suited as a comprehensive journal submission, with a treatment of function approximation and/or option learning. | train | [
"jTrotn2s4km",
"dE6cREUzzf",
"0UbLSNAOGpQ",
"Xton_8Fgeki",
"5zmW7Sl86Hr",
"F6SJiJBiWpk",
"unldRbKrL4",
"sudtMNS54H3",
"gR22pgnTi01",
"y0s1oY8spE9",
"5gmdMbNaFz2"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper extends the options framework to the average reward reinforcement learning setting. The paper proposes methods to learn on SMDP level as well as the intra-option level. Additionally, the paper proposes method for planning and interrupting options under this setting. The paper's structure is similar to t... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2
] | [
"nips_2021_guAXBsPR4tY",
"Xton_8Fgeki",
"unldRbKrL4",
"5zmW7Sl86Hr",
"5gmdMbNaFz2",
"jTrotn2s4km",
"y0s1oY8spE9",
"gR22pgnTi01",
"nips_2021_guAXBsPR4tY",
"nips_2021_guAXBsPR4tY",
"nips_2021_guAXBsPR4tY"
] |
nips_2021_F93Z9Au6HxE | SSAL: Synergizing between Self-Training and Adversarial Learning for Domain Adaptive Object Detection | We study adapting trained object detectors to unseen domains manifesting significant variations of object appearance, viewpoints and backgrounds. Most current methods align domains by either using image or instance-level feature alignment in an adversarial fashion. This often suffers due to the presence of unwanted background and as such lacks class-specific alignment. A common remedy to promote class-level alignment is to use high confidence predictions on the unlabelled domain as pseudo labels. These high confidence predictions are often fallacious since the model is poorly calibrated under domain shift. In this paper, we propose to leverage the model’s predictive uncertainty to strike the right balance between adversarial feature alignment and class-level alignment. Specifically, we measure predictive uncertainty on class assignments and the bounding box predictions. Model predictions with low uncertainty are used to generate pseudo-labels for self-supervision, whereas the ones with higher uncertainty are used to generate tiles for an adversarial feature alignment stage. This synergy between tiling around the uncertain object regions and generating pseudo-labels from highly certain object regions allows us to capture both the image and instance level context during the model adaptation stage. We perform extensive experiments covering various domain shift scenarios. Our approach improves upon existing state-of-the-art methods with visible margins.
| accept | Following the rebuttal and discussion phase, two reviewers increased their scores, leading to final recommendations of one clear accept, two leaning towards accept and one rejection rating. After considering all reviews and discussion between authors and reviewers, the AC agrees with the reviewer consensus that this work merits acceptance. The key concern prior to the rebuttal was whether the combination of prior approaches (such as MC-dropout, adversarial alignment, and pseudo-label training) was sufficiently new. The rebuttal provided sufficient clarification on this point, especially through new ablation studies. The authors should make sure to include these ablation studies as well as additional text clarifying the novelty of their contribution in the final version.
"kG13shp6Jd",
"l4INrHcVX5X",
"hUtZWaGgmC",
"-7-Q54BvaOo",
"e2FGK9gHBY",
"l4uV_KLSKT",
"CcYn6yRgxtm",
"_r9MtUxP7IZ",
"wL19-D5cv5h",
"0kRCDKUWCcW",
"4cZdT0ISlK9",
"DxVlZhS_HQk",
"HwaHdETmJU7"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" As a final remark we want to state that a major concern of the reviewer was w.r.t to tiling ablations (justification of using uncertain tiles) and the impact of warmup. We have provided the additional tiling-choice ablation for the former and also have provided an ablation to distill the improvement from warmup (... | [
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
4
] | [
-1,
5,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"wL19-D5cv5h",
"nips_2021_F93Z9Au6HxE",
"nips_2021_F93Z9Au6HxE",
"0kRCDKUWCcW",
"nips_2021_F93Z9Au6HxE",
"CcYn6yRgxtm",
"4cZdT0ISlK9",
"l4INrHcVX5X",
"HwaHdETmJU7",
"hUtZWaGgmC",
"DxVlZhS_HQk",
"nips_2021_F93Z9Au6HxE",
"nips_2021_F93Z9Au6HxE"
] |
nips_2021_bayZPpw9lM | Counterexample Guided RL Policy Refinement Using Bayesian Optimization | Constructing Reinforcement Learning (RL) policies that adhere to safety requirements is an emerging field of study. RL agents learn via trial and error with an objective to optimize a reward signal. Often policies that are designed to accumulate rewards do not satisfy safety specifications. We present a methodology for counterexample guided refinement of a trained RL policy against a given safety specification. Our approach has two main components. The first component is an approach to discover failure trajectories using Bayesian optimization over multiple parameters of uncertainty from a policy learnt in a model-free setting. The second component selectively modifies the failure points of the policy using gradient-based updates. The approach has been tested on several RL environments, and we demonstrate that the policy can be made to respect the safety specifications through such targeted changes.
| accept | This paper has considerable support from the reviewers but does not seem to rise to the level of an oral.
For me, the formal adversarial setup is strange. Some form of importance sampling under an "unsafe" state-transition bias would yield an estimate of the probability of a safety violation. Optimizing an estimate of the probability of a safety violation, possibly with stochastically drawn model parameters, seems like a cleaner conceptual setup to me. But the reviewers are happy and I will go along. | train | [
"fHe7IIhC_K5",
"2bjRJFHJLAQ",
"H-LfnJmiFL",
"xvkbdhSZnzJ",
"VpAKN5xOJsU",
"-w_4cxw0x8",
"tth_EUAc3yX",
"tub8a2te6AO",
"CBB5KUKrwvn",
"xpCosltE9Ep",
"USAMgHxa0fX",
"-a_tsYLuCGY",
"b87kgTS8zC6",
"7B1T3T1erQM",
"O4k83vXKlC",
"Vr9rdIM2b80"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"This paper presents a counterexample-guided policy refinement method to check and correct an already trained RL policy. The proposed algorithm (i) uses Bayesian Optimization to find failure cases in a policy, (ii) then learns a new sub-policy to locally overcome the failed cases, and (iii) finally uses a slightly ... | [
7,
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1
] | [
3,
-1,
3,
-1,
2,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1
] | [
"nips_2021_bayZPpw9lM",
"Vr9rdIM2b80",
"nips_2021_bayZPpw9lM",
"tub8a2te6AO",
"nips_2021_bayZPpw9lM",
"-a_tsYLuCGY",
"xpCosltE9Ep",
"CBB5KUKrwvn",
"7B1T3T1erQM",
"O4k83vXKlC",
"nips_2021_bayZPpw9lM",
"VpAKN5xOJsU",
"nips_2021_bayZPpw9lM",
"H-LfnJmiFL",
"USAMgHxa0fX",
"fHe7IIhC_K5"
] |
nips_2021_X7XNPor93uG | Stable, Fast and Accurate: Kernelized Attention with Relative Positional Encoding | Shengjie Luo, Shanda Li, Tianle Cai, Di He, Dinglan Peng, Shuxin Zheng, Guolin Ke, Liwei Wang, Tie-Yan Liu | accept | This is an interesting paper extending linear attention mechanisms proposed in Performers to work with an *arbitrary* relative position encoding (RPE) mask. Previous results showed that it is possible for special instantiations of RPE. This paper shows that as long as a mask has Toeplitz structure (this can actually be further extended to a low displacement-rank structure), efficient computation of the attention module can be achieved by combining random feature map tricks with a Fast Fourier Transform mechanism (every RPE is by definition a Toeplitz matrix). The authors furthermore claim that by incorporating RPE into linear attention modules, they can substantially improve their overall accuracy. According to them, this is possible since by adding RPEs, one can constrain queries and keys to be L2-normalized (resulting in the substantial reduction of the variance of the random feature map estimator that grows exponentially in the lengths of keys/queries). Normally such a normalization would affect the expressiveness of the model, but according to the authors, the RPE (which in principle is not bounded) provides extra expressiveness that would be lost otherwise.
This claim is controversial, and additional detailed experiments showing that RPE can indeed carry the otherwise-lost expressiveness would further strengthen the paper (in particular, extensive ablation studies over L2-normalized variants with and without RPE would provide extra clarity). The paper would also benefit from providing more theoretical understanding of this phenomenon.
The other important point that should be discussed in more detail is an efficient implementation of the Fast Fourier Transform on TPUs. Unless such an implementation is provided, practical usage of this algorithm will be limited.
Nevertheless, it is an important result in the field and an elegant approach to incorporating structured masks into kernelizable attention modules. This algorithmic observation is the main contribution of the paper. | train | [
"CTldkdPmisZ",
"tLeDiaYBew-",
"OX1XyyKIoAY",
"aMegvKYGP_c",
"8-wIQ6GXEP7",
"DbgrFdoMewP",
"fzjKCNpBJ9I",
"Q5qA9shBgYQ",
"jDNyKwPUti",
"HSMoeGKokk",
"RyfiFZ1XHYL",
"LMPAZR5QYte",
"4dO2nWjsbRz",
"ppiMwYa-WMW",
"mkydVbxMwTZ"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I think that all my issues have been addressed.",
" Thank you for your feedback. We have been working on additional experiments and will update the empirical results in the final version of our paper.",
" Thank you for your feedback. For the causal language modeling, we only need to set $b_{i-j}=-\\infty$ (i.... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
4
] | [
"OX1XyyKIoAY",
"fzjKCNpBJ9I",
"8-wIQ6GXEP7",
"DbgrFdoMewP",
"HSMoeGKokk",
"RyfiFZ1XHYL",
"Q5qA9shBgYQ",
"mkydVbxMwTZ",
"ppiMwYa-WMW",
"4dO2nWjsbRz",
"LMPAZR5QYte",
"nips_2021_X7XNPor93uG",
"nips_2021_X7XNPor93uG",
"nips_2021_X7XNPor93uG",
"nips_2021_X7XNPor93uG"
] |
nips_2021_t-0eCf8L4-a | Learning in Non-Cooperative Configurable Markov Decision Processes | The Configurable Markov Decision Process framework includes two entities: a Reinforcement Learning agent and a configurator that can modify some environmental parameters to improve the agent's performance. This presupposes that the two actors have the same reward functions. What if the configurator does not have the same intentions as the agent? This paper introduces the Non-Cooperative Configurable Markov Decision Process, a setting that allows having two (possibly different) reward functions for the configurator and the agent. Then, we consider an online learning problem, where the configurator has to find the best among a finite set of possible configurations. We propose two learning algorithms to minimize the configurator's expected regret, which exploit the problem's structure, depending on the agent's feedback. While a naive application of the UCB algorithm yields a regret that grows indefinitely over time, we show that our approach suffers only bounded regret. Furthermore, we empirically show the performance of our algorithm in simulated domains.
| accept | Overall, the reviewers are fairly positive on this paper, which offers a novel take on the “environment design” problem in the non-cooperative (but not fully adversarial) Stackelberg-type setting, using the framework of configurable MDPs. This new formulation is rather interesting, and the (problem-dependent) regret analysis and the empirical confirmation offered are also valuable (though the empirical setup is critiqued as rather simple). The author response alleviated some of the reviewers' questions and concerns, especially with regard to the problem formulation and its motivation. The reviewers make some useful suggestions for improving the clarity of the paper, as well as related-work suggestions. These should be addressed by the authors in revision.
| train | [
"Os-Zqkjv20S",
"PrkeDTX9ZUw",
"o2txSJvCj6",
"gIlybCzPwR",
"hbqcjpdi_gj",
"Tfx6w5VCL2C",
"MJJCz98K6uc",
"kHT8RVBZdCf",
"eMh9BpHk8R"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors provide an algorithm for online learning in a non-cooperative configurable MDP. They propose a setting where a configurator can choose a transition model for the MDP from a finite set of models. The goal is then to observe the agent's behaviour (either the state-action sequences or a noisy reward) and ... | [
6,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
1,
3,
3
] | [
"nips_2021_t-0eCf8L4-a",
"Tfx6w5VCL2C",
"kHT8RVBZdCf",
"MJJCz98K6uc",
"Os-Zqkjv20S",
"eMh9BpHk8R",
"nips_2021_t-0eCf8L4-a",
"nips_2021_t-0eCf8L4-a",
"nips_2021_t-0eCf8L4-a"
] |
nips_2021_yCA2i3bGbfC | Identification of Partially Observed Linear Causal Models: Graphical Conditions for the Non-Gaussian and Heterogeneous Cases | In causal discovery, linear non-Gaussian acyclic models (LiNGAMs) have been studied extensively. While the causally sufficient case is well understood, in many real problems the observed variables are not causally related. Rather, they are generated by latent variables, such as confounders and mediators, which may themselves be causally related. Existing results on the identification of the causal structure among the latent variables often require very strong graphical assumptions. In this paper, we consider partially observed linear models with either non-Gaussian or heterogeneous errors. In that case we give two graphical conditions which are necessary for identification of the causal structure. These conditions are closely related to sparsity of the causal edges. Together with one additional condition on the coefficients, which holds generically for any graph, the two graphical conditions are also sufficient for identifiability. These new conditions can be satisfied even when there is a large number of latent variables. We demonstrate the validity of our results on synthetic data.
| accept | The reviewers appreciated the relevance of the paper to the NeurIPS community and the strength of the technical results.
Additionally, the authors did a good job of engaging with the reviewers to dispel most concerns. Hopefully, the discussion clarified to the authors which parts of the paper are less readable, and they can use this information to improve the writing. | train | [
"mnkekmcTHO",
"dEgKNnY19Ow",
"TA3IKdXp8MG",
"AgTluFJi0Jh",
"qIUQAOC9NV",
"Wme6rL1UK4t",
"aM0ArNHaW1R",
"fIa0AfuR5Ya",
"puS5zjiHCS",
"cRSruaFaHfx",
"nva4OJcn3v",
"myNyhpNVuPt",
"mUrAMCVd4uT",
"3-6KmenXoAA",
"aMMrFUuCxA9",
"PNQWZnWGPO",
"rJNU7j3ipeL",
"IHx71HrQz9c",
"91kS6f4a8Vr",
... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"o... | [
" I found that A3 was what I wanted to know by asking those questions. ",
"This paper is about statistical causal inference. More specifically, this paper is about causal discovery that infers causal graphs. The data used is observational data without intervention. All the variables are continuous.\n\nThis paper ... | [
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
4,
3
] | [
"TA3IKdXp8MG",
"nips_2021_yCA2i3bGbfC",
"AgTluFJi0Jh",
"qIUQAOC9NV",
"Wme6rL1UK4t",
"aM0ArNHaW1R",
"fIa0AfuR5Ya",
"cRSruaFaHfx",
"szAkEcSb9J",
"nva4OJcn3v",
"myNyhpNVuPt",
"mUrAMCVd4uT",
"3-6KmenXoAA",
"aMMrFUuCxA9",
"PNQWZnWGPO",
"rJNU7j3ipeL",
"IHx71HrQz9c",
"wUrjQyjDJH3",
"v3i... |
nips_2021_gRqHB07GGz3 | DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer | We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers. Many previous learning-based approaches for inverse graphics adopt rasterization-based renderers and assume naive lighting and material models, which often fail to account for non-Lambertian, specular reflections commonly observed in the wild. In this work, we propose DIBR++, a hybrid differentiable renderer which supports these photorealistic effects by combining rasterization and ray-tracing, taking advantage of their respective strengths---speed and realism. Our renderer incorporates environmental lighting and spatially-varying material models to efficiently approximate light transport, either through direct estimation or via spherical basis functions. Compared to more advanced physics-based differentiable renderers leveraging path tracing, DIBR++ is highly performant due to its compact and expressive shading model, which enables easy integration with learning frameworks for geometry, reflectance and lighting prediction from a single image without requiring any ground-truth. We experimentally demonstrate that our approach achieves superior material and lighting disentanglement on synthetic and real data compared to existing rasterization-based approaches and showcase several artistic applications including material editing and relighting.
| accept | The submission was thoroughly reviewed and discussed. The ratings are borderline, but three of the four reviewers support acceptance. Reviewers are concerned about the magnitude of the contribution, but generally see the work as a useful addition to the literature. The AC supports the accept recommendation. The authors are encouraged to thoroughly address the reviewers' concerns and recommendations in the revision. | train | [
"Q6UasSmn7VI",
"dAuplYD2YJ1",
"4wgOCEVCdZr",
"wsFZ70LrNv9",
"7o3WoylshEE",
"9qUgPk9ut6z",
"1zmlqmPSsMi",
"0fUZ6NqzSy",
"i__XJ4nE-A4",
"TS__uyAyblu",
"R34ktWHY8w",
"2vmGE3F_iAq",
"GloaujQ2PTK",
"TFfS1q0oIIH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"This paper aims for single-view 3D object reconstruction without 3D supervision. It jointly infers geometry, reflectance, and lighting given a single image. The authors use a deferred image-based renderer and apply two standard techniques for shading. The proposed method is evaluated on synthetic datasets of metal... | [
6,
5,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
3,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"nips_2021_gRqHB07GGz3",
"nips_2021_gRqHB07GGz3",
"nips_2021_gRqHB07GGz3",
"nips_2021_gRqHB07GGz3",
"9qUgPk9ut6z",
"1zmlqmPSsMi",
"2vmGE3F_iAq",
"i__XJ4nE-A4",
"TS__uyAyblu",
"wsFZ70LrNv9",
"Q6UasSmn7VI",
"4wgOCEVCdZr",
"dAuplYD2YJ1",
"nips_2021_gRqHB07GGz3"
] |
nips_2021_jar9C-V8GH | Coresets for Time Series Clustering | Lingxiao Huang, K Sudhir, Nisheeth Vishnoi | accept |
The paper suggests a coreset for clustering in the context of signals.
Indeed, there are too few coresets in this very active area.
There were concerns regarding the experimental results and novelty that the authors should address in the final version.
In addition, there are a few coresets for signals that the authors should cite in the final version:
https://papers.nips.cc/paper/5581-coresets-for-k-segmentation-of-streaming-data
https://dl.acm.org/doi/abs/10.1145/2814569
| train | [
"iuhS2TKg9iu",
"vqlhFSYvtz",
"9qdGIr5DLx_",
"UDwFDDsW9-d",
"nketu7r9Y2z",
"L2uhbV7j9Ke",
"oI5mQF-SRj",
"l9ajOj9qR8",
"9Ogcc1QwHO3",
"cA3gNXQVdLV",
"OZ_ogfqg8Sh",
"ZoYiiKwYNML",
"n1PthJniQNB"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your detailed response and additional experiments. I maintain my score: 7 .",
" Dear authors,\n\n1. I have increased my score mainly due to your response regarding motivation of the model based time-series clustering. I strongly suggest adding the Frühwirth-Schnatter reference and also briefly discus... | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
7
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4,
3
] | [
"L2uhbV7j9Ke",
"nketu7r9Y2z",
"nips_2021_jar9C-V8GH",
"l9ajOj9qR8",
"9qdGIr5DLx_",
"cA3gNXQVdLV",
"ZoYiiKwYNML",
"OZ_ogfqg8Sh",
"n1PthJniQNB",
"nips_2021_jar9C-V8GH",
"nips_2021_jar9C-V8GH",
"nips_2021_jar9C-V8GH",
"nips_2021_jar9C-V8GH"
] |
nips_2021_bXehDYUjjXi | A Variational Perspective on Diffusion-Based Generative Models and Score Matching | Discrete-time diffusion-based generative models and score matching methods have shown promising results in modeling high-dimensional image data. Recently, Song et al. (2021) show that diffusion processes that transform data into noise can be reversed via learning the score function, i.e. the gradient of the log-density of the perturbed data. They propose to plug the learned score function into an inverse formula to define a generative diffusion process. Despite the empirical success, a theoretical underpinning of this procedure is still lacking. In this work, we approach the (continuous-time) generative diffusion directly and derive a variational framework for likelihood estimation, which includes continuous-time normalizing flows as a special case, and can be seen as an infinitely deep variational autoencoder. Under this framework, we show that minimizing the score-matching loss is equivalent to maximizing a lower bound of the likelihood of the plug-in reverse SDE proposed by Song et al. (2021), bridging the theoretical gap.
| accept | The main pros and cons that came up during reviews and discussions are:
pros:
* excellent paper with an elegant approach (k6w1, dz2y, xxp9, 99PV)
* new results opening up many interesting future works (dz2y, xxp9)
cons:
* framing in the context of prior work can be improved. (k6w1, dz2y)
* The advantages and disadvantages of the proposed bound over other bounds could be clarified. (k6w1)
* notation could be clarified. (k6w1)
* Section 6 can be improved in terms of clarity and can be expanded. (dz2y, 99PV)
* presentation of results can be improved: figures are hard to read and a discussion is missing. (xxp9)
* limited experimental results, although these are not the focus of the paper. (99PV)
* paper re-states too many existing theorems; the authors disagree, stating it is important for a self-contained paper. (99PV)
The authors and reviewers engaged in discussions during the discussion period and most of the concerns raised by the reviewers were addressed. 3/5 reviewers raised their scores, leading to a unanimous recommendation of acceptance. The recommended decision by the meta-reviewer is to accept the paper. | train | [
"LIHr_jsqHW",
"0YWKHgWLWxS",
"jM0oDsx0pPx",
"26TeEDTgBj4",
"4I6QcjkqH8c",
"toDTfztcJV",
"Zsax8weHHm",
"GbB5g3PEkth",
"QNnDwfxAD-",
"o6MMGot8iVc",
"yHJG2_cJBVO",
"XoI23u9fvNQ",
"VyQRpBJScb"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thanks to the authors for responding to my questions. The paper is enjoyable to read and I hope to see it accepted.",
"The paper develops a variational framework for general diffusion processes. A continuous-time ELBO is developed that extends the ELBO of discrete-time diffusion processes. They use this to show... | [
-1,
7,
-1,
-1,
7,
-1,
-1,
8,
-1,
-1,
-1,
-1,
7
] | [
-1,
3,
-1,
-1,
3,
-1,
-1,
2,
-1,
-1,
-1,
-1,
4
] | [
"26TeEDTgBj4",
"nips_2021_bXehDYUjjXi",
"o6MMGot8iVc",
"VyQRpBJScb",
"nips_2021_bXehDYUjjXi",
"XoI23u9fvNQ",
"GbB5g3PEkth",
"nips_2021_bXehDYUjjXi",
"nips_2021_bXehDYUjjXi",
"0YWKHgWLWxS",
"GbB5g3PEkth",
"4I6QcjkqH8c",
"nips_2021_bXehDYUjjXi"
] |
nips_2021_iKYO63MOWwi | Online Active Learning with Surrogate Loss Functions | We derive a novel active learning algorithm in the streaming setting for binary classification tasks. The algorithm leverages weak labels to minimize the number of label requests, and trains a model to optimize a surrogate loss on a resulting set of labeled and weak-labeled points. Our algorithm jointly admits two crucial properties: theoretical guarantees in the general agnostic setting and a strong empirical performance. Our theoretical analysis shows that the algorithm attains favorable generalization and label complexity bounds, while our empirical study on 18 real-world datasets demonstrates that the algorithm outperforms standard baselines, including the Margin Algorithm, or Uncertainty Sampling, a high-performing active learning algorithm favored by practitioners.
| accept | This paper proposes a fresh approach to active learning with non-zero-one losses. The proposed algorithm has theoretical guarantees that are competitive with existing works, while also performing better in practice than existing practical techniques, in a large set of experiments. This double advantage is of significant value to the advancement of the field. Moreover, the approach includes fresh ideas that could open up new avenues of investigation. Some reviewers have suggested additional experiments. We encourage the authors to add those to their final version.
| train | [
"ldSYJfVE3YG",
"G7SSlNEK3Pc",
"zFDg4p80W4X",
"8EFHpD2Ckq",
"SfCaCEscRie",
"OD4CjXpJ_en",
"9P9UOHFHVXc",
"FAEEKxXEn4O",
"sb-Dn_QLR0D",
"ufKvXLznZ8v",
"egQZSY4uYUE",
"RDtYMj27uf"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for further clarifications!",
" We thank the reviewer for kindly taking the time to read and respond to our comments. \n\n$\\textbf{On comparing algorithms with a fixed budget.}$\n\nWe will add to our empirical results the comparison based on a fixed labeling budget. Please observe that, when focusing... | [
-1,
-1,
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
-1,
-1,
4,
-1,
3,
-1,
-1,
-1,
4,
3
] | [
"G7SSlNEK3Pc",
"zFDg4p80W4X",
"8EFHpD2Ckq",
"SfCaCEscRie",
"nips_2021_iKYO63MOWwi",
"FAEEKxXEn4O",
"nips_2021_iKYO63MOWwi",
"RDtYMj27uf",
"egQZSY4uYUE",
"9P9UOHFHVXc",
"nips_2021_iKYO63MOWwi",
"nips_2021_iKYO63MOWwi"
] |
nips_2021_LuxVfHv-59s | Does Preprocessing Help Training Over-parameterized Neural Networks? | Zhao Song, Shuo Yang, Ruizhe Zhang | accept | This paper gives new insights into pre-processing in training over-parameterized neural networks. All reviewers recommend acceptance. | train | [
"0_NVXq5RGf",
"VyvamRohth",
"R5iK8u65fnj",
"O3lnau_WDjP",
"R0pSwpJTT2L",
"YbBKSe3vmuL",
"K0ipJfkNzf",
"k3xP7ElHLOk"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" The response has addressed my concerns and I have increased my score accordingly. I think this paper made good contributions on designing provable algorithms that can reduce the computation cost. Although the results are restricted into the two-layer NTK regime, I believe many of the ideas and proof techniques in... | [
-1,
7,
-1,
-1,
-1,
-1,
8,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"R5iK8u65fnj",
"nips_2021_LuxVfHv-59s",
"VyvamRohth",
"k3xP7ElHLOk",
"K0ipJfkNzf",
"nips_2021_LuxVfHv-59s",
"nips_2021_LuxVfHv-59s",
"nips_2021_LuxVfHv-59s"
] |
nips_2021_DXJl9826dm | Causal Influence Detection for Improving Efficiency in Reinforcement Learning | Many reinforcement learning (RL) environments consist of independent entities that interact sparsely. In such environments, RL agents have only limited influence over other entities in any particular situation. Our idea in this work is that learning can be efficiently guided by knowing when and what the agent can influence with its actions. To achieve this, we introduce a measure of situation-dependent causal influence based on conditional mutual information and show that it can reliably detect states of influence. We then propose several ways to integrate this measure into RL algorithms to improve exploration and off-policy learning. All modified algorithms show strong increases in data efficiency on robotic manipulation tasks.
| accept | I would like to congratulate the authors on an excellent investigation. This paper makes very nice progress in the combination of RL and causal inference, and all the reviewers and I found the work exciting and important.
Please make sure to address the reviewers' feedback in the final version of the paper. | train | [
"AiO7V4EXYTl",
"mCMXWfcFJO",
"rE7jq0tEO_b",
"03MOVRxYZmy",
"c9PFIlRErzq",
"yXWhZ5LUZpc",
"JgunJiTa-D9",
"Kl5XRUALXK",
"lrqi0Luwgp",
"c-y9QZ-2w_l",
"bB8bHCysx5d",
"rAmYcfdpd0I"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the clarifications. ",
"This work is built upon the idea of independent causal mechanisms in reinforcement learning environments where an agent only has limited inference over other entities in a given situation. Based on this premise, the authors introduce a measure called situation-dependent causal... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
8
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"rE7jq0tEO_b",
"nips_2021_DXJl9826dm",
"03MOVRxYZmy",
"c9PFIlRErzq",
"mCMXWfcFJO",
"rAmYcfdpd0I",
"bB8bHCysx5d",
"c-y9QZ-2w_l",
"nips_2021_DXJl9826dm",
"nips_2021_DXJl9826dm",
"nips_2021_DXJl9826dm",
"nips_2021_DXJl9826dm"
] |
nips_2021_eATOjMwxfUQ | LADA: Look-Ahead Data Acquisition via Augmentation for Deep Active Learning | Active learning effectively collects data instances for training deep learning models when the labeled dataset is limited and the annotation cost is high. Data augmentation is another effective technique to enlarge the limited amount of labeled instances. The scarcity of labeled data leads us to consider the integration of data augmentation and active learning. One possible approach is a pipelined combination, which selects informative instances via the acquisition function and generates virtual instances from the selected instances via augmentation. However, this pipelined approach would not guarantee the informativeness of the virtual instances. This paper proposes Look-Ahead Data Acquisition via augmentation, or the LADA framework, which looks ahead at the effect of data augmentation in the process of acquisition. LADA jointly considers both 1) the unlabeled data instance to be selected and 2) the virtual data instance to be generated by data augmentation, to construct the acquisition function. Moreover, to generate maximally informative virtual instances, LADA optimizes the data augmentation policy to maximize the predictive acquisition score, resulting in the proposal of InfoSTN and InfoMixup. The experimental results of LADA show a significant improvement over the recent augmentation and acquisition baselines that were independently applied.
| accept | The manuscript proposes to combine data augmentation with active learning. Instead of simply applying augmentation on top of the data selected by the acquisition function, the manuscript aims to also involve data augmentation in the data acquisition step. The data augmentation policy is optimized to increase the acquisition function score, and the resulting augmented data is then used in the search process.
Reviewers agreed that the idea of looking ahead and considering the informativeness of the augmentations for the data acquisition process is interesting and useful. As the authors point out, a similar idea has been considered; the key difference being that the augmentation scheme is learned instead of being fixed. Reviewer qwu7 also pointed out other related works on 'look-ahead' in the context of active learning. The authors emphasised that the novelty of the work lies in the look-ahead in conjunction with the augmentation of virtual instances. During rebuttal, Reviewer pwAg guided the authors to produce stronger baselines and a synergetic merge of active learning and semi-supervised learning.
| train | [
"0YOCTLBU2T",
"6DVq3rbpOhi",
"XRE29EtyuZ",
"x1mK4cDdPxk",
"VCoE6Mr2VlE",
"1nRCWTDzDKI",
"FvyI8bLAIGc",
"1tt6rOhxPkz",
"HfPwAyLzDBH",
"YNleSqzMZJS",
"MIUfsNiMWcu",
"YVICkhuJxOq",
"4dk_bG3GIMv",
"cE4Nu0fijiB",
"THFsWYhGy-",
"yVQWh_yyRXy",
"JuerY52-Rb",
"0bhN61s9vh4",
"EcN0cj1WYVt",... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_revi... | [
"This paper proposes an interesting alternative approach to active learning. The main idea is an interesting combination of active learning and data augmentation by selecting instances that are informative with and without augmentation. Moreover, the proposed approach allows to optime the data augmentation policy i... | [
7,
-1,
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
5,
-1,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_eATOjMwxfUQ",
"EcN0cj1WYVt",
"VCoE6Mr2VlE",
"nips_2021_eATOjMwxfUQ",
"HfPwAyLzDBH",
"HfPwAyLzDBH",
"nips_2021_eATOjMwxfUQ",
"qTFxyQsnyv",
"YNleSqzMZJS",
"MIUfsNiMWcu",
"YVICkhuJxOq",
"4dk_bG3GIMv",
"cE4Nu0fijiB",
"THFsWYhGy-",
"yVQWh_yyRXy",
"JuerY52-Rb",
"-gtHkQLPdLho",
... |
nips_2021_XeM4Lld0zTR | Policy Optimization in Adversarial MDPs: Improved Exploration via Dilated Bonuses | Haipeng Luo, Chen-Yu Wei, Chung-Wei Lee | accept | The paper proposes some interesting technical tools for online learning in MDPs with adversarial rewards, improving existing results in tabular MDPs and large-scale MDPs under realizable linear function approximation and bandit feedback. All reviewers appreciated the clear improvement over previous results and the novelty of the technical contribution, with some expert reviewers opining that the proposed techniques are likely to inspire future work in the field. Based on my own reading, I concur with this assessment and agree that the paper should definitely be accepted for publication at the conference. | train | [
"WOtiya7zye6",
"crOb49BqhzK",
"nvJsMCZFCHQ",
"0wa1Qr26fVS",
"F25BphypJt4",
"Cg66EeEg2w",
"jaBvRg9VTP_",
"PrTEqBWAvWR",
"sFtc8W7kB4",
"zO0Q1o8Ml5M",
"2H9iQdt7wX_"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for detailed answers to my questions. I keep my score unchanged.",
" Thank you for carefully addressing my concerns. My overall assessment of the paper remains unchanged.",
" Thanks for the response. My recommendation remains acceptance. ",
" Thank you for the constructive suggestions for our work... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4,
4
] | [
"F25BphypJt4",
"0wa1Qr26fVS",
"jaBvRg9VTP_",
"2H9iQdt7wX_",
"zO0Q1o8Ml5M",
"sFtc8W7kB4",
"PrTEqBWAvWR",
"nips_2021_XeM4Lld0zTR",
"nips_2021_XeM4Lld0zTR",
"nips_2021_XeM4Lld0zTR",
"nips_2021_XeM4Lld0zTR"
] |
nips_2021_MBxJ0ydw6b | Multiclass versus Binary Differentially Private PAC Learning | Satchit Sivakumar, Mark Bun, Marco Gaboardi | accept | This is an elegant paper establishing fundamental results for multiclass differentially private PAC learning. The reviewers found the paper well written and the results interesting. | train | [
"jLxt8H4AvYt",
"hPF9QDopGR0",
"OlL6tn0KyhQ",
"ClqGtEIorVo"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewers for their conscientious reading and evaluation of our work. \n\nSince the submission deadline, we were actually able to show that the multiclass Littlestone dimension of a hypothesis class with $k$ labels cannot be too much larger than the maximum Littlestone dimension over its binary restr... | [
-1,
7,
7,
7
] | [
-1,
3,
3,
4
] | [
"nips_2021_MBxJ0ydw6b",
"nips_2021_MBxJ0ydw6b",
"nips_2021_MBxJ0ydw6b",
"nips_2021_MBxJ0ydw6b"
] |
nips_2021_xmMHxfE1qS6 | Adversarially Robust Change Point Detection | Mengchu Li, Yi Yu | accept | The paper gives an adversarially robust way to perform change point detection in univariate time series analysis. Adversarial robustness in this context has not been considered before, and this paper provides a new algorithm as well as empirical evidence. | train | [
"XMNxkf2_vLl",
"7m8qLv1JaJI",
"0mRv4LXDwvd",
"QoUHDFP1cEo",
"uzSLrtaYNYy",
"TRPpKIZ2KOK"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a method that aims for *robust* detection change points in a time series in the presence of noise, against the following adversarial attacks: (i) hiding true change points; (ii) creating spurious change points; and (iii) increasing the localisation error rate. The key approach is to formulatin... | [
6,
-1,
-1,
-1,
8,
6
] | [
4,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_xmMHxfE1qS6",
"TRPpKIZ2KOK",
"XMNxkf2_vLl",
"uzSLrtaYNYy",
"nips_2021_xmMHxfE1qS6",
"nips_2021_xmMHxfE1qS6"
] |
nips_2021_-iu9-C_lan | Cycle Self-Training for Domain Adaptation | Mainstream approaches for unsupervised domain adaptation (UDA) learn domain-invariant representations to narrow the domain shift, which are empirically effective but theoretically challenged by the hardness or impossibility theorems. Recently, self-training has been gaining momentum in UDA, which exploits unlabeled target data by training with target pseudo-labels. However, as corroborated in this work, under distributional shift, the pseudo-labels can be unreliable in terms of their large discrepancy from target ground truth. In this paper, we propose Cycle Self-Training (CST), a principled self-training algorithm that explicitly enforces pseudo-labels to generalize across domains. CST cycles between a forward step and a reverse step until convergence. In the forward step, CST generates target pseudo-labels with a source-trained classifier. In the reverse step, CST trains a target classifier using target pseudo-labels, and then updates the shared representations to make the target classifier perform well on the source data. We introduce the Tsallis entropy as a confidence-friendly regularization to improve the quality of target pseudo-labels. We analyze CST theoretically under realistic assumptions, and provide hard cases where CST recovers target ground truth, while both invariant feature learning and vanilla self-training fail. Empirical results indicate that CST significantly improves over the state of the art on visual recognition and sentiment analysis benchmarks.
| accept | The reviewers unanimously agree that the paper should be accepted. I find the idea of the paper interesting and the empirical results seem encouraging. I am not convinced that the presented algorithm in its full complexity can be theoretically studied. The experimental results are encouraging but still fall behind the state of the art. Notation is cluttered and exposition is not as clear as it should be. I recommend acceptance as a poster, but please incorporate reviewers' feedback to improve the paper.
"VfJ8n-wpPVd",
"9kOY3scXbYw",
"uXdXeH1rf8f",
"Iwr1ujKjwn",
"CHLpaTttaN",
"F8-GqcKroZs",
"cDD_Jswbya9",
"4wIYOlvH-9s",
"B__Vplh90cm",
"dOn-Y6qbfB6"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Proposes cycle self-training (CST) an unsupervised domain adaptation (UDA) algorithm that cycles between standard self-training of a target classifier on (source model generated) target pseudolabels, and a new cycle self-training step that updates backbone representations so as to generalize the target classifier ... | [
7,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_-iu9-C_lan",
"nips_2021_-iu9-C_lan",
"nips_2021_-iu9-C_lan",
"F8-GqcKroZs",
"F8-GqcKroZs",
"uXdXeH1rf8f",
"9kOY3scXbYw",
"VfJ8n-wpPVd",
"dOn-Y6qbfB6",
"nips_2021_-iu9-C_lan"
] |
nips_2021_xWq1MVj7YrE | Novel Visual Category Discovery with Dual Ranking Statistics and Mutual Knowledge Distillation | In this paper, we tackle the problem of novel visual category discovery, i.e., grouping unlabelled images from new classes into different semantic partitions by leveraging a labelled dataset that contains images from other different but relevant categories. This is a more realistic and challenging setting than conventional semi-supervised learning. We propose a two-branch learning framework for this problem, with one branch focusing on local part-level information and the other branch focusing on overall characteristics. To transfer knowledge from the labelled data to the unlabelled, we propose using dual ranking statistics on both branches to generate pseudo labels for training on the unlabelled data. We further introduce a mutual knowledge distillation method to allow information exchange and encourage agreement between the two branches for discovering new categories, allowing our model to enjoy the benefits of global and local features. We comprehensively evaluate our method on public benchmarks for generic object classification, as well as the more challenging datasets for fine-grained visual recognition, achieving state-of-the-art performance.
| accept | This paper presents a novel class discovery technique based on dual ranking statistics and mutual knowledge distillation. The main idea is reasonable and shows superior performance compared to existing methods. All reviewers are positive about this paper and there are no particularly negative comments. However, the proposed method is lacking in theoretical support and the experiments are limited to small datasets and/or a relatively small number of unlabeled classes. Overall, I recommend accepting this paper for a poster presentation.
"6KX2Yzfrxed",
"xaMKlysgX4T",
"m2tAdue1nfx",
"iyhD0cNDbq_",
"heisoiGk-wT",
"olKlb-0alad",
"M2CDkdaxhC",
"z36c1SmYQB0",
"3FB3qAUglVr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper considers the problem of novel class discovery (NCD). Specifically, the authors propose a Dual Ranking Statistics method that can leverage both global and local factors for learning novel classes. In addition, a Mutual Knowledge Distillation approach is proposed to explore the relationship between the g... | [
8,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
5,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"nips_2021_xWq1MVj7YrE",
"nips_2021_xWq1MVj7YrE",
"3FB3qAUglVr",
"6KX2Yzfrxed",
"z36c1SmYQB0",
"M2CDkdaxhC",
"nips_2021_xWq1MVj7YrE",
"nips_2021_xWq1MVj7YrE",
"nips_2021_xWq1MVj7YrE"
] |
nips_2021_hx2Ckkzdf53 | Stochastic Anderson Mixing for Nonconvex Stochastic Optimization | Anderson mixing (AM) is an acceleration method for fixed-point iterations. Despite its success and wide usage in scientific computing, the convergence theory of AM remains unclear, and its applications to machine learning problems are not well explored. In this paper, by introducing damped projection and adaptive regularization to the classical AM, we propose a Stochastic Anderson Mixing (SAM) scheme to solve nonconvex stochastic optimization problems. Under mild assumptions, we establish the convergence theory of SAM, including the almost sure convergence to stationary points and the worst-case iteration complexity. Moreover, the complexity bound can be improved when randomly choosing an iterate as the output. To further accelerate the convergence, we incorporate a variance reduction technique into the proposed SAM. We also propose a preconditioned mixing strategy for SAM which can empirically achieve faster convergence or better generalization ability. Finally, we apply the SAM method to train various neural networks including the vanilla CNN, ResNets, WideResNet, ResNeXt, DenseNet and LSTM. Experimental results on image classification and language modeling demonstrate the advantages of our method.
| accept | This paper studies Anderson mixing for stochastic optimization. While the paper is on the borderline, the reviewers mostly found the paper to be interesting and believe that the approach can potentially have some practical use in training neural networks. The presentation is one of the primary criticisms of the paper. After going through the paper, I tend to agree with the reviewers. In particular:
(1) The paper needs to be revised in order to make the theoretical details easily digestible to the readers.
(2) The authors also need to add more discussion on the comparison with quasi-Newton methods.
(3) Also, I would like to see some empirical comparison of optimizers in the main paper with respect to the *wall-time* rather than number of epochs.
I expect the authors to make these changes in the final version. | train | [
"SKnuV4edJZ",
"BAXG6zTvAoe",
"J5z--KwIQCu",
"2ppVAJK7tW",
"SeYwrN36Ge5",
"SyO-IN6b2ly",
"j793oMp98_",
"tWfDktn4EIk",
"Am5wMZ7Mlvx"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a stochastic Anderson mixing method to solve non-convex stochastic optimization problems. The main contribution is a convergence theory and the applications of this method for deep learning problems. Adaptive, variance reduced, and preconditioned versions of this methods are also studied. \n ... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"nips_2021_hx2Ckkzdf53",
"nips_2021_hx2Ckkzdf53",
"SKnuV4edJZ",
"Am5wMZ7Mlvx",
"tWfDktn4EIk",
"j793oMp98_",
"nips_2021_hx2Ckkzdf53",
"nips_2021_hx2Ckkzdf53",
"nips_2021_hx2Ckkzdf53"
] |
nips_2021_J2YvvXDp7H | Sample-Efficient Reinforcement Learning for Linearly-Parameterized MDPs with a Generative Model | Bingyan Wang, Yuling Yan, Jianqing Fan | accept | The paper initially received mixed reviews, with several reviewers taking issue with the strength of the assumptions. After reading the very detailed author response and some further discussion, two reviewers have decided to raise their scores, so eventually all scores ended up being positive. The reviewers agreed that the paper offers an interesting technical contribution clearly improving the previous state of the art, and that it is worthy of being published at the conference. Based on my own reading, I concur with this assessment and strongly recommend this paper for acceptance.
"JJNJBMvC7Ay",
"8DM14U3p-r",
"6JQ-XqMYIAh",
"kQTsv9VO2AM",
"cVYAx9E_LeE",
"_Lomm-VArU",
"KfJifyghrY",
"GVDKRy1u6-",
"Obdp0l-GrlI",
"P6Cuc0fG10",
"TdAfTqxY5K7",
"YY4VF8bp2-w"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper considers reinforcement learning problem in infinite-horizon, discounted reward MDPs with a generative model. The authors aim to obtain upper bounds on the number of samples required to efficiently learn the optimal policy or the optimal $Q$-function when number of states and actions are very large. In o... | [
7,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
3,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"nips_2021_J2YvvXDp7H",
"Obdp0l-GrlI",
"kQTsv9VO2AM",
"KfJifyghrY",
"nips_2021_J2YvvXDp7H",
"YY4VF8bp2-w",
"cVYAx9E_LeE",
"cVYAx9E_LeE",
"JJNJBMvC7Ay",
"TdAfTqxY5K7",
"nips_2021_J2YvvXDp7H",
"nips_2021_J2YvvXDp7H"
] |
nips_2021_5TuGBbNSyAc | NN-Baker: A Neural-network Infused Algorithmic Framework for Optimization Problems on Geometric Intersection Graphs | Recent years have witnessed a surge of approaches to use neural networks to help tackle combinatorial optimization problems, including graph optimization problems. However, theoretical understanding of such approaches remains limited. In this paper, we consider the geometric setting, where graphs are induced by points in a fixed-dimensional Euclidean space. We show that several graph optimization problems can be approximated by an algorithm that is polynomial in graph size n via a framework we propose, called the Baker-paradigm. More importantly, a key advantage of the Baker-paradigm is that it decomposes the input problem into (at most a linear number of) small sub-problems of fixed sizes (independent of the size of the input). For the family of such fixed-size sub-problems, we can now design neural networks with universal approximation guarantees to solve them. This leads to a mixed algorithmic-ML framework, which we call NN-Baker, that has the capacity to approximately solve a family of graph optimization problems (e.g., maximum independent set and minimum vertex cover) in time linear in the input graph size, and only polynomial in the approximation parameter. We instantiate our NN-Baker with a CNN version and a GNN version, and demonstrate the effectiveness and efficiency of our approach via a range of experiments.
| accept | All reviewers found the paper interesting and appreciated the articulation with theoretical guarantees.
The reviewers have asked a number of questions and made a number of comments: the authors are strongly encouraged to take them into account and to use the elements that they provided in their responses to the reviewers to prepare the final version of the paper.
"W55tZXS_Cb9",
"_3fjYJhFxMd",
"FQfRjnSTR-_",
"ADQOzaylMXD",
"2gRexJKur5V",
"MKuulgtU99M",
"aQjttayv2J",
"M1wkVSGEhjm",
"UhkHkepN0k",
"b0xkLC5ChO0",
"3u1Y3tVXK38",
"y8-Wh6ZEgzN"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your time and comments!",
" Thanks for your time and comments! ",
" I appreciate the thorough response! I stand by my score and hope the paper will be accepted.",
" Thank you for the response. \n\nI read the response and other reviews. I think this paper contains good contributions. I maintain my... | [
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"FQfRjnSTR-_",
"ADQOzaylMXD",
"b0xkLC5ChO0",
"M1wkVSGEhjm",
"aQjttayv2J",
"nips_2021_5TuGBbNSyAc",
"UhkHkepN0k",
"y8-Wh6ZEgzN",
"MKuulgtU99M",
"3u1Y3tVXK38",
"nips_2021_5TuGBbNSyAc",
"nips_2021_5TuGBbNSyAc"
] |
nips_2021_WKR5ZUs91n | A Note on Sparse Generalized Eigenvalue Problem | The sparse generalized eigenvalue problem (SGEP) aims to find the leading eigenvector with sparsity structure. SGEP plays an important role in statistical learning and has wide applications including, but not limited to, sparse principal component analysis, sparse canonical correlation analysis and sparse Fisher discriminant analysis, etc. Due to the sparsity constraint, the solution of SGEP entails interesting properties from both numerical and statistical perspectives. In this paper, we provide a detailed sensitivity analysis for SGEP and establish the rate-optimal perturbation bound under the sparse setting. Specifically, we show that the bound is related to the perturbation/noise level and the recovery of the true support of the leading eigenvector as well. We also investigate the estimator of SGEP via imposing a non-convex regularization. Such an estimator can achieve the optimal error rate and can recover the sparsity structure as well. Extensive numerical experiments corroborate our theoretical findings using an alternating direction method of multipliers (ADMM)-based computational method.
| accept | The main contribution of the paper is to give matching upper and lower bounds on the approximation error in the sparse generalized eigenproblem setting. This is an important contribution. To help better position the contribution, it may be best for the authors to add an explicit discussion of why the bounds are optimal.
"3wpXKrEQUos",
"huXgFx7YN7B",
"nVB1zAG7VGZ",
"85VXiE8ewpc",
"wnaCn3_79Pm",
"vK5dyhKWOhj",
"Af1Evqkh0Kw",
"OcK60ngfktS"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper studies a class of sparse generalized eigenproblems which encompasses several common problems in machine learning. For this class of problems it gives upper and lower perturbation bounds which are shown to be rate-optimal. The authors then present an optimization problem with a non-convex penalty functio... | [
7,
-1,
6,
-1,
-1,
-1,
-1,
6
] | [
3,
-1,
3,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_WKR5ZUs91n",
"Af1Evqkh0Kw",
"nips_2021_WKR5ZUs91n",
"nVB1zAG7VGZ",
"OcK60ngfktS",
"3wpXKrEQUos",
"nVB1zAG7VGZ",
"nips_2021_WKR5ZUs91n"
] |
nips_2021_83SeeJals7j | RMIX: Learning Risk-Sensitive Policies for Cooperative Reinforcement Learning Agents | Current value-based multi-agent reinforcement learning methods optimize individual Q values to guide individuals' behaviours via centralized training with decentralized execution (CTDE). However, such expected, i.e., risk-neutral, Q value is not sufficient even with CTDE due to the randomness of rewards and the uncertainty in environments, which causes the failure of these methods to train coordinating agents in complex environments. To address these issues, we propose RMIX, a novel cooperative MARL method with the Conditional Value at Risk (CVaR) measure over the learned distributions of individuals' Q values. Specifically, we first learn the return distributions of individuals to analytically calculate CVaR for decentralized execution. Then, to handle the temporal nature of the stochastic outcomes during executions, we propose a dynamic risk level predictor for risk level tuning. Finally, we optimize the CVaR policies with CVaR values used to estimate the target in TD error during centralized training, and the CVaR values are used as auxiliary local rewards to update the local distribution via Quantile Regression loss. Empirically, we show that our method outperforms many state-of-the-art methods on various multi-agent risk-sensitive navigation scenarios and challenging StarCraft II cooperative tasks, demonstrating enhanced coordination and revealing improved sample efficiency.
| accept | All reviewers felt the author rebuttals satisfactorily addressed their concerns and three reviewers increased their scores as a result. Overall, there was a unanimous decision to accept. The authors are strongly encouraged to incorporate review comments and rebuttal discussion into their final revision. | train | [
"lxQmFKRc6Vh",
"ZRmpRGs1Kgo",
"_TUJvHWNjj2",
"olOSdK9SQgL",
"dUXjDg_p5y7",
"vc47bjlHoH0",
"w25LncyCTlO",
"_QT-vhpcxm",
"W1El505xtMT",
"p-hVgUgofhv",
"iJDOLWm5pHo",
"_o_Q6Z9VWD",
"XHWl7ptd8Ym",
"ZTzJAUlGEQf",
"Ucfim9TmtVj",
"ijA-LQDN9Oa",
"fN29A_kUBET",
"AbrGE8mJVIE"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" Dear Review 5UXh,\n\nWe are glad to hear that our previous responses addressed your concerns. Thank you for raising the score.",
" Dear Reviewer iywX,\n\nThank you very much for raising the score.",
" Dear Reviewer CWF4,\n\nWe are glad to hear that our response helped you understand our method. Thank you very... | [
-1,
-1,
-1,
7,
-1,
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
3,
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"dUXjDg_p5y7",
"W1El505xtMT",
"w25LncyCTlO",
"nips_2021_83SeeJals7j",
"_o_Q6Z9VWD",
"nips_2021_83SeeJals7j",
"p-hVgUgofhv",
"nips_2021_83SeeJals7j",
"ZTzJAUlGEQf",
"vc47bjlHoH0",
"nips_2021_83SeeJals7j",
"olOSdK9SQgL",
"ZTzJAUlGEQf",
"ijA-LQDN9Oa",
"vc47bjlHoH0",
"_QT-vhpcxm",
"AbrGE... |