paper_id | paper_title | paper_abstract | paper_acceptance | meta_review | label | review_ids | review_writers | review_contents | review_ratings | review_confidences | review_reply_tos |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2021_l7-DBWawSZH | Optimal Policies Tend To Seek Power | Some researchers speculate that intelligent reinforcement learning (RL) agents would be incentivized to seek resources and power in pursuit of their objectives. Other researchers are skeptical, because RL agents need not have human-like power-seeking instincts. To clarify this debate, we develop the first formal theory of the statistical tendencies of optimal policies. In the context of Markov decision processes, we prove that certain environmental symmetries are sufficient for optimal policies to tend to seek power over the environment. These symmetries exist in many environments in which the agent can be shut down or destroyed. We prove that in these environments, most reward functions make it optimal to seek power by keeping a range of options available and, when maximizing average reward, by navigating towards larger sets of potential terminal states.
| accept | The discussion with the authors helped clear all of the reviewers' concerns. The scores were all updated to clear accept. | val | [
"FLqGCE72T4t",
"X7YA4kMpLHm",
"Xz2PY159LQ4",
"EoIv6jsMRmY",
"b-P07w6vtG6",
"JncEfraEDTG",
"eM2JAW7VrBB",
"kDlGDpzHG0V",
"GB-JPRWOch",
"5zLMd6fnNis",
"D7QsNlbqbNH",
"2hZDXEWu1RM",
"f9Fe0NanWgJ",
"MzqYbabo6ci",
"wm8wmKLkX9g",
"eKWo0cqA8fM",
"Ds1glyHZo57",
"hSiYRIJPuDh",
"v0l1_pAoEY... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"of... | [
"The paper formalises the notion of an agent's power in a state $s$ as the average optimal value of that state, which the authors claim can be measured using a related power function. The paper posits that most optimal policies tend to seek states that achieve a higher value of power according to the power function... | [
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_l7-DBWawSZH",
"nips_2021_l7-DBWawSZH",
"EoIv6jsMRmY",
"b-P07w6vtG6",
"XtirZtt-8uj",
"eM2JAW7VrBB",
"kDlGDpzHG0V",
"bq92ID1J8y-",
"MzqYbabo6ci",
"wm8wmKLkX9g",
"2hZDXEWu1RM",
"f9Fe0NanWgJ",
"oAfxclHMIrl",
"Ds1glyHZo57",
"eKWo0cqA8fM",
"7NuduTLZmLI",
"7NuduTLZmLI",
"X7YA4k... |
nips_2021_Pgv4fwfh63L | Catalytic Role Of Noise And Necessity Of Inductive Biases In The Emergence Of Compositional Communication | Communication is compositional if complex signals can be represented as a combination of simpler subparts. In this paper, we theoretically show that inductive biases on both the training framework and the data are needed to develop compositional communication. Moreover, we prove that compositionality spontaneously arises in signaling games, where agents communicate over a noisy channel. We experimentally confirm that a range of noise levels, which depends on the model and the data, indeed promotes compositionality. Finally, we provide a comprehensive study of this dependence and report results in terms of recently studied compositionality metrics: topographical similarity, conflict count, and context independence.
| accept | The paper studies the effect of a noisy channel in an emergent communication setting. The authors formally prove that, under fairly restrictive assumptions (type of noise, loss functions used, capacity of the channel, etc.), noise in the communication channel promotes the emergence of compositional languages. The paper presents extensive empirical results studying the emergence of compositional languages in conditions that also diverge from the theoretical ones.
--
This paper offers the emergent-communication literature a first, restrictive but interesting, formal result on the emergence of compositional languages. All reviewers agree that the paper is interesting and the experiments are extensive. The section on the limitations of the current approach and the restrictive assumptions of the theorems was found useful and important for inspiring future work. I suggest the authors incorporate the reviewers' valuable feedback, and especially: (a) the sheer amount of experiments is hard to process; one possible direction for improvement would be to add a synthesis of the overall trends and most important observations; (b) improve the writing and clarity of Section 3, expanding on the relevance and interpretation of the theorem and the task setup.
Overall, I think this paper brings a worthy contribution to the field and can inspire future theoretical work to relax the strict assumptions considered here. Therefore, I recommend this paper for acceptance.
| train | [
"Mbw3Vsscafk",
"IVtCUSqKz3X",
"zHoocvUdgr",
"MFjMz7aBY3",
"D8-xmL5vPva",
"I3JJ0-JxrKP",
"yziRQy_dew",
"6m1iBeK-lJX",
"9ujVCTZQ68i",
"z01NbxHZ3ik",
"IkdVWp9JqJ3"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. I agree that the fact that the environments considered in this paper are small-scale should not count against the paper (as it is common in the emergent communication literature), and I also agree that the number of experiments and ablations in this paper is extensive, which is one of the... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"yziRQy_dew",
"zHoocvUdgr",
"I3JJ0-JxrKP",
"9ujVCTZQ68i",
"z01NbxHZ3ik",
"IkdVWp9JqJ3",
"6m1iBeK-lJX",
"nips_2021_Pgv4fwfh63L",
"nips_2021_Pgv4fwfh63L",
"nips_2021_Pgv4fwfh63L",
"nips_2021_Pgv4fwfh63L"
] |
nips_2021_GEm4o9A6Jfb | PLUR: A Unifying, Graph-Based View of Program Learning, Understanding, and Repair | Machine learning for understanding and editing source code has recently attracted significant interest, with many developments in new models, new code representations, and new tasks. This proliferation can appear disparate and disconnected, making each approach seemingly unique and incompatible, thus obscuring the core machine learning challenges and contributions. In this work, we demonstrate that the landscape can be significantly simplified by taking a general approach of mapping a graph to a sequence of tokens and pointers. Our main result is to show that 16 recently published tasks of different shapes can be cast in this form, based on which a single model architecture achieves near or above state-of-the-art results on nearly all tasks, outperforming custom models like code2seq and alternative generic models like Transformers. This unification further enables multi-task learning and a series of cross-cutting experiments about the importance of different modeling choices for code understanding and repair tasks. The full framework, called PLUR, is easily extensible to more tasks, and will be open-sourced (https://github.com/google-research/plur).
| accept | This paper presents a unifying, graph-based ML architecture for understanding and editing source code. The method, called PLUR, takes the general approach of mapping a graph to a sequence of tokens and pointers, a form into which many ML4Code tasks can be cast. The authors show that PLUR admits 15 recently published, diverse tasks, enabling a single model architecture to achieve near or above state-of-the-art results on them. PLUR is extensible to new problems beyond these initial 15. Significantly, its unifying framework enables approaches like multi-task learning and experiments on the importance of different modelling choices for ML4Code. These experiments, several of which are conducted in the present work, should guide the community toward robust improvements in ML4Code model development.
Reviewers are in consensus that the paper is above the NeurIPS acceptance threshold, with one championing it as a top-50% submission. They agree that it is technically strong, clear, and significant, as do I. The variety of tasks covered by the experiments is persuasive. Two reviewers mentioned that the framework does not exactly innovate on the modelling side, since it is based on the existing Graph2ToCoPo formulation; however, its unification of previously disconnected ML4Code approaches simplifies this important problem space in an insightful, clarifying manner that will almost certainly impact the field going forward. I recommend the paper for acceptance. | train | [
"Tvx0Jt0ulU",
"NG5RhkiaK2",
"bzFGexAKeMm",
"i3XBi4gQtEk",
"BvRiFm2Ie8",
"r7ZGzojP1ky",
"QlYCLA537O",
"ZheRkWBNk0J",
"Di2ue6b8uL",
"yqhJGHzPMh",
"U3OpYjhNcx_",
"kgFiQt4v-Gx"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your detailed response! This makes sense and is very insightful. I'll keep my review of Accept.",
" Thank you, this is very insightful! I increased my score to a clear accept.",
"The paper presents a general architecture which unifies fifteen previously\nseparate ML4Code tasks and achieves state-of... | [
-1,
-1,
8,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"ZheRkWBNk0J",
"QlYCLA537O",
"nips_2021_GEm4o9A6Jfb",
"nips_2021_GEm4o9A6Jfb",
"QlYCLA537O",
"i3XBi4gQtEk",
"bzFGexAKeMm",
"kgFiQt4v-Gx",
"U3OpYjhNcx_",
"nips_2021_GEm4o9A6Jfb",
"nips_2021_GEm4o9A6Jfb",
"nips_2021_GEm4o9A6Jfb"
] |
nips_2021_pk4q0SD_r1X | COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining | We present a self-supervised learning framework, COCO-LM, that pretrains Language Models by COrrecting and COntrasting corrupted text sequences. Following ELECTRA-style pretraining, COCO-LM employs an auxiliary language model to corrupt text sequences, upon which it constructs two new tasks for pretraining the main model. The first token-level task, Corrective Language Modeling, is to detect and correct tokens replaced by the auxiliary model, in order to better capture token-level semantics. The second sequence-level task, Sequence Contrastive Learning, is to align text sequences originating from the same source input while ensuring uniformity in the representation space. Experiments on GLUE and SQuAD demonstrate that COCO-LM not only outperforms recent state-of-the-art pretrained models in accuracy, but also improves pretraining efficiency. It achieves the MNLI accuracy of ELECTRA with 50% of its pretraining GPU hours. With the same number of pretraining steps as standard base/large-sized models, COCO-LM outperforms the previous best models by 1+ GLUE average points.
| accept | This work improves the ELECTRA language pretraining approach by introducing a Corrective Language Modeling task (so the model can generate words) and Sequence Contrastive Learning (so sentence representations are more informative). I agree with the reviewers that the authors have conducted extensive experiments to show that the approach is effective, and the analyses are also very thorough and insightful, e.g., analyzing cosine similarities of sentence vectors. There were a few points on which the original paper wasn't convincing, e.g., COCO-LM wasn't better than ELECTRA in the Large++ setup, and only word similarity was used to demonstrate the main claim that COCO-LM has better language modeling capabilities. The new results added for the Large++ and Few-Shot Prompt-Based Fine-Tuning settings convincingly addressed these concerns; hence, I recommend Accept. | test | [
"CCw3AyVANk5",
"wnaifbo287w",
"BLn_SGav0Yz",
"plKzbp0g4to",
"cs4KGEZFRU",
"r6wYWBZ87_c",
"S3JpHlwGk3",
"aTQZIjWECle",
"f1_6S5fDYrn"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper tackles two challenges in current self-supervised learning frameworks, including the pre-training efficiency and the anisotropy of text representations. To address these two challenges, the authors propose the COCO-LM framework that pre-trains language models by correcting and contrasting corrupted text... | [
6,
7,
7,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
5
] | [
"nips_2021_pk4q0SD_r1X",
"nips_2021_pk4q0SD_r1X",
"nips_2021_pk4q0SD_r1X",
"nips_2021_pk4q0SD_r1X",
"CCw3AyVANk5",
"wnaifbo287w",
"BLn_SGav0Yz",
"f1_6S5fDYrn",
"nips_2021_pk4q0SD_r1X"
] |
nips_2021_PqkKlKQuGZw | Minibatch and Momentum Model-based Methods for Stochastic Weakly Convex Optimization | Stochastic model-based methods have received increasing attention lately due to their appealing robustness to the stepsize selection and provable efficiency guarantees. We make two important extensions for improving model-based methods on stochastic weakly convex optimization. First, we propose new minibatch model-based methods by involving a set of samples to approximate the model function in each iteration. For the first time, we show that stochastic algorithms achieve linear speedup over the batch size even for non-smooth and non-convex (particularly, weakly convex) problems. To this end, we develop a novel sensitivity analysis of the proximal mapping involved in each algorithm iteration. Our analysis appears to be of independent interest in more general settings. Second, motivated by the success of momentum stochastic gradient descent, we propose a new stochastic extrapolated model-based method, greatly extending the classic Polyak momentum technique to a wider class of stochastic algorithms for weakly convex optimization. The rate of convergence to some natural stationarity condition is established over a fairly flexible range of extrapolation terms. While mainly focusing on weakly convex optimization, we also extend our work to convex optimization. We apply the minibatch and extrapolated model-based methods to stochastic convex optimization, for which we provide a new complexity bound and promising linear speedup in batch size. Moreover, an accelerated model-based method based on Nesterov's momentum is presented, for which we establish an optimal complexity bound for reaching optimality.
| accept | This paper analyzes two extensions of the recently studied (stochastic) model based optimization methods: i) minibatch variant, ii) momentum variant. The authors focus on a class of nonsmooth nonconvex functions which become convex after sufficiently strong Tikhonov regularization. The analysis approach is based on leveraging a recent result by Davis and Drusvyatskiy characterizing the degree of stationarity of a point via the gradient of its Moreau envelope, and on utilizing tools from algorithmic stability literature. The authors recover several existing and several new convergence results.
The reviewers assessed the paper favorably. There was a reasonable discussion between the reviewers and the authors. I have read the paper myself and agree that the work is of sufficient quality to be published in NeurIPS. I have noticed a number of grammatical and stylistic issues and would recommend that the authors carefully proofread the paper. | val | [
"Ce4NX11w5pN",
"g5N6ddOvDed",
"aVpCOVCPR38",
"_egdjAmPSM",
"eMEBb9B29UJ",
"fP_1A6NHLv",
"0f4OZAlWDDV",
"saM633g5IO",
"g7d8g8u9sUa",
"wGGk737Js6s",
"-_xrGG3mE_s"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed response!\n\nFor Question 1, I understand that searching for the best initial stepsize is needed, and I am more clear about it after reading your explanation. I assume that the choice for stepsize should decrease w.r.t. $m$ since $\\sqrt{m}$ is in the denominator, but in Figure 2, we ca... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
8,
6
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"saM633g5IO",
"_egdjAmPSM",
"fP_1A6NHLv",
"g7d8g8u9sUa",
"nips_2021_PqkKlKQuGZw",
"0f4OZAlWDDV",
"eMEBb9B29UJ",
"wGGk737Js6s",
"-_xrGG3mE_s",
"nips_2021_PqkKlKQuGZw",
"nips_2021_PqkKlKQuGZw"
] |
nips_2021_WDLf8cTq_V8 | XDO: A Double Oracle Algorithm for Extensive-Form Games | Policy Space Response Oracles (PSRO) is a reinforcement learning (RL) algorithm for two-player zero-sum games that has been empirically shown to find approximate Nash equilibria in large games. Although PSRO is guaranteed to converge to an approximate Nash equilibrium and can handle continuous actions, it may take an exponential number of iterations as the number of information states (infostates) grows. We propose Extensive-Form Double Oracle (XDO), an extensive-form double oracle algorithm for two-player zero-sum games that is guaranteed to converge to an approximate Nash equilibrium linearly in the number of infostates. Unlike PSRO, which mixes best responses at the root of the game, XDO mixes best responses at every infostate. We also introduce Neural XDO (NXDO), where the best response is learned through deep RL. In tabular experiments on Leduc poker, we find that XDO achieves an approximate Nash equilibrium in a number of iterations an order of magnitude smaller than PSRO. Experiments on a modified Leduc poker game and Oshi-Zumo show that tabular XDO achieves a lower exploitability than CFR with the same amount of computation. We also find that NXDO outperforms PSRO and NFSP on a sequential multidimensional continuous-action game. NXDO is the first deep RL method that can find an approximate Nash equilibrium in high-dimensional continuous-action sequential games.
| accept | This submission had a range of scores across reviewers, with votes both for reject and accept. However, there is actually a strong consensus on the strengths and weaknesses of the paper. Conceptually, it's a fairly straightforward idea, but one that may be worth exploring in light of the performance of other double-oracle methods. However, all reviewers agreed that it is not a particularly novel direction.
The numerical evidence for the approach is not very positive: the authors show good results on games that they designed in order to get good performance from their proposed algorithms. On the existing games that were tried, the proposed algorithms had fairly weak performance. At the same time, the reviewers all felt that the authors did a good job experimentally: they tried a good number of small and medium-sized games and show that XDO/NXDO performs poorly in certain settings and quite well in other (somewhat contrived) settings. Thus one could argue that the paper is a pretty good reference point for the method: perhaps the approach is not that useful overall, and this paper lays out that performance in a reasonable way.
The authors are strongly encouraged to include the additional experiments from the rebuttals. | val | [
"0xh-EJueoRd",
"cIPGqVJ71Ib",
"d2fvcC5R-jS",
"7ft0ZIrcClV",
"4JO5jCtdu1h",
"ZXdwu09kiJ2",
"oJ7hXQ7X9q",
"auGS0Y9bGA",
"9J2m4InWwSz",
"of_xis_R47j",
"9MXznOlytYF",
"qo-slwx_rgd",
"AM0-fbIOVrP",
"Sh2DUm9CoV",
"CO4VgR3dkyX",
"H7BsrHy_SUu",
"z5hQnzJnkfD",
"WvzqdpDGcpf",
"opYaiGOvh-K"... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_re... | [
" That makes sense, thanks. I'm still not sure that would work because it won't work for regret matching in normal form games with added actions, but I will try this out for XDO. ",
" >Aren’t there cases where we add actions that don’t add entire new IISGs but still contain some descendants whose opponent infose... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"cIPGqVJ71Ib",
"d2fvcC5R-jS",
"7ft0ZIrcClV",
"4JO5jCtdu1h",
"ZXdwu09kiJ2",
"oJ7hXQ7X9q",
"of_xis_R47j",
"nips_2021_WDLf8cTq_V8",
"CO4VgR3dkyX",
"9MXznOlytYF",
"qo-slwx_rgd",
"AM0-fbIOVrP",
"H7BsrHy_SUu",
"H7BsrHy_SUu",
"auGS0Y9bGA",
"5ozae752bX",
"Bm02j5sKV99",
"opYaiGOvh-K",
"ni... |
nips_2021_7CKWdq2yQv | Active Assessment of Prediction Services as Accuracy Surface Over Attribute Combinations | Our goal is to evaluate the accuracy of a black-box classification model, not as a single aggregate on a given test data distribution, but as a surface over a large number of combinations of attributes characterizing multiple test data distributions. Such attributed accuracy measures become important as machine learning models get deployed as a service, where the training data distribution is hidden from clients, and different clients may be interested in diverse regions of the data distribution. We present Attributed Accuracy Assay (AAA) --- a Gaussian Process (GP)-based probabilistic estimator for such an accuracy surface. Each attribute combination, called an 'arm', is associated with a Beta density from which the service's accuracy is sampled. We expect the GP to smooth the parameters of the Beta density over related arms to mitigate sparsity. We show that the obvious application of GPs cannot address the challenge of heteroscedastic uncertainty over a huge attribute space that is sparsely and unevenly populated. In response, we present two enhancements: pooling sparse observations, and regularizing the scale parameter of the Beta densities. After introducing these innovations, we establish the effectiveness of AAA both in terms of its estimation accuracy and exploration efficiency, through extensive experiments and analysis.
| accept | The reviewers are in consensus that this submission represents an interesting, well-motivated contribution to a field that I personally find is under-represented and under-valued at conferences: model evaluation. The reviewers are slightly concerned about the explanation of some results and how they are presented -- I hope the authors will take these comments to heart, and I encourage them to incorporate these changes in preparing the camera-ready version of their manuscript. | train | [
"JiYH1H2YF7S",
"X8eLv1CkN8E",
"dy8gmU_x7FZ",
"Z6KYuaSQGqF",
"nxa7moiwqM7",
"szpb4NoM8tw",
"hZ_p8i9-29e"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for providing additional experimental results! I think the results can use more explanation on how they are generated but overall look good.",
" This paper describes a method to estimate the predictive performance of a black-box classifier on different tasks depending on the subsets of attributes ... | [
-1,
6,
-1,
-1,
-1,
7,
7
] | [
-1,
4,
-1,
-1,
-1,
3,
4
] | [
"Z6KYuaSQGqF",
"nips_2021_7CKWdq2yQv",
"nxa7moiwqM7",
"hZ_p8i9-29e",
"X8eLv1CkN8E",
"nips_2021_7CKWdq2yQv",
"nips_2021_7CKWdq2yQv"
] |
nips_2021_FP7_Q6Np8w | A mechanistic multi-area recurrent network model of decision-making | Recurrent neural networks (RNNs) trained on neuroscience-based tasks have been widely used as models for cortical areas performing analogous tasks. However, very few tasks involve a single cortical area, and instead require the coordination of multiple brain areas. Despite the importance of multi-area computation, there is a limited understanding of the principles underlying such computation. We propose to use multi-area RNNs with neuroscience-inspired architecture constraints to derive key features of multi-area computation. In particular, we show that incorporating multiple areas and Dale's Law is critical for biasing the networks to learn biologically plausible solutions. Additionally, we leverage the full observability of the RNNs to show that output-relevant information is preferentially propagated between areas. These results suggest that cortex uses modular computation to generate minimal sufficient representations of task information. More broadly, our results suggest that constrained multi-area RNNs can produce experimentally testable hypotheses for computations that occur within and across multiple brain areas, enabling new insights into distributed computation in neural systems.
| accept | The reviewers appreciated the clear presentation of the paper, the interesting observation that a three-area network can qualitatively reproduce the results of the Mante et al paper with regards to neural selectivity, and the thorough investigation of different hyper-parameters influencing network behaviour. However, and after extensive internal discussions in which all reviewers participated (even if they might not have updated their review reports), it remained unclear what the _causes_ of this observation were -- in part because the paper (in contrast to some of its claims) mostly made statements about _sufficient_ ingredients of the networks, and not about _necessary_ ones (e.g. regarding the role of Dale's law). More broadly, the study would have benefited from a clearer statement about _why_ a three-area (and not, e.g. a two-area) network can achieve the desired behavior. We hope that the feedback from the reviewers will be useful for you. | train | [
"InQPVEwmUr",
"ZVZFxT_MrHP",
"daajqJL0oY",
"hRlTub1NZ4M",
"dOGTIa0-wI",
"WWxmUBRPoz",
"f-hPmHukq2n",
"Gzsl_xM18M",
"HY8d-YfJVXq",
"WeWQyZQLL49",
"n-aY1duJ_xS",
"-jkiCkYDwGT",
"q80LLEvl_mY",
"zcI29ZGvd7v"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the reply. We agree and had performed additional analyses, also prompted by Reviewer EDPH. \n\n**In response to the number of areas:** The primary mechanism through which only direction information is present by the last area was through the preferential alignment of the direction axis with the top sin... | [
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
8,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"dOGTIa0-wI",
"HY8d-YfJVXq",
"f-hPmHukq2n",
"WeWQyZQLL49",
"n-aY1duJ_xS",
"nips_2021_FP7_Q6Np8w",
"Gzsl_xM18M",
"q80LLEvl_mY",
"zcI29ZGvd7v",
"-jkiCkYDwGT",
"WWxmUBRPoz",
"nips_2021_FP7_Q6Np8w",
"nips_2021_FP7_Q6Np8w",
"nips_2021_FP7_Q6Np8w"
] |
nips_2021_fzwx-pzQGxe | Learning to Compose Visual Relations | The visual world around us can be described as a structured set of objects and their associated relations. An image of a room may be conjured given only the description of the underlying objects and their associated relations. While there has been significant work on designing deep neural networks which may compose individual objects together, less work has been done on composing the individual relations between objects. A principal difficulty is that while the placement of objects is mutually independent, their relations are entangled and dependent on each other. To circumvent this issue, existing works primarily compose relations by utilizing a holistic encoder, in the form of text or graphs. In this work, we instead propose to represent each relation as an unnormalized density (an energy-based model), enabling us to compose separate relations in a factorized manner. We show that such a factorized decomposition allows the model to both generate and edit scenes that have multiple sets of relations more faithfully. We further show that decomposition enables our model to effectively understand the underlying relational scene structure.
| accept | This paper presents a quite novel framework to factorize and compose separate object relations, with the ability to generate and edit images with composed relations. The performance is significantly better than that of StyleGAN2 CLIP and is encouraging. One of the limitations that reviewers are concerned about is that most of the experiments are conducted on synthetic data, and more analyses on real-world scenes are expected. All reviewers carefully read the authors' responses, had extensive discussions, and agree on the novelty and explanation of this work. Considering the high quality of this paper and the consistently positive comments by reviewers, the AC recommends accepting this paper.
| train | [
"YdURmf96Aqe",
"udRPkmHzt9J",
"ZVn6REo-nZa",
"0d-9n53ONk",
"rxarFQvpS5D",
"LZDo7Oru-ce",
"bobKIMjhOC3",
"Jt5GaY9-Nxj",
"xBQRlP_gLw7",
"KvL4XuQIr9",
"QdBNASt7nJL",
"7vkf3HC9wrn",
"Z-SvI5pW10u",
"DscVSiXqFHl",
"ijTQ2AA6CY"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks the authors for addressing my concerns and comments. After reading other reviewers' comments and full rebuttal, I would like keep my rating as 7. \n\nI especially like the discussion about different quantitative measures and the human evaluation on generated images, I suggest to add them into the final ver... | [
-1,
-1,
6,
7,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7
] | [
-1,
-1,
3,
5,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"ijTQ2AA6CY",
"DscVSiXqFHl",
"nips_2021_fzwx-pzQGxe",
"nips_2021_fzwx-pzQGxe",
"0d-9n53ONk",
"bobKIMjhOC3",
"nips_2021_fzwx-pzQGxe",
"0d-9n53ONk",
"ZVn6REo-nZa",
"ijTQ2AA6CY",
"bobKIMjhOC3",
"DscVSiXqFHl",
"nips_2021_fzwx-pzQGxe",
"nips_2021_fzwx-pzQGxe",
"nips_2021_fzwx-pzQGxe"
] |
nips_2021_M7emZFOLbH | Identity testing for Mallows model | In this paper, we devise identity tests for ranking data that is generated from a Mallows model, both in the \emph{asymptotic} and \emph{non-asymptotic} settings. First, we consider the case when the central ranking is known, and devise two algorithms for testing the spread parameter of the Mallows model. The first one is obtained by constructing a Uniformly Most Powerful Unbiased (UMPU) test in the asymptotic setting and then converting it into a sample-optimal non-asymptotic identity test. The resulting test is, however, impractical even for medium-sized data, because it requires computing the distribution of the sufficient statistic. The second non-asymptotic test is derived from an optimal learning algorithm for the Mallows model. This test is both easy to compute and sample-optimal for a wide range of parameters. Next, we consider testing Mallows models for the unknown central ranking case. This case can be tackled in the asymptotic setting by introducing a bias that exponentially decays with the sample size. We support all our findings with extensive numerical experiments and show that the proposed tests scale gracefully with the number of items to be ranked.
| accept | Mallows models are a popular family of distributions over rankings. This paper studies the problem of testing whether the underlying distribution is equal to a known Mallows model or is far from it. This is a clean question, and is a novel avenue for applying goodness-of-fit testing. The paper is above the bar for NeurIPS. | train | [
"jLFK7y61KfY",
"u0VAF376P8",
"M5a5_tcQD5l",
"0FcWUttPh3",
"cPjIDvE7UJ",
"2faGy-BwYO",
"eLEXQ0Zj7Rl",
"X_KSL_uKu0n"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose asymptotic and non-asymptotic test for the hypothesis $\\phi = \\phi_0$ where $\\phi$ is the spread parameter of a Mallows model. To do that, they first prove that we can derive a non-asymptotic test from an asymptotic one (Prop 4.1), then show that an optimal unbiased UMP test can be defined i... | [
6,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
2,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"nips_2021_M7emZFOLbH",
"jLFK7y61KfY",
"X_KSL_uKu0n",
"eLEXQ0Zj7Rl",
"2faGy-BwYO",
"nips_2021_M7emZFOLbH",
"nips_2021_M7emZFOLbH",
"nips_2021_M7emZFOLbH"
] |
nips_2021_QGauQ-9y42C | Bandits with Knapsacks beyond the Worst Case | Bandits with Knapsacks (BwK) is a general model for multi-armed bandits under supply/budget constraints. While worst-case regret bounds for BwK are well-understood, we present three results that go beyond the worst-case perspective. First, we provide upper and lower bounds which amount to a full characterization for logarithmic, instance-dependent regret rates. Second, we consider "simple regret" in BwK, which tracks the algorithm's performance in a given round, and prove that it is small in all but a few rounds. Third, we provide a "general reduction" from BwK to bandits which takes advantage of some known helpful structure, and apply this reduction to combinatorial semi-bandits, linear contextual bandits, and multinomial-logit bandits. Our results build on the BwK algorithm from prior work, providing new analyses thereof.
| accept | This paper investigates the bandits with knapsacks (BwK) problem. It extends the state of the art by looking beyond worst-case regret analysis and provides logarithmic regret upper and lower bounds, the first of their kind. The second contribution of the paper is the analysis of BwK from the simple-regret perspective. Here the authors have proved that for any $\epsilon > 0$, apart from $O((K/\epsilon^2)\log(KTd))$ time steps, this simple regret is typically smaller than $\epsilon$. The final main contribution of the paper is the new proof concept that helps convert proof techniques from the classical bandit setting to the BwK setting, making the regret analysis in the BwK domain more seamless. Apart from this, the paper also introduces a number of new gap concepts and proof techniques.
As someone quite familiar with the BwK literature, I find this paper to be fascinating. If there is any weakness I need to mention, it is perhaps the denseness of the paper. It was a bit difficult to follow all the new concepts and terms. However, overall I find that the amount of novelty and new ideas compensates for this weakness.
From the reviewers' comments I can conclude that they also (more or less) share my judgement that this is a good paper. It is quite unfortunate that the reviewers were not involved in the discussion with the authors in a more active way, to further elaborate on the strengths and weaknesses of the paper. Nevertheless, none of the reviewers mentioned any reasons for rejecting this paper. Therefore, I recommend it to be accepted as a poster.
| train | [
"SPHzVnCs-C",
"wx0aIdxyij0",
"qMHT-TZMCIK",
"YPIgSWK9uZu",
"2O6vJCiejty",
"Rnt4-wrbIyK",
"FxmZf5Vkkj",
"gaUZ8MRr4Cy"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer,\n\n**Re $G_{LAG}$:**\nIn our response upthread, we list three reasons why we think $G_{LAG}$ is a reasonable notion of gap. Are you saying you don’t find them sufficient? In particular, there already is a \"basic\" lower bound which applies -- the one from Lai-Robbins. \n\nPresumably, what you are ... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"YPIgSWK9uZu",
"gaUZ8MRr4Cy",
"FxmZf5Vkkj",
"qMHT-TZMCIK",
"Rnt4-wrbIyK",
"nips_2021_QGauQ-9y42C",
"nips_2021_QGauQ-9y42C",
"nips_2021_QGauQ-9y42C"
] |
nips_2021_gBxJ0R9Oua | Closing the loop in medical decision support by understanding clinical decision-making: A case study on organ transplantation | Significant effort has been placed on developing decision support tools to improve patient care. However, drivers of real-world clinical decisions in complex medical scenarios are not yet well-understood, resulting in substantial gaps between these tools and practical applications. In light of this, we highlight that more attention on understanding clinical decision-making is required both to elucidate current clinical practices and to enable effective human-machine interactions. This is imperative in high-stakes scenarios with scarce available resources. Using organ transplantation as a case study, we formalize the desiderata of methods for understanding clinical decision-making. We show that most existing machine learning methods are insufficient to meet these requirements and propose iTransplant, a novel data-driven framework to learn the factors affecting decisions on organ offers in an instance-wise fashion directly from clinical data, as a possible solution. Through experiments on real-world liver transplantation data from OPTN, we demonstrate the use of iTransplant to: (1) discover which criteria are most important to clinicians for organ offer acceptance; (2) identify patient-specific organ preferences of clinicians allowing automatic patient stratification; and (3) explore variations in transplantation practices between different transplant centers. Finally, we emphasize that the insights gained by iTransplant can be used to inform the development of future decision support tools.
| accept | Reviewers' scores and sentiments were remarkably consistent and positive toward this work. The approach taken to address a high-stakes problem seems appropriate, and reviewers appreciated both that approach and its validation. Still, reviewers -- especially Q9SD and oN9u -- did have outstanding questions regarding details about the methodology and about the experimental validation, and I would encourage the authors to update their work to reflect answers to those questions (many of which were given in the rebuttal). | train | [
"5p5VzO_9-sg",
"p-ZoHYrIU9K",
"q9NGdgYzTB",
"WUdQeJUIQ0P",
"D3ffmBTOUF7",
"L1nXfIQLmuA",
"V8tovMFLrGB",
"50ddgbwL8tQ",
"r7hO0f0VaT0",
"lyxcy5xqDyo",
"RSFTZ4iaB7n"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors propose a policy learning strategy that is patient-driven and interpretable. They focus on the case of organ transplant, not predicting matching but rather whether an organ will be accepted by a clinician for their patient. The authors show that their method perform on par with other app... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_gBxJ0R9Oua",
"D3ffmBTOUF7",
"5p5VzO_9-sg",
"5p5VzO_9-sg",
"5p5VzO_9-sg",
"5p5VzO_9-sg",
"RSFTZ4iaB7n",
"RSFTZ4iaB7n",
"lyxcy5xqDyo",
"nips_2021_gBxJ0R9Oua",
"nips_2021_gBxJ0R9Oua"
] |
nips_2021_i0DmV60aeK | Change Point Detection via Multivariate Singular Spectrum Analysis | Arwa Alanqary, Abdullah Alomar, Devavrat Shah | accept |
The paper studies sequential change-point detection using a combination of multivariate singular spectrum analysis (SSA) and CUSUM. The paper presents a novel analysis of the online CPD setting with a dependent (non-i.i.d.) sequence, characterizing the tradeoff between detection delay and false alarms. Extensive numerical results validate the performance of the algorithm. | train | [
"-VTRDXTsBmv",
"hJyJsx6VN_7",
"iOqmBsbYFNF",
"tbCo8IAHaXg",
"YTbhtOTRloH",
"gsSUmrXgscb",
"RC7BNgvJj5D",
"SgDlnwpN8SG",
"dX_OVthhT6s",
"ixUa6djYNC",
"TWIRFI6Qde1",
"mxdkub4wYYC",
"EMyMsT7CI53"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks a lot for the response. Most of my previous concerns are resolved. I would suggest the authors incorporate these contents into the updated version to make it more clear and complete.",
" We thank the reviewer again for their detailed review and feedback and we will be happy to answer any further question... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"ixUa6djYNC",
"tbCo8IAHaXg",
"gsSUmrXgscb",
"RC7BNgvJj5D",
"nips_2021_i0DmV60aeK",
"dX_OVthhT6s",
"EMyMsT7CI53",
"mxdkub4wYYC",
"YTbhtOTRloH",
"TWIRFI6Qde1",
"nips_2021_i0DmV60aeK",
"nips_2021_i0DmV60aeK",
"nips_2021_i0DmV60aeK"
] |
nips_2021_Wiq6Mg8btwT | Meta-learning to Improve Pre-training | Pre-training (PT) followed by fine-tuning (FT) is an effective method for training neural networks, and has led to significant performance improvements in many domains. PT can incorporate various design choices such as task and data reweighting strategies, augmentation policies, and noise models, all of which can significantly impact the quality of representations learned. The hyperparameters introduced by these strategies therefore must be tuned appropriately. However, setting the values of these hyperparameters is challenging. Most existing methods either struggle to scale to high dimensions, are too slow and memory-intensive, or cannot be directly applied to the two-stage PT and FT learning process. In this work, we propose an efficient, gradient-based algorithm to meta-learn PT hyperparameters. We formalize the PT hyperparameter optimization problem and propose a novel method to obtain PT hyperparameter gradients by combining implicit differentiation and backpropagation through unrolled optimization. We demonstrate that our method improves predictive performance on two real-world domains. First, we optimize high-dimensional task weighting hyperparameters for multitask pre-training on protein-protein interaction graphs and improve AUROC by up to 3.9%. Second, we optimize a data augmentation neural network for self-supervised PT with SimCLR on electrocardiography data and improve AUROC by up to 1.9%.
| accept | Pretrain-finetuning is an important and popular learning framework for modern deep learning models, and has achieved significant success. How to learn the hyperparameters of this framework is important yet not well explored. This paper proposes a meta-learning strategy for this via direct and implicit differentiation to perform gradient descent. The technique is not new but has not been applied to this problem.
All reviewers seem to consider the paper to be on the borderline. They raised a number of questions, including the setting of the number of PT/FT steps and whether it will cause short-horizon bias, and also asked for some additional baselines to demonstrate the effectiveness of the proposed method. Most of the questions, in my opinion, have been well addressed in the rebuttal with further explanation and extra experiments. Thus I would recommend acceptance of the paper. One concern I have is that it seems the scale of the data in the experiments is not large enough (from what I understand in the paper, though the statistics of some data have not been provided). I guess investigating the large-scale setting is important because this is where the PT/FT framework really reveals its power. I would recommend that the authors revise the paper according to the comments from the reviewers, and consider extra large-scale experiments (such as ImageNet-scale data) to make the paper stronger.
"L90YpeVlikG",
"pnR_r58f7yo",
"irP1QB4XpBV",
"WB_rOF1y1LR",
"Ioonax566-k",
"mD7uMTo9XfA",
"RQmD8Wwuhfq",
"MMbtjKWlzXX",
"KkEVKMAuNZu",
"pv3YbvvXwdx",
"h3mTvJZRYC8",
"5bsEYCFhVYx",
"eNXLRf9gnxr",
"QhmEeL8ZU6J",
"ltYMZOkORmg",
"KY70IczFhH4",
"THXwi3nDLQw",
"FTiYSGrKBt_",
"8bx8uxWzi... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
... | [
" Dear Reviewer, thank you very much for reading our response and updating your score. We are happy to provide more information about the revised presentation and any points that are still unclear if that would help clarify aspects of the work, and help improve the paper in your eyes?\n\nThank you again!",
"The p... | [
-1,
5,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
2,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"irP1QB4XpBV",
"nips_2021_Wiq6Mg8btwT",
"ltYMZOkORmg",
"mD7uMTo9XfA",
"nips_2021_Wiq6Mg8btwT",
"5bsEYCFhVYx",
"NMg5zIV_vT",
"pnR_r58f7yo",
"Ioonax566-k",
"unRJTQgb4iG",
"5bsEYCFhVYx",
"QhmEeL8ZU6J",
"NMg5zIV_vT",
"KY70IczFhH4",
"pnR_r58f7yo",
"THXwi3nDLQw",
"Ioonax566-k",
"8bx8uxWz... |
nips_2021_l-0rLXvctI | Fair Sparse Regression with Clustering: An Invex Relaxation for a Combinatorial Problem | In this paper, we study the problem of fair sparse regression on a biased dataset where bias depends upon a hidden binary attribute. The presence of a hidden attribute adds an extra layer of complexity to the problem by combining sparse regression and clustering with unknown binary labels. The corresponding optimization problem is combinatorial, but we propose a novel relaxation of it as an invex optimization problem. To the best of our knowledge, this is the first invex relaxation for a combinatorial problem. We show that the inclusion of the debiasing/fairness constraint in our model has no adverse effect on the performance. Rather, it enables the recovery of the hidden attribute. The support of our recovered regression parameter vector matches exactly that of the true parameter vector. Moreover, we simultaneously solve the clustering problem by recovering the exact value of the hidden attribute for each sample. Our method uses carefully constructed primal-dual witnesses to provide theoretical guarantees for the combinatorial problem. To that end, we show that the sample complexity of our method is logarithmic in terms of the dimension of the regression parameter vector.
| accept | In this work, the authors consider the problem of learning a fair sparse linear model when the sensitive attribute is binary and unknown. They establish that the solution of an invex optimization problem can recover both the correct support of the regression coefficients and the hidden sensitive labels. This problem and its solution are of interest to the ML community as a variant of the generally important question of learning fair models, one that will be particularly apt in certain applications where the sensitive attribute is unknown. | train | [
"5Y1g9ogHyPJ",
"d3hEjqvylWX",
"nWMxhcb9TVn",
"lQdJbiRccLX",
"f7rHEazq-s4",
"q-SX_lyywZw",
"upo0OXxUCl9",
"s2UzOAd-8f1",
"_sgaraAQnP7",
"1CG2-fP7Lmh",
"pCtWhR5Q54o",
"9syDZbBpJqK",
"_BsPTEOTkso",
"j3ep73rgC4",
"M71WIFcU1sr",
"6BIaMVzHII",
"lCHyaoVOwul"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper considers the task of sparse regression on a biased dataset where the bias depends on a hidden binary attribute. A new optimization problem is formulated which is then relaxed into an invex optimization problem. The authors then prove that their invex fair lasso formulation correctly recovers the hidden ... | [
7,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
3,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_l-0rLXvctI",
"nWMxhcb9TVn",
"upo0OXxUCl9",
"q-SX_lyywZw",
"nips_2021_l-0rLXvctI",
"1CG2-fP7Lmh",
"M71WIFcU1sr",
"_sgaraAQnP7",
"nips_2021_l-0rLXvctI",
"pCtWhR5Q54o",
"f7rHEazq-s4",
"5Y1g9ogHyPJ",
"lCHyaoVOwul",
"6BIaMVzHII",
"nips_2021_l-0rLXvctI",
"nips_2021_l-0rLXvctI",
... |
nips_2021_rg8gNkvs3u | Probabilistic Margins for Instance Reweighting in Adversarial Training | Reweighting adversarial data during training has been recently shown to improve adversarial robustness, where data closer to the current decision boundaries are regarded as more critical and given larger weights. However, existing methods measuring the closeness are not very reliable: they are discrete and can take only a few values, and they are path-dependent, i.e., they may change given the same start and end points with different attack paths. In this paper, we propose three types of probabilistic margin (PM), which are continuous and path-independent, for measuring the aforementioned closeness and reweighting adversarial data. Specifically, a PM is defined as the difference between two estimated class-posterior probabilities, e.g., such a probability of the true label minus the probability of the most confusing label given some natural data. Though different PMs capture different geometric properties, all three PMs share a negative correlation with the vulnerability of data: data with larger/smaller PMs are safer/riskier and should have smaller/larger weights. Experiments demonstrated that PMs are reliable and PM-based reweighting methods outperformed state-of-the-art counterparts.
| accept | The paper studied the reweighting strategy in adversarial training and received review scores of 8, 7, 7, and 6. Before the rebuttal, only one reviewer had some concerns about the experiments; the authors provided a successful rebuttal with additional experiments and addressed the reviewer's concerns. Thus, I recommend accepting the paper. | train | [
"XCR0Lk3nAIM",
"tCb2qcFU8-",
"KqZelTQj1E",
"DU4VwRmOWzh",
"RV_AYESO_Rk",
"I4fsKyjS66T",
"OjgR6adnhRr",
"Y6e_2YhvCrY",
"TPyePKCNyXm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper propose three types of probabilistic margin (PM) to improve the model robustness. The proposed method can measure the aforementioned closeness and reweight adversarial data. \nAnd some experiments were conducted to validate the effectiveness of the proposed method. This paper have following problem... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
8,
7
] | [
5,
3,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_rg8gNkvs3u",
"nips_2021_rg8gNkvs3u",
"XCR0Lk3nAIM",
"tCb2qcFU8-",
"XCR0Lk3nAIM",
"TPyePKCNyXm",
"Y6e_2YhvCrY",
"nips_2021_rg8gNkvs3u",
"nips_2021_rg8gNkvs3u"
] |
nips_2021_O8Ffv3aRJr | Unbalanced Optimal Transport through Non-negative Penalized Linear Regression | This paper addresses the problem of Unbalanced Optimal Transport (UOT) in which the marginal conditions are relaxed (using weighted penalties in lieu of equality) and no additional regularization is enforced on the OT plan. In this context, we show that the corresponding optimization problem can be reformulated as a non-negative penalized linear regression problem. This reformulation allows us to propose novel algorithms inspired by inverse problems and nonnegative matrix factorization. In particular, we consider majorization-minimization, which leads in our setting to efficient multiplicative updates for a variety of penalties. Furthermore, we derive for the first time an efficient algorithm to compute the regularization path of UOT with quadratic penalties. The proposed algorithm provides a continuum of piecewise-linear OT plans converging to the solution of balanced OT (corresponding to infinite penalty weights). We perform several numerical experiments on simulated and real data illustrating the new algorithms, and provide a detailed discussion about more sophisticated optimization tools that can further be used to solve OT problems thanks to our reformulation.
| accept | The authors consider the unbalanced optimal transport (OT) problem and its semi-relaxed variants. They then consider various optimization approaches that come from the mirror descent family by choosing an appropriate Bregman divergence. In particular, they consider the entropic and the $\ell_2$ mirror maps.
In the $\ell_2$ case, the authors connect the optimization problem to the LARS optimization, which has been quite successful in the sparse recovery literature. They propose a regularization path or homotopy algorithm for the unbalanced OT plans and demonstrate its performance numerically.
The developments in the paper mostly rely on recognizing the LASSO structure in the unbalanced OT problem and then applying existing techniques to it. In the worst case, this approach has exponential complexity, but the authors argue that the [Mairal and Yu 2012] approach can be applied to obtain an $\epsilon$-approximate primal-dual gap as an afterthought. Unfortunately, there is also no follow-up showing how this $\epsilon$ guarantee translates to the quality of the OT map.
| train | [
"IAteZO5Gzy6",
"sA4eYGW-0Q9",
"B3g-2N2mjKU",
"SkL-orYXG9x",
"5BtLq3qRdMI",
"dwLT8ztEXoY",
"ZRK7Zygh5p",
"D1RPa9Lflsh",
"ejd7Eb2-NO",
"zADEY4UBhS",
"twjDRnBreS",
"dMyNifcQio",
"IhqkJIDyyYF",
"ewawbkMea0J"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose to reformulate the unbalanced Optimal Transport (e.g., Kullback-Leibler and $\\ell_2$-regularization) and non-negative penalized linear regression with $\\ell_1$-regularization. For $\\ell_2$-regularized UOT, the authors derive the regularization path.\n It is interesting to reformulate the UO... | [
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_O8Ffv3aRJr",
"dMyNifcQio",
"5BtLq3qRdMI",
"nips_2021_O8Ffv3aRJr",
"dwLT8ztEXoY",
"zADEY4UBhS",
"D1RPa9Lflsh",
"nips_2021_O8Ffv3aRJr",
"ewawbkMea0J",
"SkL-orYXG9x",
"IAteZO5Gzy6",
"IhqkJIDyyYF",
"nips_2021_O8Ffv3aRJr",
"nips_2021_O8Ffv3aRJr"
] |
nips_2021_nPHA8fGicZk | The Difficulty of Passive Learning in Deep Reinforcement Learning | Learning to act from observational data without active environmental interaction is a well-known challenge in Reinforcement Learning (RL). Recent approaches involve constraints on the learned policy or conservative updates, preventing strong deviations from the state-action distribution of the dataset. Although these methods are evaluated using non-linear function approximation, theoretical justifications are mostly limited to the tabular or linear cases. Given the impressive results of deep reinforcement learning, we argue for a need to more clearly understand the challenges in this setting.In the vein of Held & Hein's classic 1963 experiment, we propose the "tandem learning" experimental paradigm which facilitates our empirical analysis of the difficulties in offline reinforcement learning. We identify function approximation in conjunction with fixed data distributions as the strongest factors, thereby extending but also challenging hypotheses stated in past work. Our results provide relevant insights for offline deep reinforcement learning, while also shedding new light on phenomena observed in the online case of learning control.
| accept | The paper analyzes the reasons for the poor performance of learning control policies from offline datasets. The main conclusions are that
- Insufficient coverage of suboptimal actions leads to poor performance when the policy is evaluated online.
- This problem is further aggravated by deep networks that extrapolate the value function in regions of the space that are infrequently explored by the agent.
Three reviewers vote for accepting the paper, while reviewer LigT dissents. The suggestions of (i) more seeds and (ii) increased mathematical rigor will definitely make the claims of the paper stronger; however, they are not grounds for rejection. I encourage the authors to incorporate these suggestions. | train | [
"GTIFipJJNGJ",
"t4dmZWNxwrG",
"XUuvbjMLinB",
"0t5Zn9XATI",
"BKJzapsq7IS",
"2hdTN8vAwPn",
"wxM1vgN5BNs",
"vj79aJOrewo",
"dJAmu3Tv0C"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper addresses the difficulties that standard RL methods have in the offline setting. It presents a new analysis technique designed to better understand the causes of the difficulties. The technique, called Tandem RL setting, and forked tandem, extend the setup, introduced by Fujimoto, Meger, and Precup in th... | [
8,
5,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_nPHA8fGicZk",
"nips_2021_nPHA8fGicZk",
"nips_2021_nPHA8fGicZk",
"t4dmZWNxwrG",
"GTIFipJJNGJ",
"dJAmu3Tv0C",
"vj79aJOrewo",
"nips_2021_nPHA8fGicZk",
"nips_2021_nPHA8fGicZk"
] |
nips_2021_o2mbl-Hmfgd | Intriguing Properties of Vision Transformers | Vision transformers (ViT) have demonstrated impressive performance across numerous machine vision tasks. These models are based on multi-head self-attention mechanisms that can flexibly attend to a sequence of image patches to encode contextual cues. An important question is how such flexibility (in attending image-wide context conditioned on a given patch) can facilitate handling nuisances in natural images, e.g., severe occlusions, domain shifts, spatial permutations, adversarial and natural perturbations. We systematically study this question via an extensive set of experiments encompassing three ViT families and provide comparisons with a high-performing convolutional neural network (CNN). We show and analyze the following intriguing properties of ViT: (a) Transformers are highly robust to severe occlusions, perturbations and domain shifts, e.g., they retain as high as 60% top-1 accuracy on ImageNet even after randomly occluding 80% of the image content. (b) The robustness towards occlusions is not due to texture bias; instead, we show that ViTs are significantly less biased towards local textures, compared to CNNs. When properly trained to encode shape-based features, ViTs demonstrate shape recognition capability comparable to that of the human visual system, previously unmatched in the literature. (c) Using ViTs to encode shape representation leads to an interesting consequence of accurate semantic segmentation without pixel-level supervision. (d) Off-the-shelf features from a single ViT model can be combined to create a feature ensemble, leading to high accuracy rates across a range of classification datasets in both traditional and few-shot learning paradigms. We show that the effective features of ViTs are due to flexible and dynamic receptive fields made possible via self-attention mechanisms. Our code will be publicly released.
| accept | The reviewers agree that this is a solid paper that should be accepted. The analysis of vision transformers vs convnets provided in the paper is through, interesting and potentially useful for practitioners. The authors mostly addressed the (not very numerous) reviewer' concerns. Clear accept. | train | [
"7pz63WtTlot",
"_R6ioJPjeW",
"I0r3PbmYBx",
"bLdXLqk9S9a",
"7e6YbSb99_P",
"S0DOFjEJEK3",
"VT6yIyhm_Ro",
"ffQYUe5tiHU",
"puYStzd0i-4",
"vj5PYT6bu8a",
"tuDV6lKe-Wz",
"X75bAVJ6xe7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I think my concerns are well addressed by the authors' response. Thus, I would like to keep my rating and wish the authors could revise the paper as mentioned in the response.",
" I'd like to keep my rating and recommend the submission for acceptance.",
" I would like to keep my rating and recommend the submi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"7e6YbSb99_P",
"S0DOFjEJEK3",
"ffQYUe5tiHU",
"VT6yIyhm_Ro",
"X75bAVJ6xe7",
"tuDV6lKe-Wz",
"vj5PYT6bu8a",
"puYStzd0i-4",
"nips_2021_o2mbl-Hmfgd",
"nips_2021_o2mbl-Hmfgd",
"nips_2021_o2mbl-Hmfgd",
"nips_2021_o2mbl-Hmfgd"
] |
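
The occlusion experiments in the ViT record above follow a simple patch-dropping protocol: randomly zero a fraction of non-overlapping patches before classification. A minimal numpy sketch of that protocol is below; the classifier/evaluation step is a hypothetical placeholder, not the paper's code.

```python
import numpy as np

def occlude_patches(image, drop_ratio=0.8, patch=16, rng=None):
    """Zero out a random subset of non-overlapping patches of the image."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    gh, gw = h // patch, w // patch
    drop = rng.choice(gh * gw, size=int(drop_ratio * gh * gw), replace=False)
    out = image.copy()
    for i in drop:
        r, c = divmod(i, gw)
        out[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0.0
    return out

img = np.random.rand(224, 224, 3)          # stand-in for a preprocessed image
occluded = occlude_patches(img, drop_ratio=0.8)
# top1 = evaluate(model, occluded_images)  # hypothetical classifier evaluation
```
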
nips_2021_lgDP84byd5 | PartialFed: Cross-Domain Personalized Federated Learning via Partial Initialization | The burst of applications empowered by massive data has aroused unprecedented privacy concerns in AI society. Currently, data confidentiality protection has been one core issue during deep model training. Federated Learning (FL), which enables privacy-preserving training across multiple silos, has gained rising popularity for its parameter-only communication. However, previous works have shown that FL reveals a significant performance drop if the data distributions are heterogeneous among different clients, especially when the clients have cross-domain characteristics, such as traffic, aerial, and indoor scenes. To address this challenging problem, we propose a novel idea, PartialFed, which loads a subset of the global model’s parameters rather than loading the entire model used in most previous works. We first validate our algorithm with manually decided loading strategies inspired by various expert priors, named PartialFed-Fix. Then we develop PartialFed-Adaptive, which automatically selects a personalized loading strategy for each client. The superiority of our algorithm is proved by demonstrating new state-of-the-art results on cross-domain federated classification and detection. In particular, solely by initializing a small fraction of layers locally, we improve the performance of FedAvg on Office-Home and UODB by 4.88% and 2.65%, respectively. Further studies show that the adaptive strategy performs significantly better on domains with large deviation, e.g. improves AP50 by 4.03% and 4.89% on aerial and medical image detection compared to FedAvg.
| accept | Thank you for your submission. The reviewers agree that the paper provides an interesting result. The authors should follow the reviewers' suggestions to improve their presentation in the paper. | train | [
"XZK3NAawUBT",
"0TUCz_ofl8f",
"H99qJoVXSSx",
"0t6YxoSVRc",
"0yMPfpUG-06",
"e2ac7-gR1Z_",
"kOHwA0mckpV",
"M3iSjtKvKPB",
"1I8mMIN3k6Y"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a new method called ‘PartialFed’ to solve the performance degrading caused by data heterogeneity in the federated learning scenario. The main difference between this work and previous work is that the proposed method partially loads global parameters given by FedAvg, while the previous methods l... | [
6,
-1,
6,
-1,
-1,
-1,
-1,
6,
7
] | [
4,
-1,
3,
-1,
-1,
-1,
-1,
2,
5
] | [
"nips_2021_lgDP84byd5",
"0t6YxoSVRc",
"nips_2021_lgDP84byd5",
"XZK3NAawUBT",
"1I8mMIN3k6Y",
"H99qJoVXSSx",
"M3iSjtKvKPB",
"nips_2021_lgDP84byd5",
"nips_2021_lgDP84byd5"
] |
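
The core mechanism in the PartialFed record above is loading only a subset of the global (FedAvg) parameters while keeping the rest locally initialized. A toy sketch with parameter dictionaries, assuming a hand-fixed per-layer strategy (the PartialFed-Fix setting); PartialFed-Adaptive would instead learn this mask per client.

```python
def partial_load(local_params, global_params, load_global):
    """Per-layer choice: take the global (FedAvg) weight or keep the local one."""
    return {name: global_params[name] if load_global.get(name, False) else w
            for name, w in local_params.items()}

local_params  = {"conv1": 1.0, "conv2": 2.0, "head": 3.0}    # toy scalar "weights"
global_params = {"conv1": 10.0, "conv2": 20.0, "head": 30.0}
strategy = {"conv1": True, "conv2": True, "head": False}     # keep the head local
print(partial_load(local_params, global_params, strategy))
# {'conv1': 10.0, 'conv2': 20.0, 'head': 3.0}
```
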
nips_2021_0Kb33DHJ1g | Adaptive Diffusion in Graph Neural Networks | The success of graph neural networks (GNNs) largely relies on the process of aggregating information from neighbors defined by the input graph structures. Notably, message passing based GNNs, e.g., graph convolutional networks, leverage the immediate neighbors of each node during the aggregation process, and recently, graph diffusion convolution (GDC) was proposed to expand the propagation neighborhood by leveraging generalized graph diffusion. However, the neighborhood size in GDC is manually tuned for each graph by conducting grid search over the validation set, making its generalization practically limited. To address this issue, we propose the adaptive diffusion convolution (ADC) strategy to automatically learn the optimal neighborhood size from the data. Furthermore, we break the conventional assumption that all GNN layers and feature channels (dimensions) should use the same neighborhood for propagation. We design strategies to enable ADC to learn a dedicated propagation neighborhood for each GNN layer and each feature channel, making the GNN architecture fully coupled with graph structures---the unique property that distinguishes GNNs from traditional neural networks. By directly plugging ADC into existing GNNs, we observe consistent and significant outperformance over both GDC and their vanilla versions across various datasets, demonstrating the improved model capacity brought by automatically learning a unique neighborhood size per layer and per channel in GNNs.
| accept | This paper proposes a method called adaptive graph diffusion convolution (ADC), which can adaptively decide the neighborhood size for each layer and feature channel. The proposed method can be plugged in to any GNN frameworks. The experimental studies have demonstrated the superiority of the proposed module. The reviewers were split in the beginning, with concerns about novelty etc. After receiving a strong rebuttal from authors, including additional experiments, two reviewers have raised their scores to reflect their satisfaction to the rebuttal. In the end, the reviewers have reached consensus to accept this paper. | test | [
"aVIKNfaGmgV",
"IjpnWWbacti",
"MkYD8DOeTy",
"OINyTZNOuk",
"7duZRp-Zkt4",
"eSiYf3FFBa0",
"KhLNIr-IARK",
"vSYfPE5fjhV",
"64kSkc0CZ6a",
"Ggg7Zby0Ge5",
"50v7Wz9qvtt"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the authors for the newly added results. Together with the answers to my other concerns, they essentially addressed the issues I raised up so I would love to improve the score by 2 points.",
"This paper introduces adaptive graph diffusion convolution (ADC), a new module that adaptively and automatical... | [
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
8
] | [
-1,
4,
-1,
5,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"MkYD8DOeTy",
"nips_2021_0Kb33DHJ1g",
"64kSkc0CZ6a",
"nips_2021_0Kb33DHJ1g",
"vSYfPE5fjhV",
"50v7Wz9qvtt",
"Ggg7Zby0Ge5",
"OINyTZNOuk",
"IjpnWWbacti",
"nips_2021_0Kb33DHJ1g",
"nips_2021_0Kb33DHJ1g"
] |
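
The diffusion in the ADC record above generalizes message passing to S = sum_k theta_k T^k; for the heat kernel, theta_k = e^{-t} t^k / k!, and ADC's key move is to learn t (per layer and per channel) from data instead of grid-searching it. A numpy sketch of the truncated heat-kernel diffusion for a fixed t:

```python
import numpy as np
from math import exp, factorial

def heat_diffusion(A, t=1.0, K=10):
    """Truncated heat-kernel diffusion S = sum_k e^{-t} t^k/k! * T^k,
    with T the symmetrically normalized adjacency (self-loops added)."""
    A = A + np.eye(len(A))                      # add self-loops
    d = A.sum(1)
    T = A / np.sqrt(np.outer(d, d))             # D^{-1/2} A D^{-1/2}
    S, Tk = np.zeros_like(T), np.eye(len(A))
    for k in range(K + 1):
        S += exp(-t) * t**k / factorial(k) * Tk
        Tk = Tk @ T
    return S

A = np.array([[0,1,1,0],[1,0,1,0],[1,1,0,1],[0,0,1,0]], float)
print(np.round(heat_diffusion(A, t=2.0), 3))    # larger t -> wider neighborhood
```

In ADC itself, t would be a trainable parameter updated by backpropagation rather than the constant used in this sketch.
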
nips_2021_lS-ocNKV-u | Recurrent Submodular Welfare and Matroid Blocking Semi-Bandits | Orestis Papadigenopoulos, Constantine Caramanis | accept | This paper on bandits merged together many different aspects, submodularity on matroid, blocking bandits...
We are not necessarily excited by papers that combine together exciting concepts, but this specific paper had other merits, as the base offline problem is also not trivial and requires different (new) techniques.
Overall, the reviewers were all positive; the paper is well written, clear, and easy to read. It can therefore be accepted at NeurIPS. | train | [
"vpAlk_uWrRR",
"llnIWA9tc1R",
"SGBsEFJ9B2K",
"Mf92fHWsESQ",
"5J62fRPfN4",
"euQp4RVm97I",
"velir3EoiqM",
"0phvsh3FxF3",
"GEs83EFk8e7",
"zoy67Zm8JKg"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies an extension of the matroid bandit problem where each arm $i$ is blocked for a known $d_i$ rounds after it is played, and this is referred to as the matroid blocking semi-bandit (MBB) problem. When the arm distributions are known, the authors reduce the optimization problem to a recurrent submodu... | [
6,
8,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"nips_2021_lS-ocNKV-u",
"nips_2021_lS-ocNKV-u",
"llnIWA9tc1R",
"vpAlk_uWrRR",
"zoy67Zm8JKg",
"GEs83EFk8e7",
"0phvsh3FxF3",
"nips_2021_lS-ocNKV-u",
"nips_2021_lS-ocNKV-u",
"nips_2021_lS-ocNKV-u"
] |
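
To make the blocking constraint in the record above concrete, here is the natural full-information greedy for the uniform-matroid (top-k) special case: each round, play the k best available arms, after which a played arm is unavailable for the next d_i rounds (this blocking convention is an assumption of the sketch). The paper's actual algorithm is an interleaved greedy with approximation guarantees for general matroids and bandit feedback, which this toy does not reproduce.

```python
def greedy_blocking_topk(mu, delays, k, T):
    """Each round, greedily play the k highest-mean arms that are not
    blocked; a played arm then blocks for the next delays[i] rounds."""
    n = len(mu)
    free_at = [0] * n                  # first round each arm is available again
    total = 0.0
    for t in range(T):
        avail = [i for i in range(n) if free_at[i] <= t]
        for i in sorted(avail, key=lambda i: -mu[i])[:k]:
            total += mu[i]
            free_at[i] = t + delays[i] + 1
    return total

print(greedy_blocking_topk(mu=[0.9, 0.8, 0.5, 0.3], delays=[3, 2, 1, 1], k=2, T=10))
```
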
nips_2021_Wl32WBZnSP4 | Representer Point Selection via Local Jacobian Expansion for Post-hoc Classifier Explanation of Deep Neural Networks and Ensemble Models | Yi Sui, Ga Wu, Scott Sanner | accept | This paper proposes a variant of RPS called RPS-LJE that is claimed to offer several advantages over the previously proposed RPS-L2. The “explanation” acts on the last layer of deep nets only, attributing predictions to training points while treating the neural network as fixed. As such, it seems unreasonable to call this (or any related works that act similarly) explanations of “deep neural networks” vs of linear models with some fixed basis. The conversation centered around several claimed advantages: efficiency (it appears to be resolved in discussion that there is no efficiency gain over influence functions), a faithfulness advantage over RPS-L2, and the striking similarity and high agreement with influence functions, raising the question of whether the method is really adding anything or if the novelty is all in the new perspective/derivation.
The rebuttal and discussion were thoughtful and all reviewers were engaged. However, while they converged on several points of fact, a few others were left unresolved and there were some genuine normative disagreements about whether the contributions that withstood the test of the debate were sufficient to warrant acceptance.
I will not hide the fact that this is a difficult metareview to write and that I am not sure how to resolve the remaining impasse. I am still seeking additional opinions and hope to make a more confident assessment but believe this is a genuinely difficult case. My current best assessment now is that this paper is truly borderline. However, I believe that with reasonable adjustments to claims and presentation per promises made in the discussion period, this paper is honest work and we can leave it to posterity (vs the review process) to determine the work's eventual impact. | train | [
"FswFW4Kix-L",
"mUJJy3qbtuw",
"FfoLFaQFzk8",
"lnNDzwjA_AW",
"AQlC8il-PhW",
"Gk4Qr9S_s5",
"Jj3ULFP1uXs",
"0ewBpuJ1lbu",
"xwKUx3yDBx9",
"xLcrkeY4wAw",
"fK6KmZUPXYj",
"maE2szfnep0",
"viEtBOOW4G"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper addresses the problem of model explainability by looking at which training examples were responsible for the model’s predictions. The contributed model takes inspiration from RPS and changes its computation of the data importance factor such that it becomes less sensitive to the choice of hyper-paramete... | [
5,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7
] | [
3,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"nips_2021_Wl32WBZnSP4",
"AQlC8il-PhW",
"fK6KmZUPXYj",
"nips_2021_Wl32WBZnSP4",
"Gk4Qr9S_s5",
"xwKUx3yDBx9",
"FswFW4Kix-L",
"lnNDzwjA_AW",
"maE2szfnep0",
"Jj3ULFP1uXs",
"viEtBOOW4G",
"nips_2021_Wl32WBZnSP4",
"nips_2021_Wl32WBZnSP4"
] |
nips_2021_aM7UsuOAzB3 | Editing a classifier by rewriting its prediction rules | We propose a methodology for modifying the behavior of a classifier by directly rewriting its prediction rules. Our method requires virtually no additional data collection and can be applied to a variety of settings, including adapting a model to new environments, and modifying it to ignore spurious features.
| accept | The authors propose a model to edit a classifier's decision boundary, in order to enforce invariance of the classifier to regions in the image that are known a priori to not change the semantic label (i.e. a car on a road should still be classified as a car on a snowy road). They proceed in two steps: first they rely on segmentation models to isolate concepts (e.g. road) and style transfer to transform such concepts (e.g. road -> snowy road) to obtain augmentations of the input image. Taking inspiration from previous work, they learn a "translation" from inner features of the base classifier corresponding to the concept in the transformed image to the corresponding features of the original image, effectively making the transformed image behave like the original one. Experiments show that the procedure generalizes well, even when just learning from very few transformed exemplars.
--
Reviewers recognized the potential impact of the proposed approach in real-world scenarios and the strength of the shown results in a harder few-shot generalization setting (the method is robust on modifications for classes held out during training). A reviewer found this work "thought-provoking" and saw it "as a start of a line of research that could be fruitful". All reviewers also recognized the main criticisms of this paper, which revolve around: (a) the overall lack of rigor in the problem definition; (b) it's rather unclear what the limitations of this approach are and when the approach may fail; (c) the somewhat anecdotal nature of the experiments, which test for specific concept-rule pairs rather than a more systematic evaluation.
Regarding (a), reviewers found the paper a bit hand-wavy at times; for example, there's no definition of what is meant by "rule". Regarding (b), one reviewer found it particularly critical that the paper didn't address the question "how many rules can be changed simultaneously?", given that the paper only deals with changing one rule at a time. Authors are strongly encouraged to think about adding an experiment illustrating this point in the final revision, for example, by verifying performance when training multiple rule translations. During rebuttal, I discussed with the authors a specific experimental scenario that can illustrate a clear failure of this model. I hope the results of our discussion will be included in the paper. Regarding (c), a reviewer acknowledged that the lack of large scale systematic evaluation might also be "due to lack of datasets and the nature of the problem being studied". To make evaluation stronger in terms of baselines, I suggest the authors also incorporate additional standard contrastive/consistency-based regularization methods, as suggested by a reviewer.
Overall, even if reviewers were not unanimous about acceptance, I believe that the results and use cases showcased in this paper can bring excitement and inspire future work and outweigh the criticisms expressed above. I recommend this paper for acceptance. I suggest the authors do their best to address the criticisms of the reviewers (with a special effort for those who were the most negative) and incorporate the aforementioned suggestions in their final version. | train | [
"6vNWW_HFIWf",
"9Uu-ev0_7C2",
"gtPMSrXbyeQ",
"EeXF12iZw0y",
"Vktp2oOk3I",
"fCw2m-JHsst",
"-z6szAPI9Ag",
"loSuhw1PqrC"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose a model editing method that proceeds in two steps. First, they rely on off-the-shelf segmentation models and style transfer models to learn a data transformation $x \\to x'$ that preserves semantic meaning (e.g., in a vehicle recognition task, they might convert a snowy road into a non-snowy ro... | [
7,
6,
-1,
-1,
-1,
-1,
4,
7
] | [
4,
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_aM7UsuOAzB3",
"nips_2021_aM7UsuOAzB3",
"loSuhw1PqrC",
"9Uu-ev0_7C2",
"-z6szAPI9Ag",
"6vNWW_HFIWf",
"nips_2021_aM7UsuOAzB3",
"nips_2021_aM7UsuOAzB3"
] |
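
The rewriting idea in the record above treats a layer as an associative memory and edits it so a chosen key maps to a chosen value. The simplest version of such an edit is a minimal-norm rank-one update, sketched below; the paper's procedure optimizes over several exemplars with constrained updates, so this is only a motivating illustration, not the authors' method.

```python
import numpy as np

def rank_one_edit(W, k_star, v_star):
    """Minimal-norm update so the edited layer maps key k* to value v*:
    W' = W + (v* - W k*) k*^T / (k*^T k*)."""
    resid = v_star - W @ k_star
    return W + np.outer(resid, k_star) / (k_star @ k_star)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
k_star, v_star = rng.normal(size=3), rng.normal(size=4)
W_new = rank_one_edit(W, k_star, v_star)
print(np.allclose(W_new @ k_star, v_star))   # True: the edited rule now fires
```
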
nips_2021_dwY40cSK-dt | How Modular should Neural Module Networks Be for Systematic Generalization? | Neural Module Networks (NMNs) aim at Visual Question Answering (VQA) via composition of modules that tackle a sub-task. NMNs are a promising strategy to achieve systematic generalization, i.e., overcoming biasing factors in the training distribution. However, the aspects of NMNs that facilitate systematic generalization are not fully understood. In this paper, we demonstrate that the degree of modularity of the NMN has a large influence on systematic generalization. In a series of experiments on three VQA datasets (VQA-MNIST, SQOOP, and CLEVR-CoGenT), our results reveal that tuning the degree of modularity, especially at the image encoder stage, achieves substantially higher systematic generalization. These findings lead to new NMN architectures that outperform previous ones in terms of systematic generalization.
| accept | The paper presents a careful investigation into how modularity affects systematic generalization across a number of synthetic datasets. The key contribution is to highlight the role of modularity, especially when introduced at the early stage of the network.
The most important concern raised by reviewers is that the experimental setting is limited. The Neural Module Network, the network that all experiments are based on, is very rarely used in practice, and all the datasets are synthetic. On top of this, ground truth grouping or program layout is provided to the network, which wouldn't normally be available. It is genuinely difficult to tell how the results will translate into performance improvements more broadly.
On the positive side, the paper is technically sound and the conclusions are interesting, as agreed by most reviewers. With the added experiments in the rebuttal, the experimental data clearly supports that adding modularity to NMNs improves systematic generalization. Based on this, I am happy to support acceptance of the work. I would like to strongly encourage the Authors to broaden their experimental setting in future work, so that these results have real impact on how we train neural networks. Please remember to address all reviewers' comments in the camera ready version. | train | [
"ih4TBeLl9x",
"XGIjMaIC1B",
"SrsJN6w2LLF",
"UCSLC96rHt",
"nHBicutJwm8",
"H85-JFVAhC"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies neural module networks (NMNs) for systematic generalization on VQA tasks. Of particular interest is understanding how the degree of modularity across various stages of information processing affects the ability to generalize systematically. \n\nTo that extent three stages are distinguished: (1) ... | [
4,
-1,
-1,
6,
7,
7
] | [
3,
-1,
-1,
3,
3,
4
] | [
"nips_2021_dwY40cSK-dt",
"nips_2021_dwY40cSK-dt",
"nips_2021_dwY40cSK-dt",
"nips_2021_dwY40cSK-dt",
"nips_2021_dwY40cSK-dt",
"nips_2021_dwY40cSK-dt"
] |
nips_2021_a1wQOh27zcy | Contrast and Mix: Temporal Contrastive Video Domain Adaptation with Background Mixing | Unsupervised domain adaptation, which aims to adapt models trained on a labeled source domain to a completely unlabeled target domain, has attracted much attention in recent years. While many domain adaptation techniques have been proposed for images, the problem of unsupervised domain adaptation in videos remains largely underexplored. In this paper, we introduce Contrast and Mix (CoMix), a new contrastive learning framework that aims to learn discriminative invariant feature representations for unsupervised video domain adaptation. First, unlike existing methods that rely on adversarial learning for feature alignment, we utilize temporal contrastive learning to bridge the domain gap by maximizing the similarity between encoded representations of an unlabeled video at two different speeds as well as minimizing the similarity between different videos played at different speeds. Second, we propose a novel extension to the temporal contrastive loss by using background mixing that allows additional positives per anchor, thus adapting contrastive learning to leverage action semantics shared across both domains. Moreover, we also integrate a supervised contrastive learning objective using target pseudo-labels to enhance discriminability of the latent space for video domain adaptation. Extensive experiments on several benchmark datasets demonstrate the superiority of our proposed approach over state-of-the-art methods. Project page: https://cvir.github.io/projects/comix.
| accept | The initial reviews raised several concerns about novelty, missing experiments, and analyses that provide insights. The rebuttal addressed most of the concerns; some reviewers raised their ratings after the rebuttal. Overall, the paper shows how to combine existing ideas to tackle a challenging problem in a simple and effective manner. The reviews are generally positive, mainly due to the good empirical performance, but they are not overwhelmingly enthusiastic about the technical novelty. I carefully read all the reviews, rebuttal, and discussion, and also read the paper in detail. I tend to agree that the paper lacks novelty but must say the proposed approach is simple, effective, and performs well. I am recommending an acceptance, but wouldn't mind if it is rejected. | test | [
"4sVIDs0O4nn",
"Qr3mARKBhwA",
"wqX46R7bsCa",
"a-tuGpAAX1c",
"KIaeWT6NS97",
"5RR7duzmw-q",
"2VBwNan0pjc",
"YShH0x3mNT",
"2C84NVUnvMr",
"fg_HcxKFCK8",
"Vqs6wtCdIN2",
"yxlBmo_y_q1",
"z-asGaJ5f3n",
"YfH95USZ9t",
"Dw6np9RGeRI",
"owrpmXptvH3",
"yNwprjWtf1",
"LJpwM5zmux",
"chIX120rna",
... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"... | [
" We are glad that our response could clarify all the interesting queries. Thanks a lot for the constructive discussion which, we are sure, made our paper stronger.",
" Thanks for the quick response. This looks clearer. ",
" Thanks for the feedback! We are glad at your appreciation of our effort. Please find be... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"Qr3mARKBhwA",
"wqX46R7bsCa",
"a-tuGpAAX1c",
"Dw6np9RGeRI",
"2VBwNan0pjc",
"fg_HcxKFCK8",
"chIX120rna",
"nips_2021_a1wQOh27zcy",
"nips_2021_a1wQOh27zcy",
"bY_WcgVnBTd",
"z-asGaJ5f3n",
"nips_2021_a1wQOh27zcy",
"p6KgQTUIhn",
"owrpmXptvH3",
"YfH95USZ9t",
"55BYvx90TeL",
"nips_2021_a1wQOh... |
nips_2021_bV89lw5OF8x | The Flip Side of the Reweighted Coin: Duality of Adaptive Dropout and Regularization | Among the most successful methods for sparsifying deep (neural) networks are those that adaptively mask the network weights throughout training. By examining this masking, or dropout, in the linear case, we uncover a duality between such adaptive methods and regularization through the so-called “η-trick” that casts both as iteratively reweighted optimizations. We show that any dropout strategy that adapts to the weights in a monotonic way corresponds to an effective subquadratic regularization penalty, and therefore leads to sparse solutions. We obtain the effective penalties for several popular sparsification strategies, which are remarkably similar to classical penalties commonly used in sparse optimization. Considering variational dropout as a case study, we demonstrate similar empirical behavior between the adaptive dropout method and classical methods on the task of deep network sparsification, validating our theory.
| accept | The paper shows that popular heuristics in deep learning, e.g. dropout, in the context of linear regression amount to regularizing the empirical objective with sub-quadratic penalties which promote sparsity. The paper introduces interesting tools that may be of interest beyond the scope of the paper. Overall, a good paper. | val | [
"5I0Sh3ks_dU",
"g5xQsNBHTi",
"j78iQ0NwYXU",
"LXzuP_XDfqr",
"Z-JMgvDHtGO",
"_8-CpRiBHZV",
"JyB-7VUHxXl",
"4Gp2sq206Zc",
"bLj4IbryuWZ",
"6dyoNX9xmT"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Hi,\nThe authors have partially addressed my concerns, I've increased my score.\nThanks!\n",
"For linear regression, the paper shows that several popular heuristics used in deep learning correspond to regularized empirical risk minimization with sub-quadratic penalties, thereby, arguing that these methods indee... | [
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
8,
6
] | [
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
3,
2
] | [
"j78iQ0NwYXU",
"nips_2021_bV89lw5OF8x",
"g5xQsNBHTi",
"nips_2021_bV89lw5OF8x",
"6dyoNX9xmT",
"g5xQsNBHTi",
"LXzuP_XDfqr",
"bLj4IbryuWZ",
"nips_2021_bV89lw5OF8x",
"nips_2021_bV89lw5OF8x"
] |
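
The eta-trick referenced in the record above uses the identity |w| = min_{eta>0} (w^2/eta + eta)/2, so an adaptive reweighting with eta_j = |w_j| turns a sequence of ridge solves into a lasso solver. A minimal numpy IRLS sketch on synthetic noiseless data (the floor eps is a numerical-stability assumption):

```python
import numpy as np

def irls_lasso(X, y, lam=1.0, iters=100, eps=1e-8):
    """Lasso via the eta-trick: alternate a per-coordinate weighted ridge
    solve with the reweighting eta_j = |w_j| (floored at eps)."""
    d = X.shape[1]
    w, eta = np.zeros(d), np.ones(d)
    for _ in range(iters):
        # ridge with per-coordinate penalty lam / eta_j on w_j^2
        H = X.T @ X + lam * np.diag(1.0 / np.maximum(eta, eps))
        w = np.linalg.solve(H, X.T @ y)
        eta = np.abs(w)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
w_true = np.zeros(10); w_true[:3] = [2.0, -1.0, 0.5]
y = X @ w_true
print(np.round(irls_lasso(X, y), 2))   # recovers a near-sparse solution
```
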
nips_2021_-__S9T8QrDD | Active Learning of Convex Halfspaces on Graphs | We systematically study the query complexity of learning geodesically convex halfspaces on graphs. Geodesic convexity is a natural generalisation of Euclidean convexity and allows the definition of convex sets and halfspaces on graphs. We prove an upper bound on the query complexity linear in the treewidth and the minimum hull set size but only logarithmic in the diameter. We show tight lower bounds along well-established separation axioms and identify the Radon number as a central parameter of the query complexity and the VC dimension. While previous bounds typically depend on the cut size of the labelling, all parameters in our bounds can be computed from the unlabelled graph. We provide evidence that ground-truth communities in real-world graphs are often convex and empirically compare our proposed approach with other active learning algorithms.
| accept | This is an original work which proposes and explores a natural and elegant abstraction of convexity. The reviewers unanimously agreed to accept this paper. | train | [
"bV4_zfxN37P",
"zu3fJa-0db6",
"gyHugvnf0P8",
"5EWqLGboQa",
"f70HerDwm4Q",
"fxz88ahzGwn",
"ReSZSJ-M2PZ",
"FFNniqGxJPU",
"R478L_c_tKm",
"RMv2SCt6gLR",
"YpJiZLofXkG",
"sYcJXfdVPJ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper studies the query complexity of learning geodesically convex halfspaces on graphs. In the present context, a geodesically convex set of a graph $G = (V,E)$ is a subset $C$ of $V$ such that each shortest path containing any two vertices of $C$ is contained in $C$, and a halfspace of $G$ is any convex se... | [
7,
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
2,
-1,
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_-__S9T8QrDD",
"YpJiZLofXkG",
"nips_2021_-__S9T8QrDD",
"fxz88ahzGwn",
"nips_2021_-__S9T8QrDD",
"ReSZSJ-M2PZ",
"R478L_c_tKm",
"f70HerDwm4Q",
"gyHugvnf0P8",
"sYcJXfdVPJ",
"bV4_zfxN37P",
"nips_2021_-__S9T8QrDD"
] |
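
Geodesic convexity, the central notion in the record above, is easy to state operationally: a vertex set C is convex iff every shortest path between two members of C stays inside C. A brute-force networkx check of the definition (exponential in the worst case; the paper's contribution is query-efficient active learning, not this test):

```python
import networkx as nx
from itertools import combinations

def is_geodesically_convex(G, C):
    """C is convex iff every shortest path between two vertices of C
    stays inside C (brute force; fine for small graphs)."""
    C = set(C)
    for u, v in combinations(C, 2):
        for path in nx.all_shortest_paths(G, u, v):
            if not set(path) <= C:
                return False
    return True

G = nx.cycle_graph(6)                          # vertices 0..5 in a ring
print(is_geodesically_convex(G, {0, 1, 2}))    # True: an arc is convex
print(is_geodesically_convex(G, {0, 2}))       # False: 1 lies on the geodesic
```
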
nips_2021_H4e7mBnC9f0 | Differentiable Spike: Rethinking Gradient-Descent for Training Spiking Neural Networks | Spiking Neural Networks (SNNs) have emerged as a biology-inspired method mimicking the spiking nature of brain neurons. This bio-mimicry underlies SNNs' energy-efficient inference on neuromorphic hardware. However, it also causes an intrinsic disadvantage in training high-performing SNNs from scratch since the discrete spike prohibits the gradient calculation. To overcome this issue, the surrogate gradient (SG) approach has been proposed as a continuous relaxation. Yet the choice of SG is heuristic, and it remains unclear how the SG benefits SNN training. In this work, we first theoretically study the gradient descent problem in SNN training and introduce the finite difference gradient to quantitatively analyze the training behavior of SNNs. Based on the introduced finite difference gradient, we propose a new family of Differentiable Spike (Dspike) functions that can adaptively evolve during training to find the optimal shape and smoothness for gradient estimation. Extensive experiments over several popular network structures show that training SNNs with Dspike consistently outperforms the state-of-the-art training methods. For example, on the CIFAR10-DVS classification task, we can train a spiking ResNet-18 and achieve 75.4% top-1 accuracy with 10 time steps.
| accept | The authors present a method to train spiking neural networks (SNNs) using an adaptive surrogate gradient method.
Training of SNNs is difficult due to the well-known problem of non-existing derivative of the spiking neuron output. This is usually tackled using a surrogate gradient method. However, the functional form of the surrogate gradient is quite arbitrary.
The authors first analyze the training behavior of SNNs. They propose to use a parameterized function for the surrogate gradient (SG) and estimate its best actual shape based on a comparison with a finite-difference gradient. Then they use this SG for parameter updates. They test their method on challenging data sets such as CIFAR100 and ImageNet and show excellent performance.
The paper is very well written and clear. The adaptive surrogate gradient is a novel contribution that has potential to impact future SNN training methods. The experimental results are compelling.
Weaknesses and potential improvements:
The functional form of the adaptive surrogate gradient lacks motivation.
Reviewers have identified some unclear or over-stated statements. The authors should refine these statements accordingly.
| train | [
"icu4SPPjW3G",
"AdKMnX6Tcw",
"GUiRi8sR-4j",
"X-QKp0sjMPY",
"X2OMt0HsUE_",
"IHAzxxvcQ4L",
"hwq1RYGbub",
"_TKZ8OAmZQ",
"5PVTRTBR1sT",
"8v5reaUNjy2",
"aeCUuv6J0oy"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your constructive discussion on FDG and your time in reading our paper and rebuttal. Here I want to clarify a bit more on the contribution and completeness of our work. \n\n1. Our main contribution is NOT to create the FDG for general purposes but to apply it to solve the degeneration of representivity... | [
-1,
6,
-1,
5,
-1,
-1,
-1,
-1,
-1,
8,
5
] | [
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"aeCUuv6J0oy",
"nips_2021_H4e7mBnC9f0",
"X-QKp0sjMPY",
"nips_2021_H4e7mBnC9f0",
"8v5reaUNjy2",
"AdKMnX6Tcw",
"X-QKp0sjMPY",
"nips_2021_H4e7mBnC9f0",
"aeCUuv6J0oy",
"nips_2021_H4e7mBnC9f0",
"nips_2021_H4e7mBnC9f0"
] |
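
The surrogate-gradient idea in the record above keeps the hard spike in the forward pass but substitutes a smooth function for its derivative in the backward pass. A scalar numpy sketch with a sigmoid-derivative surrogate, whose temperature b stands in for the shape parameter that Dspike adapts during training (the specific surrogate form here is an assumption, not the paper's exact Dspike function):

```python
import numpy as np

def spike(u, thresh=1.0):
    """Forward pass: non-differentiable Heaviside spike."""
    return float(u >= thresh)

def surrogate_grad(u, thresh=1.0, b=2.0):
    """Backward pass: smooth stand-in for the Dirac derivative,
    here a scaled sigmoid derivative with temperature b."""
    s = 1.0 / (1.0 + np.exp(-b * (u - thresh)))
    return b * s * (1.0 - s)

# One-neuron toy: loss = (spike(w*x) - target)^2, with dL/dw computed
# through the surrogate in the chain rule.
w, x, target, lr = 0.8, 1.0, 1.0, 0.5
for _ in range(20):
    u = w * x
    grad = 2 * (spike(u) - target) * surrogate_grad(u) * x
    w -= lr * grad
print(round(w, 3), spike(w * x))   # w drifts above threshold; neuron fires
```
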
nips_2021_ACV8iBHtbR | Probabilistic Entity Representation Model for Reasoning over Knowledge Graphs | Logical reasoning over Knowledge Graphs (KGs) is a fundamental technique that can provide an efficient querying mechanism over large and incomplete databases. Current approaches employ spatial geometries such as boxes to learn query representations that encompass the answer entities and model the logical operations of projection and intersection. However, their geometry is restrictive and leads to non-smooth strict boundaries, which further results in ambiguous answer entities. Furthermore, previous works propose transformation tricks to handle unions which result in non-closure and thus cannot be chained in a stream. In this paper, we propose a Probabilistic Entity Representation Model (PERM) to encode entities as a Multivariate Gaussian density with mean and covariance parameters to capture its semantic position and smooth decision boundary, respectively. Additionally, we also define the closed logical operations of projection, intersection, and union that can be aggregated using an end-to-end objective function. On the logical query reasoning problem, we demonstrate that the proposed PERM significantly outperforms the state-of-the-art methods on various public benchmark KG datasets on standard evaluation metrics. We also evaluate PERM’s competence on a COVID-19 drug-repurposing case study and show that our proposed work is able to recommend drugs with substantially better F1 than current methods. Finally, we demonstrate the workings of PERM’s query answering process through a low-dimensional visualization of the Gaussian representations.
| accept | The paper attempts to improve logical reasoning over knowledge graphs. In this regard, the authors propose a novel technique of embedding complex logical queries and chained reasoning steps as Gaussian mixtures. It is claimed that this allows for increased model expressibility and is especially well suited to intersection. From empirical results, it seems the proposed method works well for certain types of queries. The reviewers found this work to be novel and interesting, but reached a consensus that the reasoning provided does not corroborate the experimental results. Also, the reviewers found the presentation of some of the results hard to comprehend (e.g. the visualizations). Also, it would be useful to understand and provide examples where the proposed model still fails. Thus, unfortunately I cannot recommend an acceptance of the paper in its current form; however, the authors are strongly encouraged to make the fixes and resubmit to the next venue. | train | [
"izOTWtFikDP",
"EKKiJdLvmkB",
"j5rp6a7jwhI",
"moNkrHe-hHt",
"JM-NjTER_Cc",
"gXZmAbf1TCQ",
"nGLu2t-sEXF",
"G8LSL7MMPTh",
"0wDeS1Fcsdl",
"Lvxg-AIrCfC",
"X9C5nPkvhSj",
"yVvAIhRdGNZ",
"Yb37WM0TjJA",
"mCBYlXUlhik"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks a lot for the response.\n\nYou are right, that the Query2Box loss is not discontinuous but actually non-smooth. However, the issue with the loss is not the gradient computation. The strict non-smooth borders of Query2Box make the answer representations very sensitive to the query borders and they cause inc... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"j5rp6a7jwhI",
"nips_2021_ACV8iBHtbR",
"G8LSL7MMPTh",
"nGLu2t-sEXF",
"gXZmAbf1TCQ",
"0wDeS1Fcsdl",
"Lvxg-AIrCfC",
"EKKiJdLvmkB",
"mCBYlXUlhik",
"yVvAIhRdGNZ",
"Yb37WM0TjJA",
"nips_2021_ACV8iBHtbR",
"nips_2021_ACV8iBHtbR",
"nips_2021_ACV8iBHtbR"
] |
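
One reason Gaussian query embeddings (record above) compose nicely is that the product of two Gaussian densities is again Gaussian in closed form, a natural "intersection". PERM's learned intersection operator is attention-based rather than this exact formula, so the sketch below shows only the motivating textbook identity:

```python
import numpy as np

def gaussian_intersection(mu1, S1, mu2, S2):
    """Closed-form (unnormalized) product of two Gaussian densities:
    a precision-weighted combination of means and covariances."""
    P1, P2 = np.linalg.inv(S1), np.linalg.inv(S2)
    S = np.linalg.inv(P1 + P2)
    mu = S @ (P1 @ mu1 + P2 @ mu2)
    return mu, S

mu1, S1 = np.array([0.0, 0.0]), np.eye(2)
mu2, S2 = np.array([2.0, 0.0]), 0.5 * np.eye(2)
mu, S = gaussian_intersection(mu1, S1, mu2, S2)
print(mu, np.diag(S))   # mean pulled toward the tighter (lower-variance) query
```
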
nips_2021_FackmHUDcXX | Black Box Probabilistic Numerics | Probabilistic numerics casts numerical tasks, such as the numerical solution of differential equations, as inference problems to be solved. One approach is to model the unknown quantity of interest as a random variable, and to constrain this variable using data generated during the course of a traditional numerical method. However, data may be nonlinearly related to the quantity of interest, rendering the proper conditioning of random variables difficult and limiting the range of numerical tasks that can be addressed. Instead, this paper proposes to construct probabilistic numerical methods based only on the final output from a traditional method. A convergent sequence of approximations to the quantity of interest constitutes a dataset, from which the limiting quantity of interest can be extrapolated, in a probabilistic analogue of Richardson’s deferred approach to the limit. This black box approach (1) massively expands the range of tasks to which probabilistic numerics can be applied, (2) inherits the features and performance of state-of-the-art numerical methods, and (3) enables provably higher orders of convergence to be achieved. Applications are presented for nonlinear ordinary and partial differential equations, as well as for eigenvalue problems, a setting for which no probabilistic numerical methods have yet been developed.
| accept | This paper suggests a "black box" approach to probabilistic numerics. The idea is to treat the output of existing (non-probabilistic) numerical methods as observations, after which the true/unknown quantities can be treated via probabilistic inference. Examples are given applying this idea to ODEs, to eigenvalue problems, and to uncertainty with nonlinear PDEs. All reviewers felt the paper was novel, technically sound, useful, and clearly written. | train | [
"cNFyRdHsJWN",
"vuUQCtwLIYa",
"OTreLc-8Hj7",
"66RpUCuzxC-",
"K5tkcQlMmu",
"Tk_uj82hExU",
"8hLApg1zz25",
"U0KAvLfJLoj",
"m8gVhD1HV1i"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarifying response. My recommendation to accept this paper remains the same.",
" We thank the reviewer for a positive review. We chose to make our comparisons with PN ODE methods because these are among the most well studied PN methods. Note that since the submission we have additionally made... | [
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
2,
4,
3
] | [
"66RpUCuzxC-",
"8hLApg1zz25",
"m8gVhD1HV1i",
"U0KAvLfJLoj",
"Tk_uj82hExU",
"nips_2021_FackmHUDcXX",
"nips_2021_FackmHUDcXX",
"nips_2021_FackmHUDcXX",
"nips_2021_FackmHUDcXX"
] |
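
The black-box construction in the record above fits a probabilistic model to a convergent sequence of solver outputs x(h) and extrapolates to h = 0, a probabilistic analogue of Richardson extrapolation. A minimal Bayesian polynomial-regression sketch on forward-Euler estimates of e; the polynomial model, prior, and noise level are illustrative assumptions, not the paper's GP construction:

```python
import numpy as np

def extrapolate_to_zero(h, x, deg=2, noise=1e-6):
    """Bayesian polynomial fit x(h) ~ sum_k c_k h^k; the posterior on c_0
    is the probabilistic estimate of the h -> 0 limit."""
    Phi = np.vander(h, deg + 1, increasing=True)      # columns 1, h, h^2, ...
    prior_prec, lik_prec = 1e-6, 1.0 / noise
    A = prior_prec * np.eye(deg + 1) + lik_prec * Phi.T @ Phi
    cov = np.linalg.inv(A)
    mean = lik_prec * cov @ Phi.T @ x
    return mean[0], np.sqrt(cov[0, 0])                # posterior mean/std at h=0

# Forward-Euler estimates of e = x(1) for x' = x, x(0) = 1, at shrinking steps.
h = np.array([0.2, 0.1, 0.05, 0.025])
x = np.array([(1 + hi) ** round(1 / hi) for hi in h])
print(extrapolate_to_zero(h, x))   # close to e = 2.71828, with an error bar
```
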
nips_2021_nNfj0pVn4Q | Interpolation can hurt robust generalization even when there is no noise | Numerous recent works show that overparameterization implicitly reduces variance for min-norm interpolators and max-margin classifiers. These findings suggest that ridge regularization has vanishing benefits in high dimensions. We challenge this narrative by showing that, even in the absence of noise, avoiding interpolation through ridge regularization can significantly improve generalization. We prove this phenomenon for the robust risk of both linear regression and classification, and hence provide the first theoretical result on \emph{robust overfitting}.
| accept | This paper aims to understand the effect of ridge regularization in overparameterized settings. It is believed that overparameterized models already have implicit bias and incorporating regularization does not provide any additional benefits in terms of standard risk. However, this paper shows that, for linear models, employing ridge regularization does help improve robust generalization in both regression and classification settings, even when the training is performed with noiseless data.
All reviewers agree that the paper studies an interesting and timely problem, and makes multiple novel theoretical contributions. There were some concerns about the writing of the paper that have been resolved via the discussion among the authors and the reviewers. Once reviewers' feedback is incorporated, this paper will be a valuable addition to NeurIPS 2021. | train | [
"qNXPWsoQmWm",
"_aAQSGlZm",
"KUPQ03ObRg",
"Skd7jRDSO3q",
"L1pNPiQqIp",
"BbsiKczmCHt",
"el3YLkCHNv9",
"O8lLlMaEsTJ",
"of7nAy1zpch",
"YmNQ9N7daYR",
"-4tHqeNSct",
"yyTFb7rFl4z"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper studies how regularization affects the robust population risk in the overparameterization regime. The authors study the behavior of linear regression and logistic regression under sufficient statistical settings. In particular, even in the noiseless regime, the population robust risk benefits from the r... | [
7,
-1,
-1,
7,
5,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
2,
-1,
-1,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_nNfj0pVn4Q",
"KUPQ03ObRg",
"el3YLkCHNv9",
"nips_2021_nNfj0pVn4Q",
"nips_2021_nNfj0pVn4Q",
"of7nAy1zpch",
"qNXPWsoQmWm",
"Skd7jRDSO3q",
"yyTFb7rFl4z",
"L1pNPiQqIp",
"nips_2021_nNfj0pVn4Q",
"nips_2021_nNfj0pVn4Q"
] |
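
For linear regression with l2-bounded (consistent) perturbations, the robust square loss has the closed form (|y - w^T x| + eps * ||w||)^2, which makes the interpolation-vs-ridge comparison in the record above easy to compute. The sketch below evaluates both solutions on a noiseless overparameterized instance; which one wins depends on the regime, which is precisely the paper's point, so no particular ordering is asserted here.

```python
import numpy as np

def robust_risk(w, X, y, eps):
    """Closed-form l2 adversarial square loss:
    max_{||d|| <= eps} (y - w^T(x + d))^2 = (|y - w^T x| + eps * ||w||)^2."""
    return np.mean((np.abs(y - X @ w) + eps * np.linalg.norm(w)) ** 2)

rng = np.random.default_rng(0)
n, d, eps = 30, 100, 0.5                        # overparameterized, noiseless
w_star = np.zeros(d); w_star[0] = 1.0
X = rng.normal(size=(n, d)); y = X @ w_star
w_interp = X.T @ np.linalg.solve(X @ X.T, y)    # min-norm interpolator
w_ridge = np.linalg.solve(X.T @ X + 5.0 * np.eye(d), X.T @ y)
Xt = rng.normal(size=(2000, d)); yt = Xt @ w_star
print(robust_risk(w_interp, Xt, yt, eps), robust_risk(w_ridge, Xt, yt, eps))
```
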
nips_2021_npUxA--_nyX | On the Equivalence between Neural Network and Support Vector Machine | Yilan Chen, Wei Huang, Lam Nguyen, Tsui-Wei Weng | accept | This paper establishes the equivalence between neural networks and support vector machines, following the recent trend of identifying equivalences between infinite-width neural networks and kernel machines. In particular, the equivalence covers neural network classifiers trained by the soft-margin loss with subgradient descent, extends to general regularized loss functions (including L2 regularization), and the authors show that the infinite-width limit falls in the NTK regime. As applications of the developed theory, the authors obtain non-vacuous generalization bounds for NNs via the corresponding kernel machines, as well as robustness certificates.
After the discussion, the reviewers’ final recommendations are 2 weak accepts, 1 weak reject, and 1 reject. Given that the reject recommendation comes with low confidence and the reviewer remained disengaged despite the authors' significant attempts to clarify, the AC is inclined to significantly downweight that suggestion. Even so, the paper remains borderline.
In the end, the majority of reviewers agreed there’s merit in the paper (new equivalence, L2 regularization, application to robustness and generalization bounds) and that it should be published. The question remained whether NeurIPS is the appropriate venue. Given that the paper tackles an important question, provides non-trivial insights, and the equivalence it establishes is deemed correct by the expert reviewers, the AC believes the paper’s claims are significant enough and would benefit NeurIPS participants.
As an aside: one of the reviewers, in the end, had a concern/question about the generalization bound plots (provided in an anonymous link). It appears that in the right panel the train/test curves follow the same line, which is questionable. Also, the experiment should be run much further in training time (near convergence).
| train | [
"frqtp93R5T9",
"Vo2I1TgrPL",
"cqZSivK2EFZ",
"BQOXUjyYAG",
"MR4w2zWTyB",
"jHM7ZmxzOhI",
"wk_yaYojMyc",
"2HiE_ghm3d",
"bB4KuRTMpGZ",
"wdigK5xNj-",
"ykznAN_Tbh",
"ZDBzjiq_PEL",
"yX2l0iHiFAD",
"gkrdVOB1zW",
"pjE-cxCj1Nw",
"M6uFoNmQ0eK",
"386axLasfX",
"jSU9EnFi8aU",
"yOyVEBcAm_O"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer fPqE,\n\nThank you for the new feedback and increasing the score. We are glad that our clarification and the additional details are helpful, and we will include above discussion in the revised manuscript accordingly. Thank you very much for your time and helping us to improve the manuscript!",
" T... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4
] | [
"Vo2I1TgrPL",
"wdigK5xNj-",
"nips_2021_npUxA--_nyX",
"cqZSivK2EFZ",
"pjE-cxCj1Nw",
"wk_yaYojMyc",
"bB4KuRTMpGZ",
"nips_2021_npUxA--_nyX",
"ZDBzjiq_PEL",
"M6uFoNmQ0eK",
"pjE-cxCj1Nw",
"yX2l0iHiFAD",
"gkrdVOB1zW",
"2HiE_ghm3d",
"jSU9EnFi8aU",
"cqZSivK2EFZ",
"yOyVEBcAm_O",
"nips_2021_... |
nips_2021_oIhzg4GJeOf | Learning Semantic Representations to Verify Hardware Designs | Verification is a serious bottleneck in the industrial hardware design cycle, routinely requiring person-years of effort. Practical verification relies on a "best effort" process that simulates the design on test inputs. This suggests a new research question: Can this simulation data be exploited to learn a continuous representation of a hardware design that allows us to predict its functionality? As a first approach to this new problem, we introduce Design2Vec, a deep architecture that learns semantic abstractions of hardware designs. The key idea is to work at a higher level of abstraction than the gate or the bit level, namely the Register Transfer Level (RTL), which is somewhat analogous to software source code, and can be represented by a graph that incorporates control and data flow. This allows us to learn representations of RTL syntax and semantics using a graph neural network. We apply these representations to several tasks within verification, including predicting what cover points of the design will be exercised by a test, and generating new tests that will exercise desired cover points. We evaluate Design2Vec on three real-world hardware designs, including an industrial chip used in commercial data centers. Our results demonstrate that Design2Vec dramatically outperforms baseline approaches that do not incorporate the RTL semantics, scales to industrial designs, and can generate tests that exercise design points that are currently hard to cover with manually written tests by design verification experts.
| accept | The idea of predictive coverage/testing for RTL designs using modern representation learning methods is interesting and potentially useful. Most of the reviewer discussion surrounded the best ways to evaluate such a method; eventually reviewers agreed that the authors did as well as could be expected given the state of open tools for verification -- this broader community issue should be mentioned in the revision. Some reviewers also requested clarifications and details that should be incorporated in the revision. Overall a good step in this application space. | train | [
"mPTkr6Z6nm",
"fXmL6icY8Ss",
"vce2wFYfdH",
"2YYJxqptwnW",
"i_4DCfjaaLd",
"Z8xqx87DQCz",
"k709KdSJqfl",
"c2lpqsbLS-",
"Ku8VsgUlkm7",
"Ms4PUfGMlzH",
"ZwGRV83Gv41"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper introduces Design2Vec, which models the semantics of RTL programs using a graph neural network. The representation is used for 2 tasks: coverage prediction and test generation. The experiments on different designs show that Design2Vec can achieve high accuracy on the coverage prediction task and can als... | [
7,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
5,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_oIhzg4GJeOf",
"vce2wFYfdH",
"c2lpqsbLS-",
"nips_2021_oIhzg4GJeOf",
"k709KdSJqfl",
"2YYJxqptwnW",
"2YYJxqptwnW",
"mPTkr6Z6nm",
"nips_2021_oIhzg4GJeOf",
"ZwGRV83Gv41",
"nips_2021_oIhzg4GJeOf"
] |
nips_2021_Ja-hVQrfeGZ | Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training | Conditional Generative Adversarial Networks (cGAN) generate realistic images by incorporating class information into GAN. While one of the most popular cGANs is an auxiliary classifier GAN with softmax cross-entropy loss (ACGAN), it is widely known that training ACGAN is challenging as the number of classes in the dataset increases. ACGAN also tends to generate easily classifiable samples with a lack of diversity. In this paper, we introduce two cures for ACGAN. First, we identify that gradient exploding in the classifier can cause an undesirable collapse in early training, and projecting input vectors onto a unit hypersphere can resolve the problem. Second, we propose the Data-to-Data Cross-Entropy loss (D2D-CE) to exploit relational information in the class-labeled dataset. On this foundation, we propose the Rebooted Auxiliary Classifier Generative Adversarial Network (ReACGAN). The experimental results show that ReACGAN achieves state-of-the-art generation results on CIFAR10, Tiny-ImageNet, CUB200, and ImageNet datasets. We also verify that ReACGAN benefits from differentiable augmentations and that D2D-CE harmonizes with StyleGAN2 architecture. Model weights and a software package that provides implementations of representative cGANs and all experiments in our paper are available at https://github.com/POSTECH-CVLab/PyTorch-StudioGAN.
| accept | This paper proposes two improvements to address the low diversity problem of auxiliary classifier GANs. First, the classifier's input vectors are projected onto a unit hypersphere to avoid early training collapse caused by exploding gradients. Second, a data-to-data cross-entropy loss, which is similar to a contrastive GAN loss, is used to better explore class information. Though the proposed method is rather heuristic, it shows impressive results in conditional image generation on real natural images. The reviewers agree that the proposed method is technically sound, novel, and the paper is well written and organized. However, as pointed out by reviewer KmwA, the proposed method does not mitigate the theoretical problem of ACGAN when the class distributions have support overlaps, which is also verified by the additional experiments on simulated data. I would recommend acceptance of this paper given its novelty and impressive performance, and I highly suggest the authors add simulations as done in the TAC paper and report the results of the combination of TAC in their real experiments (with proper discussions), as suggested by reviewer KmwA. | train | [
"RLiTuCLubcH",
"lYoC1RFkvwp",
"3nKbsaLbDRj",
"R_dGUMae-wY",
"xNLbcPWhrGQ",
"q3NAR34DWj5",
"1-TS4P4rBR4",
"7XSI2u2WhVq",
"gjjQNnir0q",
"wexyTwG5xP4",
"ZYi32GyDYVG",
"bHSdo9wkUQh"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks very much for the response. I like the added analysis, which would make the rationality of the proposed method stronger.",
" Thanks for your response which address my concerns. I think incorporating the discussions and experimental results during the rebuttal period to the final draft could significantly... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"q3NAR34DWj5",
"3nKbsaLbDRj",
"R_dGUMae-wY",
"xNLbcPWhrGQ",
"ZYi32GyDYVG",
"bHSdo9wkUQh",
"wexyTwG5xP4",
"gjjQNnir0q",
"nips_2021_Ja-hVQrfeGZ",
"nips_2021_Ja-hVQrfeGZ",
"nips_2021_Ja-hVQrfeGZ",
"nips_2021_Ja-hVQrfeGZ"
] |
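
A simplified version of the data-to-data cross-entropy from the record above: the positive logit is the sample's similarity to its class proxy, and the negatives are its similarities to differently labeled samples in the batch. The paper's margins and clamping are omitted, so treat this as a schematic of the loss structure, not the exact D2D-CE.

```python
import numpy as np

def d2d_ce(feats, labels, proxies, tau=0.1):
    """Positive: sample-to-proxy similarity; negatives: sample-to-sample
    similarities to other-class samples (data-to-data terms)."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    w = proxies / np.linalg.norm(proxies, axis=1, keepdims=True)
    loss = 0.0
    for i in range(len(f)):
        pos = np.exp(f[i] @ w[labels[i]] / tau)
        neg = sum(np.exp(f[i] @ f[j] / tau)
                  for j in range(len(f)) if labels[j] != labels[i])
        loss += -np.log(pos / (pos + neg))
    return loss / len(f)

rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 8)); labels = np.array([0, 0, 1, 1, 2, 2])
proxies = rng.normal(size=(3, 8))
print(d2d_ce(feats, labels, proxies))
```
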
nips_2021_kFJoj7zuDVi | Towards a Theoretical Framework of Out-of-Distribution Generalization | Generalization to out-of-distribution (OOD) data is one of the central problems in modern machine learning. Recently, there has been a surge of attempts to propose algorithms that mainly build upon the idea of extracting invariant features. Although intuitively reasonable, theoretical understanding of what kind of invariance can guarantee OOD generalization is still limited, and generalization to arbitrary out-of-distribution data is clearly impossible. In this work, we take the first step towards rigorous and quantitative definitions of 1) what OOD is; and 2) what it means to say that an OOD problem is learnable. We also introduce a new concept of expansion function, which characterizes to what extent the variance is amplified in the test domains over the training domains, and therefore gives a quantitative meaning of invariant features. Based on these, we prove an OOD generalization error bound. It turns out that OOD generalization largely depends on the expansion function. As recently pointed out by Gulrajani & Lopez-Paz (2020), any OOD learning algorithm without a model selection module is incomplete. Our theory naturally induces a model selection criterion. Extensive experiments on benchmark OOD datasets demonstrate that our model selection criterion has a significant advantage over baselines.
| accept | The authors give upper and lower bounds on OOD generalization error in terms of an "expansion function". They propose a model selection technique based on these, and validate with experiments. Most reviewers (MoMA, ZmeN, wFmp) agreed that the paper addresses an important problem, namely formalizing notions of OOD generalization. A major concern (from reviewer MoMA, also wFmp somewhat) is that the bounds given aren't particularly insightful, e.g. towards understanding methods which make OOD generalization possible through structural assumptions (e.g. ICP, IRM). Reviewer MoMA also felt that the paper overclaims throughout, e.g. in claiming a "complete characterization" of OOD generalization. I agree with both of these criticisms, the authors' rebuttal notwithstanding, and moreover I think they're important enough to prevent acceptance at this time. | val | [
"SAEacBAnLb",
"SMSctFEbFG",
"WK2M1i_iClW",
"MskoxP9E9QZ",
"MTduMCYAgeT",
"AmXE6esCjBi",
"liRVsE27x1",
"piZD8FYipoV",
"4dpvj_rbik",
"us7nEcELp46",
"NSJmUZl1vru",
"TqkyUeGFuQT",
"fnOIwZH6kOX",
"B38fer5ShYL"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your prompt reply and valuable feedback. We will improve the manuscript based on the feedback given in the comments.",
" I think this comment does a slightly better job of addressing my concerns; the original response was quite dismissive and felt like it was basically just restating the paper res... | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
2,
4,
3
] | [
"SMSctFEbFG",
"MTduMCYAgeT",
"4dpvj_rbik",
"piZD8FYipoV",
"us7nEcELp46",
"NSJmUZl1vru",
"nips_2021_kFJoj7zuDVi",
"B38fer5ShYL",
"fnOIwZH6kOX",
"liRVsE27x1",
"TqkyUeGFuQT",
"nips_2021_kFJoj7zuDVi",
"nips_2021_kFJoj7zuDVi",
"nips_2021_kFJoj7zuDVi"
] |
nips_2021_X4_aAfxsOoE | Slice Sampling Reparameterization Gradients | Many probabilistic modeling problems in machine learning use gradient-based optimization in which the objective takes the form of an expectation. These problems can be challenging when the parameters to be optimized determine the probability distribution under which the expectation is being taken, as the naïve Monte Carlo procedure is not differentiable. Reparameterization gradients make it possible to efficiently perform optimization of these Monte Carlo objectives by transforming the expectation to be differentiable, but the approach is typically limited to distributions with simple forms and tractable normalization constants. Here we describe how to differentiate samples from slice sampling to compute slice sampling reparameterization gradients, enabling a richer class of Monte Carlo objective functions to be optimized. Slice sampling is a Markov chain Monte Carlo algorithm for simulating samples from probability distributions; it only requires a density function that can be evaluated point-wise up to a normalization constant, making it applicable to a variety of inference problems and unnormalized models. Our approach is based on the observation that when the slice endpoints are known, the sampling path is a deterministic and differentiable function of the pseudo-random variables, since the algorithm is rejection-free. We evaluate the method on synthetic examples and apply it to a variety of applications with reparameterization of unnormalized probability distributions.
| accept | This paper introduces a differentiable slice sampler and considers its application to variational inference. The reviewers unanimously agreed that this was an important contribution. One reviewer was concerned that the use of uniform search directions was a drawback of the method. Many reviewers felt that the paper could benefit from fewer, more focused experiments.
After discussion, the reviewers unanimously agreed that the paper should be accepted, as long as the authors make the changes that they propose (including additional discussions on the method and reducing the density of the experiments). I am happy to recommend acceptance. | train | [
"l1oW0vXP9J",
"F6D45mA90vm",
"dOUAbkyCO03",
"wchNrNICrV8",
"ji30C8V-41K",
"tkUlKxv4kOe",
"d_hkr_q6xQ8",
"VL-9WT6ZYZu",
"Xl_VBLE_pz",
"q2NsC2-_CdN"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose a technique to compute reparameterized gradients w.r.t. parameters of an unnormalized variational density based on slice sampling.\nInstead of performing slice sampling with the common \"stepping out & shrinkage\" procedure the authors use a numeric root finding procedure to determine the slice... | [
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_X4_aAfxsOoE",
"VL-9WT6ZYZu",
"nips_2021_X4_aAfxsOoE",
"tkUlKxv4kOe",
"q2NsC2-_CdN",
"dOUAbkyCO03",
"Xl_VBLE_pz",
"l1oW0vXP9J",
"nips_2021_X4_aAfxsOoE",
"nips_2021_X4_aAfxsOoE"
] |
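
The key observation in the record above is that once the slice endpoints are known, the rejection-free draw x = l + u (r - l) is a deterministic, differentiable function of the pseudo-random variables. For a 1D Gaussian location family the endpoints are closed-form, so differentiability can be checked directly; in general the endpoints would come from a differentiable root-finding pass, which this sketch sidesteps.

```python
import numpy as np

def slice_chain(theta, n_steps=50, seed=0):
    """1D slice sampler for the unnormalized density exp(-(x - theta)^2 / 2).
    With the pseudo-random draws fixed (same seed), each sample is a
    deterministic, differentiable function of theta."""
    rng = np.random.default_rng(seed)
    x = 0.0
    for _ in range(n_steps):
        u1, u2 = rng.uniform(), rng.uniform()
        w = np.sqrt((x - theta) ** 2 - 2.0 * np.log(u1))  # slice half-width
        x = (theta - w) + u2 * 2.0 * w                    # rejection-free draw
    return x

theta, h = 1.5, 1e-5
g = (slice_chain(theta + h) - slice_chain(theta - h)) / (2 * h)
print(slice_chain(theta), g)   # sample near theta; d(sample)/d(theta) approx 1
```
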
nips_2021_ICBPhB079dQ | Multi-Label Learning with Pairwise Relevance Ordering | Precisely annotating objects with multiple labels is costly and has become a critical bottleneck in real-world multi-label classification tasks. Instead, deciding the relative order of label pairs is obviously less laborious than collecting exact labels. However, the supervised information of pairwise relevance ordering is less informative than exact labels. It is thus an important challenge to effectively learn with such weak supervision. In this paper, we formalize this problem as a novel learning framework, called multi-label learning with pairwise relevance ordering (PRO). We show that the unbiased estimator of classification risk can be derived with a cost-sensitive loss only from PRO examples. Theoretically, we provide the estimation error bound for the proposed estimator and further prove that it is consistent with respect to the commonly used ranking loss. Empirical studies on multiple datasets and metrics validate the effectiveness of the proposed method.
| accept | The paper presents a new framework for multi-label learning that does not require annotators to precisely assign class labels for each instance and thus is cost-effective. The theory part supports the algorithm design and the experiments show good results. The authors' responses addressed the questions of the reviewers. | val | [
"YWHWg3ROzOf",
"hZGlxwyrxoo",
"HSAgH4tgI3L",
"N61GrnMfuzf",
"FaRARyyveg",
"aYKM9Ch9zM",
"AjnMibdglvx",
"6YRJFRNlgLS",
"qrw1HqyYINg"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper attempts to solve the multi-label learning problem with only pairwise relevance ordering. Authors argue that deciding the relative order of label pairs is obviously less laborious than collecting exact labels. A method is also presented to learn multi-label model with only pairwise relevance ordering an... | [
6,
-1,
-1,
-1,
-1,
-1,
8,
8,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nips_2021_ICBPhB079dQ",
"AjnMibdglvx",
"YWHWg3ROzOf",
"qrw1HqyYINg",
"6YRJFRNlgLS",
"AjnMibdglvx",
"nips_2021_ICBPhB079dQ",
"nips_2021_ICBPhB079dQ",
"nips_2021_ICBPhB079dQ"
] |
nips_2021_B2cyX_ht4VI | Sampling with Trusthworthy Constraints: A Variational Gradient Framework | Sampling-based inference and learning techniques, especially Bayesian inference, provide an essential approach to handling uncertainty in machine learning (ML). As these techniques are increasingly used in daily life, it becomes essential to safeguard the ML systems with various trustworthiness-related constraints, such as fairness, safety, and interpretability. Mathematically, enforcing these constraints in probabilistic inference can be cast into sampling from intractable distributions subject to general nonlinear constraints, for which practical efficient algorithms are still largely missing. In this work, we propose a family of constrained sampling algorithms which generalize Langevin Dynamics (LD) and Stein Variational Gradient Descent (SVGD) to incorporate a moment constraint specified by a general nonlinear function. By exploiting the gradient flow structure of LD and SVGD, we derive two types of algorithms for handling constraints, including a primal-dual gradient approach and the constraint-controlled gradient descent approach. We investigate the continuous-time mean-field limit of these algorithms and show that they have O(1/t) convergence under mild conditions. Moreover, the LD variant converges linearly assuming that a log-Sobolev-like inequality holds. Various numerical experiments are conducted to demonstrate the efficiency of our algorithms in trustworthy settings.
| accept | The reviewers agreed that this paper should be accepted. For the camera ready: aside from generally going through the reviews/responses and ensuring that all suggested changes are made, the reviewers had one more remark for the authors. In particular, while the reviewers appreciated the effort to compare Langevin/SVGD, they still strongly suggest that the authors suitably nuance the limitations of the SVGD theory. One idea brought forward by a reviewer that the others thought was acceptable is to state results for Langevin and defer SVGD to the appendix with careful statements of limitations. This highlights that they can be obtained in a very similar manner, but that they are less likely to hold in practice.
| train | [
"w9JJ_He7g4",
"ZqWA-8Czea6",
"DqrrakuMjb",
"d6ri066YQ7L",
"ukCXfd9y_dr",
"NhGhOg3_aKN",
"sFSSKe8klMG",
"teYAyDN2bhK",
"YPKHgn7LTFB",
"5L8xumQ9-jm",
"36nU_rJofqJ",
"dJ7d6k_RyFD",
"uBFSTeC5ptH",
"dfofz1sOvE",
"jiH6CtQ8mKU"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer, thanks a lot for your efforts in reviewing the draft. We will clarify the issue of LSI and SVGD.",
" Dear reviewer, thank you for your reply. As we said in the rebuttal, we will extend the discussion on primal-dual vs. CCGFs and polish the draft. Among others, we think one of the key practical (a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"NhGhOg3_aKN",
"d6ri066YQ7L",
"ukCXfd9y_dr",
"YPKHgn7LTFB",
"teYAyDN2bhK",
"sFSSKe8klMG",
"dfofz1sOvE",
"jiH6CtQ8mKU",
"uBFSTeC5ptH",
"dJ7d6k_RyFD",
"nips_2021_B2cyX_ht4VI",
"nips_2021_B2cyX_ht4VI",
"nips_2021_B2cyX_ht4VI",
"nips_2021_B2cyX_ht4VI",
"nips_2021_B2cyX_ht4VI"
] |
nips_2021_VjQw3v3FpJx | Robust and Decomposable Average Precision for Image Retrieval | In image retrieval, standard evaluation metrics rely on score ranking, e.g. average precision (AP). In this paper, we introduce a method for robust and decomposable average precision (ROADMAP) addressing two major challenges for end-to-end training of deep neural networks with AP: non-differentiability and non-decomposability. Firstly, we propose a new differentiable approximation of the rank function, which provides an upper bound of the AP loss and ensures robust training. Secondly, we design a simple yet effective loss function to reduce the decomposability gap between the AP in the whole training set and its averaged batch approximation, for which we provide theoretical guarantees. Extensive experiments conducted on three image retrieval datasets show that ROADMAP outperforms several recent AP approximation methods and highlight the importance of our two contributions. Finally, using ROADMAP for training deep models yields very good performances, outperforming state-of-the-art results on the three datasets. Code and instructions to reproduce our results will be made publicly available at https://github.com/elias-ramzi/ROADMAP.
| accept | The paper focuses on the problem of learning to rank and introduces a method, ROADMAP (RObust And DecoMposable Average Precision), consisting of a new surrogate loss for average precision (AP) and a "calibration" loss to be used together with the AP loss.
All reviewers recommended acceptance.
Accept. | train | [
"GfeIO3OFSQl",
"2KPdcPeJmr3",
"UKkbLnwVFb5",
"sKqJvJbK-qk",
"NgkYfLOyL5x",
"pLnDWibPr9k",
"ip9FLtx0BBg",
"0g3laourHSl",
"YEl87yc63hu",
"TRYgyHJYdrA",
"CF-L6o2ddrM",
"vtJEZUr8mPj"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you the authors for the response. I think they have provided adequate responses to all reviewers' concerns without the need to significantly edit the draft. The extra results on ROxf-RPar also add a significant impact on the landmark retrieval side of the community. Also, all reviewers agree that the theore... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
3
] | [
"pLnDWibPr9k",
"NgkYfLOyL5x",
"ip9FLtx0BBg",
"0g3laourHSl",
"vtJEZUr8mPj",
"YEl87yc63hu",
"TRYgyHJYdrA",
"CF-L6o2ddrM",
"nips_2021_VjQw3v3FpJx",
"nips_2021_VjQw3v3FpJx",
"nips_2021_VjQw3v3FpJx",
"nips_2021_VjQw3v3FpJx"
] |
nips_2021_7m6qvNqFjr | Fast rates for prediction with limited expert advice | El Mehdi Saad, Gilles Blanchard | accept | The reviewers are excited about the results of the paper and unanimously recommend acceptance. | train | [
"zTgzfSWk2qR",
"qWmcIoiK2GN",
"hs6GzojNCgl",
"PUic_XpdUn-",
"gdgIH1oU9K",
"22BR4DrqmT6",
"SmsGZ_-VcG6",
"VBIZM9-85W",
"pHY-y7_2fGQ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"UPDATE: The authors have addressed my questions in their response, and confirmed my initial positive view of the paper. Their plan for improving the presentation of the introduction seems satisfactory.\n\n---\n\nThe paper considers aggregation of a finite set of K predictors\n(experts) in the (batch) statistical l... | [
8,
7,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
4,
3,
-1,
-1,
-1,
-1,
-1,
2,
3
] | [
"nips_2021_7m6qvNqFjr",
"nips_2021_7m6qvNqFjr",
"pHY-y7_2fGQ",
"zTgzfSWk2qR",
"qWmcIoiK2GN",
"VBIZM9-85W",
"nips_2021_7m6qvNqFjr",
"nips_2021_7m6qvNqFjr",
"nips_2021_7m6qvNqFjr"
] |
nips_2021_HfpNVDg3ExA | Probabilistic Transformer For Time Series Analysis | Generative modeling of multivariate time series has remained challenging partly due to the complex, non-deterministic dynamics across long-distance timesteps. In this paper, we propose deep probabilistic methods that combine state-space models (SSMs) with transformer architectures. In contrast to previously proposed SSMs, our approaches use an attention mechanism to model non-Markovian dynamics in the latent space and avoid recurrent neural networks entirely. We also extend our models to include several layers of stochastic variables organized in a hierarchy for further expressiveness. Compared to transformer models, ours are probabilistic, non-autoregressive, and capable of generating diverse long-term forecasts with uncertainty estimates. Extensive experiments show that our models consistently outperform competitive baselines on various tasks and datasets, including time series forecasting and human motion prediction.
| accept | Two reviewers advocate strongly for acceptance, one reviewer has been convinced to favor acceptance, and one reviewer recommends rejection. I agree with the first two reviewers about the novelty of the proposed method, and I share their enthusiasm. I share the concern of reviewer jdmt about the omission of a comparison to convolutional SSMs. Because the authors’ empirical results are already fairly extensive and impressive, I do not favor rejection on these grounds, but I do encourage the authors to include a comparison to convolutional SSMs in the camera-ready version of their manuscript. | test | [
"DMNZUEMcboe",
"v--hsjMRdTM",
"MfjC2Wqts6s",
"DlHd_wS7JPT",
"N0gG_F2jVYr",
"Oq9AVrvhq0k",
"wow78gpz0NY",
"w0DCpSNUyc"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"## Summary\nThe authors propose a probabilistic generative non-autoregressive transformer based state space model (SSMs) and corresponding inference procedures for them. Starting with a detailed description of the related probabilities within their models. Followed by a description of the architecture split up int... | [
6,
5,
8,
-1,
-1,
-1,
-1,
9
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
5
] | [
"nips_2021_HfpNVDg3ExA",
"nips_2021_HfpNVDg3ExA",
"nips_2021_HfpNVDg3ExA",
"MfjC2Wqts6s",
"v--hsjMRdTM",
"w0DCpSNUyc",
"DMNZUEMcboe",
"nips_2021_HfpNVDg3ExA"
] |
nips_2021_2F_wnaioS6 | A Hierarchical Reinforcement Learning Based Optimization Framework for Large-scale Dynamic Pickup and Delivery Problems | The Dynamic Pickup and Delivery Problem (DPDP) is an essential problem in the logistics domain, which is NP-hard. The objective is to dynamically schedule vehicles among multiple sites to serve the online-generated orders such that the overall transportation cost could be minimized. The critical challenge of DPDP is that the orders are not known a priori, i.e., the orders are dynamically generated in real-time. To address this problem, existing methods partition the overall DPDP into fixed-size sub-problems by caching online-generated orders and solve each sub-problem, or, on this basis, utilize the predicted future orders to optimize each sub-problem further. However, the solution quality and efficiency of these methods are unsatisfactory, especially when the problem scale is very large. In this paper, we propose a novel hierarchical optimization framework to better solve large-scale DPDPs. Specifically, we design an upper-level agent to dynamically partition the DPDP into a series of sub-problems with different scales to optimize vehicle routes towards globally better solutions. Besides, a lower-level agent is designed to efficiently solve each sub-problem by incorporating the strengths of classical operational research-based methods with reinforcement learning-based policies. To verify the effectiveness of the proposed framework, real historical data is collected from the order dispatching system of Huawei Supply Chain Business Unit and used to build a functional simulator. Extensive offline simulation and online testing conducted on the industrial order dispatching system justify the superior performance of our framework over existing baselines.
| accept | All reviewers are positive about this paper. The pickup and delivery problem is important, and the bi-level optimization solution (or hierarchical RL) shows nontrivial technical depth.
Most concerns or confusions have been addressed or clarified by the rebuttal and the multiple rounds of interaction. This paper is recommended to be accepted. Please follow the post-review comments to polish this paper appropriately; for example, the generalization issue needs to be discussed in the revision. | train | [
"uU_6aTQi6bb",
"RCa6C4R4i7",
"H0kbTISpWg",
"FBGQbaP1Xr-",
"SVCYtdIn0yk",
"KinDeZTihK",
"BrDfqi5vjDV",
"J3KOG2LZXsR",
"5L5V6457c",
"P5_A-oqz_e7",
"tsIVAufoU8",
"2GS19EBfhaj",
"rLtCldEGcon",
"56P-PHxhSazW",
"MyK2epT6s9Z"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer"
] | [
"The authors propose an approach to solve the Dynamic Pickup and Delivery Problem. The main challenge of this problem is that the orders are not known a priori. To solve this, the authors propose a bi-level approach composed of: (1) an upper-level agent to decide whether to solve the problem for current orders in a... | [
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
7
] | [
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
3
] | [
"nips_2021_2F_wnaioS6",
"SVCYtdIn0yk",
"KinDeZTihK",
"nips_2021_2F_wnaioS6",
"5L5V6457c",
"tsIVAufoU8",
"uU_6aTQi6bb",
"FBGQbaP1Xr-",
"FBGQbaP1Xr-",
"MyK2epT6s9Z",
"rLtCldEGcon",
"56P-PHxhSazW",
"nips_2021_2F_wnaioS6",
"rLtCldEGcon",
"nips_2021_2F_wnaioS6"
] |
nips_2021_rJhCP_vC6T | Spatio-Temporal Variational Gaussian Processes | We introduce a scalable approach to Gaussian process inference that combines spatio-temporal filtering with natural gradient variational inference, resulting in a non-conjugate GP method for multivariate data that scales linearly with respect to time. Our natural gradient approach enables application of parallel filtering and smoothing, further reducing the temporal span complexity to be logarithmic in the number of time steps. We derive a sparse approximation that constructs a state-space model over a reduced set of spatial inducing points, and show that for separable Markov kernels the full and sparse cases exactly recover the standard variational GP, whilst exhibiting favourable computational properties. To further improve the spatial scaling we propose a mean-field assumption of independence between spatial locations which, when coupled with sparsity and parallelisation, leads to an efficient and accurate method for large spatio-temporal problems.
| accept | The authors of this paper improve on the computational complexity of inferring spatiotemporal GPs. Specifically, via a combination of sparse priors, inducing points, and Markov separability, they reduce a cubic cost to a linear cost. The reviewers all agree that the work is clear and stands to benefit the community. Moreover, the few remarks raised by the reviewers seem readily addressable in revision. I am therefore happy to recommend that this paper be accepted at NeurIPS. | train | [
"FvsE8py93rk",
"g_arZ7lKYP",
"_eo8OyirC71",
"bCeAiXIFubU",
"yp01NRqDmSS",
"ORsczb1cVd6",
"IHeoFVs8Zln",
"XPNKZMGDtu",
"mw_Nm9Y7ih5",
"V6OAjZaMlry"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the reply. \nAdding the references and some discussion of the related works [1] and [2] will definitely benefit the manuscript.",
" Thank you for your answers. My concerns are addressed. My recommendation remains the same.",
" We are very grateful for this careful analysis of the notation and ma... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"yp01NRqDmSS",
"_eo8OyirC71",
"V6OAjZaMlry",
"mw_Nm9Y7ih5",
"XPNKZMGDtu",
"IHeoFVs8Zln",
"nips_2021_rJhCP_vC6T",
"nips_2021_rJhCP_vC6T",
"nips_2021_rJhCP_vC6T",
"nips_2021_rJhCP_vC6T"
] |
nips_2021_CRFSrgYtV7m | MERLOT: Multimodal Neural Script Knowledge Models | As humans, we understand events in the visual world contextually, performing multimodal reasoning across time to make inferences about the past, present, and future. We introduce MERLOT, a model that learns multimodal script knowledge by watching millions of YouTube videos with transcribed speech -- in an entirely label-free, self-supervised manner. By pretraining with a mix of both frame-level (spatial) and video-level (temporal) objectives, our model not only learns to match images to temporally corresponding words, but also to contextualize what is happening globally over time. As a result, MERLOT exhibits strong out-of-the-box representations of temporal commonsense, and achieves state-of-the-art performance on 12 different video QA datasets when finetuned. It also transfers well to the world of static images, allowing models to reason about the dynamic context behind visual scenes. On Visual Commonsense Reasoning, MERLOT answers questions correctly with 80.6\% accuracy, outperforming state-of-the-art models of similar size by over 3\%, even those that make heavy use of auxiliary supervised data (like object bounding boxes). Ablation analyses demonstrate the complementary importance of: 1) training on videos versus static images; 2) scaling the magnitude and diversity of the pretraining video corpus; and 3) using diverse objectives that encourage full-stack multimodal reasoning, from the recognition to cognition level.
| accept | This work proposes to jointly learn a visual-language model from a large amount of uncurated videos (YT-Temporal-180M) with transcribed speech. The authors combine a set of pretraining tasks: image-text contrastive learning, MLM (with a special twist of using attention for selecting words), and temporal predictions of video frames. Reviewers unanimously agree this is a good paper with strong SOTA results across many benchmarks and thoughtful analyses. The authors also thoughtfully respond to the ethics reviewers' comments. On my side, I particularly appreciate the additional interesting ablation tests on data processing and ASR that the authors provided during the rebuttal. The YT-Temporal-180M dataset seems like a valuable resource for researchers, and I hope that a careful release strategy for the dataset will work. I hope the authors can add more details to "Section 5.2 Qualitative analysis: MERLOT learns representations spanning distant frames" and make the figure more viewable. As a last point, the title refers to "script knowledge"; to be convincing on that, it would be helpful to have a qualitative analysis of the model's understanding of one scenario, e.g., restaurants, in detail. Given all the plus points of this paper, I recommend Accept. | train | [
"OQAfAlROLP",
"g-xmaU7LmEw",
"UepOD8JY0ZL",
"JAf9Ar4TPON",
"w9qRxCoeHm6",
"NenJGEnUOQ",
"LNXtc1M9max",
"7_sRg6w2J04",
"sYlR_kAhm_y",
"YPbjiKCUbwB",
"2B6mNzc-kCs",
"7nlCbeShk5W",
"NX27gVKjRx",
"M2xrjlExcq3"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces a new approach to learn from large collections of uncurated video and text obtained from Automatic Speech Recognition Transcript. The authors introduce a dataset of 6M YouTube videos that lead to better results than existing large scale video and language datasets. The proposed architecture c... | [
8,
-1,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
5,
-1,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_CRFSrgYtV7m",
"sYlR_kAhm_y",
"nips_2021_CRFSrgYtV7m",
"nips_2021_CRFSrgYtV7m",
"LNXtc1M9max",
"NX27gVKjRx",
"nips_2021_CRFSrgYtV7m",
"UepOD8JY0ZL",
"OQAfAlROLP",
"nips_2021_CRFSrgYtV7m",
"M2xrjlExcq3",
"JAf9Ar4TPON",
"nips_2021_CRFSrgYtV7m",
"nips_2021_CRFSrgYtV7m"
] |
nips_2021_ZW-ZsleMIWk | Fast Approximate Dynamic Programming for Infinite-Horizon Markov Decision Processes | Mohamad Amin Sharifi Kolarijani, Gyula Max, Peyman Mohajerin Esfahani | accept | The paper suggests a fast method for solving a class of optimal control problems by introducing a variant of the Value Iteration (VI) algorithm that benefits from the convex conjugate transform. This improves the computational complexity of solving each step of VI from O(|X||U|) to O(|X| + |U|).
Most reviewers are positive about this work. One of the reviewers (mb6w) is on the negative side, but I believe their concerns and questions are adequately answered (the reviewer did not acknowledge the authors' response). Therefore, I would recommend acceptance of this paper.
I would encourage the authors to incorporate the reviewers' comments and suggestions in order to improve their paper. Some of the improvements can be along the following lines:
- Emphasizing more how this work can be relevant to the NeurIPS community, as this paper is different from a typical paper on MDP or RL appearing at NeurIPS (no focus on learning; focusing on a special class of problems, etc.)
- Clarifying and emphasizing the technical contributions of this work compared to [18].
- Including larger-scale experiments.
I would also note that the acronym CVI has already been used in the context of Value Iteration:
*Conservative Value Iteration* (Kozuno, Uchibe, Doya, "Theoretical Analysis of Efficiency and Robustness of Softmax and Gap-Increasing Operators in Reinforcement Learning," AISTATS, 2019) and *Characteristic Value Iteration* (Farahmand, "Value Function in Frequency Domain and the Characteristic Value Iteration Algorithm," NeurIPS, 2019), so the authors may want to use another acronym to reduce possible confusion. There is no need to cite these works, as they are not related enough. | train | [
"5Ggv3uUrnDS",
"3pg_JShzCsF",
"xupT2UXxIF",
"4_oHJ5Smtkv",
"-p2YHDWzima",
"JXSRZrrONqQ",
"mqMQdocJ04",
"5hDxCxspxa",
"-91v8YafHvg",
"GoG8b6wVGS9",
"tnKqlcntED1",
"AItvpwvIf2u",
"MZOXana4DOj",
"F6dKJTnccij",
"6Ue4w9Y_SyH",
"sjF23J5O47",
"hpK-RgShPSM"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for his/her encouraging words. \n\nWe agree with the reviewer. \nAs the reviewer also noticed, the only generic case, that we are also aware of, in which the convexity is preserved, is for linear dynamics $x^+ = Ax+Bu+w$, with convex stage cost $C$ (jointly in $x$ and $u$) and convex state a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
7,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
3,
2,
3
] | [
"xupT2UXxIF",
"4_oHJ5Smtkv",
"MZOXana4DOj",
"GoG8b6wVGS9",
"mqMQdocJ04",
"-91v8YafHvg",
"AItvpwvIf2u",
"nips_2021_ZW-ZsleMIWk",
"hpK-RgShPSM",
"5hDxCxspxa",
"sjF23J5O47",
"6Ue4w9Y_SyH",
"F6dKJTnccij",
"nips_2021_ZW-ZsleMIWk",
"nips_2021_ZW-ZsleMIWk",
"nips_2021_ZW-ZsleMIWk",
"nips_20... |
nips_2021_-zgb2v8vV_w | Adaptive Risk Minimization: Learning to Adapt to Domain Shift | A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution. However, this assumption is violated in almost all practical applications: machine learning systems are regularly tested under distribution shift, due to changing temporal correlations, atypical end users, or other factors. In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test time shifts, corresponding to new domains or domain distributions. Most prior methods aim to learn a single robust model or invariant feature space that performs well on all domains. In contrast, we aim to learn models that adapt at test time to domain shift using unlabeled test points. Our primary contribution is to introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains. Compared to prior methods for robustness, invariance, and adaptation, ARM methods provide performance gains of 1-4% test accuracy on a number of image classification problems exhibiting domain shift.
| accept | This paper proposes to apply a meta-learning method (similar to MAML) to solve the domain generalization problem. After spending some time reading the paper and trying to understand the main idea, I agree with Reviewers dMsf, sJVX, and dgsw that there is limited novelty in terms of both methodology and theory. Firstly, from the domain generalization perspective, the idea of learning models that can adapt to a collection of unlabeled data at test time has appeared as early as [1, 2] and subsequent works in this direction. In these works, the models also take as input the empirical marginal distributions through the kernel function on them. Hence, the key distinction here is the use of deep learning trained in a meta-learning style as a way to equip the models with adaptation capability. Secondly, from the meta-learning perspective, the proposed idea is very similar to the classic MAML, as also pointed out by Reviewer dgsw. The key difference from the standard setting is the use of unlabeled data alone. Lastly, the authors seem to miss several important works in both domain generalization and meta-learning when discussing the related works in Section 3. I encourage the authors to properly conduct a literature review as this is an increasingly important area of research. In the final version, I encourage the authors to improve the clarity and to better highlight the key contributions of this work.
Nevertheless, there is a positive consensus among the expert reviewers that the experimental results are promising, and that the rebuttal has adequately addressed their concerns. The authors are willing to incorporate the reviewers' suggestions in improving the final version of this work. Hence, I recommend acceptance for publication at NeurIPS 2021 as a poster.
Remark: Line 75-76: "Invariance methods would fare no better, as there is no feature space that can separate the ambiguous points and make them classifiable". This is indeed misleading. One can in fact separate the ambiguous points if the group/domain information is incorporated into the feature representation as has been commonly done in the literature.
- [1] G. Blanchard, G. Lee, and C. Scott. Generalizing from several related classification tasks to a new unlabeled sample. In Advances in Neural Information Processing Systems 24, pages 2178–2186, 2011.
- [2] K. Muandet, D. Balduzzi, and B. Schölkopf. Domain generalization via invariant feature representation. In Proceedings of the 30th International Conference on Machine Learning (ICML 2013), pages 10–18, 2013. | train | [
"3haqV6lyjJJ",
"o2JiyqNUzC6",
"3DOfyU2WGVP",
"sqdP0TmLEBT",
"-9zPpApEty4",
"7GsYZr1uRMf",
"zDqs2oevZR4",
"RxJZxRRM7C2",
"5EfFt6DIZgh",
"A4twKAGEG3d",
"5sB9tmMoFd7",
"T0-c6JX-lP",
"W_ZGBa8rcdi"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper proposes a framework for unsupervised multi-source domain adaptation in which training samples are grouped according to which domain they come from. (The paper uses \"groups\" to refer to the domains, but I will simply use \"domains\".) The authors assume that predictions can be adapted using the inform... | [
7,
6,
-1,
7,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"nips_2021_-zgb2v8vV_w",
"nips_2021_-zgb2v8vV_w",
"7GsYZr1uRMf",
"nips_2021_-zgb2v8vV_w",
"A4twKAGEG3d",
"o2JiyqNUzC6",
"nips_2021_-zgb2v8vV_w",
"5EfFt6DIZgh",
"5sB9tmMoFd7",
"sqdP0TmLEBT",
"zDqs2oevZR4",
"3haqV6lyjJJ",
"o2JiyqNUzC6"
] |
nips_2021_nJqCQUzpvS | Learning State Representations from Random Deep Action-conditional Predictions | Our main contribution in this work is an empirical finding that random General Value Functions (GVFs), i.e., deep action-conditional predictions---random both in what feature of observations they predict as well as in the sequence of actions the predictions are conditioned upon---form good auxiliary tasks for reinforcement learning (RL) problems. In particular, we show that random deep action-conditional predictions when used as auxiliary tasks yield state representations that produce control performance competitive with state-of-the-art hand-crafted auxiliary tasks like value prediction, pixel control, and CURL in both Atari and DeepMind Lab tasks. In another set of experiments we stop the gradients from the RL part of the network to the state representation learning part of the network and show, perhaps surprisingly, that the auxiliary tasks alone are sufficient to learn state representations good enough to outperform an end-to-end trained actor-critic baseline. We opensourced our code at https://github.com/Hwhitetooth/random_gvfs.
| accept | The paper presents a surprising result that random deep action-conditional predictions, when used as auxiliary tasks, yield state representations that produce control performance competitive with state-of-the-art hand-crafted value prediction and pixel control auxiliary tasks in both Atari and DeepMind Lab tasks. All reviewers unanimously vote to accept the paper, and I agree.
In the camera-ready version, I suggest that the authors address the reviewers' comments. It would also be nice to see how the performance gap changes as a function of how powerful the on-policy learner is. E.g., with A3C / PPO, do we have a similar performance gap? This will help verify whether the same advantage can be gained by improving the policy gradients or not. | train | [
"geRNB5cbE8_",
"Pc74MbbVUuA",
"DRh2rwXUp0t",
"uzWyTwEPgS1",
"PK2bab9MvtH",
"PBIWdbl85Cy",
"0vhMa9mKqM7",
"VumPFdf6c26",
"KoOXluERDwI",
"dbY_IkXECV6",
"rEvMWdridmw",
"7D1f_qKX1pq",
"fySVYyjRC48",
"hQ0c5RkJuak"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary\n-------\n\nOwing to the importance of state representation in RL, this paper\nprovides an empirical investigation of random General Value Functions\n(GVFs) for shaping state representation in RL. The paper briefly\noutlines the random generation process for their GVFs before conducting\nextensive experime... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
6,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_nJqCQUzpvS",
"DRh2rwXUp0t",
"hQ0c5RkJuak",
"fySVYyjRC48",
"PBIWdbl85Cy",
"0vhMa9mKqM7",
"rEvMWdridmw",
"nips_2021_nJqCQUzpvS",
"VumPFdf6c26",
"fySVYyjRC48",
"geRNB5cbE8_",
"hQ0c5RkJuak",
"nips_2021_nJqCQUzpvS",
"nips_2021_nJqCQUzpvS"
] |
nips_2021_ejmqyWW0MK6 | Mixability made efficient: Fast online multiclass logistic regression | Rémi Jézéquel, Pierre Gaillard, Alessandro Rudi | accept | This paper makes significant progress on the problem of computationally efficiently achieving logarithmic regret for multi-class logistic regression. The closest comparator work of Foster et al. (2008) achieves regret $O(d K \log (B n))$ but at the prohibitive $O(n^{37})$ total cost. In contrast, ignoring other factors, the present paper devises the algorithm Efficient-GAF, which obtains regret $O(d K (B^2 + (\log(K) + B) \log(n)))$ at a total runtime that is $O(n^4)$, ignoring other factors. Although the regret of the present work is exponentially higher in $B$ relative to (Foster et al., 2008), the regret is also exponentially lower than the $O(e^B \log n)$ regret achieved by Online Newton Step, while still keeping the runtime reasonable. Although Efficient-GAF's $O(n^4)$ runtime is not yet practical by some standards, this is still immense progress towards the goal of a practical runtime.
This work contains several techniques that appear to be original and also could see use in future works. Also, as the authors mentioned in the discussion phase, there is a fundamental roadblock in trying to generalize the results of Jézéquel, Gaillard, and Rudi (COLT 2020), hereafter referred to as (JGR), to the multi-class setting, and the reviewers agreed with the authors' assessment here. Given that all four reviewers asked whether such a generalization is possible, I would like to suggest that the authors, either in the paper, or in an appendix, try to mention the issue with extending JGR to the multi-class setting. Future readers may wonder about this question otherwise.
One side note: For clean comparison to results in the binary logistic regression setting (such as the work of JGR), it is worth mentioning that the results of both Foster et al. (2008) and the present paper hold with high probability and the runtime dependence is logarithmic in $1/\delta$, whereas the work of JGR gives guarantees that hold deterministically. This was not really apparent until far into the paper. Therefore, it seems that another question is whether something close to practical efficiency can be achieved deterministically in the multi-class case.
In summary, this is a strong paper that I expect to be influential, and it deserves to be accepted at NeurIPS 2021. | train | [
"2QgzPtTXUI2",
"Ha1FIKuSGsn",
"OKVFA95gTr-",
"fOTPlNI5aPZ",
"Apz5LTXGBTk",
"g2f19ABm1fd",
"7gKKVOk3K8",
"y7tWN6Nko66",
"Yixir6bEJmo",
"cEao0XLARvK",
"NwgKvUXPjL"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. This helps in understanding the difficulties in obtaining a generalization of the algorithm of Jézéquel et al. ",
"The authors propose a new computationally efficient algorithm for multi-class logistic regression. The propose method is based on a tight quadratic approximation of the ... | [
-1,
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
2
] | [
"7gKKVOk3K8",
"nips_2021_ejmqyWW0MK6",
"y7tWN6Nko66",
"nips_2021_ejmqyWW0MK6",
"g2f19ABm1fd",
"NwgKvUXPjL",
"cEao0XLARvK",
"fOTPlNI5aPZ",
"Ha1FIKuSGsn",
"nips_2021_ejmqyWW0MK6",
"nips_2021_ejmqyWW0MK6"
] |
nips_2021_NP-9Ppxdca | Tracking People with 3D Representations | We present a novel approach for tracking multiple people in video. Unlike past approaches which employ 2D representations, we focus on using 3D representations of people, located in three-dimensional space. To this end, we develop a method, Human Mesh and Appearance Recovery (HMAR), which, in addition to extracting the 3D geometry of the person as a SMPL mesh, also extracts appearance as a texture map on the triangles of the mesh. This serves as a 3D representation for appearance that is robust to viewpoint and pose changes. Given a video clip, we first detect bounding boxes corresponding to people, and for each one, we extract 3D appearance, pose, and location information using HMAR. These embedding vectors are then sent to a transformer, which performs spatio-temporal aggregation of the representations over the duration of the sequence. The similarity of the resulting representations is used to solve for associations that assign each person to a tracklet. We evaluate our approach on the Posetrack, MuPoTs and AVA datasets. We find that 3D representations are more effective than 2D representations for tracking in these settings, and we obtain state-of-the-art performance. Code and results are available at: https://brjathu.github.io/T3DP.
| accept | This paper is somewhere in between traditional tracking, person re-identification and 3D human modeling. As claimed by the authors, the use of 3D representations in a tracking context has rarely been done before (mostly due to computational constraints). Since the method requires high-res input (or at least large humans), it may not be suited to every tracking application, which makes comparison against some state-of-the-art methods harder. The paper is accepted, but the authors are expected to clearly mention and discuss limitations of the proposed method (e.g. include computational costs mentioned in rebuttal) and why some standard tracking benchmarks like the MOTchallenge were not considered. | train | [
"XOn3-zoj1Ve",
"x0G5LZ9-taS",
"Lw6mCvO-rJj",
"fuDYdLThtWt",
"nVPXXI805L2",
"EcBW9_Dsx5l",
"KAO8g8Y_Vi5",
"EmNMSqva8d6",
"hPLP5Nda0AP",
"YpNzFZo5B6Z",
"F-MEF7MUu9X",
"OjSNy3tjLu3"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks! The rebuttal has addressed most of my previous concerns. I will keep my rating as acceptance. Releasing code of the proposed method could also make this work more impactful.",
" Thanks for the rebuttal! My concerns have been addressed, and I am keeping my original rating.",
" I thank the authors for t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5,
4
] | [
"fuDYdLThtWt",
"nVPXXI805L2",
"KAO8g8Y_Vi5",
"OjSNy3tjLu3",
"F-MEF7MUu9X",
"YpNzFZo5B6Z",
"hPLP5Nda0AP",
"nips_2021_NP-9Ppxdca",
"nips_2021_NP-9Ppxdca",
"nips_2021_NP-9Ppxdca",
"nips_2021_NP-9Ppxdca",
"nips_2021_NP-9Ppxdca"
] |
nips_2021_u9RvlvaBC7 | Off-Policy Risk Assessment in Contextual Bandits | Audrey Huang, Liu Leqi, Zachary Lipton, Kamyar Azizzadenesheli | accept | This paper considers off-policy estimation of Lipschitz risk functionals in the contextual bandit setting. The authors consider a plug-in estimator based on a CDF estimator, with guarantees from the DKW inequality. The paper is well-written: the motivation is clear and the logic flow is easy to follow. The method is also novel.
As the reviewers suggested, the experiment part is relatively weak. Please consider adding the extra empirical studies to the final version.
Another issue is that some important related work is missing, e.g., [1, 2]. Therefore, the method is not well positioned in the literature. Please consider the comparison with [1, 2] and add the table from the response to Reviewer 8N59.
[1] Faury, Louis, Ugo Tanielian, Elvis Dohmatob, Elena Smirnova, and Flavian Vasile. "Distributionally robust counterfactual risk minimization." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 04, pp. 3850-3857. 2020.
[2] Karampatziakis, Nikos, John Langford, and Paul Mineiro. "Empirical likelihood for contextual bandits." arXiv preprint arXiv:1906.03323 (2019). | train | [
"3MqCUxrwkz0",
"J2E7ToKOFlK",
"Ro-li5iW9dH",
"P0jFdAp8rVU",
"ONzH7EnIlSL",
"b57-3f4m2G",
"afPVTpxak2Z",
"D2iYpa16K4g",
"RjyFrwrp-Km",
"qwpbiUF9ILK"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" My questions have been adequately addressed. I thank the authors for the detailed response and encourage them to add the results they presented here (and promise to add) in the final draft, specifically, the Simglucose results and comparison of error bounds for mean, variance, CVaR from existing results. I retain... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"ONzH7EnIlSL",
"qwpbiUF9ILK",
"D2iYpa16K4g",
"RjyFrwrp-Km",
"afPVTpxak2Z",
"nips_2021_u9RvlvaBC7",
"nips_2021_u9RvlvaBC7",
"nips_2021_u9RvlvaBC7",
"nips_2021_u9RvlvaBC7",
"nips_2021_u9RvlvaBC7"
] |
nips_2021_0BHU7WvZ29 | Adaptive Denoising via GainTuning | Deep convolutional neural networks (CNNs) for image denoising are typically trained on large datasets. These models achieve the current state of the art, but they do not generalize well to data that deviate from the training distribution. Recent work has shown that it is possible to train denoisers on a single noisy image. These models adapt to the features of the test image, but their performance is limited by the small amount of information used to train them. Here we propose "GainTuning", a methodology by which CNN models pre-trained on large datasets can be adaptively and selectively adjusted for individual test images. To avoid overfitting, GainTuning optimizes a single multiplicative scaling parameter (the “Gain”) of each channel in the convolutional layers of the CNN. We show that GainTuning improves state-of-the-art CNNs on standard image-denoising benchmarks, boosting their denoising performance on nearly every image in a held-out test set. These adaptive improvements are even more substantial for test images differing systematically from the training data, either in noise level or image type. We illustrate the potential of adaptive GainTuning in a scientific application to transmission-electron-microscope images, using a CNN that is pre-trained on synthetic data. In contrast to the existing methodology, GainTuning is able to faithfully reconstruct the structure of catalytic nanoparticles from these data at extremely low signal-to-noise ratios.
| accept | To solve the image denoising problem, this paper proposes to tune a single gain parameter for each channel on a single test image. The tuning is done via minimizing an unsupervised loss. Compared to non-fine-tuning methods, tuning can provide better adaptation to out-of-distribution images. Compared to other tuning methods, this approach is designed to avoid the overfitting issue of fine-tuning all hyperparameters. The idea is simple and natural, and this work has presented many experiments to show that it outperforms existing methods. After the rebuttal, all reviewers think that the rebuttal has addressed most of their concerns, and agree with the acceptance (though some are on the borderline). Thus I recommend acceptance.
Note that one reviewer requested adding an experimental comparison with conditioned models such as DVDnet. The reviewer's goal is to understand the difference from works that study image restoration by providing information about the degradation to the restoration network. It will be good to see which method is better. | val | [
"vHMa_2xOGiW",
"Uk6ocJnFR1p",
"APwyo0po2xD",
"y4TV6rR6p9_",
"FzikGZf3G1j",
"m41Bn8yikGw",
"LU0x6415FUb",
"Uuh0TgCnpN0",
"-KqZYU-EgM7",
"ij2XsBpALN9",
"TlzOS_2aS7N",
"zzuSJgaUE6x",
"VcEfYomtsT",
"x4H5ZXASNtZ",
"TClKn24ob3x",
"Plqqz-Gjhy",
"_prMVIWrlH1"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you for responding to our rebuttal. \n+ We will discuss connections with additional references in our final version - we have included a summary of this discussion in the rebuttal. \n+ FastDVDnet is designed for video denoising (unlike our paper which concentrates on image denoising), but thank you for this... | [
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
5
] | [
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4
] | [
"Uk6ocJnFR1p",
"nips_2021_0BHU7WvZ29",
"y4TV6rR6p9_",
"TClKn24ob3x",
"nips_2021_0BHU7WvZ29",
"LU0x6415FUb",
"Uuh0TgCnpN0",
"-KqZYU-EgM7",
"ij2XsBpALN9",
"zzuSJgaUE6x",
"nips_2021_0BHU7WvZ29",
"Plqqz-Gjhy",
"Uk6ocJnFR1p",
"TlzOS_2aS7N",
"FzikGZf3G1j",
"_prMVIWrlH1",
"nips_2021_0BHU7Wv... |
nips_2021_rxud5HYKX55 | Optimal Sketching for Trace Estimation | Shuli Jiang, Hai Pham, David Woodruff, Richard Zhang | accept | This paper considers the fundamental problem of estimating the trace of a PSD matrix from matrix-vector product queries. Prior results had a gap between the adaptive and non-adaptive versions of the problem. In this paper, the authors propose a non-adaptive algorithm with nearly the same performance as the best adaptive algorithm. The paper also makes nice contributions in establishing lower bounds on the number of queries. The reviewers all felt that this paper should be published in NeurIPS. | train | [
"jkxm3_Jflp",
"HGgznLu6VJ6",
"UkXTnPTyiAV",
"VN-lCJhaQnU",
"LcKv8J8nrqP",
"no6bz8JJ5EV",
"rXgr5Hrlw32",
"McYtPq9Q7n3"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper contains two main contributions. First, they provided an improved analysis on the NA-Hutch++ algorithm to bridge the gap between adaptive and non-adaptive algorithm. Secondly, they also provided a nearly matching lower-bound on the complexity of the problem, closing the gap. I acknowledge that I am not... | [
7,
8,
6,
-1,
-1,
-1,
-1,
7
] | [
2,
3,
4,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_rxud5HYKX55",
"nips_2021_rxud5HYKX55",
"nips_2021_rxud5HYKX55",
"HGgznLu6VJ6",
"McYtPq9Q7n3",
"UkXTnPTyiAV",
"jkxm3_Jflp",
"nips_2021_rxud5HYKX55"
] |
nips_2021_oz3t1BrfNO | Estimating Multi-cause Treatment Effects via Single-cause Perturbation | Most existing methods for conditional average treatment effect estimation are designed to estimate the effect of a single cause - only one variable can be intervened on at one time. However, many applications involve simultaneous intervention on multiple variables, which leads to multi-cause treatment effect problems. The multi-cause problem is challenging because one needs to overcome the confounding bias for a large number of treatment groups, each with a different cause combination. The combinatorial nature of the problem also leads to severe data scarcity - we only observe one factual outcome out of many potential outcomes. In this work, we propose Single-cause Perturbation (SCP), a novel two-step procedure to estimate the multi-cause treatment effect. SCP starts by augmenting the observational dataset with the estimated potential outcomes under single-cause interventions. It then performs covariate adjustment on the augmented dataset to obtain the estimator. SCP is agnostic to the exact choice of algorithm in either step. We show formally that the procedure is valid under standard assumptions in causal inference. We demonstrate the performance gain of SCP on extensive synthetic and semi-synthetic experiments.
| accept | The authors propose a data augmentation method for learning conditional average treatment effects with multiple treatments. Assuming sequential ignorability and sequential overlap, the authors propose to first fit models that describe how changing a single cause affects causal descendants; then to augment the data set with new observations; and finally to fit a regression model on the augmented data. This is a strong paper. All reviewers agree that it should be accepted. Reviewer yyUp praises how well the method was explored, including discussion of limitations. Reviewer f9co states that the paper has "a very impressive experimental assessment" that not only shows superiority to competing methods but also investigates why the method works. Reviewer 1Xus states that the paper is “enjoyable to read”. Reviewers 1Xus, sokR and YNQn had some reservations about the method, in particular how well it performs under higher-order interactions. The authors have provided thorough responses, with additional simulations and high-level guidelines to understand failure modes. The authors state that they will expand the discussion with the additional insights. | train | [
"hd9pgCFzTIg",
"_KEsOVxK4uZ",
"ikRpnB58Cp6",
"jMwPjo4_GlW",
"5ZciNgeZQgF",
"-BiXk_nRsV",
"b2y_63gQ0Xg",
"7Yb31clsHqa",
"Skhaa8BfFNV",
"l3ZzDeyU4FJ",
"tH3cvEAA_H",
"BMFyNyY7alZ",
"dlYtRYzn12",
"alyKB3UIyRp",
"s8BFJaBT_nN",
"1Qg64_KxECW",
"-s2943PxsAR",
"go9XhA-LmPy",
"EYnw7nsV4k2"... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_re... | [
" I'd like to thank the authors for your detailed responses, which have clarified my doubts. I will keep my positive score.",
"In this paper, the authors aim to the estimation of CATE when there are multiple treatment variables and the causal relations between them are known in advance. The main innovation in met... | [
-1,
6,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
9,
8
] | [
-1,
3,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"1Qg64_KxECW",
"nips_2021_oz3t1BrfNO",
"b2y_63gQ0Xg",
"-BiXk_nRsV",
"nips_2021_oz3t1BrfNO",
"7Yb31clsHqa",
"l3ZzDeyU4FJ",
"tH3cvEAA_H",
"go9XhA-LmPy",
"s8BFJaBT_nN",
"alyKB3UIyRp",
"_KEsOVxK4uZ",
"5ZciNgeZQgF",
"5ZciNgeZQgF",
"_KEsOVxK4uZ",
"US0SM01iXhI",
"elVl-vgbwD",
"EYnw7nsV4k2... |
nips_2021_9c-IsSptbmA | Be Confident! Towards Trustworthy Graph Neural Networks via Confidence Calibration | Although Graph Neural Networks (GNNs) have achieved remarkable accuracy, whether the results are trustworthy is still unexplored. Previous studies suggest that many modern neural networks are over-confident in their predictions; surprisingly, however, we discover that GNNs are primarily in the opposite direction, i.e., GNNs are under-confident. Therefore, confidence calibration for GNNs is highly desired. In this paper, we propose a novel trustworthy GNN model by designing a topology-aware post-hoc calibration function. Specifically, we first verify that the confidence distribution in a graph has the homophily property, and this finding inspires us to design a calibration GNN model (CaGCN) to learn the calibration function. CaGCN is able to obtain a unique transformation from the logits of GNNs to the calibrated confidence for each node; meanwhile, such a transformation preserves the order between classes, satisfying the accuracy-preserving property. Moreover, we apply the calibration GNN to the self-training framework, showing that more trustworthy pseudo labels can be obtained with the calibrated confidence, which further improves the performance. Extensive experiments demonstrate the effectiveness of our proposed model in terms of both calibration and accuracy.
| accept | This paper explores an interesting problem that lies at the intersection of two hot topics: graph neural networks and the reliability of NNs.
The observation that GNNs under-estimate their confidence, in contrast to regular NNs, is also interesting, and the technical contributions to solving this problem are solid; the work can thus be expected to be of interest to the NeurIPS community. | train | [
"Ehd3FuNvSqL",
"SUVviRnNem",
"IMrdnZZphs-",
"aeTAYqtGYK_",
"q2f2Yq57qbD",
"Bh_Nf_xmvd",
"2-HKnmZwhmW"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely thank the Reviewer for spending time and providing valuable feedback. We appreciate all of your suggestions and we have addressed all your questions below by providing our responses as well as our additional experimental results.\n\n1. > The authors think that the predictive performance of GNNs is un... | [
-1,
-1,
-1,
-1,
8,
5,
6
] | [
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"2-HKnmZwhmW",
"Bh_Nf_xmvd",
"Bh_Nf_xmvd",
"q2f2Yq57qbD",
"nips_2021_9c-IsSptbmA",
"nips_2021_9c-IsSptbmA",
"nips_2021_9c-IsSptbmA"
] |
nips_2021_qZpOqPbwhy | Learning Riemannian metric for disease progression modeling | Linear mixed-effect models provide a natural baseline for estimating disease progression using longitudinal data. They provide interpretable models at the cost of modeling assumptions on the progression profiles and their variability across subjects. A significant improvement is to embed the data in a Riemannian manifold and learn patient-specific trajectories distributed around a central geodesic. A few interpretable parameters characterize subject trajectories at the cost of a prior choice of the metric, which determines the shape of the trajectories. We extend this approach by learning the metric from the data, allowing more flexibility while keeping the interpretability. Specifically, we learn the metric as the push-forward of the Euclidean metric by a diffeomorphism. This diffeomorphism is estimated iteratively as the composition of radial basis functions belonging to a reproducing kernel Hilbert space. The metric update allows us to improve the forecasting of imaging and clinical biomarkers in the Alzheimer’s Disease Neuroimaging Initiative (ADNI) cohort. Our results compare favorably to the 56 methods benchmarked in the TADPOLE challenge.
| accept | Thanks to the authors for their engaging submission on an important topic. There was a lot of positive response in the reviews to this work, and all reviewers were positive about the underlying idea. However, there are areas in which this submission could be strengthened.
A major reviewer concern is understanding the strengths and weaknesses of the proposed method. An explicitly stated motivating factor is that the parametric assumptions of standard mixed effects models are too restrictive, and that the flexibility afforded by learning the Riemannian manifold will allow models to better fit the data and form better trajectories. And intuitively this makes sense. But the empirical evidence of this is lacking. As reviewer w1hQ points out, a standard linear mixed effects model would make for an informative baseline to empirically justify these claims. As the abstract introduces this method as an improvement upon (and alternative to) a standard linear mixed effects model, having a direct empirical comparison would help identify what parts of the proposed method are most performant, and in what situations the proposed method may be inappropriate. Further, a simulation study where aspects of the data generating process are known and manipulated could enhance such a direct comparison to a standard mixed effects model. | val | [
"OItJ4hrTy0C",
"_-gUICTUDR1",
"_LfUtkEXhzv",
"r-QmYAtlMwQ",
"ZOQgDnsqgct",
"3rzBgtH3Df",
"LbbNmdOlz4c",
"6M_4nB2zisb",
"SiWnGGQFoRS",
"vBJjVyh0CuY"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a mixed effects model for modeling disease trajectories. The method builds on prior work that considers geometric models (which represent trajectories as geodesics on prespecified Riemannian manifolds), by allowing for the Riemannian metric to be learned from data. The proposed method combines ... | [
7,
5,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5
] | [
2,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2
] | [
"nips_2021_qZpOqPbwhy",
"nips_2021_qZpOqPbwhy",
"LbbNmdOlz4c",
"nips_2021_qZpOqPbwhy",
"SiWnGGQFoRS",
"OItJ4hrTy0C",
"_-gUICTUDR1",
"vBJjVyh0CuY",
"nips_2021_qZpOqPbwhy",
"nips_2021_qZpOqPbwhy"
] |
nips_2021_7rYDxRb1eSa | Bias and variance of the Bayesian-mean decoder | Perception, in theoretical neuroscience, has been modeled as the encoding of external stimuli into internal signals, which are then decoded. The Bayesian mean is an important decoder, as it is optimal for purposes of both estimation and discrimination. We present widely-applicable approximations to the bias and to the variance of the Bayesian mean, obtained under the minimal and biologically-relevant assumption that the encoding results from a series of independent, though not necessarily identically-distributed, signals. Simulations substantiate the accuracy of our approximations in the small-noise regime. The bias of the Bayesian mean comprises two components: one driven by the prior, and one driven by the precision of the encoding. If the encoding is 'efficient', the two components have opposite effects; their relative strengths are determined by the objective that the encoding optimizes. The experimental literature on perception reports both 'Bayesian' biases directed towards prior expectations, and opposite, 'anti-Bayesian' biases. We show that different tasks are indeed predicted to yield such contradictory biases, under a consistently-optimal encoding-decoding model. Moreover, we recover Wei and Stocker's "law of human perception", a relation between the bias of the Bayesian mean and the derivative of its variance, and show how the coefficient of proportionality in this law depends on the task at hand. Our results provide a parsimonious theory of optimal perception under constraints, in which encoding and decoding are adapted both to the prior and to the task faced by the observer.
| accept | This is a very clearly written paper that introduces a "parsimonious theory for the performance of a Bayes mean estimator under optimal encoder-decoder model. After identifying the equations for bias and variance, the authors explain bayesian and anti-bayesian biases that depend on task, and the relationship between bias and derivative of variance." (Reviewer hpiu)
The main results are derived by combining existing work, but the connections to psychophysics are novel. The findings (for example the emergence of contradictory biases under a consistently-optimal encoding-decoding model, which depend on the cost of errors in the task) are very interesting. Supported also by the highly positive sentiment of the reviews, I anticipate that the paper will be received with interest by a broad audience, hence I suggest a spotlight. | train | [
"P-3CLXHl_WV",
"JGqHQb6p3Ki",
"76AGjGlgxUU",
"V3pAZYTsp_E",
"Yq0rFjX5gt5",
"YhFaO0JMNE_",
"ymPNyFPoQSJ",
"LFGLjmuFgU"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank Reviewer hpiu for the positive comments on our work. \n\nThe constraint on the Fisher information that we consider has been used in the literature, in particular in Ref. [6] for a similar optimization problem. This constraint on the possible variation in the Fisher information over the sensory space aris... | [
-1,
-1,
-1,
-1,
7,
8,
6,
6
] | [
-1,
-1,
-1,
-1,
3,
3,
3,
2
] | [
"YhFaO0JMNE_",
"LFGLjmuFgU",
"ymPNyFPoQSJ",
"Yq0rFjX5gt5",
"nips_2021_7rYDxRb1eSa",
"nips_2021_7rYDxRb1eSa",
"nips_2021_7rYDxRb1eSa",
"nips_2021_7rYDxRb1eSa"
] |
nips_2021_GzeqcAUFGl0 | MIRACLE: Causally-Aware Imputation via Learning Missing Data Mechanisms | Missing data is an important problem in machine learning practice. Starting from the premise that imputation methods should preserve the causal structure of the data, we develop a regularization scheme that encourages any baseline imputation method to be causally consistent with the underlying data generating mechanism. Our proposal is a causally-aware imputation algorithm (MIRACLE). MIRACLE iteratively refines the imputation of a baseline by simultaneously modeling the missingness generating mechanism, encouraging imputation to be consistent with the causal structure of the data. We conduct extensive experiments on synthetic and a variety of publicly available datasets to show that MIRACLE is able to consistently improve imputation over a variety of benchmark methods across all three missingness scenarios: at random, completely at random, and not at random.
| accept | The reviewers have provided thoughtful and constructive comments. They have responded to the authors' feedback, and the most active reviewers who are championing this manuscript continue to lean towards acceptance. I hope the authors will take the reviewers' comments to heart, and I encourage them to incorporate those thoughts in preparing the camera-ready version of their manuscript. | train | [
"2A5RozLFZN_",
"tQSqKwFh_QA",
"hGb5giVuxMP",
"uOsNvWzAJq",
"fEnroyQLtOE",
"27LjODB1ECQ",
"O9uYvRy_jx",
"gBoWJQJvT2",
"6m9gN0vptr",
"kYHFqKzMHyU",
"dlQrlAQz2DY",
"alYGNQ6B-nI",
"lqMPJIo052J",
"SUVIjn79eoR",
"UMedYsjousi"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a new framework to impute missing values while preserving the causal structure of the data. The algorithm works by refining the imputations of a baseline method and simultaneously learning the missingness generating mechanisms (which cover different scenarios such as MCAR, MAR, MNAR). The propo... | [
7,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
9
] | [
2,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_GzeqcAUFGl0",
"SUVIjn79eoR",
"fEnroyQLtOE",
"nips_2021_GzeqcAUFGl0",
"6m9gN0vptr",
"uOsNvWzAJq",
"SUVIjn79eoR",
"nips_2021_GzeqcAUFGl0",
"uOsNvWzAJq",
"SUVIjn79eoR",
"UMedYsjousi",
"2A5RozLFZN_",
"nips_2021_GzeqcAUFGl0",
"nips_2021_GzeqcAUFGl0",
"nips_2021_GzeqcAUFGl0"
] |
nips_2021_SCN8UaetXx | Efficient Training of Visual Transformers with Small Datasets | Visual Transformers (VTs) are emerging as an architectural paradigm alternative to Convolutional networks (CNNs). Differently from CNNs, VTs can capture global relations between image elements and they potentially have a larger representation capacity. However, the lack of the typical convolutional inductive bias makes these models more data hungry than common CNNs. In fact, some local properties of the visual domain which are embedded in the CNN architectural design, in VTs should be learned from samples. In this paper, we empirically analyse different VTs, comparing their robustness in a small training set regime, and we show that, despite having a comparable accuracy when trained on ImageNet, their performance on smaller datasets can be largely different. Moreover, we propose an auxiliary self-supervised task which can extract additional information from images with only a negligible computational overhead. This task encourages the VTs to learn spatial relations within an image and makes the VT training much more robust when training data is scarce. Our task is used jointly with the standard (supervised) training and it does not depend on specific architectural choices, thus it can be easily plugged in the existing VTs. Using an extensive evaluation with different VTs and datasets, we show that our method can improve (sometimes dramatically) the final accuracy of the VTs. Our code is available at: https://github.com/yhlleo/VTs-Drloc.
| accept | I enjoyed reading this paper and I find it generally solid. I find that the most important element of this paper is a new, presumably novel, self-supervised learning task that regularizes the vision transformer. Although the results presented in the paper were reasonably strong, the reviewers found several issues in the experiments. The authors provided answers which unfortunately did not satisfy the reviewers enough to increase their scores. Although this paper is not very far from meeting the bar for acceptance, I think it is still slightly below the expected level. Some things that in my opinion could tip the balance in the next version: (1) improved experiments, (2) a better justification for the proposed self-supervised task. | train | [
"NpkzFg1wqa2",
"jilEaT_D1FT",
"mXpFVCOIp4",
"4oF8AccCrMe",
"ZcrfUttOcrN",
"UILZn1fYC9",
"uTYsT7QNv-q",
"ZTr0GtICqfl",
"xZdDWhGCYkV",
"93ATgFs4ORv",
"ZzL_NYN97g",
"dtdiEt3mYt",
"3QVW3EqSKVH",
"13EV8uKLUs",
"rJm5S46CsX4"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" As mentioned in our previous comment, and following your suggestions, we tuned the alpha hyperparameter of the Gaussian prior loss. As shown in the following table, it seems that the default value (0.001), recommended in [17], is the best option.\n\n|0.01|0.001(default)|0.0001|\n|:-----:|:-----:|:-----:|\n|83.18|... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
3,
4
] | [
"jilEaT_D1FT",
"4oF8AccCrMe",
"rJm5S46CsX4",
"mXpFVCOIp4",
"UILZn1fYC9",
"xZdDWhGCYkV",
"13EV8uKLUs",
"3QVW3EqSKVH",
"dtdiEt3mYt",
"ZzL_NYN97g",
"nips_2021_SCN8UaetXx",
"nips_2021_SCN8UaetXx",
"nips_2021_SCN8UaetXx",
"nips_2021_SCN8UaetXx",
"nips_2021_SCN8UaetXx"
] |
nips_2021_rsRq--gsiE | Small random initialization is akin to spectral learning: Optimization and generalization guarantees for overparameterized low-rank matrix reconstruction | Recently there has been significant theoretical progress on understanding the convergence and generalization of gradient-based methods on nonconvex losses with overparameterized models. Nevertheless, many aspects of optimization and generalization and in particular the critical role of small random initialization are not fully understood. In this paper, we take a step towards demystifying this role by proving that small random initialization followed by a few iterations of gradient descent behaves akin to popular spectral methods. We also show that this implicit spectral bias from small random initialization, which is provably more prominent for overparameterized models, also puts the gradient descent iterations on a particular trajectory towards solutions that are not only globally optimal but also generalize well. Concretely, we focus on the problem of reconstructing a low-rank matrix from a few measurements via a natural nonconvex formulation. In this setting, we show that the trajectory of the gradient descent iterations from small random initialization can be approximately decomposed into three phases: (I) a spectral or alignment phase where we show that the iterates have an implicit spectral bias akin to spectral initialization allowing us to show that at the end of this phase the column space of the iterates and the underlying low-rank matrix are sufficiently aligned, (II) a saddle avoidance/refinement phase where we show that the trajectory of the gradient iterates moves away from certain degenerate saddle points, and (III) a local refinement phase where we show that after avoiding the saddles the iterates converge quickly to the underlying low-rank matrix. Underlying our analysis are insights for the analysis of overparameterized nonconvex optimization schemes that may have implications for computational problems beyond low-rank reconstruction.
| accept | This paper provides and analyzes a setting where small random initialization has a certain implicit spectral bias. Reviewers are positive and I recommend acceptance. That said, the reviewers had many detailed concerns, which led to extensive feedback discussions as below; I request the authors address these points carefully in their revisions. | train | [
"NPWAyGzJhWG",
"nKd9zfxGVB2",
"pIqdv9cxvCJ",
"4EF-UkcSKL_",
"LuLGd6g36S",
"jsgKSqYw5Bz",
"m1_jWlzUqVH",
"jYjdwUsSj4S",
"oZF5aBK1NjD",
"NcqqfLlBRLa",
"x-ofHxgU88r",
"FFPpWey69uQ"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for increasing your score. We briefly address the points you raised.\n\n1. We particularly chose our title as we did try to emphasize this. However, we will further highlight it, following your suggestion.\n2. We agree that it is not clear from our proof that the distance increases (although simulations... | [
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"pIqdv9cxvCJ",
"nips_2021_rsRq--gsiE",
"oZF5aBK1NjD",
"jsgKSqYw5Bz",
"nips_2021_rsRq--gsiE",
"NcqqfLlBRLa",
"FFPpWey69uQ",
"x-ofHxgU88r",
"nKd9zfxGVB2",
"LuLGd6g36S",
"nips_2021_rsRq--gsiE",
"nips_2021_rsRq--gsiE"
] |
nips_2021_BFYlnDtJSqW | Efficient Combination of Rematerialization and Offloading for Training DNNs | Rematerialization and offloading are two well known strategies to save memory during the training phase of deep neural networks, allowing data scientists to consider larger models, batch sizes or higher resolution data. Rematerialization trades memory for computation time, whereas Offloading trades memory for data movements. As these two resources are independent, it is appealing to consider the simultaneous combination of both strategies to save even more memory. We precisely model the costs and constraints corresponding to Deep Learning frameworks such as PyTorch or Tensorflow, we propose optimal algorithms to find a valid sequence of memory-constrained operations and finally, we evaluate the performance of proposed algorithms on realistic networks and computation platforms. Our experiments show that the possibility to offload can remove one third of the overhead of rematerialization, and that together they can reduce the memory used for activations by a factor 4 to 6, with an overhead below 20%.
| accept | This paper proposes an optimized algorithm to compute a sequence of forward / backward / offload / prefetch operations on activations that optimizes training throughput of linearized DNNs under memory constraints. The rebuttal resolved the reviewers' concerns, and the reviewers unanimously agree to accept the paper. | val | [
"oCBdSDpWbv4",
"90n6CSp0-3F",
"VxkFgkRQmA2",
"q1kkRmEuhFq",
"ybQEZdWA0Z2",
"uyCuwkQUTBP",
"dfHClMZqEqm",
"gZLLEOyzZS",
"DQgDpWekghU",
"6yvTiokJhX",
"MN5AofIcO-E",
"PgAkB5ieuYk",
"7lCST_oGqv",
"spliLGq1lP"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. I wanted to leave a note here acknowledging that we have read and considered your response to our queries. I do believe that your paper will benefit from explaining _why_ and _when_ certain optimizations work better - it will only help readers appreciate your techniques. In the lon... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"6yvTiokJhX",
"nips_2021_BFYlnDtJSqW",
"q1kkRmEuhFq",
"ybQEZdWA0Z2",
"uyCuwkQUTBP",
"dfHClMZqEqm",
"90n6CSp0-3F",
"spliLGq1lP",
"PgAkB5ieuYk",
"7lCST_oGqv",
"nips_2021_BFYlnDtJSqW",
"nips_2021_BFYlnDtJSqW",
"nips_2021_BFYlnDtJSqW",
"nips_2021_BFYlnDtJSqW"
] |
nips_2021_iorEu783qJ5 | Particle Cloud Generation with Message Passing Generative Adversarial Networks | In high energy physics (HEP), jets are collections of correlated particles produced ubiquitously in particle collisions such as those at the CERN Large Hadron Collider (LHC). Machine learning (ML)-based generative models, such as generative adversarial networks (GANs), have the potential to significantly accelerate LHC jet simulations. However, despite jets having a natural representation as a set of particles in momentum-space, a.k.a. a particle cloud, there exist no generative models applied to such a dataset. In this work, we introduce a new particle cloud dataset (JetNet), and apply to it existing point cloud GANs. Results are evaluated using (1) 1-Wasserstein distances between high- and low-level feature distributions, (2) a newly developed Fréchet ParticleNet Distance, and (3) the coverage and (4) minimum matching distance metrics. Existing GANs are found to be inadequate for physics applications, hence we develop a new message passing GAN (MPGAN), which outperforms existing point cloud GANs on virtually every metric and shows promise for use in HEP. We propose JetNet as a novel point-cloud-style dataset for the ML community to experiment with, and set MPGAN as a benchmark to improve upon for future generative models. Additionally, to facilitate research and improve accessibility and reproducibility in this area, we release the open-source JetNet Python package with interfaces for particle cloud datasets, implementations for evaluation and loss metrics, and more tools for ML in HEP development.
| accept | All of the reviewers really liked this paper. They especially noted the clarity of exposition and the fact that the authors intend to open-source their data. During the rebuttal phase, the authors added new experimental results using TreeGAN that helped to address some concerns of the reviewers. Overall this seems like a timely and interesting contribution to the literature on point clouds and GANs. | train | [
"b43qnSQt78M",
"AvB-lkM9uKF",
"6x2SkdbUFZe",
"CX6mm8J5wQ",
"p0LPfDktfTh",
"D3xxmSknGZI",
"rGVS0tomr3z"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces a new particle cloud dataset (JetNet) for high-energy physics. The authors set up a benchmark for particle cloud generation with a few physics-inspired and vision-related metrics. As existing point cloud GANs are not suitable for physics applications, they have also developed a new message-pa... | [
6,
-1,
-1,
-1,
-1,
6,
7
] | [
3,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_iorEu783qJ5",
"rGVS0tomr3z",
"b43qnSQt78M",
"nips_2021_iorEu783qJ5",
"D3xxmSknGZI",
"nips_2021_iorEu783qJ5",
"nips_2021_iorEu783qJ5"
] |
nips_2021_HhUmPH22Vpn | CoFiNet: Reliable Coarse-to-fine Correspondences for Robust PointCloud Registration | We study the problem of extracting correspondences between a pair of point clouds for registration. For correspondence retrieval, existing works benefit from matching sparse keypoints detected from dense points but usually struggle to guarantee their repeatability. To address this issue, we present CoFiNet - Coarse-to-Fine Network which extracts hierarchical correspondences from coarse to fine without keypoint detection. On a coarse scale and guided by a weighting scheme, our model firstly learns to match down-sampled nodes whose vicinity points share more overlap, which significantly shrinks the search space of a consecutive stage. On a finer scale, node proposals are consecutively expanded to patches that consist of groups of points together with associated descriptors. Point correspondences are then refined from the overlap areas of corresponding patches, by a density-adaptive matching module capable to deal with varying point density. Extensive evaluation of CoFiNet on both indoor and outdoor standard benchmarks shows our superiority over existing methods. Especially on 3DLoMatch where point clouds share less overlap, CoFiNet significantly outperforms state-of-the-art approaches by at least 5% on Registration Recall, with at most two-third of their parameters.
| accept | This paper introduces CoFiNet, a coarse-to-fine method to extract hierarchical correspondences between 3D point clouds, which relaxes the need for keypoint detection. Experiments are conducted on datasets with high (3DMatch) and low (3DLoMatch) levels of correspondence.
The paper initially received three accept and two reject recommendations. The main concerns pointed out by the reviewers were the lack of novelty of the proposed method, and the modest improvement compared to recent baselines (e.g. PREDATOR).
After the rebuttal, the initially positive reviewers were convinced and recommended acceptance. In particular, R5BJR, who did a detailed review, supported acceptance of the paper. On the other hand, Rem8a and R1Jc9 still considered the novelty insufficient and the experiments not convincing enough, and stuck to their rejection recommendation.
The AC carefully read the submission and the authors' feedback. The AC considers that the idea of adapting keypoint-free coarse-to-fine refinement methods from 2D to 3D is interesting. Although closely based on prior work, the adaptations and contributions (i.e., the weighting scheme and the differentiable density-adaptive matching) are overall convincing. The experiments also show the relevance of the approach compared to recent baselines, e.g. PREDATOR.
The AC thus recommends acceptance, but highly encourages the authors to carefully take into account the reviewers' comments and their feedback to improve the final version of the paper. In particular, the clarity of the paper should be improved to highlight the contribution and the positioning of the proposed method with respect to related work. The submission has been discussed with the senior area chair, who agrees with the recommendation.
| train | [
"tZGNnq9FK_Z",
"uRY1ihbaWT7",
"1tJKlZhzJoO",
"3qj1wW5URTo",
"n5sAHTdzVef",
"bSrs7QOsZcp",
"y_Vr-CMcJ8U",
"nwvXuJDknQr",
"_Y88Nd9Qav3",
"m6cCOV-RgSs"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors present a coarse-to-fine approach to extract correspondences that can be used for a RANSAC-based point cloud registration.\nThey demonstrate on standardized datasets that they achieve a higher or similar inlier ratio, registration recall and feature matching recall than current state of the art with fe... | [
6,
-1,
-1,
-1,
-1,
-1,
5,
4,
6,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
5,
5,
1,
2
] | [
"nips_2021_HhUmPH22Vpn",
"nwvXuJDknQr",
"tZGNnq9FK_Z",
"m6cCOV-RgSs",
"_Y88Nd9Qav3",
"y_Vr-CMcJ8U",
"nips_2021_HhUmPH22Vpn",
"nips_2021_HhUmPH22Vpn",
"nips_2021_HhUmPH22Vpn",
"nips_2021_HhUmPH22Vpn"
] |
nips_2021_QkljT4mrfs | Partial success in closing the gap between human and machine vision | A few years ago, the first CNN surpassed human performance on ImageNet. However, it soon became clear that machines lack robustness on more challenging test cases, a major obstacle towards deploying machines "in the wild" and towards obtaining better computational models of human visual perception. Here we ask: Are we making progress in closing the gap between human and machine vision? To answer this question, we tested human observers on a broad range of out-of-distribution (OOD) datasets, recording 85,120 psychophysical trials across 90 participants. We then investigated a range of promising machine learning developments that crucially deviate from standard supervised CNNs along three axes: objective function (self-supervised, adversarially trained, CLIP language-image training), architecture (e.g. vision transformers), and dataset size (ranging from 1M to 1B). Our findings are threefold. (1.) The longstanding distortion robustness gap between humans and CNNs is closing, with the best models now exceeding human feedforward performance on most of the investigated OOD datasets. (2.) There is still a substantial image-level consistency gap, meaning that humans make different errors than models. In contrast, most models systematically agree in their categorisation errors, even substantially different ones like contrastive self-supervised vs. standard supervised models. (3.) In many cases, human-to-model consistency improves when training dataset size is increased by one to three orders of magnitude. Our results give reason for cautious optimism: While there is still much room for improvement, the behavioural difference between human and machine vision is narrowing. In order to measure future progress, 17 OOD datasets with image-level human behavioural data and evaluation code are provided as a toolbox and benchmark at: https://github.com/bethgelab/model-vs-human/
| accept | The reviewers were enthusiastic about the breadth and quality of this work. I agree; this is an exceptional paper that will be an important resource for the community in comparing human and machine vision, especially with the open-source evaluation software.
The reviewers offered important suggestions about the wording of some of the conclusions, which the authors were receptive to in the rebuttal. The regression analysis suggested by R-bjmw is also very useful. These changes will further improve the final manuscript. | train | [
"xqLPGjwiInS",
"NIi8GfhT_1l",
"upWdep3ZNVV",
"_VC5MB9xSCO",
"acFFa_vKxf",
"RnPaIWKFlbo",
"4MmCQv19SL",
"1iAX1-WKuLY",
"20SeiP7NuI",
"Ggga_wruH1",
"aG2SSCiFD2d",
"5iCP09mruEo",
"xEMG09AWHU",
"M4Z9JCnjSZp",
"4z__SZ7vOde",
"JpQiXY336zn",
"NDl8KxbIEXD",
"Ugj-h8Lz2F",
"xD3F5gjs-5V",
... | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"... | [
" We would like to thank all four reviewers for their time and valuable feedback!\n\nWe appreciate their assessment of our work as a **“very strong paper with many valuable contributions and insights”** (bmjw), **\"very well written\"** and addressing **\"one of the most important problems in computer vision\"** (m... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
9,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
7
] | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
5
] | [
"nips_2021_QkljT4mrfs",
"upWdep3ZNVV",
"_VC5MB9xSCO",
"RnPaIWKFlbo",
"nips_2021_QkljT4mrfs",
"aG2SSCiFD2d",
"xEMG09AWHU",
"20SeiP7NuI",
"NDl8KxbIEXD",
"nips_2021_QkljT4mrfs",
"5iCP09mruEo",
"Ugj-h8Lz2F",
"WjsKuS14_2x",
"JpQiXY336zn",
"nips_2021_QkljT4mrfs",
"4z__SZ7vOde",
"Ggga_wruH1... |
nips_2021_oiq92o1EFg1 | LLC: Accurate, Multi-purpose Learnt Low-dimensional Binary Codes | Aditya Kusupati, Matthew Wallingford, Vivek Ramanujan, Raghav Somani, Jae Sung Park, Krishna Pillutla, Prateek Jain, Sham Kakade, Ali Farhadi | accept | The problem considered in this paper, namely, of learning compression schemes for high dimensional neural networks, is a very natural and important problem. The paper achieves new, state-of-the-art results with new techniques for this natural problem. While the proposed technique lacks theoretical grounding, the empirical results are quite impressive. In discussions the authors sufficiently clarified concerns the reviewers had about the lack of good baselines for this field. Overall I think this is an interesting paper on an interesting topic and worthy of inclusion. | train | [
"Ybdgrw_5h2S",
"2X5YSX1SiEM",
"tbE7-vOxlBe",
"xFoB-rcOH2X",
"Y23ms_Q1LK",
"ix6EcKQG5Ly",
"ZhG7Rxmyy48",
"R30PZwei07",
"OFh5yzXRvWG",
"SPojnwQfg_c",
"nGHBbhdZlz",
"NjikK_wdEtH",
"xZqa7SV1lu",
"V4nDRm1_YTB"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer,\n\nThank you for engaging in the discussion and increasing the score. Thanks for your time and effort.\n\nRegarding ERM beating all the baselines: Yes, it is a simple solution but we found it to be super effective as the results suggest. We can further improve and build on these ideas while not rel... | [
-1,
7,
-1,
-1,
-1,
-1,
6,
-1,
7,
-1,
-1,
-1,
7,
4
] | [
-1,
2,
-1,
-1,
-1,
-1,
4,
-1,
2,
-1,
-1,
-1,
3,
4
] | [
"tbE7-vOxlBe",
"nips_2021_oiq92o1EFg1",
"ix6EcKQG5Ly",
"Y23ms_Q1LK",
"R30PZwei07",
"2X5YSX1SiEM",
"nips_2021_oiq92o1EFg1",
"OFh5yzXRvWG",
"nips_2021_oiq92o1EFg1",
"ZhG7Rxmyy48",
"xZqa7SV1lu",
"V4nDRm1_YTB",
"nips_2021_oiq92o1EFg1",
"nips_2021_oiq92o1EFg1"
] |
nips_2021_otDgw7LM7Nn | Analytic Insights into Structure and Rank of Neural Network Hessian Maps | The Hessian of a neural network captures parameter interactions through second-order derivatives of the loss. It is a fundamental object of study, closely tied to various problems in deep learning, including model design, optimization, and generalization. Most prior work has been empirical, typically focusing on low-rank approximations and heuristics that are blind to the network structure. In contrast, we develop theoretical tools to analyze the range of the Hessian map, which provide us with a precise understanding of its rank deficiency and the structural reasons behind it. This yields exact formulas and tight upper bounds for the Hessian rank of deep linear networks --- allowing for an elegant interpretation in terms of rank deficiency. Moreover, we demonstrate that our bounds remain faithful as an estimate of the numerical Hessian rank, for a larger class of models such as rectified and hyperbolic tangent networks. Further, we also investigate the implications of model architecture (e.g. width, depth, bias) on the rank deficiency. Overall, our work provides novel insights into the source and extent of redundancy in overparameterized neural networks.
| accept | This paper gives an exact characterization of the rank of the Hessian for linear networks, and shows that in many settings the rank can be much smaller than the number of parameters. The paper then verifies these claims empirically in nonlinear networks and shows that the Hessian is still approximately low rank. Although the reviewers had some concerns initially, their opinions improved during the author response period, where it became clear that the result holds throughout training (not just for minima) and comes with interesting empirical observations. The authors should really incorporate all the comments given by the reviewers to improve the paper, including explaining why the rank of the Hessian is intuitively related to the number of parameters and explaining connections with related work. | train | [
"XbIpmqlg9YT",
"EoOwAnRGN2q",
"0-J3VYfjwdM",
"iuvZgLw1mhF",
"ipkWA7kvjK",
"ZTuaRX3BsrP",
"5vUCaMb26Rh",
"m0_WpOinZg",
"oZsnny4OwbM",
"N25OjuIetI_",
"1CY2a-e0a5h",
"mYKBFQtBtzU",
"QQwgFiV46Fm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper provides several results on the rank of the Hessian of neural networks. The main results are:\n\n(A) upper bounds on the rank of the Hessian of a linear neural network (i.e., identity activations). These are proved by separating the Hessian into the sum of the “outer-product Hessian” and the “functional... | [
8,
-1,
6,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
3,
-1,
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_otDgw7LM7Nn",
"oZsnny4OwbM",
"nips_2021_otDgw7LM7Nn",
"nips_2021_otDgw7LM7Nn",
"ZTuaRX3BsrP",
"nips_2021_otDgw7LM7Nn",
"ZTuaRX3BsrP",
"ZTuaRX3BsrP",
"0-J3VYfjwdM",
"XbIpmqlg9YT",
"ZTuaRX3BsrP",
"QQwgFiV46Fm",
"nips_2021_otDgw7LM7Nn"
] |
nips_2021_d3k38LTDCyO | Well-tuned Simple Nets Excel on Tabular Datasets | Tabular datasets are the last "unconquered castle" for deep learning, with traditional ML methods like Gradient-Boosted Decision Trees still performing strongly even against recent specialized neural architectures. In this paper, we hypothesize that the key to boosting the performance of neural networks lies in rethinking the joint and simultaneous application of a large set of modern regularization techniques. As a result, we propose regularizing plain Multilayer Perceptron (MLP) networks by searching for the optimal combination/cocktail of 13 regularization techniques for each dataset using a joint optimization over the decision on which regularizers to apply and their subsidiary hyperparameters. We empirically assess the impact of these regularization cocktails for MLPs in a large-scale empirical study comprising 40 tabular datasets and demonstrate that (i) well-regularized plain MLPs significantly outperform recent state-of-the-art specialized neural network architectures, and (ii) they even outperform strong traditional ML methods, such as XGBoost.
| accept | The reviewers found this paper somewhat "unsurprising", but potentially high-impact and useful to the community. They also noted that papers like this one can be judged harshly, subjecting the authors to a potentially infinite number of experimental comparisons. In this case, the consensus was that the revisions/responses from the authors did a good job addressing the concerns of the reviewers. We recommend adopting the suggestions arising from the discussion phase, particularly around the choice of title. | val | [
"Fsu9z5sVqyV",
"LXNaaY9nLGy",
"1ukm_Lar_uA",
"DBtZw9UEPKf",
"mJasJIRLDe",
"r13FteYnRPn",
"DvCJWK8Nxh",
"7UXfPTFTDx-",
"FfWcIu2acD",
"ieB9E8Rk5J2",
"eSPQDAHtTeI",
"qM3vNVmuSYn"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you for the considered rebuttal. The reviewers agree that these results are valuable to the community. I have upgraded my score.",
" I appreciate the authors' extra experiments, and no longer have any big concerns about this paper.",
" **On the title change:** We agree with the reviewer regarding potent... | [
-1,
-1,
-1,
-1,
7,
6,
6,
-1,
-1,
-1,
-1,
7
] | [
-1,
-1,
-1,
-1,
4,
4,
3,
-1,
-1,
-1,
-1,
3
] | [
"7UXfPTFTDx-",
"1ukm_Lar_uA",
"DBtZw9UEPKf",
"DvCJWK8Nxh",
"nips_2021_d3k38LTDCyO",
"nips_2021_d3k38LTDCyO",
"nips_2021_d3k38LTDCyO",
"r13FteYnRPn",
"DvCJWK8Nxh",
"mJasJIRLDe",
"qM3vNVmuSYn",
"nips_2021_d3k38LTDCyO"
] |
nips_2021_sfzseGUqFrd | POODLE: Improving Few-shot Learning via Penalizing Out-of-Distribution Samples | In this work, we propose to use out-of-distribution samples, i.e., unlabeled samples coming from outside the target classes, to improve few-shot learning. Specifically, we exploit the easily available out-of-distribution samples to drive the classifier to avoid irrelevant features by maximizing the distance from prototypes to out-of-distribution samples while minimizing that of in-distribution samples (i.e., support, query data). Our approach is simple to implement, agnostic to feature extractors, lightweight without any additional cost for pre-training, and applicable to both inductive and transductive settings. Extensive experiments on various standard benchmarks demonstrate that the proposed method consistently improves the performance of pretrained networks with different architectures.
| accept | This paper proposes using unlabeled samples outside of target classes to improve few-shot learning. The main idea is that discriminating these samples from in-distribution samples would allow the model to improve feature learning.
Reviewers recognize that the proposed method improves over SOTA, but they point to the limited technical novelty of the large-margin component, the marginal improvements over SOTA methods, and the method's reliance on extra data as the shortcomings of this work. However, all reviewers believe that after improving the presentation of the paper and the empirical results, this paper would make a great publication.
Given the above concerns, I recommend rejecting the paper and resubmitting it after taking reviewers' suggestions into account. | train | [
"KYx1LnLkGm_",
"XxDR9iJtwRX",
"fA-r_6n63Y",
"1sGSP6FB9-a",
"IpMWfP05PTG",
"Tj15sXCGkTs",
"vadCuk0bz3",
"v_bB9cr2xpI",
"5HH_Br9yDcC",
"GzfN-JkfY_Z",
"FN-B-uIFch-",
"PXWvcLF4_aa",
"ZVEkKc-CO3",
"Cd2GSg4jgSs"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
" Thanks for your response. Though the reviewer has made the decision, we find it would be better to clarify some of your concerns below.\n\n### 1. The technical novelty is somehow limited regarding the use of large margin classifiers for few-shot classification, as mentioned by other reviewers as well.\n\nWe would... | [
-1,
5,
6,
-1,
-1,
-1,
-1,
5,
-1,
4,
-1,
-1,
-1,
-1
] | [
-1,
4,
3,
-1,
-1,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1
] | [
"XxDR9iJtwRX",
"nips_2021_sfzseGUqFrd",
"nips_2021_sfzseGUqFrd",
"IpMWfP05PTG",
"vadCuk0bz3",
"XxDR9iJtwRX",
"PXWvcLF4_aa",
"nips_2021_sfzseGUqFrd",
"FN-B-uIFch-",
"nips_2021_sfzseGUqFrd",
"v_bB9cr2xpI",
"fA-r_6n63Y",
"nips_2021_sfzseGUqFrd",
"GzfN-JkfY_Z"
] |
nips_2021_mVt55ZQqfTl | Combinatorial Pure Exploration with Bottleneck Reward Function | Yihan Du, Yuko Kuroki, Wei Chen | accept | This paper examines a combinatorial exploration problem in which the learner aims to identify the best "slate" of arms, with a "bottleneck" criterion that defines the reward of each slate as the minimum possible reward of its constituent elements. The authors propose a series of tailor-made algorithms that exploit the problem's bottleneck structure to achieve optimal guarantees, both in a fixed-confidence and fixed-budget setting.
One of the reviewers expressed concerns about the writing style of the paper and the strength of the authors' experiments. During the committee discussion phase, these concerns were, to a large extent, alleviated by the authors' replies, so a decision was reached to accept the paper. At the same time, I would urge the authors to implement a version of their exchanges with the reviewers and adjust the denser parts of the paper accordingly, as this would significantly strengthen their contribution. | train | [
"ek8hx7j_a9Y",
"JMtFzI443iB",
"2RPTKOsvHJ_",
"j-RpHuVwJ_D",
"2zefihY8qk6",
"OH3ThoXWPv",
"hN-XIw5NaVG",
"C6AjqQgM42n",
"KsjYzWpGADC",
"gdv90SQbzV",
"IgD7KrkVJpG",
"BBwtqBCJXb2",
"4DH8FNfl9oK",
"PQmuKgiZso8",
"FUx_ufQaX_",
"rxsaSYDUB2M",
"ebz-BJR9Ni0",
"nDWTC65pW1x"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your further response. \nWe are happy to report additional experimental results for the fixed-budget setting as follows. \n\nWe perform 3000 independent runs for each algorithm to show their error probabilitiy results aross runs.\nThe following table shows the error probability results on the path i... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
-1,
-1,
-1,
-1,
4,
4
] | [
"j-RpHuVwJ_D",
"2RPTKOsvHJ_",
"gdv90SQbzV",
"2zefihY8qk6",
"hN-XIw5NaVG",
"ebz-BJR9Ni0",
"OH3ThoXWPv",
"IgD7KrkVJpG",
"PQmuKgiZso8",
"rxsaSYDUB2M",
"nips_2021_mVt55ZQqfTl",
"nips_2021_mVt55ZQqfTl",
"IgD7KrkVJpG",
"BBwtqBCJXb2",
"4DH8FNfl9oK",
"nDWTC65pW1x",
"nips_2021_mVt55ZQqfTl",
... |
nips_2021_JNSwviqJhS | Densely connected normalizing flows | Normalizing flows are bijective mappings between inputs and latent representations with a fully factorized distribution. They are very attractive due to exact likelihood evaluation and efficient sampling. However, their effective capacity is often insufficient since the bijectivity constraint limits the model width. We address this issue by incrementally padding intermediate representations with noise. We precondition the noise in accordance with previous invertible units, which we describe as cross-unit coupling. Our invertible glow-like modules increase the model expressivity by fusing a densely connected block with Nyström self-attention. We refer to our architecture as DenseFlow since both cross-unit and intra-module couplings rely on dense connectivity. Experiments show significant improvements due to the proposed contributions and reveal state-of-the-art density estimation under moderate computing budgets.
| accept | The paper proposes a normalizing-flow architecture that includes latent variables and is geared towards image modelling. The paper demonstrates good empirical performance of the proposed architecture at modest computational budgets.
There is a clear consensus among reviewers about the strengths and weaknesses of the paper: the main strength is the good empirical performance of the proposed architecture; the main drawback is the incremental novelty. Overall, the paper seems a useful contribution for generative-modelling practitioners, so I'm happy to recommend acceptance. | test | [
"DEFP-8Uo8oL",
"M9WASzboBRc",
"gmGFYbumR3B",
"6cjDRoQ0idV",
"ZQJBk4INpJ0",
"NJc2FE0oAvL",
"Fl45oCY_0Wv",
"aHi8Vmg89_w",
"DXCyNdwcd6z",
"bnqdgmpSxz4",
"_lEpAVMBcm-",
"sWmVbsKyU_S"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes an approach to improve the modelling capacity of normalising flow based models. The proposed approach incrementally pads intermediate representations with noise, which is conditioned on the output of previous invertible units using affine transformations. The paper reports promising results de... | [
7,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
4,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_JNSwviqJhS",
"Fl45oCY_0Wv",
"DXCyNdwcd6z",
"aHi8Vmg89_w",
"nips_2021_JNSwviqJhS",
"bnqdgmpSxz4",
"DEFP-8Uo8oL",
"sWmVbsKyU_S",
"_lEpAVMBcm-",
"ZQJBk4INpJ0",
"nips_2021_JNSwviqJhS",
"nips_2021_JNSwviqJhS"
] |
nips_2021_REjT_c1Eejk | Snowflake: Scaling GNNs to high-dimensional continuous control via parameter freezing | Recent research has shown that graph neural networks (GNNs) can learn policies for locomotion control that are as effective as a typical multi-layer perceptron (MLP), with superior transfer and multi-task performance. However, results have so far been limited to training on small agents, with the performance of GNNs deteriorating rapidly as the number of sensors and actuators grows. A key motivation for the use of GNNs in the supervised learning setting is their applicability to large graphs, but this benefit has not yet been realised for locomotion control. We show that poor scaling in GNNs is a result of increasingly unstable policy updates, caused by overfitting in parts of the network during training. To combat this, we introduce Snowflake, a GNN training method for high-dimensional continuous control that freezes parameters in selected parts of the network. Snowflake significantly boosts the performance of GNNs for locomotion control on large agents, now matching the performance of MLPs while offering superior transfer properties.
| accept | There was quite an extensive discussion between the reviewers on this paper after the author response. There is no doubt that the paper proposes a simple (in a good way) fix to train GNNs with PPO, so it makes a valuable contribution to the community. There was also agreement that the paper could potentially be made a lot stronger with additional work: e.g., since the paper is mainly based on empirical evidence for the proposed method, evaluating freezing part by part (reviewer PQSJ) would be very interesting, as would more comparisons to other work. The main difference was in the weighting of these two aspects. | test | [
"qkosu-Iks9c",
"kFGevVex-il",
"KU1nFlcwsnm",
"0yetMovishw",
"Jf9mBX5zaJ",
"XrYnO763XvV",
"mANFEVKZOfo",
"ImaMfAe8R1",
"WDnLj_Z420N",
"QViUEI5Va6o",
"eKhcN6Ogijs"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper considers training RL agents by using graph neural networks (GNNs) as the function approximator, therefore inducing a useful inductive bias. Training GNNs is not without its hurdles and thus far previous work has focused on relatively small agents, putting into question scalability. This work attempts to... | [
7,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_REjT_c1Eejk",
"KU1nFlcwsnm",
"XrYnO763XvV",
"nips_2021_REjT_c1Eejk",
"QViUEI5Va6o",
"eKhcN6Ogijs",
"qkosu-Iks9c",
"0yetMovishw",
"nips_2021_REjT_c1Eejk",
"nips_2021_REjT_c1Eejk",
"nips_2021_REjT_c1Eejk"
] |
nips_2021_StbpmmlJbH | Subgame solving without common knowledge | In imperfect-information games, subgame solving is significantly more challenging than in perfect-information games, but in the last few years, such techniques have been developed. They were the key ingredient to the milestone of superhuman play in no-limit Texas hold'em poker. Current subgame-solving techniques analyze the entire common-knowledge closure of the player's current information set, that is, the smallest set of nodes within which it is common knowledge that the current node lies. While this is acceptable in games like poker where the common-knowledge closure is relatively small, many practical games have more complex information structure, which renders the common-knowledge closure impractically large to enumerate or even reasonably approximate. We introduce an approach that overcomes this obstacle, by instead working with only low-order knowledge. Our approach allows an agent, upon arriving at an infoset, to basically prune any node that is no longer reachable, thereby massively reducing the game tree size relative to the common-knowledge subgame. We prove that, as is, our approach can increase exploitability compared to the blueprint strategy. However, we develop three avenues by which safety can be guaranteed. First, safety is guaranteed if the results of subgame solves are incorporated back into the blueprint. Second, we provide a method where safety is achieved by limiting the infosets at which subgame solving is performed. Third, we prove that our approach, when applied at every infoset reached during play, achieves a weaker notion of equilibrium, which we coin affine equilibrium, and which may be of independent interest. We show that affine equilibria cannot be exploited by any Nash strategy of the opponent, so an opponent who wishes to exploit must open herself to counter-exploitation. Even without the safety-guaranteeing additions, experiments on medium-sized games show that our approach always reduced exploitability in practical games even when applied at every infoset, and a depth-limited version of it led to---to our knowledge---the first strong AI for the challenge problem dark chess.
| accept | This appears to be a solid paper with strong reviews. The reviewer concerns were addressed with clarity and detail by the authors. I hope that any final version of the paper will incorporate improvements as a result of the reviewer feedback.
| test | [
"mfbrOyV31Y",
"gdX7wWqjlWt",
"2gwTxoGvJtv",
"ivrvkaqmidC",
"0W00WOH1vMR",
"IVz77ulNPk9",
"ZjnI6VjruF",
"N7MuXIZ7h_m",
"z3HK7J0NwSM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This work proposes a new method for subgame solving in two-player zero-sum extensive-form games with imperfect information. Traditional subgame solvers rely on the so-called common-knowledge closure that guarantees the opponent can not exploit the strategy computed in the subgame in the whole game. The problem wit... | [
6,
-1,
7,
-1,
-1,
-1,
-1,
7,
8
] | [
4,
-1,
5,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_StbpmmlJbH",
"ZjnI6VjruF",
"nips_2021_StbpmmlJbH",
"z3HK7J0NwSM",
"2gwTxoGvJtv",
"mfbrOyV31Y",
"N7MuXIZ7h_m",
"nips_2021_StbpmmlJbH",
"nips_2021_StbpmmlJbH"
] |
nips_2021_AlD5WD2ANIQ | Fair Algorithms for Multi-Agent Multi-Armed Bandits | Safwan Hossain, Evi Micha, Nisarg Shah | accept | The paper considers a variant of the MAB problem where there are N agents who may perceive the "reward" from the K arms differently. The goal considered in this paper is to obtain a fair distribution over the arms in terms of Nash social welfare. This application is new, even if the algorithms and analysis do not seem to be that different from the standard MAB literature. The classical lower bound is also shown to hold in this case. | test | [
"-IBsFDd-GA",
"aA_pKamvuOm",
"qdyGP99c6YA",
"U7XRZlk8mWQ",
"s7aRGC4WbGr",
"JylMLxWfaZS",
"4F22TqPwzkN",
"NV72OEIXPP",
"-43_jlz1lR9",
"IZKg4ZV42Vk",
"2S6iCdJ2rE4",
"IGkiocqEgIV",
"pKTbCHiTXEY"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for reading our response. Just to further clarify why Hoeffding's inequality cannot be applied even after conditioning on the number of pulls, let us borrow the example from Section 1.3.1 of the book (https://arxiv.org/pdf/1904.07272.pdf) that we referenced in our previous response. \n\nConsider the usu... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"aA_pKamvuOm",
"IZKg4ZV42Vk",
"U7XRZlk8mWQ",
"JylMLxWfaZS",
"nips_2021_AlD5WD2ANIQ",
"4F22TqPwzkN",
"pKTbCHiTXEY",
"s7aRGC4WbGr",
"IGkiocqEgIV",
"2S6iCdJ2rE4",
"nips_2021_AlD5WD2ANIQ",
"nips_2021_AlD5WD2ANIQ",
"nips_2021_AlD5WD2ANIQ"
] |
nips_2021_hyJKKIhfxxT | VAST: Value Function Factorization with Variable Agent Sub-Teams | Value function factorization (VFF) is a popular approach to cooperative multi-agent reinforcement learning in order to learn local value functions from global rewards. However, state-of-the-art VFF is limited to a handful of agents in most domains. We hypothesize that this is due to the flat factorization scheme, where the VFF operator becomes a performance bottleneck with an increasing number of agents. Therefore, we propose VFF with variable agent sub-teams (VAST). VAST approximates a factorization for sub-teams which can be defined in an arbitrary way and vary over time, e.g., to adapt to different situations. The sub-team values are then linearly decomposed for all sub-team members. Thus, VAST can learn on a more focused and compact input representation of the original VFF operator. We evaluate VAST in three multi-agent domains and show that VAST can significantly outperform state-of-the-art VFF, when the number of agents is sufficiently large.
| accept | The reviewers for the most part recognise the novelty/originality of the presented approach. The experiments, through which a strong case is made for the scalability of the method to a large number of agents, seem solid and competitive with the state of the art. Some concerns were raised by Reviewer t76M about the clarity of Section 4.2 on the metagradient subroutine, which need to be addressed in the final version. | test | [
"PRH5eX2pA1x",
"dmTdBc7odc1",
"KGBU3GOxxlr",
"2lvIcaHybMK",
"ruSAp1Icu6B",
"34Ry6d7T3bY",
"GEnZRKI1E_U",
"si94QPj23nz"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your positive review and feedback regarding highly related work:\n\n**Coordination Graphs (CG)**\n\nThe motivation of CG is similar to VAST, where the MAS is structured as a (connected) graph instead of disjoint sub-teams which is then exploited for VFF. Basically, CG enriches VFF with agent relatio... | [
-1,
-1,
-1,
-1,
7,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
3,
5,
3,
4
] | [
"34Ry6d7T3bY",
"ruSAp1Icu6B",
"GEnZRKI1E_U",
"si94QPj23nz",
"nips_2021_hyJKKIhfxxT",
"nips_2021_hyJKKIhfxxT",
"nips_2021_hyJKKIhfxxT",
"nips_2021_hyJKKIhfxxT"
] |
nips_2021_Z2ZWIvNeVUl | On the Stochastic Stability of Deep Markov Models | Deep Markov models (DMM) are generative models which are scalable and expressive generalization of Markov models for representation, learning, and inference problems. However, the fundamental stochastic stability guarantees of such models have not been thoroughly investigated. In this paper, we present a novel stability analysis method and provide sufficient conditions of DMM's stochastic stability. The proposed stability analysis is based on the contraction of probabilistic maps modeled by deep neural networks. We make connections between the spectral properties of neural network's weights and different types of used activation function on the stability and overall dynamic behavior of DMMs with Gaussian distributions. Based on the theory, we propose a few practical methods for designing constrained DMMs with guaranteed stability. We empirically substantiate our theoretical results via intuitive numerical experiments using the proposed stability constraints.
| accept | This paper analyzes the stability properties of _deep Markov models_ (DMMs), which are nonlinear dynamical systems that are parameterized by deep neural networks (basically, replacing the transition/emission functions of an HMM with neural nets). The form of stability used in this paper is _stochastic stability_, which essentially means that the first and second moments of a dynamical process asymptotically converge. (Note that this is different from the notion of _algorithmic stability_ used in learning theory.) The paper provides sufficient conditions under which a DMM is stochastically stable, and then analyzes how various architectural considerations (activation, weights, depth, etc.) affect stability. A numerical study rounds out the picture.
The reviews were unanimously positive, with 7s (Accept) across the board. The theory is sound; the work feels complete; and the paper is well written and easy to follow, with minor typos. None of the reviewers were all that confident in their assessments (perhaps this is a niche subject for the NeurIPS audience), but all agreed that it's definitely a solid paper.
In my opinion, the main weakness (echoed by Reviewer fyK8) is that not much discussion is devoted to motivating stability. Why is stochastic stability important in applications of dynamical systems? The introduction ignores this question and starts from the assumption that stability is important and, therefore, worth studying. As far as I can tell, the only discussion of why stability is important comes up at the end, in the Broader Impact statement, which only says, "Stability is the major concern of many safety-critical systems. ... [T]he methods presented in this paper have the potential for practical impact in many real-world applications such as unmanned autonomous vehicles, robotics, or process control applications." OK, but why? What would happen if an _unstable_ system were deployed in an autonomous vehicle? How specifically does stochastic stability ensure safety in an autonomous vehicle? I can kind of fill in the blanks myself, but I'd like the paper to be more explicit about how this highly theoretical study has practical impact.
One other thing I'll note is that the term "stability," in the context of ML Theory, typically implies something different from stochastic stability (which comes from Control Theory). Readers familiar with _algorithmic stability_ (see Bousquet & Elisseeff, JMLR 2002) may be confused or misled by the title/abstract. I recommend adding something in the abstract and intro that (a) briefly describes stochastic stability and (b) makes it clear that it is not the same as algorithmic stability. | val | [
"oHvglrNkwUR",
"C58Ba2SNC9P",
"QH0MRiijIFK",
"PiWuGfL5zE",
"yQxRZkFENfe",
"7Pi3VVHXCX"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper discusses stable deep Markov models (DMM), which can be used to model stochastic dynamical systems, restricting to the case where state transition probabilities are given by normal distributions with mean and covariance matrices parameterized by two deep neural nets.\nOne of the main contributions of thi... | [
7,
7,
-1,
-1,
-1,
7
] | [
3,
3,
-1,
-1,
-1,
3
] | [
"nips_2021_Z2ZWIvNeVUl",
"nips_2021_Z2ZWIvNeVUl",
"C58Ba2SNC9P",
"7Pi3VVHXCX",
"oHvglrNkwUR",
"nips_2021_Z2ZWIvNeVUl"
] |
nips_2021_LZDiWaC9CGL | Multiwavelet-based Operator Learning for Differential Equations | Gaurav Gupta, Xiongye Xiao, Paul Bogdan | accept | This paper investigates the problem of approximating the solution operator of a partial differential equation (PDE) and proposes a novel neural architecture based on the multi-wavelet transform to approximate the kernel of the solution operator. The key innovation of this paper is a clever application of the multi-wavelet transform that allows for a simpler solution. There is a general consensus among the expert reviewers that this paper is of high quality and offers a new solution to an important problem, one that would have a high impact on the PDE community. Some reviewers raised the criticism that the paper didn't provide results on complex real-world scenarios but focused instead on simple settings. I consider this weakness to be minor for the NeurIPS audience, especially after the authors have satisfactorily described potential directions to improve this work in their responses.
As a result, I recommend this paper for publication at NeurIPS 2021. | train | [
"3N-TVJuy2FA",
"hqJuAhz5wY0",
"CoJtqK1Vzme",
"n_YG4-6CiaI",
"cR_541hzwQU",
"U1CXJIeiW-",
"EKJ1qbkh1i",
"grMX6O-89_q",
"iPuM1RxbGIS",
"19Br4TjPhIi",
"dDnF2HVzqr",
"bPN5Bxc0Ot",
"fBBVpdhq2hU",
"MH_fN4OKRHq",
"R3nr1kYsR1X",
"3IeKrHBQG_j",
"HPwOK3aWHqu",
"QxqUtTC3i2i",
"BDkp6iOoqEn"
... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"... | [
" Thank you to the authors for their answer, I keep my rating.",
" We are thankful to the reviewer for raising the computational points and are happy to see that the concerns are addressed. Indeed, a better computational approach for multiwavelets is a promising research direction. Finally, we are in the process ... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
8
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"MH_fN4OKRHq",
"n_YG4-6CiaI",
"U1CXJIeiW-",
"3IeKrHBQG_j",
"nips_2021_LZDiWaC9CGL",
"R3nr1kYsR1X",
"fBBVpdhq2hU",
"iPuM1RxbGIS",
"dDnF2HVzqr",
"nips_2021_LZDiWaC9CGL",
"19Br4TjPhIi",
"nips_2021_LZDiWaC9CGL",
"BDkp6iOoqEn",
"QxqUtTC3i2i",
"cR_541hzwQU",
"HPwOK3aWHqu",
"nips_2021_LZDiW... |
nips_2021_M5j42PvY65V | Intermediate Layers Matter in Momentum Contrastive Self Supervised Learning | We show that bringing intermediate layers' representations of two augmented versions of an image closer together in self-supervised learning helps to improve the momentum contrastive (MoCo) method. To this end, in addition to the contrastive loss, we minimize the mean squared error between the intermediate layer representations or make their cross-correlation matrix closer to an identity matrix. Both loss objectives either outperform standard MoCo, or achieve similar performances on three diverse medical imaging datasets: NIH-Chest Xrays, Breast Cancer Histopathology, and Diabetic Retinopathy. The gains of the improved MoCo are especially large in a low-labeled data regime (e.g. 1% labeled data) with an average gain of 5% across three datasets. We analyze the models trained using our novel approach via feature similarity analysis and layer-wise probing. Our analysis reveals that models trained via our approach have higher feature reuse compared to a standard MoCo and learn informative features earlier in the network. Finally, by comparing the output probability distribution of models fine-tuned on small versus large labeled data, we conclude that our proposed method of pre-training leads to lower Kolmogorov–Smirnov distance, as compared to a standard MoCo. This provides additional evidence that our proposed method learns more informative features in the pre-training phase which could be leveraged in a low-labeled data regime.
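As a rough illustration of the two auxiliary objectives named in this abstract, the sketch below computes (i) the mean squared error between intermediate-layer features of two augmented views and (ii) a penalty pushing their cross-correlation matrix towards the identity. All names and shapes here are hypothetical; this is not the authors' code, and the real method applies these losses inside a momentum-contrastive training loop.

```python
import numpy as np

def intermediate_losses(f1, f2):
    """f1, f2: (batch, dim) intermediate-layer features of the same
    images under two augmentations. Returns (mse, cross_corr) losses."""
    # (i) Mean squared error between the two views' representations.
    mse = np.mean((f1 - f2) ** 2)
    # (ii) Cross-correlation matrix of standardized features, pushed
    # towards the identity (diagonal -> 1, off-diagonal -> 0).
    z1 = (f1 - f1.mean(0)) / (f1.std(0) + 1e-8)
    z2 = (f2 - f2.mean(0)) / (f2.std(0) + 1e-8)
    c = z1.T @ z2 / f1.shape[0]
    cross_corr = np.sum((c - np.eye(c.shape[0])) ** 2)
    return mse, cross_corr

rng = np.random.default_rng(0)
view1 = rng.normal(size=(32, 16))
view2 = view1 + 0.1 * rng.normal(size=(32, 16))  # features of the other view
print(intermediate_losses(view1, view2))
```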
| accept | This paper proposes a straightforward but clever and useful extension of MoCo by adding intermediate features.
A similar approach has been proposed earlier (Gong et al., withdrawn from ICLR, on openreview, cited in the proposed paper), but this paper makes a great case for it. The paper tests on three very different, biomedical and openly available datasets and achieves very good results. The authors have addressed, and promised to include, the details which the reviewers requested from the original submission, so overall this paper was viewed positively by all reviewers and the meta-reviewer.
| train | [
"M1r7U3glKLZ",
"i7LlIbj11Xs",
"gcZE8HNRtjD",
"l019mPzpW-9",
"k8DzdRjZPGR",
"XP5dcSbCEz6",
"jxLGIXz9cnf",
"rIsFnNSfdTN",
"5Gr3DNz4PJ-",
"9agPhxn9th",
"teIifEFWlhb"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for reconsidering the rating. Please find below our responses to your comments:\n\n1. As mentioned in the above response, typically in the medical domain, the models are fully finetuned after the SSL stage. One of the reasons for doing so is because only finetuning the last layer is not enou... | [
-1,
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"gcZE8HNRtjD",
"nips_2021_M5j42PvY65V",
"5Gr3DNz4PJ-",
"nips_2021_M5j42PvY65V",
"rIsFnNSfdTN",
"teIifEFWlhb",
"l019mPzpW-9",
"i7LlIbj11Xs",
"9agPhxn9th",
"nips_2021_M5j42PvY65V",
"nips_2021_M5j42PvY65V"
] |
nips_2021_luGpyKCzOPI | An Efficient Pessimistic-Optimistic Algorithm for Stochastic Linear Bandits with General Constraints | Xin Liu, Bin Li, Pengyi Shi, Lei Ying | accept | This paper looks at linear stochastic bandits with unknown, non-linear constraints.
We had a discussion about the novelty of this paper compared to the existing ones (with linear constraints). Even if the global idea is roughly the same, we believe that this paper still offers interesting new methodology and that the non-linearity of the constraints is actually quite challenging.
In the end, we believe this paper should pass the acceptance bar.
"uaP88gzZhyp",
"Uj4908Cxx_V",
"o6PGh8eTD7p",
"dwgIbYTqwdO",
"C2tRCLP1HL",
"EJHt0mZUIua",
"FTEsN9aURHP",
"Jbijy6OsYKA",
"V0cA4jm8OWe",
"yq96yp3g0f2",
"WNUnC-SwmS6",
"ArE8hoW0UaM",
"yWinUMIvF1",
"1zBs1UHui8",
"RsS_WgRSwpj",
"om_7MSdR3Q",
"aF51l1TxfpF",
"zRbAP9Pr4Cj",
"ZMaFt8K93k",
... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_re... | [
" Yes, let's not get into an endless loop :-) While we respectfully disagree with the score, we greatly appreciate your detailed comments! We will incorporate these valuable comments in the revision, and we agree these changes will indeed further highlight our contributions and make the paper stronger. Again, our s... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"o6PGh8eTD7p",
"nips_2021_luGpyKCzOPI",
"dwgIbYTqwdO",
"C2tRCLP1HL",
"aF51l1TxfpF",
"FTEsN9aURHP",
"RsS_WgRSwpj",
"nips_2021_luGpyKCzOPI",
"yq96yp3g0f2",
"yWinUMIvF1",
"ArE8hoW0UaM",
"om_7MSdR3Q",
"2Uq8LWSTDxN",
"ZMaFt8K93k",
"Jbijy6OsYKA",
"zRbAP9Pr4Cj",
"Uj4908Cxx_V",
"nips_2021_... |
nips_2021_oZg-aOyHL-h | Efficiently Learning One Hidden Layer ReLU Networks From Queries | Sitan Chen, Adam Klivans, Raghu Meka | accept | Solid progress in a well-studied line of research
"MKFwAtfaim4",
"bdEQBpcGXed",
"_M58iWxjfcc",
"6DtBBQQGlNS",
"e_6rKz1s5TV",
"4VLj7Bl2C1M",
"VqpUmfDZQbr",
"kHxPHUbg79",
"vswlP5uxYi",
"_eCO_CG38B3",
"Ptu1rLv21wm"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper considers the problem of learning depth-2 ReLU networks w.r.t. Gaussian inputs, where the learner has query (black-box) access to the target network. The problem of learning depth-2 networks (without queries) has been extensively studied in recent years. All known efficient algorithms require assumptions... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
8
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"nips_2021_oZg-aOyHL-h",
"nips_2021_oZg-aOyHL-h",
"nips_2021_oZg-aOyHL-h",
"Ptu1rLv21wm",
"_eCO_CG38B3",
"vswlP5uxYi",
"MKFwAtfaim4",
"nips_2021_oZg-aOyHL-h",
"nips_2021_oZg-aOyHL-h",
"nips_2021_oZg-aOyHL-h",
"nips_2021_oZg-aOyHL-h"
] |
nips_2021_FyI2-YoHHd | Learning Nonparametric Volterra Kernels with Gaussian Processes | This paper introduces a method for the nonparametric Bayesian learning of nonlinear operators, through the use of the Volterra series with kernels represented using Gaussian processes (GPs), which we term the nonparametric Volterra kernels model (NVKM). When the input function to the operator is unobserved and has a GP prior, the NVKM constitutes a powerful method for both single and multiple output regression, and can be viewed as a nonlinear and nonparametric latent force model. When the input function is observed, the NVKM can be used to perform Bayesian system identification. We use recent advances in efficient sampling of explicit functions from GPs to map process realisations through the Volterra series without resorting to numerical integration, allowing scalability through doubly stochastic variational inference, and avoiding the need for Gaussian approximations of the output processes. We demonstrate the performance of the model for both multiple output regression and system identification using standard benchmarks.
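For readers unfamiliar with the Volterra series, its standard second-order truncation (generic textbook notation, which may differ from the paper's) is
$$ y(t) = G_0 + \int G_1(\tau)\, x(t-\tau)\, \mathrm{d}\tau + \iint G_2(\tau_1, \tau_2)\, x(t-\tau_1)\, x(t-\tau_2)\, \mathrm{d}\tau_1\, \mathrm{d}\tau_2, $$
and the NVKM described above places GP priors on the Volterra kernels $G_1, G_2, \ldots$, while the input $x$ is either observed (system identification) or given a GP prior (regression).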
| accept | The reviewers found the paper interesting and well written, with a thorough and clear literature review, and methodologically sound. All reviewers recommend acceptance, and the remaining concerns can be addressed in the final camera-ready version. I recommend that the authors carefully go through the manuscript in light of the reviewer comments.
| train | [
"YAkCjPaTi3E",
"hPHHee2-rjG",
"kYxDZrEx90",
"vU_waviemrl",
"_AsaEXHA6kX",
"QlXfDD2kcT8",
"seBn9TT_fMh",
"miQXtOckfP7",
"4rMQZF4YqrG",
"uz2Sa9pqsEX",
"g4whwewBwf"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We appreciate the clarification from the reviewer, and plan to implement their suggestions for the colour scheme and inset for the plot in the revised manuscript. ",
"The paper places GP priors over the kernels of a Volterra series which is driven by an input process drawn from a GP, which may optionally have a... | [
-1,
8,
-1,
7,
-1,
6,
-1,
-1,
-1,
-1,
7
] | [
-1,
4,
-1,
4,
-1,
5,
-1,
-1,
-1,
-1,
3
] | [
"kYxDZrEx90",
"nips_2021_FyI2-YoHHd",
"miQXtOckfP7",
"nips_2021_FyI2-YoHHd",
"uz2Sa9pqsEX",
"nips_2021_FyI2-YoHHd",
"vU_waviemrl",
"hPHHee2-rjG",
"g4whwewBwf",
"QlXfDD2kcT8",
"nips_2021_FyI2-YoHHd"
] |
nips_2021_YqYt54gU-XV | DiBS: Differentiable Bayesian Structure Learning | Bayesian structure learning allows inferring Bayesian network structure from data while reasoning about the epistemic uncertainty---a key element towards enabling active causal discovery and designing interventions in real world systems. In this work, we propose a general, fully differentiable framework for Bayesian structure learning (DiBS) that operates in the continuous space of a latent probabilistic graph representation. Contrary to existing work, DiBS is agnostic to the form of the local conditional distributions and allows for joint posterior inference of both the graph structure and the conditional distribution parameters. This makes our formulation directly applicable to posterior inference of nonstandard Bayesian network models, e.g., with nonlinear dependencies encoded by neural networks. Using DiBS, we devise an efficient, general purpose variational inference method for approximating distributions over structural models. In evaluations on simulated and real-world data, our method significantly outperforms related approaches to joint posterior inference.
| accept | Reviewers agree that the proposed use of continuous embeddings for Bayesian learning of directed graphical models is innovative and potentially high-impact. Reviews generally have very positive comments about the manuscript's clear presentation and promising experimental results. In future revisions, please discuss the motivations for the "DiBS+" variant more thoroughly, and clarify the points about technical details and evaluation metrics that were discussed with reviewers.
"X3C24F66kTJ",
"K6obtX78sp9",
"o_3PV-vW597",
"nLfG89kcPeC",
"J1BN0QR-qid",
"wrPCGP2Pf9_",
"1KR3FGrqRM",
"ihyH3ETNtHp",
"omDrKWMq2nx",
"3OnVAvzF0H9",
"L6bgJc0eJ6W",
"6d23Jsl2lf",
"k-Bz6xwVOu"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I will keep my original score (increasing the score wouId not be unfair either) and I strongly support that this manuscript will be accepted.",
" Thank you to the authors for this detailed response! I am maintaining my recommendation for acceptance.\n\nA couple brief comments in response:\n\n> Hamiltonian Monte... | [
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
8
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"omDrKWMq2nx",
"ihyH3ETNtHp",
"J1BN0QR-qid",
"nips_2021_YqYt54gU-XV",
"1KR3FGrqRM",
"k-Bz6xwVOu",
"nLfG89kcPeC",
"6d23Jsl2lf",
"L6bgJc0eJ6W",
"nips_2021_YqYt54gU-XV",
"nips_2021_YqYt54gU-XV",
"nips_2021_YqYt54gU-XV",
"nips_2021_YqYt54gU-XV"
] |
nips_2021_MGHO3xLMohC | Nonparametric estimation of continuous DPPs with kernel methods | Determinantal Point Process (DPPs) are statistical models for repulsive point patterns. Both sampling and inference are tractable for DPPs, a rare feature among models with negative dependence that explains their popularity in machine learning and spatial statistics. Parametric and nonparametric inference methods have been proposed in the finite case, i.e. when the point patterns live in a finite ground set. In the continuous case, only parametric methods have been investigated, while nonparametric maximum likelihood for DPPs -- an optimization problem over trace-class operators -- has remained an open question. In this paper, we show that a restricted version of this maximum likelihood (MLE) problem falls within the scope of a recent representer theorem for nonnegative functions in an RKHS. This leads to a finite-dimensional problem, with strong statistical ties to the original MLE. Moreover, we propose, analyze, and demonstrate a fixed point algorithm to solve this finite-dimensional problem. Finally, we also provide a controlled estimate of the correlation kernel of the DPP, thus providing more interpretability.
| accept | This paper studies nonparametric maximum likelihood estimation for continuous DPPs. In contrast to discrete DPPs, for continuous ones there are many open questions and new challenges. The main results are based on recent representer theorems for continuous DPPs as nonnegative functions in an RKHS. The paper uses this machinery to develop a new fixed point algorithm which is guaranteed to increase the MLE objective after each step. Furthermore they give strong statistical guarantees – bounds on the error of their estimator in the appropriate metric, depending on the sample complexity. All the reviewers agreed that the results were interesting and original, and that the paper was well-written and accessible. | train | [
"bpdAvuVOCQR",
"xIebigsg_-4",
"8MBLUfUGOeV",
"T3r7mmyUxiS",
"rKYbQyUYO30",
"mC9v1GCsXB8",
"_3rjsttAWBf",
"Zy9P2weXaKm",
"4s2_YhXeRa6"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their rebuttal; my score remains unchanged (7) and I recommend accepting this paper.",
" Thank you for the detailed response to my questions/comments. I’ve read through the other reviews and rebuttal comments, and am satisfied with the remarks. I maintain my review score of 7 (Good pap... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
3,
3
] | [
"mC9v1GCsXB8",
"rKYbQyUYO30",
"nips_2021_MGHO3xLMohC",
"_3rjsttAWBf",
"4s2_YhXeRa6",
"Zy9P2weXaKm",
"8MBLUfUGOeV",
"nips_2021_MGHO3xLMohC",
"nips_2021_MGHO3xLMohC"
] |
nips_2021_QZpx42n0BWr | FINE Samples for Learning with Noisy Labels | Modern deep neural networks (DNNs) become frail when the datasets contain noisy (incorrect) class labels. Robust techniques in the presence of noisy labels can be categorized into two folds: developing noise-robust functions or using noise-cleansing methods by detecting the noisy data. Recently, noise-cleansing methods have been considered as the most competitive noisy-label learning algorithms. Despite their success, their noisy label detectors are often based on heuristics more than a theory, requiring a robust classifier to predict the noisy data with loss values. In this paper, we propose a novel detector for filtering label noise. Unlike most existing methods, we focus on each data's latent representation dynamics and measure the alignment between the latent distribution and each representation using the eigen decomposition of the data gram matrix. Our framework, coined as filtering noisy instances via their eigenvectors (FINE), provides a robust detector with derivative-free simple methods having theoretical guarantees. Under our framework, we propose three applications of the FINE: sample-selection approach, semi-supervised learning approach, and collaboration with noise-robust loss functions. Experimental results show that the proposed methods consistently outperform corresponding baselines for all three applications on various benchmark datasets.
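A minimal sketch of the eigenvector-alignment idea from this abstract (hypothetical names; the full FINE framework adds sample selection, semi-supervised learning, and robust-loss variants on top of this score): for each class, take the principal eigenvector of the gram matrix of that class's representations and score each sample by its squared cosine alignment with it, flagging low-alignment samples as likely noisy.

```python
import numpy as np

def fine_scores(features, labels):
    """Squared cosine between each sample's representation and the top
    eigenvector of its class's gram matrix; low scores suggest noise."""
    scores = np.zeros(len(labels))
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        F = features[idx]                       # (n_c, dim)
        _, vecs = np.linalg.eigh(F.T @ F)       # eigenvalues ascending
        u = vecs[:, -1]                         # principal eigenvector
        cos = F @ u / (np.linalg.norm(F, axis=1) + 1e-12)
        scores[idx] = cos ** 2
    return scores

rng = np.random.default_rng(0)
clean = rng.normal(loc=3.0, size=(50, 8))   # a coherent class cluster
noisy = rng.normal(loc=0.0, size=(5, 8))    # mislabeled outliers
scores = fine_scores(np.vstack([clean, noisy]), np.zeros(55, dtype=int))
print(scores[:50].mean(), ">", scores[50:].mean())
```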
| accept | The paper proposes a method for identifying noisy examples by looking at their alignment with top eigenvector of the class specific feature covariance matrix. Reviewers have appreciated the simplicity of the method. Author responses were able to address the reviewers' concerns well and the paper is suitable to appear at the conference. | train | [
"nzOJOfTb-q",
"QSXZMZzXfu9",
"fSgOaTOU_vJ",
"v7uOJh4E9Z",
"q7kn78H_cwj",
"lbdxBcz56e0",
"CBDvvWwPEiY",
"zme6nYeSbpk",
"nKTJTXhAQGh",
"gLx75rjTG2K",
"Olj3Eu_yuZZ",
"7jaPLzbSE4D"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. To some degree, I agree with the claim that \"the situation in which ground-truth is not major is unnatural\". However, in my view, this unnatural situation could still happen in practice; for example, the labels could be maliciously flipped by some attachers.\n\nI think the authors h... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"QSXZMZzXfu9",
"fSgOaTOU_vJ",
"CBDvvWwPEiY",
"Olj3Eu_yuZZ",
"nKTJTXhAQGh",
"gLx75rjTG2K",
"7jaPLzbSE4D",
"nips_2021_QZpx42n0BWr",
"nips_2021_QZpx42n0BWr",
"nips_2021_QZpx42n0BWr",
"nips_2021_QZpx42n0BWr",
"nips_2021_QZpx42n0BWr"
] |
nips_2021_AHFfYwM7WGP | Residual2Vec: Debiasing graph embedding with random graphs | Graph embedding maps a graph into a convenient vector-space representation for graph analysis and machine learning applications. Many graph embedding methods hinge on a sampling of context nodes based on random walks. However, random walks can be a biased sampler due to the structural properties of graphs. Most notably, random walks are biased by the degree of each node, where a node is sampled proportionally to its degree. The implication of such biases has not been clear, particularly in the context of graph representation learning. Here, we investigate the impact of the random walks' bias on graph embedding and propose residual2vec, a general graph embedding method that can debias various structural biases in graphs by using random graphs. We demonstrate that this debiasing not only improves link prediction and clustering performance but also allows us to explicitly model salient structural properties in graph embedding.
| accept | The paper presents Residual2Vec, a technique for reducing structural biases for graph embedding. Existing methods use random walk-based sampling, which has a strong preference towards "hub" nodes. The paper draws an insightful analogy of how debiasing effectively happens in negative sampling in word2vec, which inspired the authors to develop Residual2Vec to compensate for structural biases in random walks. The reviewers are overall happy with the paper, with questions and concerns mostly regarding the presentation of the work. These questions are mostly clarified during the rebuttal, which the authors should include in the final version of the paper. The work makes a useful contribution to learning effective graph node embeddings with less dependency on node degrees.
"NNLkfTbtyvm",
"iSH9eiNX3nT",
"fchnHPuAoQU",
"1eInjROVhL",
"oPO_JQd6BLV",
"aqdSomnKJ5k",
"BUTeNlYRDMR",
"iLNlJJEute8",
"27l23TP1Ryz",
"NhHI1kOBI0C",
"WIBXte2y2q6",
"oJKLnTkC-Gu"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their response. I have increased my score (7: Good paper, accept). Overall, the paper makes a good contribution. Still, the paper could be enhanced with the addition of a more diverse set of baseline models.",
"The paper proposes an unsupervised graph representation learnin... | [
-1,
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"NhHI1kOBI0C",
"nips_2021_AHFfYwM7WGP",
"oPO_JQd6BLV",
"nips_2021_AHFfYwM7WGP",
"aqdSomnKJ5k",
"iLNlJJEute8",
"oJKLnTkC-Gu",
"1eInjROVhL",
"WIBXte2y2q6",
"iSH9eiNX3nT",
"nips_2021_AHFfYwM7WGP",
"nips_2021_AHFfYwM7WGP"
] |
nips_2021_0O5jpovbdHO | Benign Overfitting in Multiclass Classification: All Roads Lead to Interpolation | The growing literature on "benign overfitting" in overparameterized models has been mostly restricted to regression or binary classification settings; however, most success stories of modern machine learning have been recorded in multiclass settings. Motivated by this discrepancy, we study benign overfitting in multiclass linear classification. Specifically, we consider the following popular training algorithms on separable data: (i) empirical risk minimization (ERM) with cross-entropy loss, which converges to the multiclass support vector machine (SVM) solution; (ii) ERM with least-squares loss, which converges to the min-norm interpolating (MNI) solution; and, (iii) the one-vs-all SVM classifier. Our first key finding is that under a simple sufficient condition, all three algorithms lead to classifiers that interpolate the training data and have equal accuracy. When the data is generated from Gaussian mixtures or a multinomial logistic model, this condition holds under high enough effective overparameterization. Second, we derive novel error bounds on the accuracy of the MNI classifier, thereby showing that all three training algorithms lead to benign overfitting under sufficient overparameterization. Ultimately, our analysis shows that good generalization is possible for SVM solutions beyond the realm in which typical margin-based bounds apply.
| accept | This paper proves the equivalence between multi-class logistic regression and multi-class SVM, based on which it proves benign overfitting for the multi-class interpolator. The main concern from one of the reviewers is that the results derived for multi-class benign overfitting are worse than those from the standard technique (i.e., converting $k$-class classification to $k(k-1)/2$ 1-vs-1 binary classification problems). The authors have made a great effort to address the reviewers' questions and concerns. After the author response and reviewer discussion, the paper gathered enough support from the reviewers. Thus, I recommend acceptance.
"0LmZuGOqkfM",
"7ev3sl7ubQR",
"Ia0dDttWGxt",
"umYHjANoq1Q",
"6CXXFkOGSN",
"fR7RAVQSTTD",
"rk0BSqSmHE-",
"x0yTIj20wmG",
"AbhTRzLsP2",
"hVVubtfA6wU",
"QbJHgUhy1So",
"zN6bmYoy0E",
"taWWKWW0Z6c",
"n5P3pX1p0Ia",
"HfHmQ_bIsy",
"lu9I9grAt_R",
"GbyCTr4Boa",
"t61LppC95dL",
"LGy3RU1Fdm",
... | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
... | [
" Dear Reviewer,\n \nWe appreciate your time spent on our submission and engaging in the discussion below. \n \nWe believe that we have answered your question regarding implicit bias of CE and its connection to multiclass-SVM (established by Soudry et al. 2018), which motivates our investigations. As per your sugge... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"fhQK4SiHZxn",
"Ia0dDttWGxt",
"AbhTRzLsP2",
"6b_diGsAoz9",
"fhQK4SiHZxn",
"x0yTIj20wmG",
"nips_2021_0O5jpovbdHO",
"LGy3RU1Fdm",
"hVVubtfA6wU",
"n5P3pX1p0Ia",
"zN6bmYoy0E",
"HfHmQ_bIsy",
"nips_2021_0O5jpovbdHO",
"wh6M4I3H50Z",
"lu9I9grAt_R",
"GbyCTr4Boa",
"t61LppC95dL",
"xF6vvpTHS-"... |
nips_2021_yrqn9rQO2YT | Instance-Dependent Bounds for Zeroth-order Lipschitz Optimization with Error Certificates | Francois Bachoc, Tom Cesari, Sébastien Gerchinovitz | accept | This paper considers the zeroth-order optimization problem with error certificates for Lipschitz functions. It generalized existing work to higher dimensions and showed that the algorithm is certifiable. The technical contribution is solid. I am happy to recommend acceptance, but I also note that the reviewers have pointed out that the technical novelty is limited in the sense that it owes a lot to Bouttier et al. (2020) which obtained the crucial quantity S_C(f,\varepsilon). | train | [
"7l1KT0laEs",
"U4gAqh2zVbV",
"m7SRTZwxyiN",
"G6eSf6OQ0z1",
"c6VdWAkVdU9",
"Fx3nH8W9cao",
"1200G0TKcEr",
"8XRuhKIGEtu",
"xXGpudnoGFO",
"-qZt-k5Jgr",
"fOMAOT9QT8_",
"r10OHR2wiK",
"mkFzL7PDnow",
"0OPK4rcIuH5",
"yw8oaTD8ffj",
"muwdWt8tQ8L",
"-cemEFnjnJV",
"9OzAD6ZnIDM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_rev... | [
" Thanks a lot for carefully answering my questions. I believe that incorporating these answers to the paper will improve the presentation of the paper.",
" In this paper, the authors consider the problem of finding global solutions\n to a possibly non-convex optimization problem under possibly non-convex\n co... | [
-1,
7,
7,
7,
-1,
7,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
3,
4,
4,
-1,
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2
] | [
"8XRuhKIGEtu",
"nips_2021_yrqn9rQO2YT",
"nips_2021_yrqn9rQO2YT",
"nips_2021_yrqn9rQO2YT",
"0OPK4rcIuH5",
"nips_2021_yrqn9rQO2YT",
"r10OHR2wiK",
"U4gAqh2zVbV",
"m7SRTZwxyiN",
"nips_2021_yrqn9rQO2YT",
"nips_2021_yrqn9rQO2YT",
"9OzAD6ZnIDM",
"-qZt-k5Jgr",
"-cemEFnjnJV",
"G6eSf6OQ0z1",
"Fx... |
nips_2021_Uwh-v1HSw-x | Training Neural Networks with Fixed Sparse Masks | Yi-Lin Sung, Varun Nair, Colin A. Raffel | accept | The paper studies an approach to pre-selecting a subset of parameters of a neural network to update during training. The idea is to look at (an empirical approximation to) the Fisher information of each parameter, and choose the parameters with the k-largest Fisher information values. Only these parameters are updated during training. The paper evaluates this idea for transfer learning (updating only a subset of the parameters for new tasks), sparse training from scratch, distributed training, and efficient checkpointing (where we save a sequence of sparse updates to the model). The proposed method achieves performance similar to dense training, with fewer updated parameters (between 0.5% and 2% depending on the task), and some performance improvements compared to several previous sparse mask generation techniques.
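A toy sketch of that selection rule (not the paper's code; a real implementation computes per-example gradients of a neural network's loss): estimate each parameter's empirical Fisher information as the average squared gradient over samples, keep the k largest, and zero out updates elsewhere.

```python
import numpy as np

rng = np.random.default_rng(0)
n_params, n_samples, k = 100, 200, 5

# Per-sample gradients w.r.t. the parameters (synthetic here, with a
# few coordinates carrying much larger signal than the rest).
grads = 0.1 * rng.normal(size=(n_samples, n_params))
grads[:, :k] += rng.normal(size=(n_samples, k))

fisher = np.mean(grads ** 2, axis=0)        # empirical Fisher (diagonal)
mask = np.zeros(n_params, dtype=bool)
mask[np.argsort(fisher)[-k:]] = True        # k largest Fisher values

theta = np.zeros(n_params)
theta -= 0.1 * np.where(mask, grads.mean(axis=0), 0.0)  # sparse update
print("updated coordinates:", np.flatnonzero(mask))
```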
The reviewers all appreciated the paper’s simplicity, clarity and reasonably thorough experimental validation. Questions raised during the review included the novelty of the work (given that the Fisher information has been widely used as a measure of parameter importance, including for neural network pruning), and the relationship to pruning / sparsification methods. The authors’ response points out that although the use of Fisher information is not a novelty of this paper, the setting is different: this paper selects a subset of parameters to update, rather than selecting a subset of parameters to retain, which makes it applicable to transfer learning in dense models.
After several rounds of discussion, the reviewers converged to a recommendation to accept. While there are some limitations to the novelty of the submission, the proposed approach is practical, clearly described and well-substantiated. | train | [
"vTOxJX5ZQ7",
"qK5nbPtcHA4",
"zi7jCGIALOJ",
"pwY_BcYvkxD",
"eRkEqe2iQO",
"OVIDdLex9xD",
"SfZiyD2vqS6",
"EwcCSJ1o-47",
"5Jg-GS14TcN",
"2esTEHzojKS",
"uaYtcRlPsiR"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their feedback and clarification. \nI think the proposed modifications to the figure will improve the quality of the manuscript and I keep my recommendation to accept.\n\nAs for **L1**, I thank the authors for the clarification. Adding a small remark saying that all task in GLUE are singl... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
3,
2
] | [
"OVIDdLex9xD",
"SfZiyD2vqS6",
"nips_2021_Uwh-v1HSw-x",
"eRkEqe2iQO",
"zi7jCGIALOJ",
"uaYtcRlPsiR",
"2esTEHzojKS",
"5Jg-GS14TcN",
"nips_2021_Uwh-v1HSw-x",
"nips_2021_Uwh-v1HSw-x",
"nips_2021_Uwh-v1HSw-x"
] |
nips_2021_RzYrn625bu8 | VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text | We present a framework for learning multimodal representations from unlabeled data using convolution-free Transformer architectures. Specifically, our Video-Audio-Text Transformer (VATT) takes raw signals as inputs and extracts multimodal representations that are rich enough to benefit a variety of downstream tasks. We train VATT end-to-end from scratch using multimodal contrastive losses and evaluate its performance by the downstream tasks of video action recognition, audio event classification, image classification, and text-to-video retrieval. Furthermore, we study a modality-agnostic single-backbone Transformer by sharing weights among the three modalities. We show that the convolution-free VATT outperforms state-of-the-art ConvNet-based architectures in the downstream tasks. Especially, VATT's vision Transformer achieves the top-1 accuracy of 82.1% on Kinetics-400, 83.6% on Kinetics-600, 72.7% on Kinetics-700, and 41.1% on Moments in Time, new records while avoiding supervised pre-training. Transferring to image classification leads to 78.7% top-1 accuracy on ImageNet compared to 64.7% by training the same Transformer from scratch, showing the generalizability of our model despite the domain gap between videos and images. VATT's audio Transformer also sets a new record on waveform-based audio event recognition by achieving the mAP of 39.4% on AudioSet without any supervised pre-training.
| accept | The merits and weaknesses of this submission were discussed at length in back-and-forth conversations with the authors. The overall sense is that, although the technical novelty of the submission is limited, the paper provides a valuable large-scale empirical study on sharing a single transformer backbone architecture across different modalities. Several reviewers comment on the importance of sharing such results with the community. The ACs find that this is particularly relevant given that few labs are equipped with the resources to carry out such a study (thus here the ACs dissent from the opposite point made by some reviewers). The submission also introduces DropToken, a simple strategy to significantly reduce the computational cost of training large-capacity transformers. Based on these contributions, the ACs agree to accept this submission but encourage the authors to incorporate the feedback provided by reviewers, especially as it concerns the motivation for using transformers and the novelty claims. | train | [
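DropToken, as summarized above, simply discards a random subset of input tokens before the Transformer, which shrinks the (quadratic-in-tokens) attention cost; a minimal sketch with hypothetical shapes, not the authors' implementation:

```python
import numpy as np

def drop_token(tokens, drop_rate, rng):
    """tokens: (num_tokens, dim). Keep a random subset of the rows."""
    n = tokens.shape[0]
    keep = max(1, int(round(n * (1.0 - drop_rate))))
    idx = rng.choice(n, size=keep, replace=False)
    return tokens[np.sort(idx)]             # preserve temporal/spatial order

rng = np.random.default_rng(0)
x = rng.normal(size=(196, 64))              # e.g., 14x14 patch tokens
print(drop_token(x, 0.5, rng).shape)        # (98, 64)
```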
"scf0b3ih0U_",
"zbQV-G1D0iq",
"iM7DieeX3y6",
"XNGLWVlLxSf",
"l7h1i4M-Ux",
"Dc7fbf7ugPc",
"MTikccJUqW",
"iKH7Daqp24d",
"1LDqZoRlWC8",
"ddoOwgYzH3d",
"V1pelnVD9TD",
"PHHcbtfCO2T",
"R_LaqJY8pcM",
"XwsHZMkYR_",
"YyaJfUZZ3L5",
"q7Q2I5pp5Ob",
"ciWwg7gqCPC",
"NifVr0KSCmm",
"WrMz-J29WdZ"... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"a... | [
" ### One architecture and one backbone\nBy \"...share one architecture...\" we mean Transformer, since Transformers are more efficient on the advanced accelerators because of their MatMul-heavy operations. In our work, both modality-specific and modality-agnostic settings use one architecture, Transformer.\n\nOn t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"zbQV-G1D0iq",
"iM7DieeX3y6",
"R_LaqJY8pcM",
"MTikccJUqW",
"1LDqZoRlWC8",
"XwsHZMkYR_",
"NifVr0KSCmm",
"nips_2021_RzYrn625bu8",
"ciWwg7gqCPC",
"nips_2021_RzYrn625bu8",
"nips_2021_RzYrn625bu8",
"cpwayDjgvk",
"WrMz-J29WdZ",
"YyaJfUZZ3L5",
"q7Q2I5pp5Ob",
"oVlYKXsffHh",
"ddoOwgYzH3d",
... |
nips_2021_t-7Jx48oaG | Analyzing the Generalization Capability of SGLD Using Properties of Gaussian Channels | Optimization is a key component for training machine learning models and has a strong impact on their generalization. In this paper, we consider a particular optimization method---the stochastic gradient Langevin dynamics (SGLD) algorithm---and investigate the generalization of models trained by SGLD. We derive a new generalization bound by connecting SGLD with Gaussian channels found in information and communication theory. Our bound can be computed from the training data and incorporates the variance of gradients for quantifying a particular kind of "sharpness" of the loss landscape. We also consider a closely related algorithm with SGLD, namely differentially private SGD (DP-SGD). We prove that the generalization capability of DP-SGD can be amplified by iteration. Specifically, our bound can be sharpened by including a time-decaying factor if the DP-SGD algorithm outputs the last iterate while keeping other iterates hidden. This decay factor enables the contribution of early iterations to our bound to reduce with time and is established by strong data processing inequalities---a fundamental tool in information theory. We demonstrate our bound through numerical experiments, showing that it can predict the behavior of the true generalization gap.
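For reference, the SGLD update analyzed in such work takes the standard form (generic notation)
$$ w_{t+1} = w_t - \eta\, g_t(w_t) + \sqrt{\tfrac{2\eta}{\beta}}\, \xi_t, \qquad \xi_t \sim \mathcal{N}(0, I), $$
where $g_t$ is a (mini-batch) gradient estimate, $\eta$ a step size, and $\beta$ an inverse temperature; the injected Gaussian noise is what allows each iteration to be viewed as transmission through a Gaussian channel, the connection exploited in this paper.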
| accept | The paper provides new generalization guarantees for SGLD, which incorporate the gradient variance, giving a more adaptive flavor. The paper also considers differentially private stochastic optimization as an application of their results.
We had quite a bit of discussion on this paper and we thank the authors for providing the reviewers with additional details, which helped to clear up some confusion. After deliberation, we agreed that the paper should be accepted.
Please do carefully go over the reviews and discussion and incorporate any suggestions into the final manuscript. | train | [
"TLicLTjG-5P",
"nCjtlyxvEFQ",
"_cnKsqwySSg",
"DkkXveRv5BK",
"LfvOtUOpHMZ",
"mpFIG0ms2oG",
"EgBml9_MMXR",
"XUmIA7Oosyx",
"smZH5QrkyQ2",
"hQJCePcWtjt",
"kmmL8x55VST",
"6W5ukfvby3w"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper presents some new results on bounding the generalization error of the SGLD algorithm using the properties of Gaussian channels. For SGLD without sample replacement, strong data processing inequality for Gaussian channel is used to show a time-wise contraction of the upper bound. For the general SGLD wit... | [
6,
-1,
-1,
-1,
6,
6,
-1,
7,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
4,
3,
-1,
4,
-1,
-1,
-1,
-1
] | [
"nips_2021_t-7Jx48oaG",
"_cnKsqwySSg",
"DkkXveRv5BK",
"kmmL8x55VST",
"nips_2021_t-7Jx48oaG",
"nips_2021_t-7Jx48oaG",
"6W5ukfvby3w",
"nips_2021_t-7Jx48oaG",
"LfvOtUOpHMZ",
"mpFIG0ms2oG",
"TLicLTjG-5P",
"XUmIA7Oosyx"
] |
nips_2021_fEImgFxKU63 | Learning to Schedule Heuristics in Branch and Bound | Primal heuristics play a crucial role in exact solvers for Mixed Integer Programming (MIP). While solvers are guaranteed to find optimal solutions given sufficient time, real-world applications typically require finding good solutions early on in the search to enable fast decision-making. While much of MIP research focuses on designing effective heuristics, the question of how to manage multiple MIP heuristics in a solver has not received equal attention. Generally, solvers follow hard-coded rules derived from empirical testing on broad sets of instances. Since the performance of heuristics is problem-dependent, using these general rules for a particular problem might not yield the best performance. In this work, we propose the first data-driven framework for scheduling heuristics in an exact MIP solver. By learning from data describing the performance of primal heuristics, we obtain a problem-specific schedule of heuristics that collectively find many solutions at minimal cost. We formalize the learning task and propose an efficient algorithm for computing such a schedule. Compared to the default settings of a state-of-the-art academic MIP solver, we are able to reduce the average primal integral by up to 49% on two classes of challenging instances.
| accept | In combinatorial optimization, the design and selection of branching heuristics play a key role. Such heuristics are often carefully hand-crafted and their performance is very much dependent on the particular instances. In fact, the time performance of branching heuristics can vary exponentially from instance to instance. The paper proposes a data-driven approach for learning heuristic schedules for exact (i.e., with optimality guarantees) Mixed Integer Programming. They formulate the scheduling problem and propose an efficient data collection strategy. The schedule is problem/instance dependent and results show significant improvement with respect to the primal gap, a key measure used in exact MIPs. There was consensus about this paper and the reviewers carefully considered the authors' feedback.
"X991soCI4_",
"YKu0t-VExJt",
"VHZkfNJbJMl",
"sz9wybr_7m4",
"ekZ91sLDPZ",
"_cluaFv-Hmg",
"B10VvdC0J34",
"8rOd5-U5EZQ",
"Rx5wnNArljg",
"36YRbs0s3uf",
"dyyowqcKf2",
"chTCrJ2aQLU"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed comments! I'll maintain my evaluation that the paper should be accepted.",
"The paper studies the scheduling of multiple heuristics inside of a mixed-integer programming solver. The authors formulate the learning problem exactly, and then present a heuristic method with greater scalab... | [
-1,
7,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
4,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"36YRbs0s3uf",
"nips_2021_fEImgFxKU63",
"Rx5wnNArljg",
"B10VvdC0J34",
"nips_2021_fEImgFxKU63",
"8rOd5-U5EZQ",
"dyyowqcKf2",
"ekZ91sLDPZ",
"YKu0t-VExJt",
"chTCrJ2aQLU",
"nips_2021_fEImgFxKU63",
"nips_2021_fEImgFxKU63"
] |
nips_2021_3EwcMzmUbNd | On Training Implicit Models | Zhengyang Geng, Xin-Yu Zhang, Shaojie Bai, Yisen Wang, Zhouchen Lin | accept | Many current architectures use "implicit" layers where a solver (e.g. root finder or ODE solver) gives the output. Exactly differentiating these models requires implicit differentiation in the backprop step, which is expensive. This paper introduces a novel "phantom" gradient which is easy to compute, and shows that the inner-product of this gradient with the true gradient is >0, meaning it is a descent direction. An empirical verification shows that optimization using this gradient succeeds.
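The reviews of this paper describe the phantom gradient as built from a damped, truncated Neumann series for the inverse-Jacobian term in the exact implicit gradient. A numerical sketch of that core approximation (generic notation, not the paper's code): the exact backward pass involves a matrix inverse of the form $(I - \lambda J)^{-1}$, which equals $\sum_{i \ge 0} (\lambda J)^i$ when the series converges.

```python
import numpy as np

rng = np.random.default_rng(0)
d, lam, k = 10, 0.8, 20

A = rng.normal(size=(d, d))
J = 0.5 * A / np.linalg.norm(A, 2)      # Jacobian with spectral norm 0.5

exact = np.linalg.inv(np.eye(d) - lam * J)

# Damped Neumann series truncated to k + 1 terms: sum_i (lam * J)^i.
approx, term = np.zeros((d, d)), np.eye(d)
for _ in range(k + 1):
    approx += term
    term = term @ (lam * J)

print("truncation error:", np.linalg.norm(exact - approx, 2))
```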
Reviewers generally felt the paper was novel, well-organized, convincing, clear, and had experiments that show the method truly works. A few weaknesses were identified. First, there were some technical issues in the theoretical results. From the author response, these appear to be relatively minor and specific corrections have been identified. Second, there were some concerns about the rigor of the experiments (e.g. the lack of error bars in the small-scale experiment). Third, there were some concerns about notation and related works. For these latter two concerns the authors have responded at length in the feedback, and this has improved reviewer confidence about these issues. We trust the authors will integrate this content into the main paper, which will strengthen it considerably.
"bqNAlxzf4H",
"_ic1AzLnbI2",
"MHoNRDJbu4",
"aR6HPolYMAi",
"1sPU_R61Vq",
"oBS_j7wvuvu",
"_O3I5-7g95a",
"vU9H4j33LU",
"YdnfGlJUEUQ",
"oCYXA2_OsoE"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper presents a method called phantom gradient to increase the efficiency in training implicit deep models by approximating the exact gradient. The authors first exploit Neumann series for the damped inverse matrix in implicit gradient calculation. They then truncate the series to k terms to arrive at finite ... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"nips_2021_3EwcMzmUbNd",
"vU9H4j33LU",
"_O3I5-7g95a",
"oCYXA2_OsoE",
"bqNAlxzf4H",
"YdnfGlJUEUQ",
"oBS_j7wvuvu",
"nips_2021_3EwcMzmUbNd",
"nips_2021_3EwcMzmUbNd",
"nips_2021_3EwcMzmUbNd"
] |
nips_2021_EI2KOXKdnP | MLP-Mixer: An all-MLP Architecture for Vision | Convolutional Neural Networks (CNNs) are the go-to model for computer vision. Recently, attention-based networks, such as the Vision Transformer, have also become popular. In this paper we show that while convolutions and attention are both sufficient for good performance, neither of them are necessary. We present MLP-Mixer, an architecture based exclusively on multi-layer perceptrons (MLPs). MLP-Mixer contains two types of layers: one with MLPs applied independently to image patches (i.e. "mixing" the per-location features), and one with MLPs applied across patches (i.e. "mixing" spatial information). When trained on large datasets, or with modern regularization schemes, MLP-Mixer attains competitive scores on image classification benchmarks, with pre-training and inference cost comparable to state-of-the-art models. We hope that these results spark further research beyond the realms of well established CNNs and Transformers.
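A bare-bones sketch of the two layer types described in this abstract (LayerNorm is omitted and sizes are arbitrary; this is an illustration, not the released model): the token-mixing MLP acts along the patch axis with weights shared across channels, and the channel-mixing MLP acts along the channel axis with weights shared across patches.

```python
import numpy as np

def gelu(x):
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def mlp(x, w1, w2):
    return gelu(x @ w1) @ w2

rng = np.random.default_rng(0)
p, c, h = 196, 64, 128                      # patches, channels, hidden width
X = rng.normal(size=(p, c))                 # per-patch feature table

# Token mixing: transpose so the MLP mixes information across patches.
Wt1, Wt2 = 0.02 * rng.normal(size=(p, h)), 0.02 * rng.normal(size=(h, p))
X = X + mlp(X.T, Wt1, Wt2).T                # skip connection

# Channel mixing: the MLP mixes information across channels per patch.
Wc1, Wc2 = 0.02 * rng.normal(size=(c, h)), 0.02 * rng.normal(size=(h, c))
X = X + mlp(X, Wc1, Wc2)
print(X.shape)                              # (196, 64): one Mixer block
```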
| accept | Initially, the paper received scores 7, 7, 6, and 5. All reviewers consider the idea very interesting, the experiments comprehensive, the paper well-written, and the results strong given the simplicity of the model. At the same time, multiple concerns were raised, specifically pointing out limitations of the model (e.g., inability to handle multiple resolutions and tasks such as detection/segmentation; the model is data-hungry, parameter-inefficient, and tailored to specialized hardware/TPUs). During the discussion phase, two reviewers were disappointed by the author response, which did not adequately address questions such as reporting ImageNet results from other architectures, adopting the same training procedure as in DeiT, and the overclaimed significance of results. They downgraded their rating from 7 to 6, still recommending borderline acceptance. While the concerns raised by the reviewers are all legitimate, the AC considers that the strengths of the paper outweigh its weaknesses, and agrees with the majority that it passes the acceptance bar of NeurIPS. In particular, the proposed architecture has the potential to make an impact and inspire further research in the field, as it is conceptually simple yet achieves strong results. The authors should include in the camera-ready the discussion based on reviewers' comments, properly convey the limitations of the architecture, and also address the two ethical concerns pointed out in the ethics review.
"BsvBcOKgwYk",
"BvKhcVyVBex",
"Nx8d8llfCt",
"4n97LFPOonR",
"NHD0Z0TZQRs",
"3GerExO99b",
"R9_Sje_UfR",
"WJqiQzda4Ss",
"cvQsOb4Xa8",
"wmxfByzc2G3",
"QxrH9hiFr5K",
"dZRE9sOOTSm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response, the paper proposes an interesting idea however the rebuttal partially answer my concerns:\n\n- The answer 2) does not satisfy me because it is easy and fast to report the results presented in previous papers.\n- The answer 3) does not satisfy me either. A lot of papers on vision trans... | [
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
-1,
5,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
"3GerExO99b",
"nips_2021_EI2KOXKdnP",
"NHD0Z0TZQRs",
"nips_2021_EI2KOXKdnP",
"nips_2021_EI2KOXKdnP",
"4n97LFPOonR",
"BvKhcVyVBex",
"QxrH9hiFr5K",
"dZRE9sOOTSm",
"nips_2021_EI2KOXKdnP",
"nips_2021_EI2KOXKdnP",
"nips_2021_EI2KOXKdnP"
] |
nips_2021_k_w-RCJ9kMw | A Framework to Learn with Interpretation | To tackle interpretability in deep learning, we present a novel framework to jointly learn a predictive model and its associated interpretation model. The interpreter provides both local and global interpretability about the predictive model in terms of human-understandable high level attribute functions, with minimal loss of accuracy. This is achieved by a dedicated architecture and well chosen regularization penalties. We seek for a small-size dictionary of high level attribute functions that take as inputs the outputs of selected hidden layers and whose outputs feed a linear classifier. We impose strong conciseness on the activation of attributes with an entropy-based criterion while enforcing fidelity to both inputs and outputs of the predictive model. A detailed pipeline to visualize the learnt features is also developed. Moreover, besides generating interpretable models by design, our approach can be specialized to provide post-hoc interpretations for a pre-trained neural network. We validate our approach against several state-of-the-art methods on multiple datasets and show its efficacy on both kinds of tasks.
| accept | All the reviewers agree that the paper has some nice and interesting ideas and it is indeed quite promising that the proposed method gives minimal accuracy loss while providing interpretability.
Having said that, a majority of them felt that a more thorough analysis of the method (e.g., ablation studies) and evaluation on at least one dataset on the scale of ImageNet is necessary to properly evaluate the utility of the proposed method. This is particularly so because of the large amount of existing work in this area and the limited novelty of the proposed method. I am sharing some of the key concerns and points that arose during the discussion period.
- As a problem on which there is earlier work (SENN, PrototypeDNN), restricting to small-scale datasets seemed limiting. The new results on CIFAR100 were nice to see, but results on one larger, more complex dataset (TinyImagenet, CUB, ImageNet) would have made it easier to assess the usefulness of the work (considering the method by itself has no new components; it is an aggregation of existing pieces).
- The paper seems to be missing analysis (ablation studies in the case of this work) where the results are shown with choice of different layers, or choice of different epochs where loss functions are introduced. These are important decisions in the framework, and how they impact the final result would have been nice to see (even if some of the choices did not yield strong results, and the method worked only for certain choices).
[Update] The authors' response dated 2nd September indeed satisfactorily addresses the two main issues of (i) evaluation on a larger dataset and (ii) analysis of the effect of different choices in the proposed framework. Since the reviewer/AC discussion had concluded by 30th August, this last authors' response was not noticed at the time of the original metareview. Given that all the reviewers agreed that the paper has interesting results, and that the two main criticisms have been satisfactorily addressed, I recommend acceptance of the paper. | test | [
"j2R1DE9Y3UX",
"Jq0cpqX_Ykm",
"F-UyZ83NOdH",
"g0-5xTkc80U",
"XfpKXW7yN8o",
"RVcRjX75-E",
"Czu8FNqregn",
"RMX0qtZq-4Z",
"CBFsnkkBAa",
"dwalii3wxdC",
"LsnnA85KfFp",
"IAt4g_3qCv",
"v6sLd60vO70"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Posting this update as a separate comment to ensure everyone is notified.\n\n**(UPDATE, Sep 01)** We have completed experiments about (1) how choice of hidden layers can affect different metrics and (2) how introduction of $L_{of}, L_{cd}$ at different points of training can affect the metrics. We discuss their r... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3
] | [
"Jq0cpqX_Ykm",
"XfpKXW7yN8o",
"CBFsnkkBAa",
"nips_2021_k_w-RCJ9kMw",
"RMX0qtZq-4Z",
"v6sLd60vO70",
"v6sLd60vO70",
"g0-5xTkc80U",
"IAt4g_3qCv",
"LsnnA85KfFp",
"nips_2021_k_w-RCJ9kMw",
"nips_2021_k_w-RCJ9kMw",
"nips_2021_k_w-RCJ9kMw"
] |
nips_2021_2pJZSVcSZz | One Loss for All: Deep Hashing with a Single Cosine Similarity based Learning Objective | Jiun Tian Hoe, Kam Woh Ng, Tianyu Zhang, Chee Seng Chan, Yi-Zhe Song, Tao Xiang | accept | The authors propose a novel deep hashing model with only a single learning objective, a simplification relative to most state-of-the-art approaches, which generally use many losses and regularizers. Two of the three reviewers liked its effectiveness and simplicity; they believe it would bring insightful contributions to our community. Reviewers liked the final formulation with a softmax-like loss function. They also found the use of batch normalization for code balancing elegant.
One of the reviewers raised certain concerns. In the rebuttal, the authors clarified that most of the negative arguments raised by the reviewer rest on misunderstandings and are not valid. It is clear that the concerns raised were already addressed in the main draft.
Overall, the merits of the paper justify publication.
"JPemut_LrHz",
"w68ZueuxsQG",
"m60ZbeE_-3l",
"QOEFXuLgS5r",
"2cChhs1A-Dj",
"jkQIsiYF9Y"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" $\\textbf{Larger loss with short code.}$ A very short code indeed challenges most hashing methods. However, in the Supplementary Material, Section D.3 (Page 6), we have shown in Table 4 that explicitly maximizing inter-class hamming distance (maxHD) or Hadamard matrix style for target code generation performs bet... | [
-1,
-1,
-1,
4,
7,
7
] | [
-1,
-1,
-1,
4,
3,
5
] | [
"jkQIsiYF9Y",
"QOEFXuLgS5r",
"2cChhs1A-Dj",
"nips_2021_2pJZSVcSZz",
"nips_2021_2pJZSVcSZz",
"nips_2021_2pJZSVcSZz"
] |
nips_2021_B4szfz7W7LU | Fast and accurate randomized algorithms for low-rank tensor decompositions | Linjian Ma, Edgar Solomonik | accept | This work contributes sketching-based alternating least squares algorithms for fast and accurate low-rank Tucker tensor decomposition. The main novel contributions are relative-error analyses of the error in the solution of the least squares systems when leverage-score sampling or TensorSketches are used to sketch the systems. The experimental evaluation is somewhat lacking, as it uses small and/or synthetic data sets, but the algorithms are shown to result in lower error than previous randomized algorithms for Tucker decompositions.
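For context on one of the sketching tools mentioned (this is the standard definition, not anything specific to this paper): the leverage score of row $i$ of a tall matrix $A$ is the $i$-th diagonal entry of $A(A^\top A)^{-1}A^\top$, computable from a thin QR factorization, and a least-squares system can be sketched by sampling rows proportionally to these scores with importance-weight rescaling.

```python
import numpy as np

def leverage_scores(A):
    """Row leverage scores of a tall, full-rank matrix A."""
    Q, _ = np.linalg.qr(A)                # thin QR: A = Q R
    return np.sum(Q ** 2, axis=1)         # diag of A (A^T A)^{-1} A^T

rng = np.random.default_rng(0)
A, b = rng.normal(size=(2000, 10)), rng.normal(size=2000)

p = leverage_scores(A)
p /= p.sum()
m = 100                                    # sketch size
idx = rng.choice(len(b), size=m, p=p)      # sample rows by leverage
w = 1.0 / np.sqrt(m * p[idx])              # importance-sampling weights
x_sk, *_ = np.linalg.lstsq(w[:, None] * A[idx], w * b[idx], rcond=None)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x_sk - x_ls))         # small: sketch ~ full solution
```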
"VGpvaNE8qKq",
"urGF0IxJ2ru",
"zl0pRx2x0Zi",
"q_arA1FKFcN",
"kcxg9nkf3nY",
"lW0a3elak_",
"rquGS9khlbc",
"BwKFUAGh9F9",
"0ooRvjUEMDZ",
"YTeYmoQ8hZ",
"vQ5wnWF6YmR"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose an Alternating Least Squares (ALS) scheme for fitting the Tucker tensor decomposition, based on sketching techniques. In particular, the proposed strategy is to update one of the factor matrices and the core tensor during each iteration, through the use of sketched rank-constrained linear least... | [
6,
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
3,
5,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_B4szfz7W7LU",
"nips_2021_B4szfz7W7LU",
"nips_2021_B4szfz7W7LU",
"nips_2021_B4szfz7W7LU",
"urGF0IxJ2ru",
"vQ5wnWF6YmR",
"YTeYmoQ8hZ",
"zl0pRx2x0Zi",
"VGpvaNE8qKq",
"nips_2021_B4szfz7W7LU",
"nips_2021_B4szfz7W7LU"
] |
nips_2021_UpfqzQtZ58 | Communication-efficient SGD: From Local SGD to One-Shot Averaging | Artin Spiridonoff, Alex Olshevsky, Ioannis Paschalidis | accept | This paper studies the oracle (gradient) complexity of the local SGD algorithm (and one-shot averaging as a special case). The paper makes two main contributions (see additional comments below): (1) a slightly improved complexity estimate for local SGD with non-equally spaced (linearly increasing) intermissions between communication rounds, and (2) a novel asymptotic result for one-shot averaging under PL and second-order smoothness assumptions.
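A toy sketch of the algorithmic template being analyzed (a scalar quadratic objective; step sizes and the communication schedule here are illustrative only): M workers run SGD locally and average their iterates only at communication times with linearly growing gaps, one-shot averaging being the extreme case of a single average at the end.

```python
import numpy as np

rng = np.random.default_rng(0)
M, T, lr = 8, 1000, 0.05
w = np.zeros(M)                                  # one scalar iterate per worker
comm_times = set(np.cumsum(np.arange(1, 50)))    # gaps 1, 2, 3, ... between rounds

for t in range(1, T + 1):
    g = (w - 1.0) + rng.normal(scale=0.5, size=M)  # noisy grad of (w - 1)^2 / 2
    w -= lr * g                                    # local SGD step on each worker
    if t in comm_times:
        w[:] = w.mean()                            # communication: average iterates

print("final average:", w.mean())                  # close to the minimizer w* = 1
```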
The paper has prompted lengthy discussions, both publicly and internally among the reviewers.
The main focus of the paper is currently on result (1). However, all reviewers unanimously find this result not very significant, since, for example, a discussion of a corresponding lower complexity bound is missing and thus a log-factor improvement seems very marginal. For the final version, we urge the authors
- to carefully discuss the improvement over Theorem 2 (strongly convex case) in [[Woodworth et al, ICML 2020](https://arxiv.org/pdf/2002.07839.pdf)] (essentially, it seems the new result improves only under the condition $\kappa \leq \log (T)$).
- the reviewers mentioned the suboptimal dependence on e.g. the condition number and found the authors' response (speculating on an inherent trade-off) very interesting. However, I would strongly encourage the authors to be very precise when including such claims, as the trade-off may just be a consequence of the proof technique.
The reviewers found result (2) interesting. Although strong assumptions are being made at the moment, the result could possibly stimulate further research.
To conclude, the contribution regarding (2), as well as the proposed modifications in the discussion with the reviewers (including the discussion on FedAC and additional numerical results), gave the impetus to argue for acceptance of this work.
"XmFUABndMvG",
"0t-_gmG1pOH",
"UIN0CoLXZdV",
"8BSEugMV09w",
"uqXbKgEHZsa",
"1wVRMyb-ttv",
"8V6GAQBlAx3",
"gfhVEyV0J-7",
"QZ7sQlNNBLZ",
"s60w7BYVavt",
"NoMntmUwxxL",
"mUbW2MoUaCf",
"Df76kz7aAHG",
"mVzvxnw9HYm",
"Z1d-f_7ZHH5",
"8q_cINtUz7q",
"_N7am_vEkS",
"UeIhyzK74rH",
"vsyi6k_5eT... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_rev... | [
" > number of iterations that does not depend at all on epsilon\n\nShould be: \"number of _communications_ that does not depend at all on epsilon.\"",
" Thank you for your reply. A couple of counterpoints are below:\n\n> For the theoretical analysis, theories in non-iid settings could usually cover or be easily c... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7
] | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4
] | [
"0t-_gmG1pOH",
"UIN0CoLXZdV",
"i7tbmUUQ6gv",
"nips_2021_UpfqzQtZ58",
"1wVRMyb-ttv",
"gfhVEyV0J-7",
"vsyi6k_5eT",
"s60w7BYVavt",
"mUbW2MoUaCf",
"mUbW2MoUaCf",
"mVzvxnw9HYm",
"mVzvxnw9HYm",
"nips_2021_UpfqzQtZ58",
"Z1d-f_7ZHH5",
"CZ7PwN8YWOx",
"_N7am_vEkS",
"UeIhyzK74rH",
"3wHCbmVxr_... |
nips_2021_x2pF7Tt_S5u | Memory Efficient Meta-Learning with Large Images | Meta learning approaches to few-shot classification are computationally efficient at test time, requiring just a few optimization steps or single forward pass to learn a new task, but they remain highly memory-intensive to train. This limitation arises because a task's entire support set, which can contain up to 1000 images, must be processed before an optimization step can be taken. Harnessing the performance gains offered by large images thus requires either parallelizing the meta-learner across multiple GPUs, which may not be available, or trade-offs between task and image size when memory constraints apply. We improve on both options by proposing LITE, a general and memory efficient episodic training scheme that enables meta-training on large tasks composed of large images on a single GPU. We achieve this by observing that the gradients for a task can be decomposed into a sum of gradients over the task's training images. This enables us to perform a forward pass on a task's entire training set but realize significant memory savings by back-propagating only a random subset of these images which we show is an unbiased approximation of the full gradient. We use LITE to train meta-learners and demonstrate new state-of-the-art accuracy on the real-world ORBIT benchmark and 3 of the 4 parts of the challenging VTAB+MD benchmark relative to leading meta-learners. LITE also enables meta-learners to be competitive with transfer learning approaches but at a fraction of the test-time computational cost, thus serving as a counterpoint to the recent narrative that transfer learning is all you need for few-shot classification.
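The key observation in this abstract — the task gradient is a sum of per-image gradients, so back-propagating a random, rescaled subset gives an unbiased estimate — can be checked on a toy problem (hypothetical names, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, H = 1000, 5, 64                  # support size, params, backprop subset
X = rng.normal(size=(N, d))
theta = rng.normal(size=d)

# Per-example gradients of the losses 0.5 * (x_i . theta)^2.
G = (X @ theta)[:, None] * X
full = G.sum(axis=0)                   # exact task gradient (sum over images)

# LITE-style estimator: back-propagate H random examples, rescale by N / H.
est = np.mean(
    [(N / H) * G[rng.choice(N, size=H, replace=False)].sum(axis=0)
     for _ in range(2000)],
    axis=0,
)
print(np.linalg.norm(est - full) / np.linalg.norm(full))  # ~0 (Monte Carlo)
```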
| accept | The paper addresses the problem of meta-learning with large-scale images. In this setting, many of the existing algorithms require an intractable amount of memory. The proposed method tackles this challenge by simply sub-sampling the support set in the backward pass. Although the method is sensible, the empirical gain is marginal. After an active discussion phase, all reviewers agreed that the paper merits acceptance. | train | [
"YIH6TJAUYqJ",
"DYhTcKH_4D8",
"iWn2cphuhy0",
"MvjVwAarxAs",
"_-hXyiF0ZUN",
"PlqLJuqWSF",
"BN5uth2YVDO",
"dXMoYwT0nVc",
"aC213L29nuy",
"8lthDOdsp1p",
"-AeQiNQ9k75",
"qfDfFanUwy",
"NwYIxDWe9G",
"Hl2HrUR2Tyl",
"XsSWeD80ZE4",
"NYMOP-k8qG-"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your positive response. We are pleased to hear that your concerns have been addressed. Indeed, we will include the tables in the finsl version of the paper. Thanks again for raising the issues!",
"Meta-learning algorithms are not memory friendly since they have to load all the support samples during ... | [
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
4,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iWn2cphuhy0",
"nips_2021_x2pF7Tt_S5u",
"MvjVwAarxAs",
"_-hXyiF0ZUN",
"qfDfFanUwy",
"nips_2021_x2pF7Tt_S5u",
"DYhTcKH_4D8",
"aC213L29nuy",
"8lthDOdsp1p",
"NYMOP-k8qG-",
"PlqLJuqWSF",
"DYhTcKH_4D8",
"XsSWeD80ZE4",
"nips_2021_x2pF7Tt_S5u",
"nips_2021_x2pF7Tt_S5u",
"nips_2021_x2pF7Tt_S5u"... |
nips_2021_WYrC0Aentah | On the Power of Differentiable Learning versus PAC and SQ Learning | Emmanuel Abbe, Pritish Kamath, Eran Malach, Colin Sandon, Nathan Srebro | accept | This paper presents interesting results regarding the power of mini-batch SGD. All reviewers and the AC like the paper, and the AC recommends acceptance. | train | [
"ZhbvubqPO8v",
"_pVo8Bf6K3F",
"HE0rRl9ITPg",
"73P4dVFjr1q",
"3bUdE3moYQ",
"QwTDtI2RhDU",
"Ue7C8Ief_N8",
"OhMTrRjRZNq"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors characterize the power of minibatch SGD on differentiable models in comparison to SQ and \"PAC\" (learners with access to samples) models. They build upon previous work of Abbe & Sandon who had shown that online SGD on neural networks (batch size = 1) is 'universal' in the sense that if some function ... | [
7,
7,
-1,
-1,
-1,
-1,
7,
7
] | [
3,
3,
-1,
-1,
-1,
-1,
3,
2
] | [
"nips_2021_WYrC0Aentah",
"nips_2021_WYrC0Aentah",
"ZhbvubqPO8v",
"OhMTrRjRZNq",
"_pVo8Bf6K3F",
"Ue7C8Ief_N8",
"nips_2021_WYrC0Aentah",
"nips_2021_WYrC0Aentah"
] |
nips_2021_4Il6i0jdrvP | Can we globally optimize cross-validation loss? Quasiconvexity in ridge regression | Models like LASSO and ridge regression are extensively used in practice due to their interpretability, ease of use, and strong theoretical guarantees. Cross-validation (CV) is widely used for hyperparameter tuning in these models, but do practical methods minimize the true out-of-sample loss? A recent line of research promises to show that the optimum of the CV loss matches the optimum of the out-of-sample loss (possibly after simple corrections). It remains to show how tractable it is to minimize the CV loss. In the present paper, we show that, in the case of ridge regression, the CV loss may fail to be quasiconvex and thus may have multiple local optima. We can guarantee that the CV loss is quasiconvex in at least one case: when the spectrum of the covariate matrix is nearly flat and the noise in the observed responses is not too high. More generally, we show that the quasiconvexity status is independent of many properties of the observed data (response norm, covariate-matrix right singular vectors, and singular-value scaling) and has a complex dependence on the few that remain. We empirically confirm our theory using simulated experiments. (A numerical sketch of the CV loss as a function of the penalty follows this record.)
| accept | We thank the authors for the additional clarifications provided in the rebuttal. The reviewers were satisfied with the authors' response to the theoretical concerns and appreciated the additional experiments. All reviewers also agreed that the overall problem discussed in this work is of interest to the community and that the paper makes a novel contribution, albeit one of narrow importance. | train | [
"iFdTU4fIlG5",
"42Q_Bm5q99n",
"9nnlmbeeePT",
"Umkg1aT-Uv9",
"eM1ufttMxaK",
"MMVU8mFiBAk",
"kzP0cpgLIMW",
"hUiM0WoCXc",
"nWU04CMlOYL",
"cr1byefRzDC",
"q8poUiYntbT",
"li3iqO4gMPS",
"BDfSUUPAMH",
"NLbHVdhtl-6"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors consider the problem of analyzing the shape of the leave-one-out cross-validation loss, and establish sufficient conditions in terms of properties of the observation matrix $X$ under which the loss is quasi-convex, in a finite sample setting with assumptions related to the Fisherian ($n \\rightarrow \\... | [
4,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
5,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_4Il6i0jdrvP",
"9nnlmbeeePT",
"kzP0cpgLIMW",
"nips_2021_4Il6i0jdrvP",
"hUiM0WoCXc",
"q8poUiYntbT",
"NLbHVdhtl-6",
"BDfSUUPAMH",
"iFdTU4fIlG5",
"Umkg1aT-Uv9",
"Umkg1aT-Uv9",
"nips_2021_4Il6i0jdrvP",
"nips_2021_4Il6i0jdrvP",
"nips_2021_4Il6i0jdrvP"
] |
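
The record above studies the shape of the leave-one-out CV loss as a function of the ridge penalty. As a hedged numerical illustration (written for this dump, not the paper's code or experiments), the snippet below evaluates the exact LOO CV loss via the standard shortcut formula and scans a penalty grid; since a quasiconvex function of one variable is unimodal, finding more than one local minimum along the grid rules out quasiconvexity.

```python
import numpy as np

def loo_cv_loss(X: np.ndarray, y: np.ndarray, lam: float) -> float:
    """Exact leave-one-out CV loss of ridge regression at penalty lam, via
    e_i = (y_i - yhat_i) / (1 - h_ii) with hat matrix H = X (X'X + lam I)^{-1} X'."""
    n, d = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)
    e = (y - H @ y) / (1.0 - np.diag(H))
    return float(np.mean(e ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5)) * np.array([10.0, 3.0, 1.0, 1.0, 0.1])  # uneven spectrum
y = X @ rng.normal(size=5) + rng.normal(size=40)
lams = np.logspace(-4, 4, 200)
losses = np.array([loo_cv_loss(X, y, lam) for lam in lams])
# Count interior local minima along the grid; more than one indicates the
# CV loss is not quasiconvex in lam for this draw.
is_min = (losses[1:-1] < losses[:-2]) & (losses[1:-1] < losses[2:])
print(int(is_min.sum()), "interior local minimum/minima on the grid")
```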
nips_2021_Qijzj3WqUl3 | Adaptive Proximal Gradient Methods for Structured Neural Networks | We consider the training of structured neural networks where the regularizer can be non-smooth and possibly non-convex. While popular machine learning libraries have resorted to stochastic (adaptive) subgradient approaches, the use of proximal gradient methods in the stochastic setting has been little explored and warrants further study, in particular regarding the incorporation of adaptivity. Towards this goal, we present a general framework of stochastic proximal gradient descent methods that allows for arbitrary positive preconditioners and lower semi-continuous regularizers. We derive two important instances of our framework: (i) the first proximal version of \textsc{Adam}, one of the most popular adaptive SGD algorithms, and (ii) a revised version of ProxQuant for quantization-specific regularizers, which improves upon the original approach by incorporating the effect of preconditioners in the proximal mapping computations. We provide convergence guarantees for our framework and show that adaptive gradient methods can have faster convergence, in terms of constants, than vanilla SGD for sparse data. Lastly, we demonstrate the superiority of stochastic proximal methods compared to subgradient-based approaches via extensive experiments. Interestingly, our results indicate that the benefit of proximal approaches over subgradient counterparts is more pronounced for non-convex regularizers than for convex ones. (A sketch of a preconditioned proximal step for an l1 regularizer follows this record.)
| accept | Reviews were initially highly polarized: while Uvqk and REN2 gave positive scores, qualified the work as "well-executed", and noted that the "authors generally do a good job in arguing the empirical and theoretical advantages of the proposed method", reviewer 3MNN argued that "the convergence analysis is not technically sound". The authors provided a strong rebuttal that increased the score of reviewer 3MNN.
After the discussion period, the reviewers agree that this is a contribution that can be of high interest to the NeurIPS community, and as such my recommendation is to accept for publication. However, reviewer 3MNN raises some valid points regarding comparison of constants with existing results and regarding clarity of the statements that I encourage the authors to incorporate into the final version. | train | [
"K6O06TLX6Vd",
"9w-RBnZauVI",
"JqkJnhGrT4T",
"8MgjAR14PSb",
"3lAvyo3A9TU",
"MtDaGnrz2vU",
"yhvpWMdudO4",
"wSCdgVh4pR",
"SmHyylcoUWE",
"D0vwdOZ80eQ",
"rZAgYUoqBB3",
"MbuDdPpY2ZQ",
"kUvaNnq34_",
"2LiaAR_8f_0",
"pjWawbWtSy-",
"aHPXak0-sn2"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This work introduces a proximal algorithm for training neural networks. Specifically, the method allows the use of non-convex regularization functions and general PSD pre-conditioners in the objective, while guaranteeing convergence to a stationary point. Empirical results are provided for training sparse neural ... | [
7,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_Qijzj3WqUl3",
"MtDaGnrz2vU",
"8MgjAR14PSb",
"3lAvyo3A9TU",
"wSCdgVh4pR",
"SmHyylcoUWE",
"nips_2021_Qijzj3WqUl3",
"rZAgYUoqBB3",
"D0vwdOZ80eQ",
"2LiaAR_8f_0",
"K6O06TLX6Vd",
"aHPXak0-sn2",
"pjWawbWtSy-",
"yhvpWMdudO4",
"nips_2021_Qijzj3WqUl3",
"nips_2021_Qijzj3WqUl3"
] |
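
The record above derives a proximal version of Adam. As an illustration of what one step of such a method can look like for an l1 regularizer, here is a minimal single-tensor sketch written for this dump (the function name and state layout are assumptions, not the paper's implementation): a bias-corrected Adam preconditioner D = sqrt(v_hat) + eps, followed by the proximal map of lam * ||.||_1 in the D-weighted norm, which reduces to coordinate-wise soft-thresholding with per-coordinate thresholds lr * lam / D.

```python
import torch

@torch.no_grad()
def prox_adam_l1_step(p, grad, state, lr=1e-3, betas=(0.9, 0.999),
                      eps=1e-8, lam=1e-4):
    """One proximal-Adam step for R(p) = lam * ||p||_1 on one tensor.
    Initialize state = {"m": torch.zeros_like(p), "v": torch.zeros_like(p), "t": 0}."""
    state["t"] += 1
    t, m, v = state["t"], state["m"], state["v"]
    m.mul_(betas[0]).add_(grad, alpha=1 - betas[0])            # first moment
    v.mul_(betas[1]).addcmul_(grad, grad, value=1 - betas[1])  # second moment
    m_hat = m / (1 - betas[0] ** t)                            # bias corrections
    v_hat = v / (1 - betas[1] ** t)
    D = v_hat.sqrt() + eps                 # diagonal preconditioner
    z = p - lr * m_hat / D                 # preconditioned gradient step
    thr = lr * lam / D                     # per-coordinate threshold
    p.copy_(torch.sign(z) * torch.clamp(z.abs() - thr, min=0.0))  # soft-threshold
```

The final line is the closed-form prox of the l1 norm in the metric induced by D: coordinates with larger second-moment estimates receive both a smaller step and a smaller shrinkage threshold.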