paper_id | paper_title | paper_abstract | paper_acceptance | meta_review | label | review_ids | review_writers | review_contents | review_ratings | review_confidences | review_reply_tos |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2021_8v_4EVifBqX | Tensor decompositions of higher-order correlations by nonlinear Hebbian plasticity | Biological synaptic plasticity exhibits nonlinearities that are not accounted for by classic Hebbian learning rules. Here, we introduce a simple family of generalized nonlinear Hebbian learning rules. We study the computations implemented by their dynamics in the simple setting of a neuron receiving feedforward inputs. These nonlinear Hebbian rules allow a neuron to learn tensor decompositions of its higher-order input correlations. The particular input correlation decomposed and the form of the decomposition depend on the location of nonlinearities in the plasticity rule. For simple, biologically motivated parameters, the neuron learns eigenvectors of higher-order input correlation tensors. We prove that tensor eigenvectors are attractors and determine their basins of attraction. We calculate the volume of those basins, showing that the dominant eigenvector has the largest basin of attraction. We then study arbitrary learning rules and find that any learning rule that admits a finite Taylor expansion into the neural input and output also has stable equilibria at generalized eigenvectors of higher-order input correlation tensors. Nonlinearities in synaptic plasticity thus allow a neuron to encode higher-order input correlations in a simple fashion.
| accept | This paper shows how nonlinear Hebbian rules learn tensor decompositions of higher-order input correlations. This is an important contribution to the literature on Hebbian synaptic plasticity. The original submission missed important citations and related work; however, the revised introduction proposed by the authors better frames the work and its contributions. | train | [
"7osEFIyNt7",
"hmezRj5rDd2",
"Zpt4Ci23H8M",
"79x1TfY07Ku",
"1_SnFQ2UqJ",
"MBumV7eAtO",
"XmJEovieA-",
"B1OP5ifDVM5",
"i5C-gQ1cVX4",
"sLwArE4eYSt",
"7B3a45TOzU8"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for taking the time to address my questions in such details. I will maintain my \"good paper, accept\" score.",
"This work proposes to determine the solutions of \"generalized\" Hebbian rules, i.e., applied to tensor matrices, and analyze the stability of these solutions. The model starts by postulat... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
9,
7,
7
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"MBumV7eAtO",
"nips_2021_8v_4EVifBqX",
"XmJEovieA-",
"i5C-gQ1cVX4",
"sLwArE4eYSt",
"7B3a45TOzU8",
"B1OP5ifDVM5",
"hmezRj5rDd2",
"nips_2021_8v_4EVifBqX",
"nips_2021_8v_4EVifBqX",
"nips_2021_8v_4EVifBqX"
] |
nips_2021_6h14cMLgb5q | Online Adaptation to Label Distribution Shift | Machine learning models often encounter distribution shifts when deployed in the real world. In this paper, we focus on adaptation to label distribution shift in the online setting, where the test-time label distribution is continually changing and the model must dynamically adapt to it without observing the true label. This setting is common in many real world scenarios such as medical diagnosis, where disease prevalences can vary substantially at different times of the year. Leveraging a novel analysis, we show that the lack of true label does not hinder estimation of the expected test loss, which enables the reduction of online label shift adaptation to conventional online learning. Informed by this observation, we propose adaptation algorithms inspired by classical online learning techniques such as Follow The Leader (FTL) and Online Gradient Descent (OGD) and derive their regret bounds. We empirically verify our findings under both simulated and real world label distribution shifts and show that OGD is particularly effective and robust to a variety of challenging label shift scenarios.
| accept | The paper studies the problem of label distribution shifts in the online setting, and proposes two algorithms which guarantee low regret without ever observing the true labels or losses on the test data sequence.
The paper received uniformly positive reviews. The reviewers appreciated:
- novel setting (first study of label distribution shift in the online setting), potentially important for real-world machine learning problems
- original and novel theoretical results
- comprehensive experimental study
- clarity of presentation.
Given the above, I also recommend the paper to be accepted. | train | [
"bJi_JnlpIJ3",
"0PofS0XRRIc",
"xX224SG9qeW",
"7lDebY7l1lF",
"DjWrnweDBcF",
"zqUo_eNuCl",
"x7mWMsD_dGL",
"FpiwoOLJJIj",
"RfCWP--kCDX",
"Rmpmgy0QjKF",
"patex-kCj99",
"GTmIE-DAIkJ",
"ig5ZKT9k-Z",
"6k71RuXl_ee",
"IimuZoGURA"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the reference to online learning with dynamic regret. We agree that it is possible to define the evaluation metric for online adaptation in terms of dynamic regret and analyze existing algorithms against a dynamic benchmark, which can serve as a promising direction for future work. We will include a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
3
] | [
"0PofS0XRRIc",
"xX224SG9qeW",
"7lDebY7l1lF",
"RfCWP--kCDX",
"Rmpmgy0QjKF",
"x7mWMsD_dGL",
"patex-kCj99",
"IimuZoGURA",
"ig5ZKT9k-Z",
"6k71RuXl_ee",
"GTmIE-DAIkJ",
"nips_2021_6h14cMLgb5q",
"nips_2021_6h14cMLgb5q",
"nips_2021_6h14cMLgb5q",
"nips_2021_6h14cMLgb5q"
] |
nips_2021_ZYU8HEjL7KQ | One Explanation is Not Enough: Structured Attention Graphs for Image Classification | Attention maps are popular tools for explaining the decisions of convolutional neural networks (CNNs) for image classification. Typically, for each image of interest, a single attention map is produced, which assigns weights to pixels based on their importance to the classification. We argue that a single attention map provides an incomplete understanding since there are often many other maps that explain a classification equally well. In this paper, we propose to utilize a beam search algorithm to systematically search for multiple explanations for each image. Results show that there are indeed multiple relatively localized explanations for many images. However, naively showing multiple explanations to users can be overwhelming and does not reveal their common and distinct structures. We introduce structured attention graphs (SAGs), which compactly represent sets of attention maps for an image by visualizing how different combinations of image regions impact the confidence of a classifier. An approach to computing a compact and representative SAG for visualization is proposed via diverse sampling. We conduct a user study comparing the use of SAGs to traditional attention maps for answering comparative counterfactual questions about image classifications. Our results show that the users are significantly more accurate when presented with SAGs compared to standard attention map baselines.
| accept | The paper revisits (visual) explanations and proposes to utilize a beam search algorithm to systematically search for multiple explanations for each image. To have a compact representation of the explanations, it proposes to make use of structured attention graphs, showing how different combinations of image regions impact the confidence of a classifier. This is an interesting and natural idea and definitely has promise. While the reviews also identify several difficulties such as a representation bias in the experimental evaluation, they also agree that it is important to understand better how to present explanations to the user. And here the paper makes an interesting contribution, as even the most negative reviewer agrees. | train | [
"XnrwVddcaNq",
"OWpqUx7H8m6",
"Ttq-AcWOdSu",
"ja2c_j3AukR",
"u7EQ9Eih_GK",
"kC5uGwoZmMS",
"FPKmOeTHJ2Q",
"07CyFdcamMt",
"seTIgvg8cr",
"vrYuIyosFYy",
"KVDfwisUfw8",
"-UVa98N3tD2"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors have provided some responses related to my comments, however they were not really insightful. I still consider that the paper is interesting and can be of value in applications. My grade remains the same.",
"This paper states that a single saliency map is not enough and proposes to search for multip... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
4
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"seTIgvg8cr",
"nips_2021_ZYU8HEjL7KQ",
"FPKmOeTHJ2Q",
"u7EQ9Eih_GK",
"kC5uGwoZmMS",
"-UVa98N3tD2",
"OWpqUx7H8m6",
"KVDfwisUfw8",
"vrYuIyosFYy",
"nips_2021_ZYU8HEjL7KQ",
"nips_2021_ZYU8HEjL7KQ",
"nips_2021_ZYU8HEjL7KQ"
] |
nips_2021_tDqef76wFaO | Integrating Expert ODEs into Neural ODEs: Pharmacology and Disease Progression | Modeling a system's temporal behaviour in reaction to external stimuli is a fundamental problem in many areas. Pure Machine Learning (ML) approaches often fail in the small sample regime and cannot provide actionable insights beyond predictions. A promising modification has been to incorporate expert domain knowledge into ML models. The application we consider is predicting the patient health status and disease progression over time, where a wealth of domain knowledge is available from pharmacology. Pharmacological models describe the dynamics of carefully-chosen medically meaningful variables in terms of systems of Ordinary Differential Equations (ODEs). However, these models only describe a limited collection of variables, and these variables are often not observable in clinical environments. To close this gap, we propose the latent hybridisation model (LHM) that integrates a system of expert-designed ODEs with machine-learned Neural ODEs to fully describe the dynamics of the system and to link the expert and latent variables to observable quantities. We evaluated LHM on synthetic data as well as real-world intensive care data of COVID-19 patients. LHM consistently outperforms previous works, especially when few training samples are available such as at the beginning of the pandemic.
| accept | First, thanks to the authors for this engaging submission on an important topic. While the reviewers expressed some concern about understanding the method, the back and forth between the reviewers and authors appears to have addressed most of those concerns.
Reviewer 62Te was concerned about the source of the improvement, given all of the moving pieces in the proposed approach. The additional simulation study has shed more light on the behavior of the proposed algorithm.
Reviewer yPRn voiced concerns about novelty, and how the proposed approach is related to two recent physics + ML algorithms. The authors addressed these concerns in their response, and I encourage them to more explicitly compare these methods to their own in the manuscript.
| train | [
"77mZy4UASBR",
"pnvYbxqvH99",
"XpWNW7NQsai",
"wOshrbEINt",
"HpU8wSyvUfH",
"eFQAMWlZXSx",
"sRv_J-JNt1g",
"znU1USPj4h",
"yNqV5eQco4-",
"KGGHzmLFEC",
"GrbtkmQuYEt",
"CQLcHiubuUz",
"W8vlI-KThPu",
"xeSlvl60UOW",
"VihnYAeVj1_",
"85UdvI_snxG",
"mgdbCjYVP9U"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Hello, I wanted to express my apologies for being absent from the discussion so far. I wanted to reserve time to re-read the paper as well as consider the response in context. I kept putting this off and for this I apologize. I appreciate the time the authors spent to address my concerns and I would recommend tha... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"CQLcHiubuUz",
"HpU8wSyvUfH",
"mgdbCjYVP9U",
"85UdvI_snxG",
"xeSlvl60UOW",
"sRv_J-JNt1g",
"W8vlI-KThPu",
"yNqV5eQco4-",
"KGGHzmLFEC",
"mgdbCjYVP9U",
"85UdvI_snxG",
"VihnYAeVj1_",
"xeSlvl60UOW",
"nips_2021_tDqef76wFaO",
"nips_2021_tDqef76wFaO",
"nips_2021_tDqef76wFaO",
"nips_2021_tDqe... |
nips_2021_fDSDkiiXHzj | Shifted Chunk Transformer for Spatio-Temporal Representational Learning | Spatio-temporal representational learning has been widely adopted in various fields such as action recognition, video object segmentation, and action anticipation. Previous spatio-temporal representational learning approaches primarily employ ConvNets or sequential models, e.g., LSTM, to learn the intra-frame and inter-frame features. Recently, Transformer models have successfully dominated the study of natural language processing (NLP), image classification, etc. However, the pure-Transformer based spatio-temporal learning can be prohibitively costly on memory and computation to extract fine-grained features from a tiny patch. To tackle the training difficulty and enhance the spatio-temporal learning, we construct a shifted chunk Transformer with pure self-attention blocks. Leveraging the recent efficient Transformer design in NLP, this shifted chunk Transformer can learn hierarchical spatio-temporal features from a local tiny patch to a global video clip. Our shifted self-attention can also effectively model complicated inter-frame variances. Furthermore, we build a clip encoder based on Transformer to model long-term temporal dependencies. We conduct thorough ablation studies to validate each component and hyper-parameters in our shifted chunk Transformer, and it outperforms previous state-of-the-art approaches on Kinetics-400, Kinetics-600, UCF101, and HMDB51.
| accept | All reviewers recommend acceptance of this submission. The final ratings are: 6, 6, 6, 8.
All four reviewers acknowledge the strong empirical results, both in terms of accuracy as well as efficiency. The inclusion of results on motion-heavy datasets in the rebuttal period was deemed valuable and informative. It is recommended to add these results to the main paper.
Reviewers comment on the poor presentation, especially the section concerning the shifted attention. The authors should use the detailed feedback given by the reviewers in order to improve the technical discussion as well as the motivation for the approach.
The ACs agree on the recommendation of acceptance. | train | [
"T6pLYWack8Z",
"7MiP3f3B69U",
"CfoSAmIrCP8",
"y2qdeIe0aCW",
"eGdfZGCWtGT",
"3DuL9pMClMU",
"qQjWw2aEY99",
"Q_sY62SeJ_T",
"EgxuUu4eJgk",
"-2RV2pBJU6A",
"qJevHenYPQR",
"-zlDCLZZPDb",
"3lnInYglIk8",
"mLyWDHULpsH",
"yzIVSIw4XLY",
"hK-i0nn23q2",
"1yy_OwLfmyQ",
"BrH8CUX4lB3"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
" Thanks a lot for your suggestions! All feedbacks from reviewers lead to improvements of this paper. We will improve the paper’s presentation accordingly. Additional experiments will be added in the manuscript.\n\nCode and trained models are being released soon.",
" We appreciate your suggestions! We will add al... | [
-1,
-1,
-1,
6,
-1,
-1,
6,
8,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
4,
-1,
-1,
5,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"-2RV2pBJU6A",
"eGdfZGCWtGT",
"qJevHenYPQR",
"nips_2021_fDSDkiiXHzj",
"-zlDCLZZPDb",
"hK-i0nn23q2",
"nips_2021_fDSDkiiXHzj",
"nips_2021_fDSDkiiXHzj",
"nips_2021_fDSDkiiXHzj",
"yzIVSIw4XLY",
"1yy_OwLfmyQ",
"y2qdeIe0aCW",
"EgxuUu4eJgk",
"qQjWw2aEY99",
"EgxuUu4eJgk",
"qQjWw2aEY99",
"Q_s... |
nips_2021_dfMekDuTbNH | Faster proximal algorithms for matrix optimization using Jacobi-based eigenvalue methods | We consider proximal splitting algorithms for convex optimization problems over matrices. A significant computational bottleneck in many of these algorithms is the need to compute a full eigenvalue or singular value decomposition at each iteration for the evaluation of a proximal operator. In this paper we propose to use an old and surprisingly simple method due to Jacobi to compute these eigenvalue and singular value decompositions, and we demonstrate that it can lead to substantial gains in terms of computation time compared to standard approaches. We rely on three essential properties of this method: (a) its ability to exploit an approximate decomposition as an initial point, which in the case of iterative optimization algorithms can be obtained from the previous iterate; (b) its parallel nature which makes it a great fit for hardware accelerators such as GPUs, now common in machine learning, and (c) its simple termination criterion which allows us to trade-off accuracy with computation time. We demonstrate the efficacy of this approach on a variety of algorithms and problems, and show that, on a GPU, we can obtain 5 to 10x speed-ups in the evaluation of proximal operators compared to standard CPU or GPU linear algebra routines. Our findings are supported by new theoretical results providing guarantees on the approximation quality of proximal operators obtained using approximate eigenvalue or singular value decompositions.
| accept | The submission studies the application of the Jacobi method to compute approximate eigendecompositions or SVD in the context of solving composite optimization problems over matrices, where the objective functions are spectral. The main theoretical contribution of the paper shows how the error in the approximate decomposition (in terms of the $\ell_2$-norm of the off-diagonal entries) propagates into the analysis of proximal gradient descent and proximal accelerated gradient descent. The AC agrees with the reviewers that this is an interesting result, albeit fairly simple given previous work on inexact oracles for proximal methods. Moreover, the reviewers and the AC all find that the paper is well-written, making the technical contribution easy to grasp.
The main practical contribution is an experimental evaluation of the resulting Jacobi-based method against the analogous QR-based methods, which generally shows the superiority of the Jacobi method. This is partly explained by the fact that the Jacobi method can effectively use previous decompositions as warm starts for the next. The AC agrees with reviewers BV6g and KM7u that the significance of such experiments would be increased by including larger matrices and real-world datasets.
In conclusion, the AC believes that the interesting and well-explained theoretical contribution, together with the positive results on the experimental part, outweigh the lack of larger experiments and recommends that the paper be accepted. | train | [
"7jxwtb5Kx2t",
"NaTirttM8mH",
"bYhREbpyzK",
"79EMRH6vwZ",
"FQzytGiXjRP",
"GQyotLGlrfp",
"dPxG6q05Ia0",
"xgM9t74WYLp",
"c6_mf9qMgZ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work proposes to apply a Jacobi-based approximate eigenvalue decomposition method in proximal slitting algorithms for minimization of convex spectral function with matrix variable. The adopted Jacobi-based method is used to compute proximal operators in each iteration, and by taking the previous iterate as th... | [
6,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nips_2021_dfMekDuTbNH",
"GQyotLGlrfp",
"c6_mf9qMgZ",
"xgM9t74WYLp",
"dPxG6q05Ia0",
"7jxwtb5Kx2t",
"nips_2021_dfMekDuTbNH",
"nips_2021_dfMekDuTbNH",
"nips_2021_dfMekDuTbNH"
] |
nips_2021_Ah5CMODl52 | Decrypting Cryptic Crosswords: Semantically Complex Wordplay Puzzles as a Target for NLP | Cryptic crosswords, the dominant crossword variety in the UK, are a promising target for advancing NLP systems that seek to process semantically complex, highly compositional language. Cryptic clues read like fluent natural language but are adversarially composed of two parts: a definition and a wordplay cipher requiring character-level manipulations. Expert humans use creative intelligence to solve cryptics, flexibly combining linguistic, world, and domain knowledge. In this paper, we make two main contributions. First, we present a dataset of cryptic clues as a challenging new benchmark for NLP systems that seek to process compositional language in more creative, human-like ways. After showing that three non-neural approaches and T5, a state-of-the-art neural language model, do not achieve good performance, we make our second main contribution: a novel curriculum approach, in which the model is first fine-tuned on related tasks such as unscrambling words. We also introduce a challenging data split, examine the meta-linguistic capabilities of subword-tokenized models, and investigate model systematicity by perturbing the wordplay part of clues, showing that T5 exhibits behavior partially consistent with human solving strategies. Although our curricular approach considerably improves on the T5 baseline, our best-performing model still fails to generalize to the extent that humans can. Thus, cryptic crosswords remain an unsolved challenge for NLP systems and a potential source of future innovation.
| accept | The submission introduces a new task for solving cryptic crossword problems. The authors introduce carefully constructed splits of the dataset to test generalization, and show that the task is very challenging for existing pretrained models. The reviewers agree that the paper should not be penalized because of a recent similar arxiv paper from Efrat et al. 2021, and it is great that they acknowledged this paper in the submission. The submission contributes an interesting new problem that may inspire future work, and I recommend acceptance. | train | [
"7AtT9tublFp",
"QzHJa8YH2Z",
"tK37D_OeyON",
"MeNW-yb8fuz",
"oM1W2yDYxsP",
"ELWuMDae_u",
"d6X9ngLleua",
"hTSNGQzdbt7",
"xNsGkJ9OgIA",
"phaHIdCNeYh"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your responses. I did not initially observe quite how contemporaneous this work and the Efrat et al paper were. In light of that my review's criticism of this manuscript was overly negative regarding the novelty of its contribution. I've updated my score to reflect this and apologize for the over... | [
-1,
6,
8,
-1,
-1,
-1,
-1,
-1,
7,
8
] | [
-1,
5,
5,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"ELWuMDae_u",
"nips_2021_Ah5CMODl52",
"nips_2021_Ah5CMODl52",
"tK37D_OeyON",
"phaHIdCNeYh",
"QzHJa8YH2Z",
"tK37D_OeyON",
"xNsGkJ9OgIA",
"nips_2021_Ah5CMODl52",
"nips_2021_Ah5CMODl52"
] |
nips_2021_CmI7NqBR4Ua | An Improved Analysis of Gradient Tracking for Decentralized Machine Learning | Anastasiia Koloskova, Tao Lin, Sebastian U. Stich | accept | This paper studies gradient tracking for stochastic decentralized learning. Their new analysis provides improved convergence rates for strongly convex functions. In particular the dependence on p (which is essentially the spectral gap) is changed to pc^2, which is always better since c>p. Moreover, as noted in the paper, c can be controlled and therefore can provide better guarantees in practice as well. This is a clean observation, and perhaps of broader interest if (11) and Lemma 5 can be used to prove contraction in other problems.
A reviewer has noted that with multiple rounds of communication, better rates in terms of k_g can be obtained. It would be good to include that remark, and in particular to emphasize that this paper does not consider multiple rounds of communication. The authors' claim that the analysis extends easily to time-varying graphs and local updates is disappointing: primarily because it is not true, as one of the reviewers points out (and the authors agree), and partly because it is sloppy and I am not sure the authors even analyzed it themselves before putting this claim in the paper. At a later point, someone may wonder about the supposed obviousness of these extensions when they are unable to obtain the results. | train | [
"QM521g0GK8",
"9Nc-ZoIVkSS",
"6B8dcceGyHw",
"IeLT4HwqW45",
"PJvWZ5xRCad",
"ciQExZlkVS",
"e5TZJQqpgpm",
"qZGsrMGO5CB",
"dMwoIPO6lex",
"8GWlHX5zIfM",
"hIiZIsm9iQv",
"Y7lDNwDBu5M",
"O5trWepuXW",
"X7V2M_k08qh",
"4p3xuzDO9gE",
"gI1kmBBUF6S",
"n5COnreboNP",
"dH6KXSYfDxA"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work studies the problem of decentralized nonconvex optimization problems and it is relevant to the conference. The paper provides convergence analysis for an existing and well-studied algorithm -- stochastic gradient tracking algorithms, without providing any improvements. The paper prove its algorithm the s... | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
4,
6
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"nips_2021_CmI7NqBR4Ua",
"6B8dcceGyHw",
"IeLT4HwqW45",
"PJvWZ5xRCad",
"ciQExZlkVS",
"dMwoIPO6lex",
"dMwoIPO6lex",
"hIiZIsm9iQv",
"Y7lDNwDBu5M",
"O5trWepuXW",
"dH6KXSYfDxA",
"n5COnreboNP",
"gI1kmBBUF6S",
"QM521g0GK8",
"nips_2021_CmI7NqBR4Ua",
"nips_2021_CmI7NqBR4Ua",
"nips_2021_CmI7Nq... |
nips_2021_Juk1LKbFvd | Entropic Desired Dynamics for Intrinsic Control | An agent might be said, informally, to have mastery of its environment when it has maximised the effective number of states it can reliably reach. In practice, this often means maximizing the number of latent codes that can be discriminated from future states under some short time horizon (e.g. \cite{eysenbach2018diversity}). By situating these latent codes in a globally consistent coordinate system, we show that agents can reliably reach more states in the long term while still optimizing a local objective. A simple instantiation of this idea, \textbf{E}ntropic \textbf{D}esired \textbf{D}ynamics for \textbf{I}ntrinsic \textbf{C}on\textbf{T}rol (EDDICT), assumes fixed additive latent dynamics, which results in tractable learning and an interpretable latent space. Compared to prior methods, EDDICT's globally consistent codes allow it to be far more exploratory, as demonstrated by improved state coverage and increased unsupervised performance on hard exploration games such as Montezuma's Revenge.
| accept | This paper studies the problem of making exploration efficient by learning a global-local structure. The paper generally received positive reviews which tended towards acceptance. However, the reviewers had difficulty understanding some notations and suggested comparing the proposed method to some relevant baselines. The authors provided a rebuttal that addressed many of the reviewers' concerns. The paper was discussed post rebuttal and all the reviewers responded to the rebuttal. Reviewers generally agree that the paper should be accepted but there are still many pending updates that the authors have promised to finish. AC agrees with the reviewers and suggests acceptance. The authors are urged to look at reviewers' final feedback and incorporate the new experiments/baselines in the camera-ready. | train | [
"Z1pl6R8I9E4",
"V8FWV7jSuct",
"0a0rRtSz4n",
"KZd0WVvhm2",
"jmFxHpSWhCj",
"vouIVLILBpS",
"JDoXmsoV7Xu",
"CEPU_tU4L4I",
"vC1zkehQ1-0",
"KeGvP-14IdR",
"v38pOsqE31e",
"qel4J6dXlNV",
"yijx_-l5E_",
"ImlNA9HJahZ",
"XoW_tymsJar",
"5rwfc6WDVQV",
"YAwPmDOARy_",
"3WoaAv0xfI",
"wjo3RY2ecY"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_re... | [
"This paper proposes an approach called Entropic Desired Dynamics for Intrinsic Control (EDDICT) which tries to learn options in an unsupervised manner. Given interaction with the environment, EDDICT aims to learn a latent code which best describes a future trajectory. This work is inspired from Variational Intrins... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_Juk1LKbFvd",
"XoW_tymsJar",
"vC1zkehQ1-0",
"yijx_-l5E_",
"JDoXmsoV7Xu",
"v38pOsqE31e",
"nips_2021_Juk1LKbFvd",
"nips_2021_Juk1LKbFvd",
"ImlNA9HJahZ",
"nips_2021_Juk1LKbFvd",
"YAwPmDOARy_",
"nips_2021_Juk1LKbFvd",
"5rwfc6WDVQV",
"CEPU_tU4L4I",
"Z1pl6R8I9E4",
"qel4J6dXlNV",
... |
nips_2021_V5V1vGrI2z | Exploring Cross-Video and Cross-Modality Signals for Weakly-Supervised Audio-Visual Video Parsing | The audio-visual video parsing task aims to temporally parse a video into audio or visual event categories. However, it is labor intensive to temporally annotate audio and visual events and thus hampers the learning of a parsing model. To this end, we propose to explore additional cross-video and cross-modality supervisory signals to facilitate weakly-supervised audio-visual video parsing. The proposed method exploits both the common and diverse event semantics across videos to identify audio or visual events. In addition, our method explores event co-occurrence across audio, visual, and audio-visual streams. We leverage the explored cross-modality co-occurrence to localize segments of target events while excluding irrelevant ones. The discovered supervisory signals across different videos and modalities can greatly facilitate the training with only video-level annotations. Quantitative and qualitative results demonstrate that the proposed method performs favorably against existing methods on weakly-supervised audio-visual video parsing.
| accept | Obviously, this work is of generally reasonable quality, but well-informed reviewers had serious questions as to the novelty of the work and strength of the evaluations. I think the reviewers were generally fair, so I will go with their overall average score in making my recommendation, but I recognize that this paper is very close to the border. | train | [
"lkod2qcgfcu",
"3lbg6cIXbJY",
"KdLF8CDTTnv",
"fEEH1E_zNay",
"Z9ijty_yG0k",
"BLN-CDjJnSK",
"IrkuDfe61b",
"1sG1V0XzTeK"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a method for weakly supervised audio-visual video parsing task where the goal is to segment video/audio streams into different event categories. Given video level label during training, the authors propose an audio-visual class co-occurrence module to capture the relationships between event cate... | [
6,
4,
-1,
-1,
-1,
-1,
5,
4
] | [
4,
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_V5V1vGrI2z",
"nips_2021_V5V1vGrI2z",
"1sG1V0XzTeK",
"3lbg6cIXbJY",
"lkod2qcgfcu",
"IrkuDfe61b",
"nips_2021_V5V1vGrI2z",
"nips_2021_V5V1vGrI2z"
] |
nips_2021_4bKbEP9b65v | Littlestone Classes are Privately Online Learnable | Noah Golowich, Roi Livni | accept | A very nice paper that does what its title says: gives a differentially-private online algorithm for classes of finite Littlestone dimension. The paper is clearly written and gives a good background of the problem, and while the paper mostly makes use of known techniques, it still gives a nontrivial and interesting result. | train | [
"aKASkplLp8j",
"Ninhjj56XLQ",
"ejgh9Pe3eim",
"D5CZKyxnjL",
"NlC3EvR69qy",
"HSczmaY0MKE",
"DUTA5Va_Eoj",
"anF11MWuiSM"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper considers the problem of online learning hypothesis class $H$ under an additional privacy constraint: The sequence of hypotheses $h_1, h_2, \\ldots, h_T \\in H$ chosen by the learner must be differentially private w.r.t. the data sequence $(x_1, y_1)$ through $(x_T, y_T)$. \n\nIt is well-known that a hyp... | [
7,
-1,
-1,
-1,
-1,
8,
7,
8
] | [
3,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"nips_2021_4bKbEP9b65v",
"aKASkplLp8j",
"anF11MWuiSM",
"DUTA5Va_Eoj",
"HSczmaY0MKE",
"nips_2021_4bKbEP9b65v",
"nips_2021_4bKbEP9b65v",
"nips_2021_4bKbEP9b65v"
] |
nips_2021_b-88mXTMg4J | Dual Parameterization of Sparse Variational Gaussian Processes | Sparse variational Gaussian process (SVGP) methods are a common choice for non-conjugate Gaussian process inference because of their computational benefits. In this paper, we improve their computational efficiency by using a dual parameterization where each data example is assigned dual parameters, similarly to site parameters used in expectation propagation. Our dual parameterization speeds-up inference using natural gradient descent, and provides a tighter evidence lower bound for hyperparameter learning. The approach has the same memory cost as the current SVGP methods, but it is faster and more accurate.
| accept | This paper revisits a known alternative, so-called dual, parameterization for models with Gaussian priors and iid likelihoods and applies it to stochastic variational inference in Gaussian process models. It shows _empirically_ that this parameterization results in faster optimization and tighter bounds that improve hyper-parameter learning. Results are shown on small UCI datasets and on MNIST (n=70,000 datapoints).
Overall, the reviewers believe that the proposed approach is incremental as it takes a known idea (that of the dual parameterization) and applies to the SVGP context. However, all the reviewers agree that the paper provides a significant contribution to the NeurIPS community and that NeurIPS will benefit from knowing the details and findings in this paper.
Besides its incremental nature, one of the major criticisms of this paper was with respect to its clarity (Reviewer aCDy), which the authors seemed to have addressed satisfactorily. Another important point (raised by reviewer XG5T) concerns whether the benefits of the approach would extend to larger problems for which one would need a higher number of inducing variables ($m>100$). The authors have clarified that the bigger the $m$ the smaller the marginal gain obtained by the proposed method. However, the proposed approach still provides benefits for settings of practical interest.
| train | [
"u1ZY5yi2pjD",
"tO9Ml696Gmh",
"HGgg5I-NFmK",
"Wc10odG1yvN",
"eZ6VKIFvXok",
"gqKVRZRbLes",
"LQCxqjUYQ7U",
"II2ke6uBFx4",
"xzbSJFmt-MN",
"p0zaV3m9rL",
"v-b5LEKC81A"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their detailed answers to my points. The authors have adequately answered my questions, and my main criticism was regarding the clarity of the presentation in Sections 3-4. The authors said they would expand on these sections. As this would probably not be a major revision, I do not think ... | [
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
8,
6,
6
] | [
-1,
3,
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"II2ke6uBFx4",
"nips_2021_b-88mXTMg4J",
"nips_2021_b-88mXTMg4J",
"v-b5LEKC81A",
"p0zaV3m9rL",
"HGgg5I-NFmK",
"xzbSJFmt-MN",
"tO9Ml696Gmh",
"nips_2021_b-88mXTMg4J",
"nips_2021_b-88mXTMg4J",
"nips_2021_b-88mXTMg4J"
] |
nips_2021_Ua9Vi0QqwD4 | Learning to dehaze with polarization | Haze, a common kind of bad weather caused by atmospheric scattering, decreases the visibility of scenes and degenerates the performance of computer vision algorithms. Single-image dehazing methods have shown their effectiveness in a large variety of scenes, however, they are based on handcrafted priors or learned features, which do not generalize well to real-world images. Polarization information can be used to relieve its ill-posedness, however, real-world images are still challenging since existing polarization-based methods usually assume that the transmitted light is not significantly polarized, and they require specific clues to estimate necessary physical parameters. In this paper, we propose a generalized physical formation model of hazy images and a robust polarization-based dehazing pipeline without the above assumption or requirement, along with a neural network tailored to the pipeline. Experimental results show that our approach achieves state-of-the-art performance on both synthetic data and real-world hazy images.
| accept | The four reviewers thought this paper was above threshold for acceptance. They all found the idea useful and interesting. The author response also helped to clarify some issues raised by the reviewers. | test | [
"-uEnOfqwjC4",
"pbNxYm6yzzl",
"wQrvU6lIjD7",
"Rt8IVr0W9Qe",
"xYtbEk8J96",
"rPuEKXJgYvw",
"X_pIC-QeZim",
"mc8mZPVPjiy",
"OqLTATlngE",
"kasfeMVC6Ny",
"u6ccqCMuIf"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your detailed response. I have taken it into account for my final recommendation.",
" I'd like to thank the authors for their response, which has helped clear up some questions I had during the review. After reading the other reviews, I would still keep my positive rating for the paper, and would ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"rPuEKXJgYvw",
"Rt8IVr0W9Qe",
"nips_2021_Ua9Vi0QqwD4",
"u6ccqCMuIf",
"kasfeMVC6Ny",
"OqLTATlngE",
"mc8mZPVPjiy",
"nips_2021_Ua9Vi0QqwD4",
"nips_2021_Ua9Vi0QqwD4",
"nips_2021_Ua9Vi0QqwD4",
"nips_2021_Ua9Vi0QqwD4"
] |
nips_2021_YxxzNLfXBz | Conservative Data Sharing for Multi-Task Offline Reinforcement Learning | Offline reinforcement learning (RL) algorithms have shown promising results in domains where abundant pre-collected data is available. However, prior methods focus on solving individual problems from scratch with an offline dataset without considering how an offline RL agent can acquire multiple skills. We argue that a natural use case of offline RL is in settings where we can pool large amounts of data collected in various scenarios for solving different tasks, and utilize all of this data to learn behaviors for all the tasks more effectively rather than training each one in isolation. However, sharing data across all tasks in multi-task offline RL performs surprisingly poorly in practice. Through empirical analysis, we find that sharing data can actually exacerbate the distributional shift between the learned policy and the dataset, which in turn can lead to divergence of the learned policy and poor performance. To address this challenge, we develop a simple technique for data-sharing in multi-task offline RL that routes data based on the improvement over the task-specific data. We call this approach conservative data sharing (CDS), and it can be applied with multiple single-task offline RL methods. On a range of challenging multi-task locomotion, navigation, and vision-based robotic manipulation problems, CDS achieves the best or comparable performance compared to prior offline multi-task RL methods and previous data sharing approaches.
| accept | This paper presents a data-sharing strategy called "conservative data sharing" (CDS) to tackle the distributional shift of the offline RL algorithm in the multitask setting. All the reviewers evaluated the work positively, and it would make a solid contribution to NeurIPS. I hope that the reviewer comments help the authors in future work. | val | [
"Tmur7SEyNDg",
"BDIWg5az8sp",
"cOTyiB3Yvj7",
"eiJehgPgt5B",
"T9jpez2GewS",
"O_FHhGSx-K",
"xvNIYAVbKA0",
"RWlh-z9cIbQ",
"sLCqFXy4_T7",
"RewxJ_rZWG3",
"iE2Cvioe0F6",
"d_mK6IkT9T_"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for additional experiments to show the general application of CDS. The results look very nice!",
" Thank you for your positive feedback and for updating your score! Regarding your question, the algorithm softly shares all data across all of the tasks as discussed in L284-288. Thus, by the end of training... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
8,
6
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"sLCqFXy4_T7",
"cOTyiB3Yvj7",
"T9jpez2GewS",
"nips_2021_YxxzNLfXBz",
"eiJehgPgt5B",
"nips_2021_YxxzNLfXBz",
"d_mK6IkT9T_",
"iE2Cvioe0F6",
"RewxJ_rZWG3",
"nips_2021_YxxzNLfXBz",
"nips_2021_YxxzNLfXBz",
"nips_2021_YxxzNLfXBz"
] |
nips_2021__wdgJCH-Jf | Universal Rate-Distortion-Perception Representations for Lossy Compression | In the context of lossy compression, Blau \& Michaeli (2019) adopt a mathematical notion of perceptual quality and define the information rate-distortion-perception function, generalizing the classical rate-distortion tradeoff. We consider the notion of universal representations in which one may fix an encoder and vary the decoder to achieve any point within a collection of distortion and perception constraints. We prove that the corresponding information-theoretic universal rate-distortion-perception function is operationally achievable in an approximate sense. Under MSE distortion, we show that the entire distortion-perception tradeoff of a Gaussian source can be achieved by a single encoder of the same rate asymptotically. We then characterize the achievable distortion-perception region for a fixed representation in the case of arbitrary distributions, and identify conditions under which the aforementioned results continue to hold approximately. This motivates the study of practical constructions that are approximately universal across the RDP tradeoff, thereby alleviating the need to design a new encoder for each objective. We provide experimental results on MNIST and SVHN suggesting that on image compression tasks, the operational tradeoffs achieved by machine learning models with a fixed encoder suffer only a small penalty when compared to their variable encoder counterparts.
| accept | The paper provides an interesting contribution of a theoretical nature, which essentially shows that one can work with a single encoder but vary the decoder to achieve different rate-distortion-perception trade-offs in (image) compression. The work is likely to stimulate interesting discussion at NeurIPS, and provides useful insights towards future research in lossy compression. | val | [
"f9tq3ib5d7H",
"qbX_5DkRNgr",
"ov86AGT36ZJ",
"W-BEv_ovsDm",
"r55dcR4LH7",
"tMfnGzMCf4",
"prXeLz9S9j",
"c2Gb8SouS9c"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The rate-distortion-perception (RDP) function[5,23] is a generalization of the classical rate-distortion (RD) trade-off in lossy compression to also measure realism, which establishes a theoretical footing for generative image/video compression, a topic of active research[1,11,17,25].\n\nThis paper studies the RDP... | [
8,
-1,
-1,
-1,
-1,
7,
8,
7
] | [
3,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"nips_2021__wdgJCH-Jf",
"c2Gb8SouS9c",
"prXeLz9S9j",
"tMfnGzMCf4",
"f9tq3ib5d7H",
"nips_2021__wdgJCH-Jf",
"nips_2021__wdgJCH-Jf",
"nips_2021__wdgJCH-Jf"
] |
nips_2021_usxt30HpW66 | What’s a good imputation to predict with missing values? | How to learn a good predictor on data with missing values? Most efforts focus on first imputing as well as possible and second learning on the completed data to predict the outcome. Yet, this widespread practice has no theoretical grounding. Here we show that for almost all imputation functions, an impute-then-regress procedure with a powerful learner is Bayes optimal. This result holds for all missing-values mechanisms, in contrast with the classic statistical results that require missing-at-random settings to use imputation in probabilistic modeling. Moreover, it implies that perfect conditional imputation is not needed for good prediction asymptotically. In fact, we show that on perfectly imputed data the best regression function will generally be discontinuous, which makes it hard to learn. Crafting instead the imputation so as to leave the regression function unchanged simply shifts the problem to learning discontinuous imputations. Rather, we suggest that it is easier to learn imputation and regression jointly. We propose such a procedure, adapting NeuMiss, a neural network capturing the conditional links across observed and unobserved variables whatever the missing-value pattern. Our experiments confirm that joint imputation and regression through NeuMiss is better than various two step procedures in a finite-sample regime.
| accept | The paper studies prediction in the presence of missing values. A common approach to solving this problem is "impute-then-regress", independently imputing missing values and predicting using the imputed data. The authors show that this strategy is almost always Bayes optimal w.r.t. risk, but that a poor imputation may lead to learning a complex prediction function. Several strategies are evaluated and compared empirically on synthetic data.
All reviewers recognised the value of this work and pointed out its strong motivation and clarity in exposition. | train | [
"7Lyeqinbb1K",
"cSSp43y7mIO",
"5mToew2vKPe",
"lc8ljtYsfjz",
"us3rkX2lHk_",
"7Mn1dy2-Qg",
"fqy9CIF2UXb",
"y7CeuFxwYq4",
"zXH8810_e1U"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewers and area chair,\n\nThank you again for your constructive comments. We feel that all reviewers agree that this work is important (Impute-then-regress procedures are very common but little studied), and that it comes with strong and novel theoretical results as well as experimental evidence. We have ... | [
-1,
-1,
-1,
-1,
-1,
7,
9,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"nips_2021_usxt30HpW66",
"zXH8810_e1U",
"y7CeuFxwYq4",
"fqy9CIF2UXb",
"7Mn1dy2-Qg",
"nips_2021_usxt30HpW66",
"nips_2021_usxt30HpW66",
"nips_2021_usxt30HpW66",
"nips_2021_usxt30HpW66"
] |
nips_2021_VXeoK3fJZhW | Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification | Reinforcement learning (RL) algorithms assume that users specify tasks by manually writing down a reward function. However, this process can be laborious and demands considerable technical expertise. Can we devise RL algorithms that instead enable users to specify tasks simply by providing examples of successful outcomes? In this paper, we derive a control algorithm that maximizes the future probability of these successful outcome examples. Prior work has approached similar problems with a two-stage process, first learning a reward function and then optimizing this reward function using another reinforcement learning algorithm. In contrast, our method directly learns a value function from transitions and successful outcomes, without learning this intermediate reward function. Our method therefore requires fewer hyperparameters to tune and lines of code to debug. We show that our method satisfies a new data-driven Bellman equation, where examples take the place of the typical reward function term. Experiments show that our approach outperforms prior methods that learn explicit reward functions.
| accept | The reviewers initially agreed upon the strong novelty of this paper's "data driven" task specification, which shares ideas with existing thoughts in imitation and goal-conditioning, but is both intuitively useful in practice and well supported by the paper's elements. Good points were raised by reviewers 3uQ8 and yauQ, in particular about the relationship of the theory to existing proofs and about the fairest way to discuss baseline tasks and comparisons. After a healthy exchange, the picture was quite clear that the paper's elements are nicely distinct and add important new ideas.
I'm certain this paper will draw a healthy audience from offline-RL and imitation practitioners. In some domains where the authors' task specification interface is indeed a much better fit, this work may have rather significant impact. For the overall RL/imitation community, it is another nicely done example of using a mix of existing and new tools to develop coherent theory and algorithms for a new spin on the problem. This is likely to be cited and to support others innovating in different ways in the space. | train | [
"YvzYOGGq4AL",
"142quIwJoJN",
"D95zYoji-V5",
"kuxVHXMazgf",
"Im0ya3HMZy",
"cOtSxtI8cl"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"**Summary:** This paper offered a new perspective for continuous control tasks, e.g robotic, that root in one question, can agent learn the task by successful examples rather than rewards. By learning a classifier that predicts whether the task could be solved in the future to estimates the probability of solving ... | [
7,
-1,
-1,
-1,
8,
7
] | [
4,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_VXeoK3fJZhW",
"cOtSxtI8cl",
"YvzYOGGq4AL",
"Im0ya3HMZy",
"nips_2021_VXeoK3fJZhW",
"nips_2021_VXeoK3fJZhW"
] |
nips_2021_NbaEmFm2mUW | Hierarchical Skills for Efficient Exploration | In reinforcement learning, pre-trained low-level skills have the potential to greatly facilitate exploration. However, prior knowledge of the downstream task is required to strike the right balance between generality (fine-grained control) and specificity (faster learning) in skill design. In previous work on continuous control, the sensitivity of methods to this trade-off has not been addressed explicitly, as locomotion provides a suitable prior for navigation tasks, which have been of foremost interest. In this work, we analyze this trade-off for low-level policy pre-training with a new benchmark suite of diverse, sparse-reward tasks for bipedal robots. We alleviate the need for prior knowledge by proposing a hierarchical skill learning framework that acquires skills of varying complexity in an unsupervised manner. For utilization on downstream tasks, we present a three-layered hierarchical learning algorithm to automatically trade off between general and specific skills as required by the respective task. In our experiments, we show that our approach performs this trade-off effectively and achieves better results than current state-of-the-art methods for end-to-end hierarchical reinforcement learning and unsupervised skill discovery.
| accept | The paper proposes a new way of hierarchical goal-based learning. There have been multiple examples of hierarchical strategies where higher levels set goals for the lower levels. However, often such goals are set in a quite low dimensional space, either because the state space is low dimensional to begin with or because a subset of dimensions, e.g. corresponding to the COM coordinates, are pre-specified to be the relevant ones. The paper proposes that which dimensions are relevant for the goal is task dependent, and so it should be up to the higher level policy to choose which dimensions are relevant for goal-setting. In this way, the higher level policy can make the goal for a skill more general or specific, allowing a better trade-off between these factors.
The reviewers had mixed opinions initially, but additional results from the authors convinced some reviewers to update their scores, resulting in a unanimous accept recommendation. Summarizing their opinions:
- Originality: the proposed approach is interesting & original. The authors also propose an original and interesting new suite of benchmarking tasks.
- Technical quality: Initially, the opinions were mixed on quality, as some reviewers deemed important baselines to be missing. With the provision of additional HRL baseline, the reviewers were satisfied on quality.
- Relevance and significance: The problem is relevant for NeurIPS. The results were not super surprising (e.g., the 'full goal space' baseline also worked pretty well across tasks); however, reviewers pointed out the paper might be an important step towards solving sparse-reward tasks.
- On clarity, the reviews were a bit mixed, from 'difficult to parse' to 'clear and well explained', with specific issues pointed out for possible improvement.
Overall, the paper proposes an original new method and (taken into account the new results), sufficiently evaluates it in the context of relevant baselines. The paper could certainly be improved further, but I think as is it will be an interesting addition to the NeurIPS program.
I had one additional minor comment: In the introduction, it is mentioned that "In the large body of work ... on HRL ... relies explicitly or implicitly on prior knowledge that low-level skills should control the center of mass" (lines 22-25). While I agree that this is a common assumption, I don't think it's true for all current HRL methods, as seems implied here (e.g. the option-critic lets the agent control any dimensions, feudal networks do use a subspace but the subspace is learned). In particular, the references given for this statement in line 25 are mostly methods where the state consists only of the COM; thus, it's inevitable that an HRL method would control the COM, rather than this being a particular assumption. The HIRO paper, for example, would be a better example.
HIRO: "Data efficient reinfocement learning", Nachum et al.
Option-critic: "The option-critic architecture", Bacon et al.
Feudal Networks: "Feudal networks for reinforcement learning", Vezhnevets et al.
| train | [
"lyHk1Dz1QOZ",
"37c7nyaQeAt",
"-vHeKD8czT2",
"0uHnse0kLue",
"dXoCjd_9KIj",
"_WUliaG3gV",
"jSswNNRiJ_Z",
"cX3Jz9UXZJY",
"UmJN7xl4T4R",
"9vmtQqxXN0F",
"UcT1T4U4Hvx",
"X5cBjZt1HE2",
"uXbayW9K0D",
"-YGT4kQGiiN",
"PxmK2het2rL",
"YyaCJgN6Gwq",
"PpHS5gLIwlY",
"oXMz_sONu6H",
"tIkES810p5r... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
... | [
"This work proposes a benchmark task with bipedal robots instead of locomotion tasks, where the bipedal robots are low dimensional and perform various movements. It also proposes a hierarchical reinforcement learning method with three levels: a policy that specifies a goal space (a set of features to operate on), a... | [
6,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
3,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_NbaEmFm2mUW",
"0uHnse0kLue",
"PpHS5gLIwlY",
"cX3Jz9UXZJY",
"jSswNNRiJ_Z",
"nips_2021_NbaEmFm2mUW",
"UmJN7xl4T4R",
"oXMz_sONu6H",
"9vmtQqxXN0F",
"X5cBjZt1HE2",
"uXbayW9K0D",
"-YGT4kQGiiN",
"YyaCJgN6Gwq",
"PxmK2het2rL",
"_WUliaG3gV",
"tIkES810p5r",
"tSEJJQ_Rrh",
"lyHk1Dz1Q... |
nips_2021_rqjfa49ODLE | Evidential Softmax for Sparse Multimodal Distributions in Deep Generative Models | Many applications of generative models rely on the marginalization of their high-dimensional output probability distributions. Normalization functions that yield sparse probability distributions can make exact marginalization more computationally tractable. However, sparse normalization functions usually require alternative loss functions for training since the log-likelihood is undefined for sparse probability distributions. Furthermore, many sparse normalization functions often collapse the multimodality of distributions. In this work, we present ev-softmax, a sparse normalization function that preserves the multimodality of probability distributions. We derive its properties, including its gradient in closed-form, and introduce a continuous family of approximations to ev-softmax that have full support and can be trained with probabilistic loss functions such as negative log-likelihood and Kullback-Leibler divergence. We evaluate our method on a variety of generative models, including variational autoencoders and auto-regressive architectures. Our method outperforms existing dense and sparse normalization techniques in distributional accuracy. We demonstrate that ev-softmax successfully reduces the dimensionality of probability distributions while maintaining multimodality.
| accept | The paper introduces a normalizing function called evidential softmax (ev-softmax). Based on principles of evidential theory, ev-softmax is able to assign zero probability to some classes. It extends Itkina et al. (2020) by recovering their transformation and allowing its usage at training time. The experiment section shows (relatively small) performance gains compared to other sparse methods on several simple tasks and on a machine translation task.
Most reviewers agree that this is a good paper that proposes an interesting contribution, closes the loop with Itkina et al. (2020), and establishes an interesting link between evidential theory and sparse transformations. The main weaknesses which have been pointed out are the lack of details about some of the experiments (clarified in the rebuttal), the lack of comparison with alpha-entmax for other values of alpha (not crucial but a nice-to-have), and the lack of discussion of how ev_softmax relates to other sparse transformations (which the authors elaborate on in the rebuttal, but could be further expanded). These changes seem doable at camera-ready time, therefore I recommend acceptance.
I urge the authors to follow the recommendations of the reviewers. Reporting other values of alpha should be simple given that the code for alpha-entmax is publicly available. More importantly, the relation between ev_softmax and entmax/sparsemax should be clarified:
- The claim that ev_softmax(x)_j = 0 => sparsemax(x)_j = 0 is not correct - neither the support of ev_softmax nor that of sparsemax is generally a subset of the other (see the comment with counterexamples)
- Note that the support of sparsemax can also be expressed as a mean condition on the largest scores — see Alg 1 in https://arxiv.org/pdf/1602.02068.pdf. The zeros of sparsemax are the entries where x_j <= mean(top_k(x)) - 1/k, where k is the size of the support. I think this deserves discussion.
- When x in R^2, sparsemax is a hard sigmoid (Fig 1 in the ref above) but ev_softmax seems to be a 0/1 loss, piecewise constant, with zero gradients everywhere. This means you can’t backpropagate through ev_softmax for this 2-dimensional case. I think this an important limitation that should be discussed (sparsemax and entmax don’t have this limitation). | train | [
"-h9uolHlm-o",
"XbjDBiDZSb0",
"HRKopvwLu9W",
"cZEwfmEdeY5",
"tC9iop9yG7d",
"WwbrDr9kCR9",
"jlhiFePh813",
"5IolaeAjrQb",
"TNQrWKldN77",
"lPh3LNy1_p",
"Mn_kZY6m1z",
"T3-W6T9SaSd",
"ryiCV7tY2kp",
"2v7Z8Tw9SHa",
"ZmGpTi5h7sZ",
"N-5hmcaKfwp",
"P_TeBISjqrW"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the response, and we address the questions/comments here:\n> 1. I understand that applying evidential softmax at training time is done with the goal of performance improvements. However, I think the paper is lacking discussion about why this outcome is expected. Can you characterize the ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
3
] | [
"jlhiFePh813",
"HRKopvwLu9W",
"cZEwfmEdeY5",
"-h9uolHlm-o",
"WwbrDr9kCR9",
"-h9uolHlm-o",
"lPh3LNy1_p",
"2v7Z8Tw9SHa",
"ryiCV7tY2kp",
"5IolaeAjrQb",
"N-5hmcaKfwp",
"P_TeBISjqrW",
"ZmGpTi5h7sZ",
"nips_2021_rqjfa49ODLE",
"nips_2021_rqjfa49ODLE",
"nips_2021_rqjfa49ODLE",
"nips_2021_rqjf... |
nips_2021_OQLCPvYnMOv | Submodular + Concave | Siddharth Mitra, Moran Feldman, Amin Karbasi | accept | The reviewers agree that this is generally a good paper, although not entirely without (minor) flaws. Please take the reviewers' comments into consideration when preparing a revision. The answers provided by the authors were given due consideration.
"YKbyNcTcGD8",
"orqzmcBi0W_",
"0j66jZpXGHF",
"-hgdYcKQ6T",
"LMAW43hUcyI",
"vATUWZNW4S",
"JCUI9VnH2rJ",
"mUwaFtP-emN",
"qDc035qIePp",
"HXcc1dPxrGB"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper studies maximization of submodular + concave functions over solvable polytopes. This problem forms a new class of non-convex optimization problems, for which no theoretical results have been obtained. Depending on the conditions of the problem, the authors develop several algorithms and prove that they ... | [
6,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
3,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_OQLCPvYnMOv",
"nips_2021_OQLCPvYnMOv",
"nips_2021_OQLCPvYnMOv",
"LMAW43hUcyI",
"mUwaFtP-emN",
"YKbyNcTcGD8",
"0j66jZpXGHF",
"HXcc1dPxrGB",
"orqzmcBi0W_",
"nips_2021_OQLCPvYnMOv"
] |
nips_2021_GERI2kZ84V | DeepGEM: Generalized Expectation-Maximization for Blind Inversion | Typically, inversion algorithms assume that a forward model, which relates a source to its resulting measurements, is known and fixed. Using collected indirect measurements and the forward model, the goal becomes to recover the source. When the forward model is unknown, or imperfect, artifacts due to model mismatch occur in the recovery of the source. In this paper, we study the problem of blind inversion: solving an inverse problem with unknown or imperfect knowledge of the forward model parameters. We propose DeepGEM, a variational Expectation-Maximization (EM) framework that can be used to solve for the unknown parameters of the forward model in an unsupervised manner. DeepGEM makes use of a normalizing flow generative network to efficiently capture complex posterior distributions, which leads to more accurate evaluation of the source's posterior distribution used in EM. We showcase the effectiveness of our DeepGEM approach by achieving strong performance on the challenging problem of blind seismic tomography, where we significantly outperform the standard method used in seismology. We also demonstrate the generality of DeepGEM by applying it to a simple case of blind deconvolution.
| accept | The paper studies inverse problems with a partially unknown forward operator. The idea is to use a general variational Expectation-Maximization framework aided by a normalizing flow generative network. The paper is focused on blind seismic tomography and shows very good performance. The comparisons to baselines were a bit confusing, but we understand that there are limited prior works on this problem and that the proposed method will easily outperform methods that do not use deep generative models.
There was a debate on whether this paper solves blind inversion or model mismatch. In any case, there is good novelty in the paper, which contains novel ideas, is solid in a real and useful application, and is very well written.
"cLBwKUHoGfV",
"ii9h2VX_Jk",
"rZBo4TK7Kmo",
"BZbbIZWlni",
"-yiyX6ctave",
"UZQdzMm_OF",
"BfM5ClMD5j",
"xaRn04lwD57",
"uthLRAMkfRO",
"iSMXfSzpBOT",
"YROLrHLQ9fN",
"zFkPynmy7XA"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a normalizing-flow-based framework to solve inverse problems with model mismatch. The main idea is to use a normalizing flow to model the posterior distribution of the unknowns x conditioned on the current model parameters \\theta and the observation y, as part of the expectation-maximization a... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
8
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nips_2021_GERI2kZ84V",
"BfM5ClMD5j",
"BZbbIZWlni",
"uthLRAMkfRO",
"zFkPynmy7XA",
"YROLrHLQ9fN",
"cLBwKUHoGfV",
"iSMXfSzpBOT",
"nips_2021_GERI2kZ84V",
"nips_2021_GERI2kZ84V",
"nips_2021_GERI2kZ84V",
"nips_2021_GERI2kZ84V"
] |
nips_2021_LMotP3zsq_d | Learning to Generate Visual Questions with Noisy Supervision | The task of visual question generation (VQG) aims to generate human-like neural questions from an image and potentially other side information (e.g., answer type or the answer itself). Existing works often suffer from the severe one-image-to-many-questions mapping problem, which generates uninformative and non-referential questions. Recent work has demonstrated that by leveraging double visual and answer hints, a model can faithfully generate much better quality questions. However, visual hints are not available naturally. Although a simple rule-based similarity matching method was proposed to obtain candidate visual hints, these hints can be very noisy in practice and thus restrict the quality of the generated questions. In this paper, we present a novel learning approach for double-hints based VQG, which can be cast as a weakly supervised learning problem with noise. The key rationale is that the salient visual regions of interest can be viewed as a constraint to improve the generation procedure for producing high-quality questions. As a result, given the predicted salient visual regions of interest, we can focus on estimating the probability of being ground-truth questions, which in turn implicitly measures the quality of predicted visual hints. Experimental results on two benchmark datasets show that our proposed method outperforms the state-of-the-art approaches by a large margin on a variety of metrics, including both automatic machine metrics and human evaluation.
| accept | All reviewers recommend reject after reviewing the author response and discussion.
While the author response addressed some concerns, several critical concerns remained, including:
1) Limited novelty over prior work.
2) Limited focus on the benefit of VQG, although the authors acknowledge the importance in Section 3.4 and the author response.
2a) Section 3.4 misses comparison to the closest competitor DH-VQG and ablations.
2b) The additional results provided in the author response on few / zero-shot are interesting, but again miss comparison to prior work (e.g. DH-VQG) and ablations for this important final task
I thus recommend reject.
[The authors might want to consider submitting a significantly revised version to a vision or NLP focused venue] | train | [
"RYlF_bUZDPE",
"hhF4B60AoXf",
"32sqHYqNRJ",
"IxN8HXNJMz",
"MaStfsF3_YG",
"w-JLb_FSMP",
"HeFvJyDkBtn",
"biaOtIyxclg",
"ogG0NsOXPnm",
"eV2PN8kDEHw",
"gtJgFmDFSX9",
"ZyPhF-_HRT",
"lhdMx5mcfd0"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work aims to generate human-like visual questions from an image, the questions and answers in training data, and the visual hints via an ad-hoc method using an object detector where detected visual regions and predicted attributes are available following Kai et al., 2021. The improvement over the previous wor... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nips_2021_LMotP3zsq_d",
"ZyPhF-_HRT",
"RYlF_bUZDPE",
"lhdMx5mcfd0",
"ZyPhF-_HRT",
"gtJgFmDFSX9",
"gtJgFmDFSX9",
"RYlF_bUZDPE",
"lhdMx5mcfd0",
"ZyPhF-_HRT",
"nips_2021_LMotP3zsq_d",
"nips_2021_LMotP3zsq_d",
"nips_2021_LMotP3zsq_d"
] |
nips_2021_X_jSy6seRj | Pure Exploration in Kernel and Neural Bandits | We study pure exploration in bandits, where the dimension of the feature representation can be much larger than the number of arms. To overcome the curse of dimensionality, we propose to adaptively embed the feature representation of each arm into a lower-dimensional space and carefully deal with the induced model misspecifications. Our approach is conceptually very different from existing works that can either only handle low-dimensional linear bandits or passively deal with model misspecifications. We showcase the application of our approach to two pure exploration settings that were previously under-studied: (1) the reward function belongs to a possibly infinite-dimensional Reproducing Kernel Hilbert Space, and (2) the reward function is nonlinear and can be approximated by neural networks. Our main results provide sample complexity guarantees that only depend on the effective dimension of the feature spaces in the kernel or neural representations. Extensive experiments conducted on both synthetic and real-world datasets demonstrate the efficacy of our methods.
| accept | This is a good paper that should be accepted to the conference. Reviewer 7Wf3 raised a concern about plagiarism and gave a very low score of 2 because of the plagiarism claim. However, the other three reviewers and I agree that there is no plagiarism in this paper; unfortunately the reviewer did not interact with us and did not update his score. Aside from the plagiarism claim, there are plenty of comments from the reviewers that the authors should use to improve their paper further.
"qS2aqiPgObK",
"k1X1AfKx8uL",
"ZjxA6Pgj-6c",
"RfHnBTf9Bm1",
"yxgBiDgp06",
"6ELjlxL0Up1",
"W4N8tE9J9Rp",
"RsjWUikxj8",
"KFmbmxb1qUN",
"mKQ3xh51g17",
"-k4tY9MKZLw",
"C3d3pN75QuU"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We truly appreciate you providing valuable feedback and taking the time to participate in the committee discussion. We will incorporate your suggestions for improving the paper. Thank you!\n",
" Dear authors, dear reviewing committee, \n\nThank you for your response to my questions, I believe that clarifying so... | [
-1,
-1,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
8,
2
] | [
-1,
-1,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"k1X1AfKx8uL",
"-k4tY9MKZLw",
"nips_2021_X_jSy6seRj",
"nips_2021_X_jSy6seRj",
"nips_2021_X_jSy6seRj",
"C3d3pN75QuU",
"ZjxA6Pgj-6c",
"RfHnBTf9Bm1",
"-k4tY9MKZLw",
"nips_2021_X_jSy6seRj",
"nips_2021_X_jSy6seRj",
"nips_2021_X_jSy6seRj"
] |
nips_2021_GSXEx6iYd0 | Numerical Composition of Differential Privacy | Sivakanth Gopi, Yin Tat Lee, Lukas Wutschitz | accept | The reviewers felt that, although the algorithm in this paper is very similar to previous work, the modifications feel like the "right" way to solve the problem, and the improvements are significant for an important problem. The reviewers agreed that this paper should be accepted as a spotlight. | train | [
"CdPuGYGptPW",
"YdY_7IrcI5M",
"GfW5-9R43v2",
"_9hZ2whfm-a",
"4Eu9x1AAk2V",
"6qnIjfse_1z",
"9HKZc9bmIVp",
"MS8fa_6KPP",
"8E53JouPCmD",
"maqxlx5lGgg",
"y_OmwIdLMCi",
"_BPyv31wUC3",
"wR6LtSf3ghJ"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your suggestions! In the next revision, we will add a plot of the number of discretization points vs convergence of the algorithm and compare it previous work.",
" Thank you for clarifying. I have no other concerns. Keep up the great work :-)",
" You are right in asking the question about which ob... | [
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
6,
8,
9
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"6qnIjfse_1z",
"GfW5-9R43v2",
"_9hZ2whfm-a",
"MS8fa_6KPP",
"nips_2021_GSXEx6iYd0",
"9HKZc9bmIVp",
"4Eu9x1AAk2V",
"wR6LtSf3ghJ",
"_BPyv31wUC3",
"y_OmwIdLMCi",
"nips_2021_GSXEx6iYd0",
"nips_2021_GSXEx6iYd0",
"nips_2021_GSXEx6iYd0"
] |
nips_2021_qz0MLeaTP1C | Coresets for Classification – Simplified and Strengthened | Tung Mai, Cameron Musco, Anup Rao | accept | After the rebuttal phase, all reviewers see the merits of the paper and this coincides with
my own impressions. The paper is worth publishing.
| train | [
"hysVWnsrT7",
"dbTwOtKzRzA",
"f_UmtfpObQ3",
"peYu0bixQEA",
"QJsB9ST4wGz",
"5QZkHL-9jg8",
"BL5hAYb6a2R",
"H3J88YfqNP4",
"XsdY7ezENla"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies construction of coresets for training linear classifiers with objective functions such as logistic loss and hinge loss. The coresets they construct are of size $d\\mu_y(X)^2/\\epsilon^2$ upto polylogarithmic factors where $\\mu_y$ is the parameter that denotes the maximum ratio of the correctly ... | [
7,
-1,
-1,
-1,
-1,
-1,
7,
8,
8
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nips_2021_qz0MLeaTP1C",
"peYu0bixQEA",
"hysVWnsrT7",
"XsdY7ezENla",
"BL5hAYb6a2R",
"H3J88YfqNP4",
"nips_2021_qz0MLeaTP1C",
"nips_2021_qz0MLeaTP1C",
"nips_2021_qz0MLeaTP1C"
] |
nips_2021_UDe_F-4EeHd | Sequential Algorithms for Testing Closeness of Distributions | Aadil Oufkir, Omar Fawzi, Nicolas Flammarion, Aurélien Garivier | accept | The paper provides new upper and lower bounds on the sample complexity of testing closeness of distributions, which together highlight the advantages of a sequential approach to the problem. Despite some initial concerns over constant-factor separations, the reviewers have reached a consensus that the results of the paper are novel and significant, providing crisp complexity bounds on a fundamental learning-related problem.
In revising the manuscript, the authors should pay close attention to the reviewers' feedback, particularly in regard to the discussion of related literature.
"zPWMc7zic6H",
"iO8XAt4n2Cw",
"y-aThpzXY_t",
"5l5ncuRNuXp",
"SbmhgQ37nL",
"iMmvi1_RvGR",
"bE0m_FIMeTt",
"TYXHajGZDT"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks a lot for your comments. The information is good to see. I'll leave my rating as-is.",
" Thank you for your response and clarifications! It eased my concerns, so that I increased my score by 1.",
"In distribution property testing, one aims to solve decision problems on probability distributions with ... | [
-1,
-1,
6,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
3,
-1,
-1,
-1,
3,
4
] | [
"5l5ncuRNuXp",
"iMmvi1_RvGR",
"nips_2021_UDe_F-4EeHd",
"TYXHajGZDT",
"bE0m_FIMeTt",
"y-aThpzXY_t",
"nips_2021_UDe_F-4EeHd",
"nips_2021_UDe_F-4EeHd"
] |
nips_2021_FFtcBBVIg1T | Overlapping Spaces for Compact Graph Representations | Various non-trivial spaces are becoming popular for embedding structured data such as graphs, texts, or images. Following spherical and hyperbolic spaces, more general product spaces have been proposed. However, searching for the best configuration of a product space is a resource-intensive procedure, which reduces the practical applicability of the idea. We generalize the concept of product space and introduce an overlapping space that does not have the configuration search problem. The main idea is to allow subsets of coordinates to be shared between spaces of different types (Euclidean, hyperbolic, spherical). As a result, we often need fewer coordinates to store the objects. Additionally, we propose an optimization algorithm that automatically learns the optimal configuration. Our experiments confirm that overlapping spaces outperform the competitors in graph embedding tasks with different evaluation metrics. We also perform an empirical analysis in a realistic information retrieval setup, where we compare all spaces by incorporating them into DSSM. In this case, the proposed overlapping space consistently achieves nearly optimal results without any configuration tuning. This allows for reducing training time, which can be essential in large-scale applications.
| accept | This paper had quite a bit of high-quality discussion between reviewers and authors, and is approaching consensus. Multiple reviewers specifically champion the paper and ask for acceptance. The main argument against acceptance, from one of the reviewers is the lack of theoretical analysis. While this is something that would have been nice to have, most of the time, in embedding works, there is no tractable analysis that can be had, without specifying some much more limited version of the problem (e.g., studying distortion bounds for hyperbolic embeddings, but only for trees). So while going off in a theory direction would be interesting, the paper succeeds at its main task.
The paper’s core idea is clever; I can imagine a bunch of work that could take off from seeing this idea. Overall I lean towards acceptance. One thing the authors should work on is some additional clarity in the writing for the final version; this was noted by several reviewers, and I agree.
| train | [
"MS1rKxIbCm",
"7tWy7J15XId",
"ebjV3-SyC_",
"v5ECH_F_f3B",
"Trix8j-nsT",
"hG3XX6ej9z",
"kueVIi13A0",
"-mif-YWA91",
"rkp9I3wAau7",
"ziFaQaGV-dg",
"-Gd7qhAS4WS",
"vuunwFRhcrZ",
"8hi8ffNj2x3",
"L-4tJNaf496",
"DpsKxbtRm4F",
"7Cq56Sj7yB6",
"PF7uNiUe3Ct",
"BcgT2oB3G2",
"I5Ir3MuZdHh",
... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"a... | [
"In this paper, the authors propose to extend product space for embedding structured data. In doing so, the author poses to allow each dimension to possibly belong to multiple signatures t. To optimize, the authors propose to use universal signatures and a regular Adam optimizer. Extensive experiments have been con... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2
] | [
"nips_2021_FFtcBBVIg1T",
"hG3XX6ej9z",
"hG3XX6ej9z",
"vuunwFRhcrZ",
"rkp9I3wAau7",
"kueVIi13A0",
"PF7uNiUe3Ct",
"-Gd7qhAS4WS",
"PF7uNiUe3Ct",
"8hi8ffNj2x3",
"L-4tJNaf496",
"7Cq56Sj7yB6",
"I5Ir3MuZdHh",
"DpsKxbtRm4F",
"2elMcPe-CKW",
"BcgT2oB3G2",
"8KrJ6THkZYn",
"MS1rKxIbCm",
"tLjR... |
nips_2021_81Erd42Wimi | Hyperparameter Tuning is All You Need for LISTA | Learned Iterative Shrinkage-Thresholding Algorithm (LISTA) introduces the concept of unrolling an iterative algorithm and training it like a neural network. It has had great success on sparse recovery. In this paper, we show that adding momentum to intermediate variables in the LISTA network achieves a better convergence rate and, in particular, the network with instance-optimal parameters is superlinearly convergent. Moreover, our new theoretical results lead to a practical approach of automatically and adaptively calculating the parameters of a LISTA network layer based on its previous layers. Perhaps most surprisingly, such an adaptive-parameter procedure reduces the training of LISTA to tuning only three hyperparameters from data: a new record set in the context of the recent advances on trimming down LISTA complexity. We call this new ultra-light weight network HyperLISTA. Compared to state-of-the-art LISTA models, HyperLISTA achieves almost the same performance on seen data distributions and performs better when tested on unseen distributions (specifically, those with different sparsity levels and nonzero magnitudes). Code is available: https://github.com/VITA-Group/HyperLISTA.
| accept | This work makes significant contributions to research into algorithm unrolling, by building upon the LISTA and ALISTA works. After providing a new ALISTA parameterization and introducing a momentum term, the authors both significantly reduce the complexity of tuning relative to prior work and achieve significant performance gains. Most reviewer criticism centered on quality of writing, which was addressed in the rebuttal. Regarding the technical merit of the work, some reviewers offered extremely high praise, arguing strongly for acceptance.
"iD-VjQHzG-u",
"JFW0qloPojZ",
"1E984j7x3i1",
"pYBcjERI_C",
"OAN4XLjQ-l",
"7YyeQf3lKoG",
"OeCf8RQ77QT",
"PKtROqBfVf",
"YL50uQqlWD",
"zn2MqHfO7qC",
"q6UobOIzjfx",
"VaQXswXMFpO",
"-gMtR1-xWtB",
"e6-eyT2WHLC",
"ZEaIDeHMD23",
"s0BB1aCPxUn",
"JlXsSyysnLw",
"Rv6aJcJfDDH",
"d2_1TJMY0Qp",... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"... | [
" Reviewer fXkT has cited a number of strengths and reasons for accepting the paper. Can you take a look at their review and author response and comment on your thoughts? Is there anything you disagree with or feel that they missed?",
"This paper proposed a modification to the ALISTA algorithm which offers a bett... | [
-1,
5,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
-1,
2,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
"7YyeQf3lKoG",
"nips_2021_81Erd42Wimi",
"pYBcjERI_C",
"OeCf8RQ77QT",
"nips_2021_81Erd42Wimi",
"nips_2021_81Erd42Wimi",
"zn2MqHfO7qC",
"YL50uQqlWD",
"JFW0qloPojZ",
"q6UobOIzjfx",
"ZEaIDeHMD23",
"nips_2021_81Erd42Wimi",
"JlXsSyysnLw",
"e70dbvhbyl",
"s0BB1aCPxUn",
"Rv6aJcJfDDH",
"Ow0cci... |
nips_2021_Jyxmk4wUoQV | Foundations of Symbolic Languages for Model Interpretability | Several queries and scores have recently been proposed to explain individual predictions over ML models. Examples include queries based on “anchors”, which are parts of an instance that are sufficient to justify its classification, and “feature-perturbation” scores such as SHAP. Given the need for flexible, reliable, and easy-to-apply interpretability methods for ML models, we foresee the need for developing declarative languages to naturally specify different explainability queries. We do this in a principled way by rooting such a language in a logic called FOIL, which allows for expressing many simple but important explainability queries, and might serve as a core for more expressive interpretability languages. We study the computational complexity of FOIL queries over two classes of ML models often deemed to be easily interpretable: decision trees and more general decision diagrams. Since the number of possible inputs for an ML model is exponential in its dimension, tractability of the FOIL evaluation problem is delicate but can be achieved by either restricting the structure of the models, or the fragment of FOIL being evaluated. We also present a prototype implementation of FOIL wrapped in a high-level declarative language and perform experiments showing that such a language can be used in practice.
| accept | The paper attracted significant discussion among reviewers. The reviewers identified the potential of a declarative unifying framework and rigorous theoretical analysis. The reviewers also highlighted the novelty as a core strength. On the other hand, one reviewer did point out the lack of a convincing empirical study.
The consensus among reviewers was that the fact that the paper drew diverse and strong opinions makes it a great candidate for acceptance, since it has the potential to spur such discussions in the community. Given that all the reviewers came to a consensus that the paper be accepted (even though not every reviewer agreed with the objectives of the paper), the job of the AC was an easy one.
We hope that the authors will take the comments of all the reviewers very seriously and incorporate them into the final version.
| train | [
"9Frv3EiLX17",
"WpxxWCrIc_",
"dOeA-GoWvDI",
"PGrUB8FtMs7",
"v3Pw4TUuEB7",
"Yfv2wkcPpSO",
"9Rf61lsgGZ1",
"pho2_JcXbZb",
"K3LFzSWraL"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" [Comment] “I am not very convinced that this is a definitive language which will be used in the field, since it is basically a low-level language. For example the definition of the notion MATCH (line 210) is a roundabout way of saying something quite simple, i.e. that x and y agree on features in S. However, the ... | [
-1,
-1,
-1,
-1,
-1,
7,
8,
3,
7
] | [
-1,
-1,
-1,
-1,
-1,
2,
3,
4,
4
] | [
"K3LFzSWraL",
"pho2_JcXbZb",
"9Rf61lsgGZ1",
"Yfv2wkcPpSO",
"nips_2021_Jyxmk4wUoQV",
"nips_2021_Jyxmk4wUoQV",
"nips_2021_Jyxmk4wUoQV",
"nips_2021_Jyxmk4wUoQV",
"nips_2021_Jyxmk4wUoQV"
] |
nips_2021_G8A_Nl0yim6 | Bridging Offline Reinforcement Learning and Imitation Learning: A Tale of Pessimism | Paria Rashidinejad, Banghua Zhu, Cong Ma, Jiantao Jiao, Stuart Russell | accept | The authors show novel and interesting theoretical results that provide scale-sensitive minimax upper and lower bounds for off-policy learning, which correctly quantify the difficulty of the problem as the behavior policy becomes near-optimal. I agree with the reviewers that the contributions are significant and substantive.
"4P_YYOcPYY",
"2UrJvd-EQf0",
"BA5qFfU8QNV",
"uYfP1GlreL",
"aIsFU5pnYNw",
"LfLZjtf_Zg",
"ShlMcp5Rk0u",
"Pwik777T6nJ",
"kyOLZSAihtT",
"Oq-BTA5_n_W",
"l66yyrNpsKe",
"8JMFHpqlOHR"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for providing further feedback and suggestions for improving the quality and clarity of the paper. We are happy that the reviewer is satisfied with our answers. We agree with the reviewer's suggestions and will include further discussion on going beyond the tabular setting and our decomposit... | [
-1,
6,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2
] | [
"LfLZjtf_Zg",
"nips_2021_G8A_Nl0yim6",
"kyOLZSAihtT",
"nips_2021_G8A_Nl0yim6",
"ShlMcp5Rk0u",
"Oq-BTA5_n_W",
"uYfP1GlreL",
"8JMFHpqlOHR",
"2UrJvd-EQf0",
"l66yyrNpsKe",
"nips_2021_G8A_Nl0yim6",
"nips_2021_G8A_Nl0yim6"
] |
nips_2021_MAorPaLqam_ | Impression learning: Online representation learning with synaptic plasticity | Understanding how the brain constructs statistical models of the sensory world remains a longstanding challenge for computational neuroscience. Here, we derive an unsupervised local synaptic plasticity rule that trains neural circuits to infer latent structure from sensory stimuli via a novel loss function for approximate online Bayesian inference. The learning algorithm is driven by a local error signal computed between two factors that jointly contribute to neural activity: stimulus drive and internal predictions --- the network's 'impression' of the stimulus. Physiologically, we associate these two components with the basal and apical dendrites of pyramidal neurons, respectively. We show that learning can be implemented online, is capable of capturing temporal dependencies in continuous input streams, and generalizes to hierarchical architectures. Furthermore, we demonstrate both analytically and empirically that the algorithm is more data-efficient than a three-factor plasticity alternative, enabling it to learn statistics of high-dimensional, naturalistic inputs. Overall, the model provides a bridge from mechanistic accounts of synaptic plasticity to algorithmic descriptions of unsupervised probabilistic learning and inference.
| accept | This paper introduces an unsupervised local plasticity rule to perform approximate online Bayesian inference of latent structure from sensory stimuli. All reviewers agree on the novelty and significance of the paper. However, reviewers brought up concerns about biological plausibility of some elements of the proposed algorithm, which should be addressed. | train | [
"LEs4hvFnIhv",
"CVCmCPqAunB",
"cIVdhLpZFzb",
"T3J37Mrw-eF",
"NvuonVzqhb7",
"DMK2EWhBSTY",
"EOWuh69KcLU",
"LY1ueK7FUF6",
"0iHP_Ikgjpy",
"6kS1RjW--AY"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" OK. Thanks for the explanation. I will maintain my \"good paper -- accept\" score.",
" Thank you for the detailed reply. I am happy with the answers provided, but will maintain my score as the biological plausibility and link to experimental observations is still not clear to me.",
" Q. It would be nice to m... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
2
] | [
"DMK2EWhBSTY",
"T3J37Mrw-eF",
"6kS1RjW--AY",
"EOWuh69KcLU",
"0iHP_Ikgjpy",
"LY1ueK7FUF6",
"nips_2021_MAorPaLqam_",
"nips_2021_MAorPaLqam_",
"nips_2021_MAorPaLqam_",
"nips_2021_MAorPaLqam_"
] |
nips_2021_vLPqnPf9k0 | How Well do Feature Visualizations Support Causal Understanding of CNN Activations? | Roland Zimmermann, Judy Borowski, Robert Geirhos, Matthias Bethge, Thomas Wallis, Wieland Brendel | accept | This paper investigates whether feature visualizations, such as activation maximization, provide a benefit to humans in predicting model performance over simpler approaches such as the maximally-activating data samples. By performing a set of human experiments, the authors show that a) there is a benefit to these visualizations over no visualizations, but that b) there is no difference between these approaches and the use of natural images. Reviewers all found the experiments to be very carefully designed and rigorous, and praised the overall clarity of the paper. There is certainly a need for more thorough and careful examination of the interpretability methods that are widely used, and I think this paper will prove impactful both because of its observations and as a model for how to conduct such evaluations. I recommend it be accepted as a spotlight. | train | [
"iHOZOm25QFs",
"d5KBX34QLen",
"3Kesnkr7Ljt",
"irIqKmwrV3A",
"KMNH2vC6Ze6",
"39XZrhBi0vE",
"5U-aPceIvsW",
"0tbmKZ-yN5F",
"0bRFOcXdb91",
"Zz-_kU1ogy7",
"SbS1KXQOnI",
"erwhW_0q-fS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper put forward an objective psychophysical task to evaluate whether feature visualization could help humans understanding CNN activations. The authors recorded the details of their objective psychophysical task and adopted lots of baselines that ensure their experiments' preciseness. They also analyze the ... | [
6,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_vLPqnPf9k0",
"3Kesnkr7Ljt",
"irIqKmwrV3A",
"SbS1KXQOnI",
"Zz-_kU1ogy7",
"nips_2021_vLPqnPf9k0",
"0bRFOcXdb91",
"nips_2021_vLPqnPf9k0",
"39XZrhBi0vE",
"erwhW_0q-fS",
"iHOZOm25QFs",
"nips_2021_vLPqnPf9k0"
] |
nips_2021_xZvuqfT6Otj | Fixes That Fail: Self-Defeating Improvements in Machine-Learning Systems | Machine-learning systems such as self-driving cars or virtual assistants are composed of a large number of machine-learning models that recognize image content, transcribe speech, analyze natural language, infer preferences, rank options, etc. Models in these systems are often developed and trained independently, which raises an obvious concern: Can improving a machine-learning model make the overall system worse? We answer this question affirmatively by showing that improving a model can deteriorate the performance of downstream models, even after those downstream models are retrained. Such self-defeating improvements are the result of entanglement between the models in the system. We perform an error decomposition of systems with multiple machine-learning models, which sheds light on the types of errors that can lead to self-defeating improvements. We also present the results of experiments which show that self-defeating improvements emerge in a realistic stereo-based detection system for cars and pedestrians.
| accept | Three reviewers gave favorable scores, one borderline, and one negative (ARBJ). The last reviewer engaged in a productive discussion with the authors, so we can expect the final paper to be improved. The paper addresses an important question and has been refined a lot since it was first submitted to ICML 2021, so it deserves to be published now.
The paper addresses a problem that is important and will be of interest to a broad audience. The issue is that in a system that uses multiple machine learning models, improving the accuracy of some component models may lead to worse results for the system as a whole. The paper explains the problem and some formalizations of it. Although solutions are not provided, illustrative examples and cases from the real world are described.
"1eZkHm9LUhW",
"PYfJXL4XRM2",
"z35Wqbf0p6",
"VYuqhSRGXS6",
"7BriqTz2WkE",
"bZqY1FDVJzC",
"9fA7x_4ONtF",
"_GdJDqYx36M",
"JXruQ5OUZKd",
"HUPGBqYXtPq",
"avDKu8uAQCj",
"jBQBMHJ4c1V",
"1zbbPSIkktz",
"yGU97jLsC0G",
"oJzQD7u787Q",
"lvbSXPEc1VM",
"lISCOhlOKmr",
"NvGFT8Fp83X",
"I7zSsFQT31... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_re... | [
"This paper studies the problem of \"self-defeating improvement\" (improving an upstream ML model deteriorates a downstream ML model in the same system). Leveraging Bayes error decomposition, it decomposes the error of a system with multiple ML models into: (excess) upstream error, upstream compatibility error (whe... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"nips_2021_xZvuqfT6Otj",
"1zbbPSIkktz",
"jBQBMHJ4c1V",
"7BriqTz2WkE",
"bZqY1FDVJzC",
"9fA7x_4ONtF",
"_GdJDqYx36M",
"JXruQ5OUZKd",
"HUPGBqYXtPq",
"jBQBMHJ4c1V",
"1eZkHm9LUhW",
"AJPbJJaAQ3J",
"I7zSsFQT31V",
"NvGFT8Fp83X",
"lISCOhlOKmr",
"nips_2021_xZvuqfT6Otj",
"nips_2021_xZvuqfT6Otj",... |
nips_2021_HxuQiq1SnyS | Coarse-to-fine Animal Pose and Shape Estimation | Most existing animal pose and shape estimation approaches reconstruct animal meshes with a parametric SMAL model. This is because the low-dimensional pose and shape parameters of the SMAL model makes it easier for deep networks to learn the high-dimensional animal meshes. However, the SMAL model is learned from scans of toy animals with limited pose and shape variations, and thus may not be able to represent highly varying real animals well. This may result in poor fittings of the estimated meshes to the 2D evidences, e.g. 2D keypoints or silhouettes. To mitigate this problem, we propose a coarse-to-fine approach to reconstruct 3D animal mesh from a single image. The coarse estimation stage first estimates the pose, shape and translation parameters of the SMAL model. The estimated meshes are then used as a starting point by a graph convolutional network (GCN) to predict a per-vertex deformation in the refinement stage. This combination of SMAL-based and vertex-based representations benefits from both parametric and non-parametric representations. We design our mesh refinement GCN (MRGCN) as an encoder-decoder structure with hierarchical feature representations to overcome the limited receptive field of traditional GCNs. Moreover, we observe that the global image feature used by existing animal mesh reconstruction works is unable to capture detailed shape information for mesh refinement. We thus introduce a local feature extractor to retrieve a vertex-level feature and use it together with the global feature as the input of the MRGCN. We test our approach on the StanfordExtra dataset and achieve state-of-the-art results. Furthermore, we test the generalization capacity of our approach on the Animal Pose and BADJA datasets. Our code is available at the project website.
| accept | The paper received 4 positive final ratings: 7, 6, 6, 6.
On the positive side, the reviewers appreciated the importance of the problem, an overall meaningful approach, and strong empirical performance. The main remaining concern was around the novelty of individual components, but in the end the reviewers agreed that the combination of those is sufficiently novel and effective in the given setting. The remaining concerns were mostly around clarity, gaps in related works, and somewhat limited evaluation (dogs only). Some of those were addressed in the rebuttal, as acknowledged by the reviewers.
The final recommendation is to accept as a poster. The authors are highly encouraged to incorporate all feedback from the reviewers in the camera ready version of the manuscript. | train | [
"leRYk_7C9rM",
"SvMiC2u4b97",
"FOvpM2BT8IR",
"OFnwpP-vXsX",
"yx2rzyeCovF",
"urF-21vbTow",
"ADtVVf4ejvK",
"H2NLflVG0G-",
"-l3guhCmoJU",
"uhymaw9jViJ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The rebuttal covers all my concerns. I agree with other reviewers that the novelty seems limited and experiments are constrained to dogs. If the authors could improve the experiments, I think it will be a good paper. For now, I keep my rating. ",
" Thank you to the authors for their detailed response, particula... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
4
] | [
"yx2rzyeCovF",
"FOvpM2BT8IR",
"uhymaw9jViJ",
"-l3guhCmoJU",
"H2NLflVG0G-",
"ADtVVf4ejvK",
"nips_2021_HxuQiq1SnyS",
"nips_2021_HxuQiq1SnyS",
"nips_2021_HxuQiq1SnyS",
"nips_2021_HxuQiq1SnyS"
] |
nips_2021_Tn0PnRY877g | Meta-Learning Sparse Implicit Neural Representations | Implicit neural representations are a promising new avenue of representing general signals by learning a continuous function that, parameterized as a neural network, maps the domain of a signal to its codomain; the mapping from spatial coordinates of an image to its pixel values, for example. Being capable of conveying fine details in a high dimensional signal, unboundedly of its domain, implicit neural representations ensure many advantages over conventional discrete representations. However, the current approach is difficult to scale for a large number of signals or a data set, since learning a neural representation---which is parameter heavy by itself---for each signal individually requires a lot of memory and computations. To address this issue, we propose to leverage a meta-learning approach in combination with network compression under a sparsity constraint, such that it renders a well-initialized sparse parameterization that evolves quickly to represent a set of unseen signals in the subsequent training. We empirically demonstrate that meta-learned sparse neural representations achieve a much smaller loss than dense meta-learned models with the same number of parameters, when trained to fit each signal using the same number of optimization steps.
| accept | The paper proposes to learn functional representations of signals such as images using meta learning and network pruning, so that an initial sparse network can be perturbed to fit each training or testing image. The proposed method is novel, simple and clean, and it can be useful in applications based on functional representations. The experiments are solid and thorough, especially with the additional experiments performed during the rebuttal period.
After rebuttal, this paper received three final ratings of 7, 7, 7 (the reviewer who initially assigned a rating of 5 upgraded the rating to 7, but was unable to make the change formally due to a technical error). The rebuttals have addressed all the concerns of the reviewers.
This paper makes a valuable contribution and can be accepted. | train | [
"X4JgozSXW5t",
"fAM4Kou18j",
"ZlipG3ZjJb",
"tYVXIzzv0wU",
"LzNRms933qF",
"4Ao_peupGx-",
"7RQ_X_Bc6OX",
"slEnJVxMrkO",
"Uc5Hd4jGpe-"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper describes a method for representing sets of signals using coordinate-based neural representations (or INRs) by combining a meta learning approach for learning the initial weights with a network pruning approach for ensuring that this initialization is as sparse and memory-efficient as possible. The paper... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_Tn0PnRY877g",
"ZlipG3ZjJb",
"7RQ_X_Bc6OX",
"nips_2021_Tn0PnRY877g",
"X4JgozSXW5t",
"Uc5Hd4jGpe-",
"slEnJVxMrkO",
"nips_2021_Tn0PnRY877g",
"nips_2021_Tn0PnRY877g"
] |
nips_2021_vllRjSTWcLs | Rethinking Space-Time Networks with Improved Memory Coverage for Efficient Video Object Segmentation | This paper presents a simple yet effective approach to modeling space-time correspondences in the context of video object segmentation. Unlike most existing approaches, we establish correspondences directly between frames without re-encoding the mask features for every object, leading to a highly efficient and robust framework. With the correspondences, every node in the current query frame is inferred by aggregating features from the past in an associative fashion. We cast the aggregation process as a voting problem and find that the existing inner-product affinity leads to poor use of memory with a small (fixed) subset of memory nodes dominating the votes, regardless of the query. In light of this phenomenon, we propose using the negative squared Euclidean distance instead to compute the affinities. We validated that every memory node now has a chance to contribute, and experimentally showed that such diversified voting is beneficial to both memory efficiency and inference accuracy. The synergy of correspondence networks and diversified voting works exceedingly well, achieves new state-of-the-art results on both DAVIS and YouTubeVOS datasets while running significantly faster at 20+ FPS for multiple objects without bells and whistles.
| accept | This paper has four positive reviews (6,6,6,7). While the scores are close to the borderline, the reviewers are consistent, and appreciate the same points in the paper. It should be accepted to NeurIPS.
| train | [
"DXZGGC-mlN-",
"EEe8s8XDzXB",
"X0ZmrmSp73V",
"AmCkaAbgxz",
"tI1LUKXyf7",
"sw4YwxcS9A",
"CDlCyYkecd",
"OSD6V2xA8_A"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the review and questions. We will address the questions one by one:\n\n**1\\. Technical contributions.** We would like to restate that simplicity is one of our goals, and our careful redesign of core modules in the popular STM framework is instrumental to future works in this direction. With our to-... | [
-1,
-1,
-1,
-1,
7,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
4,
5,
5,
5
] | [
"OSD6V2xA8_A",
"sw4YwxcS9A",
"CDlCyYkecd",
"tI1LUKXyf7",
"nips_2021_vllRjSTWcLs",
"nips_2021_vllRjSTWcLs",
"nips_2021_vllRjSTWcLs",
"nips_2021_vllRjSTWcLs"
] |
nips_2021_aLE2sEtMNXv | Sparse Spiking Gradient Descent | Nicolas Perez-Nieves, Dan Goodman | accept | Dear authors,
congratulations on your submission being accepted at NeurIPS. The reviewers appreciated the sparse BP algorithm for speeding up training of spiking neural networks, and agree that it would be of interest to the NeurIPS audience. However, they also included feedback and constructive criticism, which we would ask you to incorporate in your final version, in particular regarding limitations of the approach on large data sets due to memory limitations (linear scaling with number of steps). It would be ideal if you are able to include stronger empirical results on more complex data-sets in the final version.
Best, your AC | train | [
"Saz8C_wPwfn",
"R8et-kkIxE",
"UucOXM0t5f",
"rk9HlrNXOfc",
"O2eVwxOUyff",
"efkB39ff2xO",
"HkQOPrOTCNT",
"lLDynpLcK7D",
"TmGwyf4TXRb",
"kiyVxt7LLmc"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work introduces a new method to train spiking neural networks using far fewer gradients during the backwards back-propagation pass. By only retaining the potential of so-called \"active\" neurons (those within a threshold epsilon of spiking) and learning through a surrogate function, these networks of spikin... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
"nips_2021_aLE2sEtMNXv",
"O2eVwxOUyff",
"kiyVxt7LLmc",
"TmGwyf4TXRb",
"Saz8C_wPwfn",
"lLDynpLcK7D",
"Saz8C_wPwfn",
"nips_2021_aLE2sEtMNXv",
"nips_2021_aLE2sEtMNXv",
"nips_2021_aLE2sEtMNXv"
] |
nips_2021_NJS8kp15zzH | Rethinking Calibration of Deep Neural Networks: Do Not Be Afraid of Overconfidence | Capturing accurate uncertainty quantification of the prediction from deep neural networks is important in many real-world decision-making applications. A reliable predictor is expected to be accurate when it is confident about its predictions and indicate high uncertainty when it is likely to be inaccurate. However, modern neural networks have been found to be poorly calibrated, primarily in the direction of overconfidence. In recent years, there has been a surge of research on model calibration leveraging implicit or explicit regularization techniques during training, which achieve good calibration by avoiding overconfident outputs. In our study, we empirically found that although the predictions obtained from these regularized models are better calibrated, they suffer from not being as calibratable, namely, it is harder to further calibrate their predictions with post-hoc calibration methods like temperature scaling and histogram binning. We conduct a series of empirical studies showing that overconfidence may not hurt final calibration performance if post-hoc calibration is allowed; rather, the penalty on confident outputs will compress the room for potential improvement in the post-hoc calibration phase. Our experimental findings point out a new direction to improve calibration of DNNs by considering main training and post-hoc calibration as a unified framework.
| accept | This paper makes the claim that although deep neural networks tend to be overconfident and poorly calibrated after training, they are “calibratable” in that post-hoc calibration methods like Platt or temperature scaling can recover good calibration. The authors argue that many regularization methods that have been proposed to improve calibration during training actually result in less calibratable classifiers. Experiments are run using a ResNet-32 on SVHN, CIFAR10/100 and a FCNN on 20 newsgroups.
Weaknesses
- All evaluation is done using the standard in-distribution validation and test sets.
- The only metric that is evaluated is expected calibration error, which is not a proper scoring rule. It’s well established that ECE can be gamed and should not be used as the sole metric for comparison of models.
- This paper would be much stronger if it included more difficult / modern datasets and models. A start would be imagenet with corresponding shifted / OOD variants like imagenet A, C, V2. In cases where the data distribution changes we know, TS doesn’t perform well. Thus is this going to provide the right recommendation under those circumstances? The claims are quite broad given the scope of the experiments.
- Can we expect to make general claims over all of deep learning given results of a specific ResNet architecture on 32x32 images and a FCNN on 20-newsgroups?
- There are some concerns about showing results that were calibrated using the test data
Strengths
- The reviewers found that the paper was interesting to read and well written.
- They found some of the analysis especially insightful (e.g. "Especially the histogram of logits that shows that regularizers that improve calibration lead to "squared together" logits which makes post-training calibration more difficult and the concept of "learned epoch" is beautiful").
- The arguments were found to be sound and validated by the empirical results.
Overall, this paper presents an interesting observation, but the claims seem somewhat too broad and sweeping given the content of the empirical evaluation. One might expect to see some evaluation of these models on cases with changes to the data distribution, more rigorous metrics (e.g. NLL), more varied architectures, and some analysis of epistemic vs aleatoric uncertainty. Nevertheless, the reviewers' consensus is that the strengths outweigh the weaknesses of the paper and that these results will be useful and interesting to the community. Thus the recommendation is to accept.
Nit: Bayesian DNNs “which indirectly infer prediction uncertainty through weight uncertainties” is a mischaracterization of BNNs. They directly infer prediction uncertainty by marginalizing over models, producing a distribution of predictions.
| train | [
"8sDuZJnTdx9",
"7e6usggooOL",
"6XYspH75iwg",
"oZMxPAHUjWr",
"IIUX8qBzzqj",
"-GLDyY4QIoJ",
"qfK3R3oGqVQ",
"5RxllL8-dKr",
"UhWMzBsrfx",
"S2pRnf7rCs",
"-c1NDgEtmcN",
"niuiEQHGIZ6"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
" Thank you again for the valuable comments. We will check the manuscript again and add the new experimental results in the revised paper.",
" Thanks for the detailed clarification and additional results. I'm glad the authors have directly responded my questions.\n\nI'm increasing the rating to 6 here.\n\nThanks.... | [
-1,
-1,
6,
-1,
-1,
7,
-1,
7,
-1,
-1,
-1,
-1
] | [
-1,
-1,
4,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1
] | [
"7e6usggooOL",
"-c1NDgEtmcN",
"nips_2021_NJS8kp15zzH",
"qfK3R3oGqVQ",
"UhWMzBsrfx",
"nips_2021_NJS8kp15zzH",
"niuiEQHGIZ6",
"nips_2021_NJS8kp15zzH",
"S2pRnf7rCs",
"5RxllL8-dKr",
"6XYspH75iwg",
"-GLDyY4QIoJ"
] |
nips_2021_kuK2VARZGnI | Towards Efficient and Effective Adversarial Training | The vulnerability of Deep Neural Networks to adversarial attacks has spurred immense interest towards improving their robustness. However, present state-of-the-art adversarial defenses involve the use of 10-step adversaries during training, which renders them computationally infeasible for application to large-scale datasets. While the recent single-step defenses show promising direction, their robustness is not on par with multi-step training methods. In this work, we bridge this performance gap by introducing a novel Nuclear-Norm regularizer on network predictions to enforce function smoothing in the vicinity of data samples. While prior works consider each data sample independently, the proposed regularizer uses the joint statistics of adversarial samples across a training minibatch to enhance optimization during both attack generation and training, obtaining state-of-the-art results amongst efficient defenses. We achieve further gains by incorporating exponential averaging of network weights over training iterations. We finally introduce a Hybrid training approach that combines the effectiveness of a two-step variant of the proposed defense with the efficiency of a single-step defense. We demonstrate superior results when compared to multi-step defenses such as TRADES and PGD-AT as well, at a significantly lower computational cost.
| accept | The paper introduces Nuclear-Norm Adversarial Training (NuAT) by imposing a rank minimization constraint on the oscillation of function values across a training minibatch. All reviewers thought the paper was above the acceptance threshold. I suggest the authors take the reviewers' suggestions into account when revising the final version of their paper.
"IbkogDWMIqJ",
"fm4FVZPETNV",
"rmkKp1Hjxg5",
"LRsN4DPMDFS",
"chFvp9REM9G",
"s-P-Aa-U8LQ",
"CZvMpQ3I1b",
"wqN9O5H0UbQ",
"oQ100-nDx09"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" I think this submission is publication-worthy in NeurIPS 2021.\nI will keep my score.\n\nThanks again to the author for the extensive response during the rebuttal.\n",
" We sincerely thank all reviewers for their time in going through our response and for the post-rebuttal comments. \n\nWe integrated our code w... | [
-1,
-1,
7,
7,
-1,
-1,
-1,
-1,
7
] | [
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
3
] | [
"fm4FVZPETNV",
"chFvp9REM9G",
"nips_2021_kuK2VARZGnI",
"nips_2021_kuK2VARZGnI",
"wqN9O5H0UbQ",
"rmkKp1Hjxg5",
"LRsN4DPMDFS",
"oQ100-nDx09",
"nips_2021_kuK2VARZGnI"
] |
nips_2021_rYhBGWYm6AU | Intriguing Properties of Contrastive Losses | We study three intriguing properties of contrastive learning. First, we generalize the standard contrastive loss to a broader family of losses, and we find that various instantiations of the generalized loss perform similarly under the presence of a multi-layer non-linear projection head. Second, we study if instance-based contrastive learning (with a global image representation) can learn well on images with multiple objects present. We find that meaningful hierarchical local features can be learned despite the fact that these objectives operate on global instance-level features. Finally, we study the phenomenon of feature suppression among competing features shared across augmented views, such as "color distribution" vs "object class". We construct datasets with explicit and controllable competing features, and show that, for contrastive learning, a few bits of easy-to-learn shared features can suppress, and even fully prevent, the learning of other sets of competing features. In scenarios where there are multiple objects in an image, the dominant object would suppress the learning of smaller objects. Existing contrastive learning methods critically rely on data augmentation to favor certain sets of features over others, and could suffer from learning saturation for scenarios where existing augmentations cannot fully address the feature suppression. This poses open challenges to existing contrastive learning techniques.
| accept | The paper studies three properties of contrastive learning. Reviewers noted that although these properties were demonstrated experimentally, the paper did not explain why they arise; they nevertheless thought the findings can guide better algorithm design in contrastive learning and are of interest to a wide audience. Reviewers agreed that this is a good paper. Please include in the final manuscript the additional explanation provided during the rebuttal. Accept | train | [
"Rxa4tf9_hn",
"AtssxAvMSG",
"YS22ZiJf1FR",
"qKai2tJzqK8",
"wH8UmkwxGKD",
"ZJ7V5y_VYEV",
"M1KKPao0CO",
"WyMsKL9IsJ9",
"-rJ26c5y6xQ",
"6d4Biv_vLhp",
"odBLoaU_zQe",
"Y-3T5n5KMcU"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies properties of the contrastive losses. The contributions are three-folds:\nFirst, the authors find that when using a deep multi-layer nonlinear projection head, the choice of objectives does not matter within a family of generalized losses, the same goes for the choice of batch size. The authors ... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"nips_2021_rYhBGWYm6AU",
"WyMsKL9IsJ9",
"M1KKPao0CO",
"-rJ26c5y6xQ",
"ZJ7V5y_VYEV",
"6d4Biv_vLhp",
"Y-3T5n5KMcU",
"odBLoaU_zQe",
"Rxa4tf9_hn",
"nips_2021_rYhBGWYm6AU",
"nips_2021_rYhBGWYm6AU",
"nips_2021_rYhBGWYm6AU"
] |
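For reference alongside the abstract above, a compact PyTorch sketch of the standard contrastive (NT-Xent) objective that the paper generalizes; the generalized loss family and the multi-layer nonlinear projection head are described in the paper itself, and the temperature `tau` here is an illustrative choice.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    # z1[i] and z2[i] are projection-head outputs for two augmented views of
    # the same image; every other sample in the batch acts as a negative.
    z = torch.cat([F.normalize(z1, dim=1), F.normalize(z2, dim=1)], dim=0)
    sim = z @ z.t() / tau                                   # (2B, 2B) logits
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(mask, float('-inf'))              # exclude self-pairs
    B = z1.size(0)
    targets = torch.cat([torch.arange(B, 2 * B),            # positive of i is i+B
                         torch.arange(0, B)]).to(sim.device)
    return F.cross_entropy(sim, targets)
```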
nips_2021_tfBBt_q4nHT | Detecting Moments and Highlights in Videos via Natural Language Queries | Detecting customized moments and highlights from videos given natural language (NL) user queries is an important but under-studied topic. One of the challenges in pursuing this direction is the lack of annotated data. To address this issue, we present the Query-based Video Highlights (QVHighlights) dataset. It consists of over 10,000 YouTube videos, covering a wide range of topics, from everyday activities and travel in lifestyle vlog videos to social and political activities in news videos. Each video in the dataset is annotated with: (1) a human-written free-form NL query, (2) relevant moments in the video w.r.t. the query, and (3) five-point scale saliency scores for all query-relevant clips. This comprehensive annotation enables us to develop and evaluate systems that detect relevant moments as well as salient highlights for diverse, flexible user queries. We also present a strong baseline for this task, Moment-DETR, a transformer encoder-decoder model that views moment retrieval as a direct set prediction problem, taking extracted video and query representations as inputs and predicting moment coordinates and saliency scores end-to-end. While our model does not utilize any human prior, we show that it performs competitively when compared to well-engineered architectures. With weakly supervised pretraining using ASR captions, Moment-DETR substantially outperforms previous methods. Lastly, we present several ablations and visualizations of Moment-DETR. Data and code is publicly available at https://github.com/jayleicn/moment_detr.
| accept | The authors responded fully to the reviewers' concerns (new training, new baselines ...) and constructively to the ethics reviews. Following the rebuttal, all reviewers recommend acceptance.
Of course, the authors should update the paper and dataset as stated.
Congratulations! | train | [
"WPDWBOn--Wu",
"CPYUigiH6v",
"WuhrLUUgULE",
"OrkzW3Ry97u",
"OuF3Rdbg-HA",
"DTAMKRzHei",
"8TgC3v7tVD",
"AjI2INR2y92",
"F3PjM_YVeqH",
"wmVfuBlfDpe",
"ze6cMb5DjE6",
"SlQ4zH_l9G-",
"SlMc6R9f6L",
"aoT6ot4EuTk",
"HafyMJiGgOw"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your time for making this detailed feedback. We are happy that our response has addressed most of your concerns. For your remaining questions:\n\n**Bias in the dataset**: We believe the small improvement from adding Slow-Fast features may also be because of the already very impressive representation... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"OrkzW3Ry97u",
"nips_2021_tfBBt_q4nHT",
"OuF3Rdbg-HA",
"wmVfuBlfDpe",
"8TgC3v7tVD",
"AjI2INR2y92",
"nips_2021_tfBBt_q4nHT",
"nips_2021_tfBBt_q4nHT",
"HafyMJiGgOw",
"aoT6ot4EuTk",
"CPYUigiH6v",
"SlMc6R9f6L",
"nips_2021_tfBBt_q4nHT",
"nips_2021_tfBBt_q4nHT",
"nips_2021_tfBBt_q4nHT"
] |
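The moment-retrieval task above scores predicted temporal spans against ground truth; a helper of the kind typically used in such evaluations and in set-prediction matching costs is the 1-D temporal IoU below. This is an illustrative utility, not the paper's exact matching cost.

```python
def temporal_iou(pred: tuple, gt: tuple) -> float:
    # Moments are (start, end) timestamps in seconds.
    s1, e1 = pred
    s2, e2 = gt
    inter = max(0.0, min(e1, e2) - max(s1, s2))  # overlap length
    union = max(e1, e2) - min(s1, s2)            # total covered span
    return inter / union if union > 0 else 0.0
```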
nips_2021_w5j80GVGFsr | Stochastic optimization under time drift: iterate averaging, step-decay schedules, and high probability guarantees | We consider the problem of minimizing a convex function that is evolving in time according to unknown and possibly stochastic dynamics. Such problems abound in the machine learning and signal processing literature, under the names of concept drift and stochastic tracking. We provide novel non-asymptotic convergence guarantees for stochastic algorithms with iterate averaging, focusing on bounds valid both in expectation and with high probability. Notably, we show that the tracking efficiency of the proximal stochastic gradient method depends only logarithmically on the initialization quality when equipped with a step-decay schedule.
| accept | All the reviewers were positive about this paper and felt that it presented a good analysis of a less-well-studied problem.
| train | [
"kAAHAYlCl0P",
"zjZq5Hlc-4A",
"MRN-vJTPbo",
"tMbXaE38x33",
"tYt05igei6J",
"BpaFqh5yQN6",
"YIr8WNFBJk5",
"G8zBY7DJfjv",
"aEUpLpJYP0E",
"uspblFfL3Gs",
"MyHrG8iEaMh",
"khzwoz6bmmt"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. I intend to keep my current score and suggest to accept the paper.",
"The paper under review is devoted to the analysis of the stochastic approximation algorithm with the presence of the drift term. The main result is to derive finite-time bounds on the expected mean square error. The... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3
] | [
"G8zBY7DJfjv",
"nips_2021_w5j80GVGFsr",
"tMbXaE38x33",
"YIr8WNFBJk5",
"BpaFqh5yQN6",
"MyHrG8iEaMh",
"khzwoz6bmmt",
"uspblFfL3Gs",
"zjZq5Hlc-4A",
"nips_2021_w5j80GVGFsr",
"nips_2021_w5j80GVGFsr",
"nips_2021_w5j80GVGFsr"
] |
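A minimal sketch of the step-decay schedule with iterate averaging discussed in the abstract above, assuming a generic stochastic gradient oracle `grad(x, t)` for the drifting objective; the epoch length, decay factor, and the proximal step analyzed in the paper are simplified away, so this is a schematic rather than the analyzed algorithm verbatim.

```python
def step_decay(eta0: float, t: int, epoch_len: int, decay: float = 0.5) -> float:
    # Constant stepsize within each "epoch" of iterations, then a geometric cut.
    return eta0 * decay ** (t // epoch_len)

def averaged_sgd(grad, x0, eta0, T, epoch_len):
    # Plain SGD with running iterate averaging; grad(x, t) returns a
    # stochastic gradient of the (possibly time-varying) objective at time t.
    x, xbar = x0, x0
    for t in range(T):
        x = x - step_decay(eta0, t, epoch_len) * grad(x, t)
        xbar = (t * xbar + x) / (t + 1)   # average of the iterates so far
    return xbar
```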
nips_2021_u8HmtBBSVJS | Learning Stable Deep Dynamics Models for Partially Observed or Delayed Dynamical Systems | Learning how complex dynamical systems evolve over time is a key challenge in system identification. For safety critical systems, it is often crucial that the learned model is guaranteed to converge to some equilibrium point. To this end, neural ODEs regularized with neural Lyapunov functions are a promising approach when states are fully observed. For practical applications however, {\em partial observations} are the norm. As we will demonstrate, initialization of unobserved augmented states can become a key problem for neural ODEs. To alleviate this issue, we propose to augment the system's state with its history. Inspired by state augmentation in discrete-time systems, we thus obtain {\em neural delay differential equations}. Based on classical time delay stability analysis, we then show how to ensure stability of the learned models, and theoretically analyze our approach. Our experiments demonstrate its applicability to stable system identification of partially observed systems and learning a stabilizing feedback policy in delayed feedback control.
| accept | Thank you for your submission to NeurIPS. Although there is still some disagreement between reviewers after the rebuttal period, overall the proposed paper seems to be a solid contribution to the emerging field of modeling continuous-time dynamical systems with delays and/or partial observability. The work is perhaps a bit incremental in that it is largely a combination of past approaches, including past work in Neural DDEs and past work on ensuring the stability of dynamical systems specified by Lyapunov functions. But it also combines these past methods in new and rather interesting ways, and I believe is worthy of highlighting at NeurIPS. Thus, while the paper is somewhat borderline, I am recommending it be accepted, with the strong comment that the authors address the reviewer concerns that _can_ be addressed (i.e., not just the general concerns about significance) as they did in the rebuttal.
| train | [
"WIlirpMwI3n",
"4iDWxUzc-A5",
"P5jwag1g7ZS",
"Dc1L_Bnfn14",
"BGaC3h3VJb",
"v8pbXixvFR",
"5Hwf8ERXVV",
"7KBZA5NgIh",
"b3QNEKeRM8A"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper presents a method to learn the dynamics of stable systems. The authors first exploit Neural Delay Differential Equations to model the partial observability and time delay of the system. They then employ Lyapunov-Razumikhin Function to ensure the stability of the learned dynamics. The authors evaluate the... | [
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_u8HmtBBSVJS",
"nips_2021_u8HmtBBSVJS",
"v8pbXixvFR",
"5Hwf8ERXVV",
"b3QNEKeRM8A",
"4iDWxUzc-A5",
"WIlirpMwI3n",
"nips_2021_u8HmtBBSVJS",
"nips_2021_u8HmtBBSVJS"
] |
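To illustrate the state-with-history idea in the abstract above, here is a hypothetical PyTorch module for the right-hand side of a neural delay differential equation that consumes the current observation together with delayed copies of it. The architecture, hidden width, and delay count are assumptions; the Lyapunov-Razumikhin stability constraints from the paper are omitted.

```python
import torch
import torch.nn as nn

class DelayRHS(nn.Module):
    # Right-hand side f(y(t), y(t - tau_1), ..., y(t - tau_k)) of a neural
    # DDE: the partially observed state is augmented with its own history.
    def __init__(self, obs_dim: int, num_delays: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim * (num_delays + 1), hidden),
            nn.Tanh(),
            nn.Linear(hidden, obs_dim),
        )

    def forward(self, y_now: torch.Tensor, y_delayed: torch.Tensor) -> torch.Tensor:
        # y_now: (B, obs_dim); y_delayed: (B, num_delays, obs_dim)
        h = torch.cat([y_now, y_delayed.flatten(1)], dim=1)
        return self.net(h)
```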
nips_2021_6tGP5Z-QbMb | An Uncertainty Principle is a Price of Privacy-Preserving Microdata | John Abowd, Robert Ashmead, Ryan Cumings-Menon, Simson Garfinkel, Daniel Kifer, Philip Leclerc, William Sexton, Ashley Simpson, Christine Task, Pavel Zhuravlev | accept | This paper can be viewed as a response to the anomalies found in the Census data release. The main result is that any private algorithm that is consistent on microdata must incur an error overhead on either "local" queries (think block populations) or a "global" query (think total population). Without a microdata constraint, one can answer these queries with constant error. With the microdata requirement, the authors show that the error must be asymptotically larger for one of these.
This is an interesting paper that throws light on an important phenomenon and explains the cost of the microdata requirement. The authors are encouraged to carefully look at the reviews and address reviewer comments and incorporate relevant clarifications from the rebuttal. I recommend that the paper be accepted. | train | [
"FKhI4i5w7OS",
"PFceW2FdRzh",
"qJbosxscd8w",
"96XGHxf9K1S",
"fMpej5-Vxby",
"Do6C-O7H5b1",
"_WGx3ze62QV",
"rBao1gpFZcn",
"H_3RSWiw5EJ",
"gqyq9lkQm2t"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the explanation. It clears my confusion, and I have changed my rating accordingly. \n\nOne minor comment: If part of the Theorem 3 can be proved as corollaries of Theorem 7.2 in [3], then it is better to credit that part to [3].",
"The paper studies the problem of answering disjoint count queries und... | [
-1,
6,
8,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
5,
4,
-1,
-1,
-1,
-1,
-1,
2,
3
] | [
"fMpej5-Vxby",
"nips_2021_6tGP5Z-QbMb",
"nips_2021_6tGP5Z-QbMb",
"Do6C-O7H5b1",
"PFceW2FdRzh",
"H_3RSWiw5EJ",
"gqyq9lkQm2t",
"qJbosxscd8w",
"nips_2021_6tGP5Z-QbMb",
"nips_2021_6tGP5Z-QbMb"
] |
nips_2021_7wunGXQoC27 | Fairness in Ranking under Uncertainty | Ashudeep Singh, David Kempe, Thorsten Joachims | accept | After discussion, the reviewers were all in favor of accepting the paper. The discussion of related work should be expanded (in particular, KRW seems to ask for a very similar notion of fairness in rankings, although as you point out, the actual setting considered is distinct). Thanks for the strong submission! | train | [
"X2M8vDZ4HOM",
"Wos2q7XDR9S",
"atiToDZAFlP",
"dDcOlvrTZHN",
"EjgwPaHEPiW",
"spHRcZ-V8x3",
"5n-sgHSR2Iv",
"RgJyg6Bc2ia",
"ceUbl91XNV4",
"nnNxNlvm17r",
"9TRXa58M6-x",
"r1oo9XTJq51",
"1KRtl2PHqie"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you again for the thoughtful continuation of the discussion! We agree that the MovieLens dataset is best seen as a stand-in for more consequential problem domains. Complementing the affordances of this dataset was a key motivation for also going through the effort of building and evaluating the method in a ... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4
] | [
"atiToDZAFlP",
"nips_2021_7wunGXQoC27",
"EjgwPaHEPiW",
"RgJyg6Bc2ia",
"spHRcZ-V8x3",
"5n-sgHSR2Iv",
"Wos2q7XDR9S",
"1KRtl2PHqie",
"r1oo9XTJq51",
"9TRXa58M6-x",
"nips_2021_7wunGXQoC27",
"nips_2021_7wunGXQoC27",
"nips_2021_7wunGXQoC27"
] |
nips_2021_in_RVSTqYxK | Generalized Proximal Policy Optimization with Sample Reuse | In real-world decision making tasks, it is critical for data-driven reinforcement learning methods to be both stable and sample efficient. On-policy methods typically generate reliable policy improvement throughout training, while off-policy methods make more efficient use of data through sample reuse. In this work, we combine the theoretically supported stability benefits of on-policy algorithms with the sample efficiency of off-policy algorithms. We develop policy improvement guarantees that are suitable for the off-policy setting, and connect these bounds to the clipping mechanism used in Proximal Policy Optimization. This motivates an off-policy version of the popular algorithm that we call Generalized Proximal Policy Optimization with Sample Reuse. We demonstrate both theoretically and empirically that our algorithm delivers improved performance by effectively balancing the competing goals of stability and sample efficiency.
| accept | After reading each other's reviews and the authors' feedback, the reviewers discussed the merits and flaws of the paper.
The authors' answers resolved most of the doubts raised in the reviews, and the reviewers agree that this paper can be accepted for publication.
I want to congratulate the authors and invite them to modify their paper following the reviewers' suggestions. | train | [
"cBCXamP1LcV",
"MOsTiEOl9_",
"AP6npKGrRbZ",
"Z2ZqEZf1GE",
"DVivi57LwA",
"texVNlczJu0",
"-WpKPC5dYEf",
"hYGRR8lI-8I",
"d9YI6JBPWPU",
"J7YEb5F18si",
"tp8ekD4jKp",
"PSMT21mu_h8",
"6Moy1T4dS-Y",
"Yq7YSATSEs",
"pyhabe9awFs",
"SwSLkdZSYVm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I thank the efforts of the authors on the results of new experiments. I have also read the response to other reviewers, and the explanations is satisfactory to me. I will keep my original score.",
" I want to thank the authors for their detailed response, I am satisfied with how my concerns were addressed. The ... | [
-1,
-1,
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
-1,
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"d9YI6JBPWPU",
"PSMT21mu_h8",
"nips_2021_in_RVSTqYxK",
"6Moy1T4dS-Y",
"nips_2021_in_RVSTqYxK",
"-WpKPC5dYEf",
"hYGRR8lI-8I",
"Yq7YSATSEs",
"J7YEb5F18si",
"tp8ekD4jKp",
"SwSLkdZSYVm",
"pyhabe9awFs",
"AP6npKGrRbZ",
"DVivi57LwA",
"nips_2021_in_RVSTqYxK",
"nips_2021_in_RVSTqYxK"
] |
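Since the abstract above connects the derived policy-improvement bounds to PPO's clipping mechanism, here is the standard clipped surrogate for reference, in PyTorch; GePPO's generalization of the clipping range to reused off-policy samples follows the paper and is not shown.

```python
import torch

def clipped_surrogate(logp_new: torch.Tensor, logp_old: torch.Tensor,
                      adv: torch.Tensor, eps: float = 0.2) -> torch.Tensor:
    # PPO's clipped surrogate objective; maximize this (i.e., minimize its
    # negative) with respect to the parameters producing logp_new.
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * adv
    return torch.min(unclipped, clipped).mean()
```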
nips_2021_lU1tFeUyBTI | Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data | Knowledge distillation~(KD) aims to craft a compact student model that imitates the behavior of a pre-trained teacher in a target domain. Prior KD approaches, despite their gratifying results, have largely relied on the premise that \emph{in-domain} data is available to carry out the knowledge transfer. Such an assumption, unfortunately, in many cases violates the practical setting, since the original training data or even the data domain is often unreachable due to privacy or copyright reasons. In this paper, we attempt to tackle an ambitious task, termed as \emph{out-of-domain} knowledge distillation~(OOD-KD), which allows us to conduct KD using only OOD data that can be readily obtained at a very low cost. Admittedly, OOD-KD is by nature a highly challenging task due to the agnostic domain gap. To this end, we introduce a handy yet surprisingly efficacious approach, dubbed as~\textit{MosaicKD}. The key insight behind MosaicKD lies in the observation that samples from various domains share common local patterns, even though their global semantics may vary significantly; these shared local patterns, in turn, can be re-assembled, analogous to mosaic tiling, to approximate the in-domain data and to further alleviate the domain discrepancy. In MosaicKD, this is achieved through a four-player min-max game, in which a generator, a discriminator, and a student network are collectively trained in an adversarial manner, partially under the guidance of a pre-trained teacher. We validate MosaicKD over classification and semantic segmentation tasks across various benchmarks, and demonstrate that it yields results much superior to the state-of-the-art counterparts on OOD data. Our code is available at \url{https://github.com/zju-vipa/MosaicKD}.
| accept | The paper studies a new setting, called out-of-domain knowledge distillation, where the teacher network and the student network are trained on different datasets. The proposed new setting is interesting and more practical than existing ones. In terms of the solution, the authors propose a mosaic idea to synthesize in-domain data by imitating the local patterns from real-world OOD data. The key technical idea is to use minimax optimization to ensure the synthesized data could fool the discriminator. The solution is novel and technically sound. Reviewers all agreed that the proposed problem is interesting and the solution is novel. Thus, I am recommending acceptance.
Of course, the authors need to carefully revise the manuscript according to reviewers’ comments. The answers to Q3 and Q4 of Reviewer yT4j must be included in the final version.
| train | [
"IeZBQdfapDz",
"9RNkV8m1Cvs",
"cezj8OthJF",
"_CJ3WShWuAb",
"HeOEbM99OmA",
"eTTZPQk5jXa",
"-Cugtvpn3Ml",
"1CvU9Djh9_"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies knowledge distillation in out-of-domain settings and proposes a generative method called MosaicKD to handle the agnostic domain gap between OOD transfer set and original training data. The core idea of MosaicKD lies in the observation that different domain may still share some local patterns, wh... | [
7,
8,
-1,
-1,
-1,
-1,
6,
7
] | [
4,
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_lU1tFeUyBTI",
"nips_2021_lU1tFeUyBTI",
"9RNkV8m1Cvs",
"IeZBQdfapDz",
"1CvU9Djh9_",
"-Cugtvpn3Ml",
"nips_2021_lU1tFeUyBTI",
"nips_2021_lU1tFeUyBTI"
] |
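The distillation term at the core of the four-player game described above is the usual temperature-scaled KL objective; a minimal sketch follows, with the temperature `T` as an illustrative choice. The generator and discriminator players and the mosaic synthesis itself are in the paper.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T: float = 4.0):
    # Student matches the teacher's softened predictions on (here,
    # synthesized) inputs; the T*T factor keeps gradient scale comparable.
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction='batchmean') * (T * T)
```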
nips_2021_zzdf0CirJM4 | Batch Active Learning at Scale | The ability to train complex and highly effective models often requires an abundance of training data, which can easily become a bottleneck in cost, time, and computational resources. Batch active learning, which adaptively issues batched queries to a labeling oracle, is a common approach for addressing this problem. The practical benefits of batch sampling come with the downside of less adaptivity and the risk of sampling redundant examples within a batch -- a risk that grows with the batch size. In this work, we analyze an efficient active learning algorithm, which focuses on the large batch setting. In particular, we show that our sampling method, which combines notions of uncertainty and diversity, easily scales to batch sizes (100K-1M) several orders of magnitude larger than used in previous studies and provides significant improvements in model training efficiency compared to recent baselines. Finally, we provide an initial theoretical analysis, proving label complexity guarantees for a related sampling method, which we show is approximately equivalent to our sampling method in specific settings.
| accept | A majority of reviewers voted for acceptance (including the ones shown as official reviewers and an emergency reviewer that I included last minute). The only reviewer voting for rejection is reviewer YihA, which seems to be too negative without strong justifications given that most of his claims are not supported by the other reviewers. I, therefore, have decided to accept the paper. | train | [
"A1pa19Y1nId",
"0iwNu3b2HRb",
"7S-gNn63O8J",
"OckF5AiZ1mS",
"S70orkMDN6p",
"kvI5VlvkD0",
"RpqhOBvRIbx",
"420DrpU6pCR",
"j_UAA5BRei1",
"U5-C8fQl51",
"DIfZosA-VdA",
"MMhVWjCBJV5",
"6AK5h0iEYIC",
"h4eS2cZHZYD",
"u_bwstO81jf",
"KvV45lvHaMq",
"6ziih3k5axS",
"2a7kawDvk4H",
"Kq3u0ryk-Oe... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"of... | [
" I am satisfied with the authors' response. I keep my score the same (6). ",
" The response addresses some of my issues and the additional experiment is helpful. I would keep my score.",
" We thank the additional reviewer for their valuable comments.\n\n\nBelow, we provide a discussion of the additional algori... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
"MMhVWjCBJV5",
"7S-gNn63O8J",
"S70orkMDN6p",
"kvI5VlvkD0",
"nips_2021_zzdf0CirJM4",
"U5-C8fQl51",
"420DrpU6pCR",
"DIfZosA-VdA",
"nips_2021_zzdf0CirJM4",
"6AK5h0iEYIC",
"u_bwstO81jf",
"j4OFhV2g0Nr",
"j_UAA5BRei1",
"MyEXiCjBTbE",
"KvV45lvHaMq",
"6ziih3k5axS",
"2a7kawDvk4H",
"Kq3u0ryk... |
nips_2021_mv-1sL8FMN5 | Joint Semantic Mining for Weakly Supervised RGB-D Salient Object Detection | Training saliency detection models with weak supervision, e.g., image-level tags or captions, is appealing as it removes the costly demand for per-pixel annotations. Despite the rapid progress of RGB-D saliency detection in the fully-supervised setting, it remains, however, an unexplored territory when only weak supervision signals are available. This paper is set to tackle the problem of weakly-supervised RGB-D salient object detection. The key insight in this effort is the idea of maintaining per-pixel pseudo-labels with iterative refinements by reconciling the multimodal input signals in our joint semantic mining (JSM). Considering the large variations in the raw depth map and the lack of explicit pixel-level supervisions, we propose spatial semantic modeling (SSM) to capture saliency-specific depth cues from the raw depth and produce depth-refined pseudo-labels. Moreover, tags and captions are incorporated via a fill-in-the-blank training in our textual semantic modeling (TSM) to estimate the confidences of competing pseudo-labels. At test time, our model involves only a light-weight sub-network of the training pipeline, i.e., it requires only an RGB image as input, thus allowing efficient inference. Extensive evaluations demonstrate the effectiveness of our approach under the weakly-supervised setting. Importantly, our method could also be adapted to work in both fully-supervised and unsupervised paradigms. In each of these scenarios, superior performance has been attained by our approach compared to state-of-the-art dedicated methods. As a by-product, a CapS dataset is constructed by augmenting an existing benchmark training set with additional image tags and captions.
| accept | The paper ultimately receives mixed reviews, with a slight preference toward acceptance. We thoroughly checked the review comments of Reviewer n5it (the only reviewer holding a ‘reject’ opinion) and the authors’ responses to them in the rebuttal. Although n5it submitted no post-rebuttal review, from our perspective, his or her comments mostly concern technical questions (not fundamental issues with this work), and the authors’ rebuttal seems largely satisfactory in clarifying the concerns. This is the basis for our recommendation, and the authors should reflect their responses about the novelty and the additional experiments in the final draft, along with the source code. | train | [
"lMECgLHjwhU",
"cCfc3xr_woh",
"eczOlAvL1N6",
"qBSv4VFj7fY",
"pllUx-A7o1s",
"Z8Hk1l0U13t",
"-mOn1rAOGL",
"AH7AjabeEdW",
"0Poq0DW1ec",
"CvJ1mU1SeoE",
"OT9FDAfdTZ1"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer bzEb,\n\nWe sincerely appreciate your constructive comments and positive response. We will carefully improve the paper and include the above analyses and results in the revised version.\n\nThanks & Regards, \\\nAuthors of paper-48.",
" I have read the response and thank the authors for their effor... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"cCfc3xr_woh",
"-mOn1rAOGL",
"CvJ1mU1SeoE",
"nips_2021_mv-1sL8FMN5",
"Z8Hk1l0U13t",
"CvJ1mU1SeoE",
"0Poq0DW1ec",
"OT9FDAfdTZ1",
"nips_2021_mv-1sL8FMN5",
"nips_2021_mv-1sL8FMN5",
"nips_2021_mv-1sL8FMN5"
] |
nips_2021_M0J1c3PqwKZ | Not All Images are Worth 16x16 Words: Dynamic Transformers for Efficient Image Recognition | Vision Transformers (ViT) have achieved remarkable success in large-scale image recognition. They split every 2D image into a fixed number of patches, each of which is treated as a token. Generally, representing an image with more tokens would lead to higher prediction accuracy, while it also results in drastically increased computational cost. To achieve a decent trade-off between accuracy and speed, the number of tokens is empirically set to 16x16 or 14x14. In this paper, we argue that every image has its own characteristics, and ideally the token number should be conditioned on each individual input. In fact, we have observed that there exist a considerable number of “easy” images which can be accurately predicted with a mere number of 4x4 tokens, while only a small fraction of “hard” ones need a finer representation. Inspired by this phenomenon, we propose a Dynamic Transformer to automatically configure a proper number of tokens for each input image. This is achieved by cascading multiple Transformers with increasing numbers of tokens, which are sequentially activated in an adaptive fashion at test time, i.e., the inference is terminated once a sufficiently confident prediction is produced. We further design efficient feature reuse and relationship reuse mechanisms across different components of the Dynamic Transformer to reduce redundant computations. Extensive empirical results on ImageNet, CIFAR-10, and CIFAR-100 demonstrate that our method significantly outperforms the competitive baselines in terms of both theoretical computational efficiency and practical inference speed. Code and pre-trained models (based on PyTorch and MindSpore) are available at https://github.com/blackfeather-wang/Dynamic-Vision-Transformer and https://github.com/blackfeather-wang/Dynamic-Vision-Transformer-MindSpore.
| accept | All three reviewers are positive about the paper (also after considering the authors' response and discussion). Here are the main points:
Pros:
1) Well motivated and well presented idea
2) Good inference speed improvements on ImageNet
3) Good ablation studies
Cons:
1) Somewhat related to existing early-exiting strategies
2) Doubts about practical gains in inference/training speed
The authors were successful at addressing the reviewers' concerns. Overall, the idea is clean, is presented well, and works well. Thus I recommend the acceptance of the paper. | train | [
"S3fijjfLujL",
"9Yjf-gXpYt2",
"COsbywybCFA",
"1-EUxag6fdi",
"TYDxoorfuKK",
"aONOswrd8UV",
"XqLziTZRHuA",
"Y007XTMp7uU",
"7xHXrlhWo6"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper introduces an efficient inference strategy for vision transformers (ViTs). The motivation is that there are some “easy” images which can be classified using a few tokens. Based on the motivation, the paper proposes a cascading inference framework which do not proceed to next stage if the prediction conf... | [
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
5,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_M0J1c3PqwKZ",
"Y007XTMp7uU",
"nips_2021_M0J1c3PqwKZ",
"TYDxoorfuKK",
"aONOswrd8UV",
"COsbywybCFA",
"S3fijjfLujL",
"7xHXrlhWo6",
"nips_2021_M0J1c3PqwKZ"
] |
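A schematic of the confidence-gated cascade described in the abstract above, assuming batch size 1 and a list of models ordered from coarse to fine token counts; the feature-reuse and relationship-reuse mechanisms between stages are omitted, and all names are illustrative.

```python
import torch

@torch.no_grad()
def cascade_predict(models, image, threshold: float = 0.9):
    # `models` is ordered coarse -> fine (increasing token counts);
    # stop at the first stage whose max softmax probability clears the gate.
    for model in models:
        probs = torch.softmax(model(image), dim=-1)
        conf, pred = probs.max(dim=-1)
        if conf.item() >= threshold:
            return pred.item(), conf.item()
    return pred.item(), conf.item()   # fall back to the finest model
```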
nips_2021_NEgqO9yB7e | Contrastive Learning for Neural Topic Model | Recent empirical studies show that adversarial topic models (ATM) can successfully capture semantic patterns of the document by differentiating a document from another, dissimilar sample. However, utilizing that discriminative-generative architecture has two important drawbacks: (1) the architecture does not relate similar documents, which have the same document-word distribution of salient words; (2) it restricts the ability to integrate external information, such as sentiments of the document, which has been shown to benefit the training of neural topic models. To address those issues, we revisit the adversarial topic architecture from the viewpoint of mathematical analysis, propose a novel approach to re-formulate the discriminative goal as an optimization problem, and design a novel sampling method which facilitates the integration of external variables. The reformulation encourages the model to incorporate the relations among similar samples and enforces the constraint on the similarity among dissimilar ones; while the sampling method, which is based on the internal input and reconstructed output, helps inform the model of salient words contributing to the main topic. Experimental results show that our framework outperforms other state-of-the-art neural topic models in three common benchmark datasets that belong to various domains, vocabulary sizes, and document lengths in terms of topic coherence.
| accept | This paper tackles neural topic model training and proposes a new sampling strategy, altogether improving performance under NPMI, topic coherence and downstream classification performance.
The reviewers find the paper interesting, novel, and intuitive, with some concerns about design choices. The authors answered the reviewers' questions thoroughly with some additional ablations that help justify the design choices, e.g., the tf-idf sample weighting strategy. The authors should include these additional results in the paper.
I would also recommend some polishing of the presentation: for example in Fig 3 the fonts are too small and the bars on the right side are not labelled; could that figure not be a table? Additionally, tf-idf should not be rendered in math mode (surrounded by \$); instead, use \\emph. | train | [
"ChhMRR5zt2Z",
"ned39ef4BIF",
"FR4Ljzh04tm",
"MItT3DZRJp3",
"eN_ZwQ1SVrd",
"oDpIu3bdUn_",
"epaV6dVEljb",
"XDjFampi87",
"HLvubZQAdH"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a contrastive learning framework to improve the performance of neural topic model in terms of topic coherence. To guide the model focus on the negative samples and separate topics more clearly, the paper proposes an adaptive scheduling strategy to estimate $\\beta$. And a word-based sampling str... | [
6,
-1,
-1,
-1,
-1,
-1,
5,
7,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"nips_2021_NEgqO9yB7e",
"ChhMRR5zt2Z",
"XDjFampi87",
"nips_2021_NEgqO9yB7e",
"HLvubZQAdH",
"epaV6dVEljb",
"nips_2021_NEgqO9yB7e",
"nips_2021_NEgqO9yB7e",
"nips_2021_NEgqO9yB7e"
] |
nips_2021_1LLemKrsgQp | Learning in two-player zero-sum partially observable Markov games with perfect recall | Tadashi Kozuno, Pierre Ménard, Remi Munos, Michal Valko | accept | This paper studies partially observable extensive games with perfect recall. They propose a computationally efficient model-free algorithm with sqrt(T) regret. All reviewers believe the results are solid and strong, and the techniques used in this paper are interesting. Therefore, we recommend acceptance. | train | [
"RObehSYw597",
"qpz9Q2uhSS",
"f8YZIKuf0zl",
"YgAh7HxqAWc",
"yTA9c97h3pV",
"ULdGUL9BSUx",
"fbYhMT1ywy-",
"ZfVAMRqLlpM",
"yKrAst9K_qZ",
"0TnyokJA3AX",
"tiOPH9dM2HH",
"3sRq2imV-xC",
"O0LwhH5HkkK"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your explanations. I have no other concerns.",
"This paper studies partially observable extensive games with perfect recall. They propose a computationally efficient model-free algorithm with $\\sqrt{T}$-regret. The algorithm works by reducing the tree-structured game to a linear bandits problem a... | [
-1,
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"yKrAst9K_qZ",
"nips_2021_1LLemKrsgQp",
"tiOPH9dM2HH",
"nips_2021_1LLemKrsgQp",
"ULdGUL9BSUx",
"fbYhMT1ywy-",
"ZfVAMRqLlpM",
"YgAh7HxqAWc",
"O0LwhH5HkkK",
"3sRq2imV-xC",
"qpz9Q2uhSS",
"nips_2021_1LLemKrsgQp",
"nips_2021_1LLemKrsgQp"
] |
nips_2021_tTeJejS8vte | A Geometric Structure of Acceleration and Its Role in Making Gradients Small Fast | Jongmin Lee, Chanwoo Park, Ernest Ryu | accept | The paper provides new geometric insights into acceleration under the Euclidean squared norm based on parallelism and collinearity of query and auxiliary points. The reviewers and area chair found the conceptual value of this contribution somewhat hard to evaluate, possibly because it only covers the Euclidean case and does not fully motivate the choice of Lyapunov function. However, the value of the contribution is reinforced by the fact that the authors use it to provide the best known bounds for making gradient small in the prox-grad setting and to derive a number of alternative algorithms for making gradient small. This improvement testifies to the novelty and usefulness of the techniques in the paper. | test | [
"KznTd8oQlyS",
"GwGOSyCPWdI",
"HEfv4G9rphN",
"10JG1waeO9",
"26YLDFZ2t1",
"RQQchGHF9j",
"gFLD58abtPF",
"gFpDu9l7Tlh"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the thoughtful reviews.\n1. Thank you. We are glad to hear that the reviewer found our geometric approach interesting.\n\n2. The reverse recursion of the iteration parameters is indeed unusual. To execute the algorithm one should (a) fix the total iteration count $K$ a priori (b) execute... | [
-1,
-1,
-1,
-1,
7,
7,
6,
2
] | [
-1,
-1,
-1,
-1,
3,
3,
3,
5
] | [
"gFLD58abtPF",
"gFpDu9l7Tlh",
"26YLDFZ2t1",
"RQQchGHF9j",
"nips_2021_tTeJejS8vte",
"nips_2021_tTeJejS8vte",
"nips_2021_tTeJejS8vte",
"nips_2021_tTeJejS8vte"
] |
nips_2021_MtvKv_BDVV | ATISS: Autoregressive Transformers for Indoor Scene Synthesis | The ability to synthesize realistic and diverse indoor furniture layouts automatically or based on partial input unlocks many applications, from better interactive 3D tools to data synthesis for training and simulation. In this paper, we present ATISS, a novel autoregressive transformer architecture for creating diverse and plausible synthetic indoor environments, given only the room type and its floor plan. In contrast to prior work, which poses scene synthesis as sequence generation, our model generates rooms as unordered sets of objects. We argue that this formulation is more natural, as it makes ATISS generally useful beyond fully automatic room layout synthesis. For example, the same trained model can be used in interactive applications for general scene completion, partial room re-arrangement with any objects specified by the user, as well as object suggestions for any partial room. To enable this, our model leverages the permutation equivariance of the transformer when conditioning on the partial scene, and is trained to be permutation-invariant across object orderings. Our model is trained end-to-end as an autoregressive generative model using only labeled 3D bounding boxes as supervision. Evaluations on four room types in the 3D-FRONT dataset demonstrate that our model consistently generates plausible room layouts that are more realistic than existing methods. In addition, it has fewer parameters, is simpler to implement and train, and runs up to 8 times faster than existing methods.
| accept | Reviewers agree, and I concur, that this paper is solid. It has a well-executed, relatively scaled-up model and idea with good results. I also agree with the relative lack of novelty or insight, so I recommend acceptance as a poster. | train | [
"tY9PpY3nQUm",
"X5-0GdD6gw",
"SSzIl02jiq",
"Cw63AhKW-J",
"GJvOPFop_A_",
"alQ9PvvB7pj"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes an transformer based framework for autoregressive indoor scene synthesis. A scene is modelled by an orthographic projection of the floor geometry (encoded by a CNN), and an unordered set of objects, characterized by category (encoded with a learned embedding), location, orientation and size (en... | [
7,
-1,
-1,
-1,
8,
7
] | [
4,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_MtvKv_BDVV",
"alQ9PvvB7pj",
"tY9PpY3nQUm",
"GJvOPFop_A_",
"nips_2021_MtvKv_BDVV",
"nips_2021_MtvKv_BDVV"
] |
nips_2021_bXTxva_xx6r | Generalized Depthwise-Separable Convolutions for Adversarially Robust and Efficient Neural Networks | Despite their tremendous successes, convolutional neural networks (CNNs) incur high computational/storage costs and are vulnerable to adversarial perturbations. Recent works on robust model compression address these challenges by combining model compression techniques with adversarial training. But these methods are unable to improve throughput (frames-per-second) on real-life hardware while simultaneously preserving robustness to adversarial perturbations. To overcome this problem, we propose the method of Generalized Depthwise-Separable (GDWS) convolution - an efficient, universal, post-training approximation of a standard 2D convolution. GDWS dramatically improves the throughput of a standard pre-trained network on real-life hardware while preserving its robustness. Lastly, GDWS is scalable to large problem sizes since it operates on pre-trained models and doesn't require any additional training. We establish the optimality of GDWS as a 2D convolution approximator and present exact algorithms for constructing optimal GDWS convolutions under complexity and error constraints. We demonstrate the effectiveness of GDWS via extensive experiments on CIFAR-10, SVHN, and ImageNet datasets. Our code can be found at https://github.com/hsndbk4/GDWS.
| accept | Taken at face value, this paper seems to have a lot of variance in the scores, but this is driven mostly by the low score given by reviewer PpQM. The authors have noted that PpQM's review does not make sense; I have read it and confirmed as much, and in fact another reviewer has confirmed to me that it does not make sense. That makes it especially strange that they chose to respond to the rebuttal that their score is unchanged, but at any rate I think it's safe to ignore their review.
With that out of the way, the scores are 6, 6, and 7.
I found the review of c4Yo (a 6) most informative, and appreciated the discussion that followed.
I will recommend acceptance in this instance. | train | [
"_3OJyyu48Pl",
"hd9o1AhAD0",
"OJx2ODsG9bQ",
"OZRx13Sb2sL",
"tpppse6FOt7",
"IRiSnBhyQ-",
"xt5WaO_5rOp",
"XbZlNHlmwv0",
"4UXhG-znI5Y",
"qJ-Do4hSkLs",
"YG26-iYATw",
"MGEgrbPfN5s"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank you for your careful reading of our paper and your strong support of our work. In the final version of the paper, we will most certainly state that the adversarial robustness benefits of GDWS are due to its approximation quality and incorporate other clarifications arising from this review process.",
"... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
5
] | [
"OJx2ODsG9bQ",
"OZRx13Sb2sL",
"IRiSnBhyQ-",
"tpppse6FOt7",
"MGEgrbPfN5s",
"YG26-iYATw",
"qJ-Do4hSkLs",
"4UXhG-znI5Y",
"nips_2021_bXTxva_xx6r",
"nips_2021_bXTxva_xx6r",
"nips_2021_bXTxva_xx6r",
"nips_2021_bXTxva_xx6r"
] |
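For orientation, here is the basic depthwise + pointwise factorization that the GDWS construction above generalizes, as a standard PyTorch module; GDWS itself fits an optimal post-training approximation of a pre-trained standard convolution under error/complexity constraints, which this sketch does not implement.

```python
import torch.nn as nn

class DepthwiseSeparable(nn.Module):
    # Per-channel spatial filtering followed by 1x1 channel mixing:
    # the target structure that efficient approximations aim for.
    def __init__(self, c_in: int, c_out: int, k: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, k, padding=k // 2, groups=c_in)
        self.pointwise = nn.Conv2d(c_in, c_out, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```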
nips_2021_Ib6VSrtZcu9 | A Provably Efficient Model-Free Posterior Sampling Method for Episodic Reinforcement Learning | Thompson Sampling is one of the most effective methods for contextual bandits and has been generalized to posterior sampling for certain MDP settings. However, existing posterior sampling methods for reinforcement learning are limited by being model-based or by lacking worst-case theoretical guarantees beyond linear MDPs. This paper proposes a new model-free formulation of posterior sampling that applies to more general episodic reinforcement learning problems with theoretical guarantees. We introduce novel proof techniques to show that under suitable conditions, the worst-case regret of our posterior sampling method matches the best known results of optimization based methods. In the linear MDP setting with dimension $d$, the regret of our algorithm scales linearly with $d$, as compared to the quadratic dependence of existing posterior sampling-based exploration algorithms.
| accept | This is a well-written and interesting paper. However, as multiple reviewers pointed out, it's not clear whether the approach considered is computationally viable. As one of the reviewers suggests, this could be a dealbreaker. At the same time, I do appreciate the work in sorting out the sample complexity of value-function sampling under these idealized conditions. After thinking about this some, I am inclined to recommend acceptance. | train | [
"cJRyLlYQoN",
"_-VSiAj8mMp",
"uQprDCuV4Gq",
"lQAfU52OFeD",
"PIn3aB9dWtE",
"opAKMBXhn29",
"g0xL-W3CcpI",
"IMp_0BJ9NmH",
"6rglaFS0vNP"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors study the model-free posterior sampling algorithm for the episodic RL problem. A novel algorithm and analysis are proposed. The regret guarantee is proved, and specifically it depends on a term related to the function class, a term related to the structural complexity measure (decoupling... | [
7,
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"nips_2021_Ib6VSrtZcu9",
"uQprDCuV4Gq",
"g0xL-W3CcpI",
"IMp_0BJ9NmH",
"cJRyLlYQoN",
"6rglaFS0vNP",
"nips_2021_Ib6VSrtZcu9",
"nips_2021_Ib6VSrtZcu9",
"nips_2021_Ib6VSrtZcu9"
] |
nips_2021_1_gaHBaRYt | Fast Federated Learning in the Presence of Arbitrary Device Unavailability | Federated learning (FL) coordinates with numerous heterogeneous devices to collaboratively train a shared model while preserving user privacy. Despite its multiple advantages, FL faces new challenges. One challenge arises when devices drop out of the training process. In this case, the convergence of popular FL algorithms such as FedAvg is severely influenced by the straggling devices. To tackle this challenge, we study federated learning algorithms in the presence of arbitrary device unavailability and propose an algorithm named Memory-augmented Impatient Federated Averaging (MIFA). Our algorithm efficiently avoids excessive latency induced by inactive devices, and corrects the gradient bias using the memorized latest updates from them. We prove that MIFA achieves minimax optimal convergence rates on non-i.i.d. data for both strongly convex and non-convex smooth functions. We also provide an explicit characterization of the improvement over baseline algorithms through a case study, and validate the results by numerical experiments on real-world datasets.
| accept | The authors focus on addressing some new challenges in Federated Learning. In particular, they focus on the effect of straggler devices that may drop out, which severely affects the convergence of popular FL baselines. To overcome this challenge, they propose a new algorithm called Memory-augmented Impatient Federated Averaging (MIFA). The authors claim that their approach avoids the latency of inactive devices and can effectively correct for the resulting gradient bias by using memorized updates from devices. The authors also show that this approach achieves a minimax optimal convergence rate for non-i.i.d. data in certain cases. They also corroborate their theoretical findings with a variety of experiments. Most reviewers thought the paper was well written, with a clearly presented approach to tackle a well-motivated problem. The reviewers raised a variety of technical concerns, some of which were addressed during the rebuttal. The conclusion of the discussion was that while the paper is not extremely novel/original and falls into the marginal category, it is a solid contribution and merits acceptance. I agree with this assessment and recommend acceptance with the condition that the authors clearly address the valid concerns raised by the reviewers. | train | [
"vnoJCMrmfqK",
"GLmC3KGE2I",
"6B4218IJCLy",
"utwj4G0UHwe",
"s8xTffIBL0L",
"abLoODPTDc",
"Rl31NMuBwVt",
"n5xGTqJSGJO"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely thank the reviewer for acknowledging that we provided an interesting solution. We will edit and clarify the potentially confusing statements as suggested.\n\n**Q1**: Why $b=40(L/\\mu)^{1.5}$ form is needed in A4? While the number of inactive rounds grow as $O(t)$, it seems to grow slowly because of t... | [
-1,
-1,
-1,
-1,
6,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"abLoODPTDc",
"Rl31NMuBwVt",
"n5xGTqJSGJO",
"s8xTffIBL0L",
"nips_2021_1_gaHBaRYt",
"nips_2021_1_gaHBaRYt",
"nips_2021_1_gaHBaRYt",
"nips_2021_1_gaHBaRYt"
] |
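A schematic, framework-agnostic sketch of the memory-augmented averaging step described in the MIFA abstract above, assuming the memory is pre-populated with one stored update per client (e.g., zeros before a client's first participation); the exact initialization and bias-correction details follow the paper, and all names here are illustrative.

```python
def mifa_server_round(memory: dict, active_updates: dict, w, lr: float):
    # memory[i] stores the most recent update received from client i.
    # Active clients overwrite their slot; inactive ones are not waited for,
    # and their memorized update still contributes to the average.
    memory.update(active_updates)
    avg_update = sum(memory.values()) / len(memory)
    return w - lr * avg_update
```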
nips_2021_nqutwR1WDBY | On The Structure of Parametric Tournaments with Application to Ranking from Pairwise Comparisons | Vishnu Veerathu, Arun Rajkumar | accept | The reviewers are satisfied with the author responses and agreed on acceptance. The authors should incorporate the reviewer feedback and the additional experiments presented in the rebuttal into the final manuscript. | test | [
"VUNm4BdSz_L",
"TDwVveHkXJY",
"Gqbcp1wCLf",
"SY35M_A33EF",
"Gp7bCb-L5Ue",
"AsOL1ndwLWu",
"HR54EAbQF43"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their replies. My score remains unchanged. ",
" Thanks for the detailed review and the suggestions. We address the concerns raised in the review below.\n\n\n$Reviewer ~ comment ~ about ~ theoretical ~ contributions ~ restricted ~ to ~rank ~ 2$: \n\nWe note that our contributions extend ... | [
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"SY35M_A33EF",
"AsOL1ndwLWu",
"Gp7bCb-L5Ue",
"HR54EAbQF43",
"nips_2021_nqutwR1WDBY",
"nips_2021_nqutwR1WDBY",
"nips_2021_nqutwR1WDBY"
] |
nips_2021_OG18MI5TRL | SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers | We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perceptron (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding, thereby avoiding the interpolation of positional codes which leads to decreased performance when the testing resolution differs from training. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from different layers, thus combining both local and global attention to render powerful representations. We show that this simple and lightweight design is the key to efficient segmentation on Transformers. We scale our approach up to obtain a series of models from SegFormer-B0 to SegFormer-B5, which reach much better performance and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters, being 5x smaller and 2.2% better than the previous best method. Our best model, SegFormer-B5, achieves 84.0% mIoU on the Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes-C.
| accept | The authors propose an efficient semantic segmentation model composed of a hierarchical Transformer encoder and a lightweight MLP decoder. The transformer encoder employs overlapped patch merging, an efficient self-attention module, and a depthwise convolution inserted between the two MLP layers of the FFN to replace positional embeddings. The empirical evaluation demonstrates better accuracy/speed tradeoffs compared to competing models.
The paper was reviewed by 4 experts who appreciated the clarity of exposition, the simplicity of the design, the effectiveness of the proposed approach in dealing with test-train resolution mismatch, as well as the competitive performance. During the discussion, the reviewers pointed out several concerns related to positioning and the significance of technical contributions, which were mostly addressed. Finally, the reviewers agreed that the approach is a simple and competitive baseline for the semantic segmentation community. I will recommend acceptance.
"nH9AwC-ab2",
"vjJ1FkALi-A",
"YRoZFB19SZh",
"V43bwcujxaR",
"qoGTzsE8dfh",
"0kP-v6cnVCb",
"0HElrqcvWlS",
"v4C3zqpLWr2",
"UCoiE-jDyBF",
"PaXoJv2JzuB",
"c3gI8kOR8OF",
"laxbYpXZci8",
"ezKZx3cKTiR",
"qbR4Wde4oFM",
"yoXcQR6R35a",
"OYyAB7dGTg-",
"ez3oACjbcI",
"yuXNWyIh74E",
"GO4nE3bksLQ... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
... | [
" Dear Reviewer n5uX, we have attached a revised draft following the suggestions. Hope this will address your final concerns.",
" Dear Reviewers and ACs:\n\nThank you very much for the helpful reviews. We are thankful for the thorough suggestions on our previous manuscript. We have taken all the suggestions and m... | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"V43bwcujxaR",
"nips_2021_OG18MI5TRL",
"qoGTzsE8dfh",
"qoGTzsE8dfh",
"nips_2021_OG18MI5TRL",
"0HElrqcvWlS",
"yuXNWyIh74E",
"nips_2021_OG18MI5TRL",
"PaXoJv2JzuB",
"laxbYpXZci8",
"ezKZx3cKTiR",
"yoXcQR6R35a",
"my-isAxmcCK",
"nips_2021_OG18MI5TRL",
"OYyAB7dGTg-",
"ez3oACjbcI",
"YRoZFB19... |
nips_2021_nHRGW_wETLQ | Fairness via Representation Neutralization | Existing bias mitigation methods for DNN models primarily work on learning debiased encoders. This process not only requires a lot of instance-level annotations for sensitive attributes, it also does not guarantee that all fairness sensitive information has been removed from the encoder. To address these limitations, we explore the following research question: Can we reduce the discrimination of DNN models by only debiasing the classification head, even with biased representations as inputs? To this end, we propose a new mitigation technique, namely, Representation Neutralization for Fairness (RNF) that achieves fairness by debiasing only the task-specific classification head of DNN models. To this end, we leverage samples with the same ground-truth label but different sensitive attributes, and use their neutralized representations to train the classification head of the DNN model. The key idea of RNF is to discourage the classification head from capturing spurious correlation between fairness sensitive information in encoder representations with specific class labels. To address low-resource settings with no access to sensitive attribute annotations, we leverage a bias-amplified model to generate proxy annotations for sensitive attributes. Experimental results over several benchmark datasets demonstrate our RNF framework to effectively reduce discrimination of DNN models with minimal degradation in task-specific performance.
| accept | The main merit of this work seems to be a novel approach for debiasing and for creating proxy annotations. The reviewers thought that the experimental setup was a bit weak, missing (a) comparisons with other baselines that don't require sensitive attributes; (b) experiments with more datasets; and (c) a discussion of related work. These issues were partly answered during the discussion phase, and I strongly encourage the authors to include the new experiments and revise their discussions in their next revision.
The reviews are still split, and there is still a lack of clarity about the neutralization scheme that uses an average. The reviewers would have liked additional clarification beyond the toy setting, and a demonstration that it works with multiple attributes. Relatedly, there was some discussion about whether experimenting with multiple attributes is a must, and the general feeling is that it is not. However, a discussion of this extension is necessary, as is further clarification/justification for the neutralization scheme.
More generally, the reviewers pointed out many parts that were unclear or not justified, most of which were answered by the authors. So these should be good pointers on how to improve the paper in the next revision.
| train | [
"M4M5-Wq29_m",
"oiu8Kki7XS_",
"kaMFKn-rkyW",
"M_GYoBN9ekd",
"h4qWMCaTSX",
"nzu8XCU0YMa",
"EtUqaQtkWf4",
"yZteCJZ4Oqz",
"Tp8LEF9olCv",
"LWqKarf3Pg",
"DdbVxN4ic8",
"H7akpDIlLIN",
"TUxtpqAqfCK",
"Vh6CBYeglg",
"AMJ9Lpt9xaT"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a method to improve the fairness metric, e.g. Demographic Parity and Equalized Odds, by neutralizing the classification head to prevent it from relying on spurious attribute. To do this, they use the interpolation of pretrained encoder representations from a pair of sample that share the same l... | [
5,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_nHRGW_wETLQ",
"h4qWMCaTSX",
"nips_2021_nHRGW_wETLQ",
"LWqKarf3Pg",
"nzu8XCU0YMa",
"EtUqaQtkWf4",
"DdbVxN4ic8",
"H7akpDIlLIN",
"AMJ9Lpt9xaT",
"kaMFKn-rkyW",
"M4M5-Wq29_m",
"Vh6CBYeglg",
"nips_2021_nHRGW_wETLQ",
"nips_2021_nHRGW_wETLQ",
"nips_2021_nHRGW_wETLQ"
] |
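The neutralization step described in the RNF abstract above admits a one-line sketch: interpolate the frozen-encoder representations of two samples that share a ground-truth label but differ in the sensitive attribute, and train only the classification head on the result. The interpolation weight `alpha` is an illustrative choice, and the function name is hypothetical.

```python
def neutralize(z_a, z_b, alpha: float = 0.5):
    # z_a, z_b: frozen-encoder representations of two samples with the same
    # label but different sensitive attributes; averaging cancels the
    # attribute-specific signal while preserving the shared task signal.
    return alpha * z_a + (1.0 - alpha) * z_b
```

The head is then trained on `neutralize(z_a, z_b)` with the shared label as the target while the (possibly biased) encoder stays fixed; the proxy-annotation mechanism for the low-resource setting follows the paper.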
nips_2021_rEBScZF6G70 | Residual Relaxation for Multi-view Representation Learning | Multi-view methods learn representations by aligning multiple views of the same image and their performance largely depends on the choice of data augmentation. In this paper, we notice that some other useful augmentations, such as image rotation, are harmful for multi-view methods because they cause a semantic shift that is too large to be aligned well. This observation motivates us to relax the exact alignment objective to better cultivate stronger augmentations. Taking image rotation as a case study, we develop a generic approach, Pretext-aware Residual Relaxation (Prelax), that relaxes the exact alignment by allowing an adaptive residual vector between different views and encoding the semantic shift through pretext-aware learning. Extensive experiments on different backbones show that our method can not only improve multi-view methods with existing augmentations, but also benefit from stronger image augmentations like rotation.
| accept | This paper relaxes the alignment objective in multiview self-supervised learning when the data augmentation causes semantic shifts in different views. The proposed pretext-aware residual relaxation method allows an adaptive residual vector between different views. The proposed method can benefit from stronger image augmentations like rotation and outperform the existing methods. Based on a recent theoretical framework on self-supervised learning, the authors provide theoretical guarantees of the proposed method. The paper is well motivated, well written and has practical impact on downstream image classification tasks. Reviewers all agreed that the proposed solution is interesting and the improvement is significant. I am recommending acceptance of this paper. The authors need to make sure they add the new results from the rebuttal to the final version.
| train | [
"PXzIra9uhv",
"HxumLMHdZz",
"dxW7tVJHy-K",
"LvCEb-PuT33",
"rhtRdxNKKZE",
"3m4H3vbwws5",
"2sJzi6aXVvj",
"EfkEgiMSk5U",
"utrEO-BJ2YX",
"cWJ-fwkN5ae"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this work, the authors propose a new approach, named Pretext-aware Residual Relaxation (Prelex) to learning the representations from multiple views. The method mainly employs the residual relaxed similarity loss to improve the alignment between positive samples. Majors.\n1 Motivation. Fig. 1 (a) shows the mai... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"nips_2021_rEBScZF6G70",
"dxW7tVJHy-K",
"3m4H3vbwws5",
"EfkEgiMSk5U",
"cWJ-fwkN5ae",
"PXzIra9uhv",
"utrEO-BJ2YX",
"nips_2021_rEBScZF6G70",
"nips_2021_rEBScZF6G70",
"nips_2021_rEBScZF6G70"
] |
nips_2021_R-616EWWKF5 | Do Vision Transformers See Like Convolutional Neural Networks? | Convolutional neural networks (CNNs) have so far been the de-facto model for visual data. Recent work has shown that (Vision) Transformer models (ViT) can achieve comparable or even superior performance on image classification tasks. This raises a central question: how are Vision Transformers solving these tasks? Are they acting like convolutional networks, or learning entirely different visual representations? Analyzing the internal representation structure of ViTs and CNNs on image classification benchmarks, we find striking differences between the two architectures, such as ViT having more uniform representations across all layers. We explore how these differences arise, finding crucial roles played by self-attention, which enables early aggregation of global information, and ViT residual connections, which strongly propagate features from lower to higher layers. We study the ramifications for spatial localization, demonstrating ViTs successfully preserve input spatial information, with noticeable effects from different classification methods. Finally, we study the effect of (pretraining) dataset scale on intermediate features and transfer learning, and conclude with a discussion on connections to new architectures such as the MLP-Mixer.
| accept | A good paper on a timely topic. All reviewers recommend acceptance. Could be a spotlight presentation. | train | [
"UuCnM4-ICYM",
"UHKduTbMbT",
"scEc4XnnPoX",
"0rWKT_PBxqB",
"lFki02uV9c9",
"1Ofsv7AxCnW",
"dYUnCHh1ztx",
"jQtq2m7dOhG",
"7vu420NUwtQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your detailed response. Several of my concerns have been addressed, I provide more details on two aspects in the following.\n\n**ResNets vs other CNNs** I agree that ResNets perform generally well for many tasks and substantially inspired subsequent architectures. For the present study, comparing to... | [
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5
] | [
"lFki02uV9c9",
"7vu420NUwtQ",
"jQtq2m7dOhG",
"dYUnCHh1ztx",
"1Ofsv7AxCnW",
"nips_2021_R-616EWWKF5",
"nips_2021_R-616EWWKF5",
"nips_2021_R-616EWWKF5",
"nips_2021_R-616EWWKF5"
] |
nips_2021_WcY6S6PDuly | Optimization-Based Algebraic Multigrid Coarsening Using Reinforcement Learning | Large sparse linear systems of equations are ubiquitous in science and engineering, such as those arising from discretizations of partial differential equations. Algebraic multigrid (AMG) methods are one of the most common methods of solving such linear systems, with an extensive body of underlying mathematical theory. A system of linear equations defines a graph on the set of unknowns and each level of a multigrid solver requires the selection of an appropriate coarse graph along with restriction and interpolation operators that map to and from the coarse representation. The efficiency of the multigrid solver depends critically on this selection and many selection methods have been developed over the years. Recently, it has been demonstrated that it is possible to directly learn the AMG interpolation and restriction operators, given a coarse graph selection. In this paper, we consider the complementary problem of learning to coarsen graphs for a multigrid solver, a necessary step in developing fully learnable AMG methods. We propose a method using a reinforcement learning (RL) agent based on graph neural networks (GNNs), which can learn to perform graph coarsening on small planar training graphs and then be applied to unstructured large planar graphs, assuming bounded node degree. We demonstrate that this method can produce better coarse graphs than existing algorithms, even as the graph size increases and other properties of the graph are varied. We also propose an efficient inference procedure for performing graph coarsening that results in linear time complexity in graph size.
| accept | There have been a few papers in the literature on using ML to "learn" interpolation operators in multigrid solvers, but to the best of my knowledge this is the first paper that attempts to learn the coarsening strategy itself. Admittedly, this work is still a little exploratory: the objective is a simple theoretical bound (which theoretically only applies to the two-grid case), the examples are very simple, there is not much discussion of wall-clock-time performance up to a given accuracy on real problems with real, high-performing AMG solvers, and there is little exploration of the architecture of the learning agent itself with, e.g., ablation studies. Nonetheless, the paper deserves attention because it has the potential to open up a refreshingly new direction of research. The reviewers seem to agree on this, as well as on the fact that the paper is clearly written and does not attempt to exaggerate its contributions. AMG is a very important numerical method, even if it has heuristic elements, so if ML-based coarsening and interpolation strategies can improve it, that might have a large impact. | train | [
"y4LY5mrkIG3",
"V8N6yN8tJo4",
"BozlUlB4ZIz",
"PPYHGj-Fukk",
"tn3g2OelfK",
"4wDnbzUGtvu"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Regarding the network size, we note that the recent graph-based method for finding AMG interpolation operators (ICML 2020, reference [13]) also uses a small network size to achieve their results. Moreover, a wide range of robotics applications of reinforcement learning have small network sizes (OpenAI gym problem... | [
-1,
-1,
-1,
6,
6,
4
] | [
-1,
-1,
-1,
2,
2,
3
] | [
"4wDnbzUGtvu",
"tn3g2OelfK",
"PPYHGj-Fukk",
"nips_2021_WcY6S6PDuly",
"nips_2021_WcY6S6PDuly",
"nips_2021_WcY6S6PDuly"
] |
nips_2021_DJ6fmWG4qvW | Delayed Propagation Transformer: A Universal Computation Engine towards Practical Control in Cyber-Physical Systems | Multi-agent control is a central theme in the Cyber-Physical Systems (CPS). However, current control methods either receive non-Markovian states due to insufficient sensing and decentralized design, or suffer from poor convergence. This paper presents the Delayed Propagation Transformer (DePT), a new transformer-based model that specializes in the global modeling of CPS while taking into account the immutable constraints from the physical world. DePT induces a cone-shaped spatial-temporal attention prior, which injects the information propagation and aggregation principles and enables a global view. With physical constraint inductive bias baked into its design, our DePT is ready to plug and play for a broad class of multi-agent systems. The experimental results on one of the most challenging CPS -- network-scale traffic signal control system in the open world -- show that our model outperformed the state-of-the-art expert methods on synthetic and real-world datasets. Our codes are released at: https://github.com/VITA-Group/DePT.
| accept | This work has been extensively discussed by reviewers, and the AC (myself) has also stepped in to review the paper.
In general this work studies an application of model-free RL, together with a newly proposed transformer architecture with cone-shaped attention to handle information flow in the temporal-spatial sense. This paper is more of an applied work, without much algorithmic/theoretical contribution. Still, I believe the problem studied is refreshing to the RL community, and the results, which are decently good (though not super impressive) when compared with SOTA, demonstrate initial success in applying this ML technique to a more realistic problem. Most importantly, this also shows how transformers can be applied to problems beyond language and small-scale control problems in MuJoCo, with novel features in the architectural design that are new to both transformer design research and the CPS application. The paper is generally well written, with the newly proposed transformer architecture clearly stated and discussed. Overall, it is easy to follow and intuitive.
Several reviewers express concerns about the significance of this work, issues in evaluation, and the value to the broader community. I agree these points are valid, but given that the new ideas proposed for the transformer-based architecture and their application to multi-agent systems are quite interesting, I think the benefits outweigh the drawbacks. Overall, I'd recommend acceptance for this paper. | train | [
"QcTcXGml0qz",
"esIFeegtB0W",
"SaJ4O2byK4e",
"TRNKIG0AqqU",
"FKMYC_5BbFY",
"1SvfpiOZZd",
"ggDtXgd29-0",
"Y3c0b8w34lG",
"_05PhrNmcv6",
"oPO5awMaZXV",
"A7zdeJ1GF0S",
"43HqzP3UFA",
"H1KN9gy07SE",
"-LlKvP1E-RG",
"FOA-rN_gui3",
"BZQKO0xWfjT",
"xt2utK-kxfH"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer qqzv:\n\nWe deeply appreciate your time and acknowledgement to our previous responses!\n\nThanks a lot and best wishes,\n\nAuthors\n\n",
"The paper presents a transformer-based centralized method to approach the multi-agent control problem in cyber-physical systems (CPS). They formulate the graph-... | [
-1,
6,
-1,
-1,
-1,
5,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
4,
-1,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"SaJ4O2byK4e",
"nips_2021_DJ6fmWG4qvW",
"oPO5awMaZXV",
"esIFeegtB0W",
"ggDtXgd29-0",
"nips_2021_DJ6fmWG4qvW",
"_05PhrNmcv6",
"nips_2021_DJ6fmWG4qvW",
"FOA-rN_gui3",
"TRNKIG0AqqU",
"43HqzP3UFA",
"Y3c0b8w34lG",
"xt2utK-kxfH",
"BZQKO0xWfjT",
"1SvfpiOZZd",
"nips_2021_DJ6fmWG4qvW",
"nips_... |
nips_2021_PIcuKeiWvj- | Explaining Latent Representations with a Corpus of Examples | Modern machine learning models are complicated. Most of them rely on convoluted latent representations of their input to issue a prediction. To achieve greater transparency than a black-box that connects inputs to predictions, it is necessary to gain a deeper understanding of these latent representations. To that aim, we propose SimplEx: a user-centred method that provides example-based explanations with reference to a freely selected set of examples, called the corpus. SimplEx uses the corpus to improve the user’s understanding of the latent space with post-hoc explanations answering two questions: (1) Which corpus examples explain the prediction issued for a given test example? (2) What features of these corpus examples are relevant for the model to relate them to the test example? SimplEx provides an answer by reconstructing the test latent representation as a mixture of corpus latent representations. Further, we propose a novel approach, the integrated Jacobian, that allows SimplEx to make explicit the contribution of each corpus feature in the mixture. Through experiments on tasks ranging from mortality prediction to image classification, we demonstrate that these decompositions are robust and accurate. With illustrative use cases in medicine, we show that SimplEx empowers the user by highlighting relevant patterns in the corpus that explain model representations. Moreover, we demonstrate how the freedom in choosing the corpus allows the user to have personalized explanations in terms of examples that are meaningful for them.
| accept | This paper proposes a method for explaining machine learning models by decomposing the continuous representation of a test example into a combination of examples from a given prototype corpus.
Reviewers find the method convincing, powerful (bridging two large frameworks of explainable ML: feature-based and example-based), well presented and well demonstrated. The authors have thoroughly addressed all outstanding concerns in the discussion period, including a small user study. I strongly urge the authors to take the time to implement the corresponding improvements in the manuscript. | train | [
"QdEcZDsbex7",
"lhCimr540r",
"6-_hhnqbn6J",
"YtYMLD28lRW",
"P6pC7FnSkO0",
"o9hB4Fwt3Jj",
"IvECYVfK7dZ",
"fbJbepNaLfD",
"7wlE8dM3_pO",
"b0Df7NTtu8t",
"CiuPwSFdvk",
"djvAuY3lpk",
"yr63f5E76Ea",
"6hsq-fgVb4F",
"oFxKqfPnS44"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Many thanks for this feedback and for this thorough review. We will improve the manuscript with these useful comments. \n",
" Many thanks for this feedback and for this thorough review. \nWe will definitely include this section in the supplementary material and a link in the main paper.",
" Thank you for your... | [
-1,
-1,
-1,
7,
7,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
-1,
-1,
-1,
3,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"o9hB4Fwt3Jj",
"6-_hhnqbn6J",
"b0Df7NTtu8t",
"nips_2021_PIcuKeiWvj-",
"nips_2021_PIcuKeiWvj-",
"6hsq-fgVb4F",
"nips_2021_PIcuKeiWvj-",
"IvECYVfK7dZ",
"YtYMLD28lRW",
"oFxKqfPnS44",
"oFxKqfPnS44",
"IvECYVfK7dZ",
"P6pC7FnSkO0",
"P6pC7FnSkO0",
"nips_2021_PIcuKeiWvj-"
] |
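To make the reconstruction step in the SimplEx record above concrete, here is a minimal sketch of fitting corpus mixture weights on the probability simplex by gradient descent. All names (`simplex_weights`, the step-count and learning-rate defaults) are our illustrative choices rather than the paper's API, and the integrated-Jacobian attribution step is omitted.

```python
import torch

def simplex_weights(h_test, H_corpus, n_steps=2000, lr=0.1):
    """Fit weights w on the simplex so that w @ H_corpus approximates h_test.

    h_test:   (d,) latent representation of the test example.
    H_corpus: (n, d) latent representations of the n corpus examples.
    Returns w (n,), interpretable as the importance of each corpus example.
    """
    logits = torch.zeros(H_corpus.shape[0], requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(n_steps):
        w = torch.softmax(logits, dim=0)               # softmax keeps w on the simplex
        loss = (h_test - w @ H_corpus).pow(2).sum()    # mixture reconstruction error
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.softmax(logits, dim=0).detach()

# Example: which of 100 corpus latents best explain a test latent in R^16?
w = simplex_weights(torch.randn(16), torch.randn(100, 16))
```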
nips_2021__vypaVMDs51 | Explaining heterogeneity in medial entorhinal cortex with task-driven neural networks | Medial entorhinal cortex (MEC) supports a wide range of navigational and memory related behaviors. Well-known experimental results have revealed specialized cell types in MEC --- e.g. grid, border, and head-direction cells --- whose highly stereotypical response profiles are suggestive of the role they might play in supporting MEC functionality. However, the majority of MEC neurons do not exhibit stereotypical firing patterns. How should the response profiles of these more "heterogeneous" cells be described, and how do they contribute to behavior? In this work, we took a computational approach to addressing these questions. We first performed a statistical analysis that shows that heterogeneous MEC cells are just as reliable in their response patterns as the more stereotypical cell types, suggesting that they have a coherent functional role. Next, we evaluated a spectrum of candidate models in terms of their ability to describe the response profiles of both stereotypical and heterogeneous MEC cells. We found that recently developed task-optimized neural network models are substantially better than traditional grid cell-centric models at matching most MEC neuronal response profiles --- including those of grid cells themselves --- despite not being explicitly trained for this purpose. Specific choices of network architecture (such as gated nonlinearities and an explicit intermediate place cell representation) have an important effect on the ability of the model to generalize to novel scenarios, with the best of these models closely approaching the noise ceiling of the data itself. We then performed in silico experiments on this model to address questions involving the relative functional relevance of various cell types, finding that heterogeneous cells are likely to be just as involved in downstream functional outcomes (such as path integration) as grid and border cells. Finally, inspired by recent data showing that, going beyond their spatial response selectivity, MEC cells are also responsive to non-spatial rewards, we introduce a new MEC model that performs reward-modulated path integration. We find that this unified model matches neural recordings across all variable-reward conditions. Taken together, our results point toward a conceptually principled goal-driven modeling approach for moving future experimental and computational efforts beyond overly-simplistic single-cell stereotypes.
| accept | Dear Authors,
congratulations on your paper being accepted at NeurIPS. Reviewers found this a highly interesting and intriguing submission, with results that will likely be of interest to a wide range of researchers, and in particular researchers aiming to understand how diverse cells in the MEC underlie the ability of animals to navigate. At the same time, this paper also triggered substantial discussion -- the primary concern of the reviewers was that this was a dense paper loaded with results, but low on detailed explanations, which made it challenging for the reviewers to assess the validity of the methods and results. During the discussion phase, extensive additional explanations and clarifications were given, and we decided that the paper -- with these additional explanations -- would be a great addition to NeurIPS. We urge you to ensure that these explanations also appear in the final paper (or, in most cases, in the supplement).
With best regards, your AC | train | [
"TQbIXsucx9",
"mOGnZSMwENM",
"wSbE-Rhsd9O",
"ieIKWmxCrj",
"n8wSpQDJb6",
"mE-WqlaOfs",
"uti-u5hE_Dg",
"8HmqnCUOIv1",
"DDHDmvg9Mob",
"PjRlLOBdzka",
"5zFmVkiyiTH",
"xgTEalilst",
"cDgidFK1TvV",
"nJmbtuVI0uF",
"d_8x64WPQsi",
"H-m6a5HfO2d",
"5ca0h8tnYd5",
"njVN10BT-Bq",
"vlLVKhTCHqx"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors employ RNN models under different conditions to model MEC cells with both well-understood (e.g., place cells) and poorly-understood (heterogeneous cells) tuning properties. By training the network to minimize the place cell-mediated path integration, the network performed incredibly well in explaining ... | [
8,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021__vypaVMDs51",
"n8wSpQDJb6",
"mE-WqlaOfs",
"nips_2021__vypaVMDs51",
"cDgidFK1TvV",
"uti-u5hE_Dg",
"8HmqnCUOIv1",
"njVN10BT-Bq",
"5zFmVkiyiTH",
"nips_2021__vypaVMDs51",
"nJmbtuVI0uF",
"TQbIXsucx9",
"vlLVKhTCHqx",
"d_8x64WPQsi",
"PjRlLOBdzka",
"ieIKWmxCrj",
"H-m6a5HfO2d",
"... |
nips_2021_uholDBWSVP | Beyond Smoothness: Incorporating Low-Rank Analysis into Nonparametric Density Estimation | Robert A. Vandermeulen, Antoine Ledent | accept | The authors present an analysis of two classes of histogram-based density estimators that incorporate low-rankness structures analogous to CP and Tucker tensor factorizations, and experiments show that in practice a simple heuristic motivated by the theory outperforms standard histograms. The major contribution of the paper is the theoretical analysis of the convergence of histogram estimators incorporating low-rankness, both asymptotically and non-asymptotically, which shows their convergence rate is independent of the dimension. These results quantify the benefits of using low-rank histogram estimators, as the convergence rate of the standard histogram estimator degrades with the dimension. They are significant contributions to the theoretical underpinnings of the growing area of low-rank density estimation. | train | [
"JDrSfKDtgrQ",
"Zc_rB6fdHSD",
"y-6rZaNYTOk",
"Ma74aFL38ee",
"xCSv0Wj2Wi3",
"-bFhqKGBK3",
"V3JeoE5m-xO",
"Q8TOiuQW7lv",
"-8rPFafs5r4",
"1IYay-GVNZc",
"y414y78wY55",
"mqeh8JOLNuR",
"goQqFawOT3C",
"hka3tQMwXk5",
"c8y9nOKvv8"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would very much appreciate it if you would elaborate on the lack of novelty as this was not mentioned in your “weaknesses” or “points to improve” and this concern stands in stark contrast to the other reviewers (as mentioned in the general response). We are aware of the following works on our topic, low rank n...
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2
] | [
"y-6rZaNYTOk",
"-8rPFafs5r4",
"Q8TOiuQW7lv",
"nips_2021_uholDBWSVP",
"-bFhqKGBK3",
"V3JeoE5m-xO",
"1IYay-GVNZc",
"hka3tQMwXk5",
"c8y9nOKvv8",
"Ma74aFL38ee",
"nips_2021_uholDBWSVP",
"goQqFawOT3C",
"nips_2021_uholDBWSVP",
"nips_2021_uholDBWSVP",
"nips_2021_uholDBWSVP"
] |
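To make the low-rank structure in the record above concrete: a rank-$r$, CP-style histogram estimator writes the $d$-dimensional density as a small mixture of products of one-dimensional histograms (our notation, a generic form rather than the paper's exact estimator):

```latex
\hat{p}(x_1,\dots,x_d) \;=\; \sum_{k=1}^{r} w_k \prod_{j=1}^{d} \hat{f}_k^{(j)}(x_j),
\qquad w_k \ge 0, \quad \sum_{k=1}^{r} w_k = 1,
```

where each $\hat{f}_k^{(j)}$ is a one-dimensional histogram on coordinate $j$; the Tucker-style variant replaces the diagonal weights $w_k$ with a full core tensor. The dimension-independent rates discussed above stem from estimating $r \cdot d$ one-dimensional factors instead of a single $d$-dimensional table.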
nips_2021_SV4NhqUoO8 | Multi-View Representation Learning via Total Correlation Objective | Multi-View Representation Learning (MVRL) aims to discover a shared representation of observations from different views with the complex underlying correlation. In this paper, we propose a variational approach which casts MVRL as maximizing the amount of total correlation reduced by the representation, aiming to learn a shared latent representation that is informative yet succinct to capture the correlation among multiple views. To this end, we introduce a tractable surrogate objective function under the proposed framework, which allows our method to fuse and calibrate the observations in the representation space. From the information-theoretic perspective, we show that our framework subsumes existing multi-view generative models. Lastly, we show that our approach straightforwardly extends to the Partial MVRL (PMVRL) setting, where the observations are missing without any regular pattern. We demonstrate the effectiveness of our approach in the multi-view translation and classification tasks, outperforming strong baseline methods.
| accept | This paper proposes a total correlation objective within a VAE framework for multi-view representation learning, and the proposed method can handle situations with missing views.
The main concern of the reviewers is with the presentation and clarity.
The overall assessment is that this is an incrementally novel contribution with incrementally improved experimental results. | train | [
"dRZtpqR0yPg",
"pa3NQZQ280D",
"npGxXJt_n8b",
"e3upTJzmhV7",
"Uqtk7KjVrSr",
"xrCd-xs1erC",
"CgI6CgiZ_v7",
"NdAnka7ayU",
"-ivHGmW1Bec",
"JN_GTNKcnfx",
"mn5zSseC5zD",
"go0mFhJT2Xb",
"f7h2crCAJ93"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate your additional feedback.\nWe are glad to hear that our formulation is interesting.\nWe will revise our paper to improve the clarity, carefully considering all the suggestions from reviewers.\nThank you.",
" We appreciate your additional feedback.\nWe will carefully reflect all the suggestions fro... | [
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
-1,
2,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"e3upTJzmhV7",
"xrCd-xs1erC",
"nips_2021_SV4NhqUoO8",
"-ivHGmW1Bec",
"nips_2021_SV4NhqUoO8",
"mn5zSseC5zD",
"nips_2021_SV4NhqUoO8",
"f7h2crCAJ93",
"npGxXJt_n8b",
"go0mFhJT2Xb",
"Uqtk7KjVrSr",
"nips_2021_SV4NhqUoO8",
"nips_2021_SV4NhqUoO8"
] |
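For reference, the quantity named in the record above can be written as follows (standard definitions in our notation; the paper's actual objective is a tractable surrogate of this). The total correlation of $V$ views and the amount of it "reduced" by a representation $z$ are

```latex
\mathrm{TC}(x_{1:V}) \;=\; \sum_{v=1}^{V} H(x_v) - H(x_{1:V})
\;=\; \mathrm{KL}\!\Big(p(x_{1:V}) \,\Big\|\, \textstyle\prod_{v} p(x_v)\Big),
\qquad
\Delta(z) \;=\; \mathrm{TC}(x_{1:V}) - \mathrm{TC}(x_{1:V} \mid z),
```

so maximizing $\Delta(z)$ asks the shared latent $z$ to explain as much of the statistical dependence among the views as possible.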
nips_2021_wZYWwJvkneF | FACMAC: Factored Multi-Agent Centralised Policy Gradients | Bei Peng, Tabish Rashid, Christian Schroeder de Witt, Pierre-Alexandre Kamienny, Philip Torr, Wendelin Boehmer, Shimon Whiteson | accept | This paper generated an involved discussion between the reviewers and the authors, as well as between the reviewers themselves.
The paper essentially combines two well-known baselines in the MARL domain, and therefore was judged as a report of experiments for which reproducibility is of particular importance. The reviewers had an intense discussion about reproducibility: two of them tried the code provided by the authors, and one of them commented on it based on their experience. The analysis raised significant doubts about reproducibility (it is unfortunate that the authors did not provide configs for their experiments, which would have made it possible to rerun them exactly). | train | [
"gfWUtEJTei6",
"p5ITSEL_Gtz",
"9pH_3_SaIbS",
"7fKmydM8PqS",
"Y7u4TjMSRMm",
"O6HA7F3oBI",
"4jRNSWZwPud",
"cF8GTcbzxii",
"hqSf-per0xr"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes a new multi-agent deterministic policy gradient method, called FACMAC, which is essentially MADDPG + QMIX. Although the combination is rather trivial, the author claims that they improve MADDPG in the sense that FACMAC optimise in the joint-action space rather than each individual agent's actio... | [
1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_wZYWwJvkneF",
"nips_2021_wZYWwJvkneF",
"7fKmydM8PqS",
"O6HA7F3oBI",
"p5ITSEL_Gtz",
"4jRNSWZwPud",
"gfWUtEJTei6",
"hqSf-per0xr",
"nips_2021_wZYWwJvkneF"
] |
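A minimal sketch of the factored centralised critic discussed in the record above: per-agent utilities are mixed into a joint value, and a single policy-gradient step flows through the joint action into every actor. Network sizes, the plain MLP mixer (real QMIX-style mixers enforce monotonicity via hypernetworks, omitted here), and all names are our illustrative assumptions.

```python
import torch
import torch.nn as nn

n_agents, obs_dim, act_dim, state_dim = 3, 8, 2, 16
actors  = nn.ModuleList(nn.Linear(obs_dim, act_dim) for _ in range(n_agents))
critics = nn.ModuleList(nn.Linear(obs_dim + act_dim, 1) for _ in range(n_agents))
mixer   = nn.Sequential(nn.Linear(n_agents + state_dim, 32), nn.ReLU(), nn.Linear(32, 1))

obs, state = torch.randn(n_agents, obs_dim), torch.randn(state_dim)
acts  = [torch.tanh(actors[i](obs[i])) for i in range(n_agents)]    # joint action
q_i   = torch.cat([critics[i](torch.cat([obs[i], acts[i]]))         # per-agent utilities
                   for i in range(n_agents)])
q_tot = mixer(torch.cat([q_i, state]))  # factored joint critic, conditioned on state
(-q_tot).backward()                     # one centralised gradient step reaches all actors
```

The contrast with per-agent MADDPG updates is that the gradient here is taken through `q_tot` over the joint-action space, rather than separately for each agent's action.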
nips_2021_Wp3we5kv6P | EDGE: Explaining Deep Reinforcement Learning Policies | With the rapid development of deep reinforcement learning (DRL) techniques, there is an increasing need to understand and interpret DRL policies. While recent research has developed explanation methods to interpret how an agent determines its moves, they cannot capture the importance of actions/states to a game's final result. In this work, we propose a novel self-explainable model that augments a Gaussian process with a customized kernel function and an interpretable predictor. Together with the proposed model, we also develop a parameter learning procedure that leverages inducing points and variational inference to improve learning efficiency. Using our proposed model, we can predict an agent's final rewards from its game episodes and extract time step importance within episodes as strategy-level explanations for that agent. Through experiments on Atari and MuJoCo games, we verify the explanation fidelity of our method and demonstrate how to employ interpretation to understand agent behavior, discover policy vulnerabilities, remediate policy errors, and even defend against adversarial attacks.
| accept | The authors propose a new approach to explaining deep reinforcement learning policies. The approach applies to complex or black-box policies, and learns a Gaussian process (GP) model with a custom kernel function to predict game outcomes and to assess the importance of state-action pairs for those outcomes. The authors illustrate the feasibility of their approach in Atari and MuJoCo domains and demonstrate several application areas.
Initial reviews assessed the contribution as strong, especially because of the method's wide applicability and strong experimental evidence. Several concerns were raised as well, including clarity (e.g., the complexity of the method and need for more precise explanations) and concerns about potentially restrictive assumptions. Suggestions for improvements included additional empirical evaluations, including a proposed user study to test the effectiveness of the approach. The discussion between reviewers and authors has been highly productive, with many concerns addressed. The authors went above and beyond by running the suggested user study during the rebuttal period.
As a result of the productive discussion, all reviewers indicated that their concerns have been largely addressed and reached the consensus to recommend acceptance. The AC agrees with this recommendation. Given that substantial new insights were developed during the discussion, the authors are strongly encouraged to carefully incorporate all suggestions as they prepare the camera ready version. | train | [
"YjAk_N6079g",
"9AO46XEfAa",
"esjgRcI0yX",
"jyiGzGGxnE",
"L9pQdOseK99",
"jR64fOW7cZg",
"JUX-R8drCY0",
"Tjfq_Rrdnno",
"gefWXv1swob",
"C23Pdc-Q4Kz",
"FFGwK3LsqHz",
"Gz2bQde_c1_",
"3WhoWiHeHf",
"8jV0iIm-B6Y",
"mt0ns00ytVt",
"WtUYrQeq_M0",
"NLQ6WG-5qpH",
"UT2EQ1lEFBi",
"Ef4PSA0M9s8",... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
... | [
" We thank Reviewer mSsX for the reconsideration. Your review was very detailed and helped us find several points that needed further explanation. We believe the paper will be much improved for having had this discussion, and we will make sure this is reflected in the final revision.",
"This paper presents a meth... | [
-1,
6,
-1,
-1,
7,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
3,
-1,
-1,
4,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"esjgRcI0yX",
"nips_2021_Wp3we5kv6P",
"mt0ns00ytVt",
"WtUYrQeq_M0",
"nips_2021_Wp3we5kv6P",
"C23Pdc-Q4Kz",
"Tjfq_Rrdnno",
"3WhoWiHeHf",
"nips_2021_Wp3we5kv6P",
"FFGwK3LsqHz",
"Gz2bQde_c1_",
"o3GIoJL_Zop",
"Ef4PSA0M9s8",
"o3GIoJL_Zop",
"WtUYrQeq_M0",
"NLQ6WG-5qpH",
"9AO46XEfAa",
"9A... |
nips_2021_ctusEbqyLwO | Learning to Assimilate in Chaotic Dynamical Systems | The accuracy of simulation-based forecasting in chaotic systems is heavily dependent on high-quality estimates of the system state at the beginning of the forecast. Data assimilation methods are used to infer these initial conditions by systematically combining noisy, incomplete observations and numerical models of system dynamics to produce highly effective estimation schemes. We introduce a self-supervised framework, which we call \textit{amortized assimilation}, for learning to assimilate in dynamical systems. Amortized assimilation combines deep learning-based denoising with differentiable simulation, using independent neural networks to assimilate specific observation types while connecting the gradient flow between these sub-tasks with differentiable simulation and shared recurrent memory. This hybrid architecture admits a self-supervised training objective which is minimized by an unbiased estimator of the true system state even in the presence of only noisy training data. Numerical experiments across several chaotic benchmark systems highlight the improved effectiveness of our approach compared to widely-used data assimilation methods.
| accept | From the SAC. This is an instance where the rebuttal and the discussion worked. While the original decision for this paper was to not accept, it is being raised to a recommended accept. The primary reason is the quality of the rebuttal and the useful technical discussion between authors and reviewers that ensued, which seems to have been revealing (in particular, with reviewer kG8H). To the authors: I trust that you will take all reviewer feedback into account and, most importantly, that all of the things in your rebuttal and discussion that were promised will be done in the next version of the paper. | train | [
"IQaS5ZTqoVu",
"ekGIx1HLhTu",
"rR3FITNs0jv",
"u9d0h3LK5RB",
"eYcUN8EUto5",
"RN-a_FlL380",
"M9MJ-IH0OL",
"bWSHuEOsNWv",
"6HEj7PUKGYT"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer"
] | [
"The overarching goal of this paper is to improve assimilation of (chaotic) dynamical systems. The authors introduced a self-supervised framework, amortized assimilation, which uses methods from Ensemble filters and supervised denoising. \n In Section 3, they develop their own method, first focusing on amortized en... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
-1,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
5,
-1,
3
] | [
"nips_2021_ctusEbqyLwO",
"eYcUN8EUto5",
"6HEj7PUKGYT",
"RN-a_FlL380",
"IQaS5ZTqoVu",
"bWSHuEOsNWv",
"nips_2021_ctusEbqyLwO",
"M9MJ-IH0OL",
"nips_2021_ctusEbqyLwO"
] |
nips_2021_t4485RO6O8P | Object-aware Contrastive Learning for Debiased Scene Representation | Contrastive self-supervised learning has shown impressive results in learning visual representations from unlabeled images by enforcing invariance against different data augmentations. However, the learned representations are often contextually biased to the spurious scene correlations of different objects or object and background, which may harm their generalization on the downstream tasks. To tackle the issue, we develop a novel object-aware contrastive learning framework that first (a) localizes objects in a self-supervised manner and then (b) debias scene correlations via appropriate data augmentations considering the inferred object locations. For (a), we propose the contrastive class activation map (ContraCAM), which finds the most discriminative regions (e.g., objects) in the image compared to the other images using the contrastively trained models. We further improve the ContraCAM to detect multiple objects and entire shapes via an iterative refinement procedure. For (b), we introduce two data augmentations based on ContraCAM, object-aware random crop and background mixup, which reduce contextual and background biases during contrastive self-supervised learning, respectively. Our experiments demonstrate the effectiveness of our representation learning framework, particularly when trained under multi-object images or evaluated under the background (and distribution) shifted images. Code is available at https://github.com/alinlab/object-aware-contrastive.
| accept | All reviewers are positive about this paper. The idea of using CAM to localize the most salient object in an image and using it to reduce background and contextual bias in scene-centric (as opposed to object-centered) datasets for self-supervised learning is interesting and seems to indeed alleviate the issues. I recommend accepting the paper as a poster. | val | [
"3csBL3r0WPq",
"wCjXcrWB4S",
"lgxCUp6isr",
"YjD2Wlz3WQt",
"HibXn_xBI7N",
"dUsG7DZzFZ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes a new learning scheme for representation learning. Using class activation maps, augmentations are obtained for contrastive learning that are aware of the objects in the image. Results include results on multiple classification, segmentation, and detection datasets. \n\nNote on 10 sept 2021: reb... | [
7,
7,
-1,
-1,
-1,
6
] | [
3,
4,
-1,
-1,
-1,
5
] | [
"nips_2021_t4485RO6O8P",
"nips_2021_t4485RO6O8P",
"wCjXcrWB4S",
"dUsG7DZzFZ",
"3csBL3r0WPq",
"nips_2021_t4485RO6O8P"
] |
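A minimal sketch of the background-mixup augmentation described in the record above, assuming an object mask has already been obtained by thresholding a normalised ContraCAM map. The threshold, the blending scheme, and the function name are our illustrative choices, not the paper's exact recipe.

```python
import torch

def background_mixup(img, bg, cam, thr=0.5):
    """Keep the detected object, mix only the background with another image.

    img, bg: (C, H, W) tensors; cam: (H, W) activation map in [0, 1].
    """
    mask = (cam > thr).float()   # 1 inside the detected object region
    lam = torch.rand(())         # mixup coefficient, applied to background only
    return mask * img + (1 - mask) * (lam * img + (1 - lam) * bg)

out = background_mixup(torch.rand(3, 32, 32), torch.rand(3, 32, 32), torch.rand(32, 32))
```

Because only the background pixels are interpolated, the learned representation is discouraged from relying on background cues while the object signal is left intact.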
nips_2021_Esd7tGH3Spl | Evaluating Efficient Performance Estimators of Neural Architectures | Conducting efficient performance estimations of neural architectures is a major challenge in neural architecture search (NAS). To reduce the architecture training costs in NAS, one-shot estimators (OSEs) amortize the architecture training costs by sharing the parameters of one "supernet" between all architectures. Recently, zero-shot estimators (ZSEs) that involve no training are proposed to further reduce the architecture evaluation cost. Despite the high efficiency of these estimators, the quality of such estimations has not been thoroughly studied. In this paper, we conduct an extensive and organized assessment of OSEs and ZSEs on five NAS benchmarks: NAS-Bench-101/201/301, and NDS ResNet/ResNeXt-A. Specifically, we employ a set of NAS-oriented criteria to study the behavior of OSEs and ZSEs and reveal that they have certain biases and variances. After analyzing how and why the OSE estimations are unsatisfying, we explore how to mitigate the correlation gap of OSEs from several perspectives. Through our analysis, we give out suggestions for future application and development of efficient architecture performance estimators. Furthermore, the analysis framework proposed in our work could be utilized in future research to give a more comprehensive understanding of newly designed architecture performance estimators.
| accept | This paper goes deep into the issues of one-shot supernet-based NAS methods as well as recent zero-cost NAS methods. The authors have done an excellent job of empirically analyzing the various issues, suggesting improvements to one-shot estimators such as de-isomorphic sampling, and showing (with a short proof) that zero-cost estimators consistently overestimate the performance of larger networks.
During the rich discussion phase one concern that consistently came up was that the paper was packing a lot of dense information (see concerns of reviewer SU6e) and at times it was hard to read because of that. But the reviewers and authors have excellent suggestions on improving the readability. Please incorporate those aspects into the manuscript.
Reviewer Q7zt has suggested a number of improvements, including citations to other relevant work (and even concurrent work) that should be included in the next version, as well as the new results using the original 'relu_logdet' implementation and the changed conclusion with respect to the number of parameters.
Overall this paper presents valuable insights to the NAS community!
| val | [
"OdA6FHdu_oC",
"NAPq3Sn9MoA",
"xrvM1HgtImz",
"9YAxk2H0uag",
"fkxLzFO_3cR",
"mB3C3Q_9nz5",
"2xPYXznkZs-",
"XZ6RTdtDCkS",
"Rz3fEsMacX0",
"SqDHzBrzSxE",
"QdQ0VYwwOHx",
"1wBIqqSlHG-",
"jwMmJ2Qh0_J",
"kHAHG_m_Fg",
"snLJ80NTB6L",
"bpHPokJNxeL",
"QCDmp3989X"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" The authors effectively address all my concerns in the initial review. I do appreciate other reviewers pointed out several detailed issues in the manuscript which I was not aware in the initial review. Especially, the current paper is full of all sorts of details and conclusions, as Reviewer SU6e mentioned. It i... | [
-1,
-1,
-1,
-1,
-1,
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
"jwMmJ2Qh0_J",
"xrvM1HgtImz",
"kHAHG_m_Fg",
"2xPYXznkZs-",
"Rz3fEsMacX0",
"nips_2021_Esd7tGH3Spl",
"1wBIqqSlHG-",
"nips_2021_Esd7tGH3Spl",
"SqDHzBrzSxE",
"QdQ0VYwwOHx",
"snLJ80NTB6L",
"mB3C3Q_9nz5",
"QCDmp3989X",
"bpHPokJNxeL",
"XZ6RTdtDCkS",
"nips_2021_Esd7tGH3Spl",
"nips_2021_Esd7t... |
nips_2021_lwwEh0OM61b | A-NeRF: Articulated Neural Radiance Fields for Learning Human Shape, Appearance, and Pose | While deep learning reshaped the classical motion capture pipeline with feed-forward networks, generative models are required to recover fine alignment via iterative refinement. Unfortunately, the existing models are usually hand-crafted or learned in controlled conditions, only applicable to limited domains. We propose a method to learn a generative neural body model from unlabelled monocular videos by extending Neural Radiance Fields (NeRFs). We equip them with a skeleton to apply to time-varying and articulated motion. A key insight is that implicit models require the inverse of the forward kinematics used in explicit surface models. Our reparameterization defines spatial latent variables relative to the pose of body parts and thereby overcomes ill-posed inverse operations with an overparameterization. This enables learning volumetric body shape and appearance from scratch while jointly refining the articulated pose; all without ground truth labels for appearance, pose, or 3D shape on the input videos. When used for novel-view-synthesis and motion capture, our neural model improves accuracy on diverse datasets.
| accept | The paper proposes A-NeRF, a generative neural body model capable of rendering a human under novel viewpoints and poses. A-NeRF is an extension of NeRF to capture dynamic human bodies. This is achieved by conditioning NeRF on skeletal body pose information. The paper demonstrates that naively conditioning NeRF on the body pose information does not lead to optimal performance, and therefore proposes a set of relative encodings of the pose information that result in significantly better renderings without the need for a full-body mesh model.
Reviewers raised concerns regarding the similarity of the proposed formulation to NeuralBody and its advantages over it, which the rebuttal addressed to some extent. Reviewers agree on the novelty of the formulation, specifically the spatial encodings for the ray and query point with respect to the skeleton bones. The paper is recommended for publication.
| train | [
"rWcssMoKfwb",
"ItSU90msyCY",
"aqRVMsFllRg",
"ncjMeM8vNMd",
"xvYTHpNADpk",
"6DpmqYnVLKp",
"_c6RACRKCSE",
"umwukdN2sFY",
"0_mNzLsEJkJ",
"h0xKICCmyvr",
"xXDXF7BahG",
"9R_RcD8nUb2",
"DGs4xtsgtFp",
"ETsXX8uQiv"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your time and valuable feedback. Please check the shared comments above for general remarks.\n\n**How is the background removed from the NeRF rendering?**\n\nThe model is rendered on white background by replacing the background sample in the ray-marching (line 139) with white. This visualization is ... | [
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
4
] | [
-1,
5,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"ETsXX8uQiv",
"nips_2021_lwwEh0OM61b",
"xXDXF7BahG",
"nips_2021_lwwEh0OM61b",
"6DpmqYnVLKp",
"0_mNzLsEJkJ",
"xvYTHpNADpk",
"ncjMeM8vNMd",
"umwukdN2sFY",
"DGs4xtsgtFp",
"ItSU90msyCY",
"nips_2021_lwwEh0OM61b",
"nips_2021_lwwEh0OM61b",
"nips_2021_lwwEh0OM61b"
] |
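A compact statement of the skeleton-relative reparameterization described in the record above (our notation): with $(R_i, t_i)$ the world pose of bone $i$ obtained by forward kinematics, each 3D query point $x$ along a camera ray is re-expressed in every bone's local frame,

```latex
\tilde{x}_i \;=\; R_i^{-1}\,(x - t_i), \qquad i = 1, \dots, B,
```

and the radiance MLP consumes an embedding of all $B$ bone-relative coordinates $\{\tilde{x}_i\}$ rather than the raw world coordinate. This deliberate overparameterization is what lets the implicit model sidestep the ill-posed inverse of forward kinematics mentioned in the abstract.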
nips_2021_6oyeQ-1c_91 | Differential Privacy Over Riemannian Manifolds | In this work we consider the problem of releasing a differentially private statistical summary that resides on a Riemannian manifold. We present an extension of the Laplace or K-norm mechanism that utilizes intrinsic distances and volumes on the manifold. We also consider in detail the specific case where the summary is the Fr\'echet mean of data residing on a manifold. We demonstrate that our mechanism is rate optimal and depends only on the dimension of the manifold, not on the dimension of any ambient space, while also showing how ignoring the manifold structure can decrease the utility of the sanitized summary. We illustrate our framework in two examples of particular interest in statistics: the space of symmetric positive definite matrices, which is used for covariance matrices, and the sphere, which can be used as a space for modeling discrete distributions.
| accept | This work studies the question of extending private mechanisms to the case when the output is known to lie on a manifold in R^d. A natural approach in such settings is to add noise as if the output was in R^d, and then project back to the manifold. The authors propose a different noise mechanism that lives on the manifold and can be potentially better in certain settings.
The reviewers had several useful comments that the authors will do well to address. Some of the discussion in the rebuttal helped clarify the paper's contribution and including some of that in the paper will help improve the paper.
I think the paper pushes the research in DP mechanisms forward in a potentially fruitful direction and I recommend accepting it. | train | [
"yhw-yhB5dqu",
"WMb-yVSM0h5",
"qh7bNxL1JHw",
"jhbFT7-dfdV",
"kwJYsSAiqvO",
"bQq7va4kea",
"EX-DqsZwErM",
"3ySTsr-I5IQ",
"JP1kFf1sadr",
"mwfvIpuMOn",
"OVf66NBrhIC",
"1E9wVSt0A2k",
"pz2MakF6lMl",
"dJkHDxNI0fs"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We greatly appreciate it. We agree that the point about the nature of the asymptotic savings could be made more clear and will adjust the paper accordingly. ",
" Any bias inherent in the Laplace mechanism is at least asymptotically negligible (as our utility result guarantees) the the mechanism will concentra... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
5,
9,
7,
9
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"kwJYsSAiqvO",
"qh7bNxL1JHw",
"JP1kFf1sadr",
"nips_2021_6oyeQ-1c_91",
"3ySTsr-I5IQ",
"dJkHDxNI0fs",
"pz2MakF6lMl",
"jhbFT7-dfdV",
"OVf66NBrhIC",
"1E9wVSt0A2k",
"nips_2021_6oyeQ-1c_91",
"nips_2021_6oyeQ-1c_91",
"nips_2021_6oyeQ-1c_91",
"nips_2021_6oyeQ-1c_91"
] |
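For reference, the intrinsic Laplace-type mechanism in the record above releases a manifold point with density proportional to an exponential of geodesic distance (our paraphrase; the density is taken with respect to the Riemannian volume measure):

```latex
p(x \mid \eta) \;\propto\; \exp\!\Big(-\frac{\rho(x, \eta)}{\sigma}\Big), \qquad x \in \mathcal{M},
```

where $\eta$ is the non-private summary (e.g., the Fréchet mean), $\rho$ is the geodesic distance on $\mathcal{M}$, and the rate $\sigma$ is calibrated to the privacy budget $\varepsilon$ and the geodesic sensitivity of the summary. Taking $\mathcal{M} = \mathbb{R}^d$ with a suitable norm recovers the usual Laplace/K-norm mechanism.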
nips_2021_0jHeZ7-ehGr | How can classical multidimensional scaling go wrong? | Rishi Sonthalia, Greg Van Buskirk, Benjamin Raichel, Anna Gilbert | accept | This seems to be a strong theoretical contribution to NeurIPS, although some of the reviewers and I suspect it is less relevant for practical usage. | test | [
"qHNR70RDxqQ",
"jvuXhJC5V2Q",
"sjR5DnpZtD",
"Ioct_nKV2kB",
"y8e1z9_0nQe",
"PgkQ_5phQ3J",
"33uVSjCEHWg",
"6Dxh75e5W_T",
"yrmiwOXMOzV",
"DrhCf6TFcDF",
"L4EgoloxgWc",
"afbWI-6PxvO",
"Nnd81znJcC",
"jRaBx1YkVRj"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The work shows a scenario that the classical MDS algorithm fails. \"When the derived matrix has a significant number of negative eigenvalues,\" the classical MDS algorithm tends to fail when the number of dimensions increases sufficiently large. Then the work designs an algorithm with improved performance bound an... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_0jHeZ7-ehGr",
"afbWI-6PxvO",
"DrhCf6TFcDF",
"y8e1z9_0nQe",
"PgkQ_5phQ3J",
"L4EgoloxgWc",
"nips_2021_0jHeZ7-ehGr",
"yrmiwOXMOzV",
"33uVSjCEHWg",
"jRaBx1YkVRj",
"Nnd81znJcC",
"qHNR70RDxqQ",
"nips_2021_0jHeZ7-ehGr",
"nips_2021_0jHeZ7-ehGr"
] |
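For context on the failure mode analysed in the record above, here is a minimal NumPy sketch of classical MDS from a squared-distance matrix: when the input is non-Euclidean, the doubly-centred Gram matrix has negative eigenvalues that cMDS silently discards, which is exactly where the method can go wrong as the embedding dimension grows. Variable and function names are ours.

```python
import numpy as np

def classical_mds(D, k):
    """Embed n points into R^k from an (n, n) matrix D of squared distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ D @ J                  # doubly-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]      # keep the top-k eigenvalues ...
    pos = np.clip(vals[idx], 0.0, None)   # ... zeroing any negative ones (the lossy step)
    return vecs[:, idx] * np.sqrt(pos)

P = np.random.randn(10, 3)
D = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)   # Euclidean squared distances
X = classical_mds(D, 2)
```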
nips_2021_chuGnZMuye | Modeling Heterogeneous Hierarchies with Relation-specific Hyperbolic Cones | Hierarchical relations are prevalent and indispensable for organizing human knowledge captured by a knowledge graph (KG). The key property of hierarchical relations is that they induce a partial ordering over the entities, which needs to be modeled in order to allow for hierarchical reasoning. However, current KG embeddings can model only a single global hierarchy (single global partial ordering) and fail to model multiple heterogeneous hierarchies that exist in a single KG. Here we present ConE (Cone Embedding), a KG embedding model that is able to simultaneously model multiple hierarchical as well as non-hierarchical relations in a knowledge graph. ConE embeds entities into hyperbolic cones and models relations as transformations between the cones. In particular, ConE uses cone containment constraints in different subspaces of the hyperbolic embedding space to capture multiple heterogeneous hierarchies. Experiments on standard knowledge graph benchmarks show that ConE obtains state-of-the-art performance on hierarchical reasoning tasks as well as knowledge graph completion task on hierarchical graphs. In particular, our approach yields new state-of-the-art Hits@1 of 45.3% on WN18RR and 16.1% on DDB14 (0.231 MRR). As for hierarchical reasoning task, our approach outperforms previous best results by an average of 20% across the three datasets.
| accept | This paper proposes a knowledge graph embedding method called ConE, which can simultaneously model both hierarchical and non-hierarchical relations by embedding entities into hyperbolic cones with relations as transformations between cones. The reviewers agree that this is a strong paper, and the authors did an excellent job in their rebuttal in addressing any remaining concerns. | test | [
"9uJQb0S-yj",
"XQ8YIDI7-M",
"FXgCjOyTawG",
"nsVXGjo-oua",
"s_fuFv6QNA",
"Pw6bBR8CGTy",
"cI564vmtQ5K",
"kKIADbQJsdE",
"WAgpGLMRqD4",
"Ec4lXJUJwXx",
"rVE6Fhrfkmv",
"mEcgbuByRA",
"bg5ezQVYjR",
"qE-AaSgGLH",
"Mp8WdqZdD2",
"a65_DrjAWmC"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This work proposes ConE, a knowledge graph embedding model which embeds entities into hyperbolic cones and represents relations as transformations between the cones. Non-hierarchical relations are modelled using rotations and hierarchical relations using restricted rotations, which impose the cone containment cons... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_chuGnZMuye",
"nsVXGjo-oua",
"s_fuFv6QNA",
"Pw6bBR8CGTy",
"cI564vmtQ5K",
"mEcgbuByRA",
"a65_DrjAWmC",
"nips_2021_chuGnZMuye",
"Ec4lXJUJwXx",
"bg5ezQVYjR",
"Mp8WdqZdD2",
"9uJQb0S-yj",
"qE-AaSgGLH",
"kKIADbQJsdE",
"nips_2021_chuGnZMuye",
"nips_2021_chuGnZMuye"
] |
nips_2021_Ifo8sa57U2f | Non-asymptotic Error Bounds for Bidirectional GANs | We derive nearly sharp bounds for the bidirectional GAN (BiGAN) estimation error under the Dudley distance between the latent joint distribution and the data joint distribution with appropriately specified architecture of the neural networks used in the model. To the best of our knowledge, this is the first theoretical guarantee for the bidirectional GAN learning approach. An appealing feature of our results is that they do not assume the reference and the data distributions to have the same dimensions or these distributions to have bounded support. These assumptions are commonly assumed in the existing convergence analysis of the unidirectional GANs but may not be satisfied in practice. Our results are also applicable to the Wasserstein bidirectional GAN if the target distribution is assumed to have a bounded support. To prove these results, we construct neural network functions that push forward an empirical distribution to another arbitrary empirical distribution on a possibly different-dimensional space. We also develop a novel decomposition of the integral probability metric for the error analysis of bidirectional GANs. These basic theoretical results are of independent interest and can be applied to other related learning problems.
| accept | This paper presents non-asymptotic error analysis for the bidirectional GANs.
In the analysis, the authors have succeeded in relaxing assumptions usually adopted in the GAN literature, such as that the latent distribution and the data distribution have the same dimension and that the true data distribution has bounded support. They have also succeeded in making some of the coefficients in the error bounds explicit. These constitute the main contributions of this paper.
The reviewers raised several questions, and the authors have addressed them adequately, as evidenced by two of the reviewers having raised their scores from below the threshold to above it. Now that all the review scores are above the threshold, I am happy to recommend acceptance of this paper for presentation at the NeurIPS conference.
Minor points:
- A learnability issue: Lemma 4.2 is based on the existence of a neural network $\psi$, and Theorem 4.3 on the existence of neural networks $g$ and $e$, with the desired properties. This lemma and theorem do not address the learnability of these neural networks. It would therefore be nicer if some empirical results were presented to at least suggest that the learnability issue is minor in practice, so that the stated error bounds are indeed relevant.
- Line 34: join(t)
- Line 59: the data joint distribution.( )To the best
- Lines 93, 94: $(\sim)\nu$, $(\sim)\mu$
- Lines 121, 124: vector(s)
- Line 138: metri(ci)zing
- Equation between lines 186 and 187: $h\in(\mathcal{F}\to\mathcal{H})$
- Definitions of $\mathcal{E}_3$ and $\mathcal{E}_4$: In the proof of Lemma 4.1, it seems that the authors used slightly different definitions of these quantities, where $g^*$ and $e^*$ are replaced with $\hat{g}$ and $\hat{e}$, respectively.
- Line 267: cover(ing) numbers
- SM, line 26: $|z_0|\le\log n$ $\to$ $\|z_0\|\le\log n$
"JC0kmOnlqj",
"R4z8Ui4ZiaR",
"a-zrIjS9RKh",
"_wVtLMxbjmH",
"_LnZUCqtwIg",
"PffHsZssxG1",
"6Oa1Ptih2i",
"lUHMm5V2es",
"wpZ_ZuY5FjR",
"jsoS1EEXdaY",
"JMkGl151m9Z",
"nFnXSDK4Z-z",
"xswDShiNSCL",
"pt6qEHuXxO-"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you so much for your positive feedback and for your taking the time to review our paper. We are very grateful to you for your review that helps us improve our paper. ",
" Thank you authors for you clear and thorough response. It was sufficient to address the concerns I raised. ",
" Thank you so much for... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
4
] | [
"R4z8Ui4ZiaR",
"xswDShiNSCL",
"_LnZUCqtwIg",
"nips_2021_Ifo8sa57U2f",
"JMkGl151m9Z",
"pt6qEHuXxO-",
"_wVtLMxbjmH",
"jsoS1EEXdaY",
"nips_2021_Ifo8sa57U2f",
"nFnXSDK4Z-z",
"_wVtLMxbjmH",
"wpZ_ZuY5FjR",
"pt6qEHuXxO-",
"nips_2021_Ifo8sa57U2f"
] |
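For reference, the error metric in the record above is an integral probability metric (IPM); in our notation,

```latex
d_{\mathcal{F}}(\mu, \nu) \;=\; \sup_{f \in \mathcal{F}} \Big[\, \mathbb{E}_{X \sim \mu} f(X) \;-\; \mathbb{E}_{Y \sim \nu} f(Y) \,\Big],
```

which is the Dudley metric when $\mathcal{F}$ is the unit ball of bounded Lipschitz functions ($\|f\|_\infty + \mathrm{Lip}(f) \le 1$) and the Wasserstein-1 distance when $\mathcal{F}$ is the 1-Lipschitz ball. In the bidirectional setting, $\mu$ and $\nu$ are the two joint distributions over (data, latent) pairs, i.e., $(X, E(X))$ versus $(G(Z), Z)$.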
nips_2021_RcfJUrZzhoL | Confidence-Aware Imitation Learning from Demonstrations with Varying Optimality | Most existing imitation learning approaches assume the demonstrations are drawn from experts who are optimal, but relaxing this assumption enables us to use a wider range of data. Standard imitation learning may learn a suboptimal policy from demonstrations with varying optimality. Prior works use confidence scores or rankings to capture beneficial information from demonstrations with varying optimality, but they suffer from many limitations, e.g., manually annotated confidence scores or high average optimality of demonstrations. In this paper, we propose a general framework to learn from demonstrations with varying optimality that jointly learns the confidence score and a well-performing policy. Our approach, Confidence-Aware Imitation Learning (CAIL), learns a well-performing policy from confidence-reweighted demonstrations, while using an outer loss to track the performance of our model and to learn the confidence. We provide theoretical guarantees on the convergence of CAIL and evaluate its performance in both simulated and real robot experiments. Our results show that CAIL significantly outperforms other imitation learning methods from demonstrations with varying optimality. We further show that even without access to any optimal demonstrations, CAIL can still learn a successful policy, and outperforms prior work.
| accept | This paper considers imitation learning from demonstrations with varying optimality. The authors propose a framework to jointly learn confidence scores for demonstrations and a well-performing policy.
The reviewers find the research problem interesting. The strengths include a bi-level optimization formulation of the problem, promising theoretical results, and strong empirical results. There is some concern about the applicability of the proposed method to high-dimensional, complex tasks. However, there is a clear consensus among the reviewers that the paper should be accepted.
It is recommended that the authors consider whether there is a connection between this work and Zhang et al., Causal imitation learning with unobserved confounders, NeurIPS 2020.
| test | [
"r7FzMSWOKGV",
"DhqylatnHU8",
"DG8311wsNnw",
"IFz12tFRKvk",
"-M49sLmKMNs",
"DqIpd6CFkf",
"tNXtyq9pEi4",
"3bUsLYg_YEI",
"90RjPUUWHSL",
"1jrwUMQNqZD",
"aWIlyXRAY3u"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"* This paper proposes a novel imitation learning algorithm, Confidence-Aware Imitation Learning (CAIL), to learn policy from suboptimal data.\n * In particular, this proposed algorithm iteratively does two gradient updates:\n 1. learns a function, \\beta, that maps (state, action) to a confidence score using a... | [
8,
7,
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
3,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"nips_2021_RcfJUrZzhoL",
"nips_2021_RcfJUrZzhoL",
"nips_2021_RcfJUrZzhoL",
"tNXtyq9pEi4",
"nips_2021_RcfJUrZzhoL",
"3bUsLYg_YEI",
"90RjPUUWHSL",
"-M49sLmKMNs",
"DG8311wsNnw",
"r7FzMSWOKGV",
"DhqylatnHU8"
] |
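Editorial sketch for the CAIL record above. The core mechanism, fitting a policy to confidence-reweighted demonstrations, is illustrated below on synthetic data with a linear softmax policy. This is not CAIL itself: CAIL learns the confidence against a separate outer loss, while the confidence update here is a simplified self-scoring heuristic (which is circular, and is exactly the failure mode the paper's bi-level formulation is meant to avoid); all names are hypothetical.

```python
# Minimal confidence-reweighted behavioral cloning sketch (hypothetical names;
# CAIL's outer-loss-driven confidence learning is replaced by a crude heuristic).
import numpy as np

rng = np.random.default_rng(0)
n, d, A = 200, 4, 3                                # demos, state dim, actions
states = rng.normal(size=(n, d))
theta_star = rng.normal(size=(d, A))               # "expert" policy parameters
actions = (states @ theta_star).argmax(axis=1)
actions[: n // 2] = rng.integers(A, size=n // 2)   # half the demos are noisy

theta = np.zeros((d, A))
weights = np.ones(n)                               # per-demonstration confidence

def grad_weighted_nll(theta, weights):
    """Gradient of the confidence-weighted negative log-likelihood."""
    logits = states @ theta
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    onehot = np.eye(A)[actions]
    return states.T @ ((p - onehot) * weights[:, None]) / n

for step in range(500):
    theta -= 0.5 * grad_weighted_nll(theta, weights)   # inner: policy update
    if step % 50 == 0:                                 # outer: re-score demos
        logits = states @ theta
        logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        score = logp[np.arange(n), actions]
        weights = np.exp(score - score.max())
        weights *= n / weights.sum()                   # keep mean weight at 1
```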
nips_2021_9B0JMeySlZM | Answering Complex Causal Queries With the Maximum Causal Set Effect | The standard tools of causal inference have been developed to answer simple causal queries which can be easily formalized as a small number of statistical estimands in the context of a particular structural causal model (SCM); however, scientific theories often make diffuse predictions about a large number of causal variables. This article proposes a framework for parameterizing such complex causal queries as the maximum difference in causal effects associated with two sets of causal variables that have a researcher specified probability of occurring. We term this estimand the Maximum Causal Set Effect (MCSE) and develop an estimator for it that is asymptotically consistent and conservative in finite samples under assumptions that are standard in the causal inference literature. This estimator is also asymptotically normal and amenable to the non-parametric bootstrap, facilitating classical statistical inference about this novel estimand. We compare this estimator to more common latent variable approaches and find that it can uncover larger causal effects in both real world and simulated data.
| accept | The paper considers a relevant problem, but reviewers have identified some sloppiness regarding causal claims:
The discussion of the mediators case on p. 4 struck several reviewers as confusing (and possibly confused). The formula given between lines 119 and 120 is sensible with respect to the moderators case, but hardly makes sense with respect to the mediators case. The authors should either explain how to read or understand the formula in the mediators case so that it evidently expresses a meaningful effect differential, or use a different, clearly applicable formula for the mediators case, or at least refrain from suggesting that the said formula is also applicable to the mediators case.
Unfortunately, the authors replied to this issue in a very non-explicit way although it's a crucial point from the causal perspective.
We decided to accept this paper despite this problem. It should be emphasised, however, that we considered it serious enough to discuss rejection since a paper on causality should be careful about causal claims. We therefore ask the authors to consider the issue carefully in the revised version.
| val | [
"kiR96fDTv-E",
"5bvpIq601d",
"2BdW-XECWR8",
"qhKAYMHQl-",
"jpRferD0oqE",
"b3ccrL52SC_",
"m4pLpU-xQui",
"HHKGA2yq8ec"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for pushing us to give more thought to the many mediators case, which we now appreciate is more nuanced than we had initially believed. At this point, we're thinking that the paper would be most improved by removing the many mediators example and focusing on the many causes and moderators ones instead. ... | [
-1,
6,
-1,
-1,
-1,
-1,
5,
8
] | [
-1,
3,
-1,
-1,
-1,
-1,
2,
4
] | [
"2BdW-XECWR8",
"nips_2021_9B0JMeySlZM",
"b3ccrL52SC_",
"HHKGA2yq8ec",
"m4pLpU-xQui",
"5bvpIq601d",
"nips_2021_9B0JMeySlZM",
"nips_2021_9B0JMeySlZM"
] |
nips_2021_VtlGqVzja48 | Identifiability in inverse reinforcement learning | Inverse reinforcement learning attempts to reconstruct the reward function in a Markov decision problem, using observations of agent actions. As already observed in Russell [1998] the problem is ill-posed, and the reward function is not identifiable, even under the presence of perfect information about optimal behavior. We provide a resolution to this non-identifiability for problems with entropy regularization. For a given environment, we fully characterize the reward functions leading to a given policy and demonstrate that, given demonstrations of actions for the same reward under two distinct discount factors, or under sufficiently different environments, the unobserved reward can be recovered up to a constant. We also give general necessary and sufficient conditions for reconstruction of time-homogeneous rewards on finite horizons, and for action-independent rewards, generalizing recent results of Kim et al. [2021] and Fu et al. [2018].
| accept | Two reviewers recommended rejection of the paper (1x reject, 1x weak reject) and two reviewers recommended (weak) acceptance of the paper. Initially the reviewers raised concerns regarding novelty, significance, and relation to existing work. This was acknowledged by reviewers and some of them increased their score for the paper. I discounted the concern regarding comparison with Kim et al. as this paper was not available by the deadline of this conference (as also indicated by the authors). Overall the discussion of the authors with the reviewers and my own reading of the paper led me to the conclusion there are some interesting contributions in the paper and I am therefore recommending acceptance of the paper.
Nevertheless, the paper in its current form does not clearly place its contributions in the context of existing results (as evidenced by a very short related work section and the initial confusion of the reviewers) and I strongly encourage the authors to make the relation to existing results clearer in the final paper (e.g., by incorporating the responses they gave to reviewers during the discussion period).
"4BIjrAD00Ia",
"IPpLfxAIO4J",
"O3pg5Yosax4",
"V1yafp0G3oj",
"et20UJFnvUe",
"Z-WCf0ytcW",
"6OrJJXQoYRd",
"fVYbntVLxzV",
"FSwGWjQsho",
"McF9GLA8Vlb",
"ON2x333dpuU"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper considers the problem of reward identifiability in a maximum entropy inverse optimal control setting when access to the expert policy is available. It is shown that the reward function can only be recovered up to a state-dependent shaping term (which relates to the temporal credit assignment). If access ... | [
5,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6
] | [
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_VtlGqVzja48",
"V1yafp0G3oj",
"nips_2021_VtlGqVzja48",
"et20UJFnvUe",
"6OrJJXQoYRd",
"O3pg5Yosax4",
"4BIjrAD00Ia",
"ON2x333dpuU",
"McF9GLA8Vlb",
"nips_2021_VtlGqVzja48",
"nips_2021_VtlGqVzja48"
] |
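A worked identity relevant to the identifiability record above, stated as a standard result from the reward-shaping literature rather than as the paper's exact characterization (which should be taken from the paper itself): in entropy-regularized MDPs, potential-based shaping leaves the soft-optimal policy unchanged, consistent with the record's claim that rewards are identifiable only up to such terms, and pinned down up to a constant once two discount factors are observed.

```latex
% For any potential \phi : S \to \mathbb{R}, the shaped reward
r'(s,a) \;=\; r(s,a) \;+\; \gamma\, \mathbb{E}_{s' \sim P(\cdot \mid s,a)}\big[\phi(s')\big] \;-\; \phi(s)
% induces the same entropy-regularized optimal policy as r, since the soft
% value function shifts by V'(s) = V(s) - \phi(s) and the policy
% \pi(a \mid s) \propto \exp\big(Q_{\mathrm{soft}}(s,a)\big) is invariant to
% state-dependent shifts of Q_{\mathrm{soft}}. Observing optimal behavior for
% the same r under two discounts \gamma_1 \neq \gamma_2 then constrains \phi
% to a constant, matching the record's recovery-up-to-a-constant claim.
```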
nips_2021_7e4FLufwij | A Probabilistic State Space Model for Joint Inference from Differential Equations and Data | Mechanistic models with differential equations are a key component of scientific applications of machine learning. Inference in such models is usually computationally demanding because it involves repeatedly solving the differential equation. The main problem here is that the numerical solver is hard to combine with standard inference techniques. Recent work in probabilistic numerics has developed a new class of solvers for ordinary differential equations (ODEs) that phrase the solution process directly in terms of Bayesian filtering. We here show that this allows such methods to be combined very directly, with conceptual and numerical ease, with latent force models in the ODE itself. It then becomes possible to perform approximate Bayesian inference on the latent force as well as the ODE solution in a single, linear complexity pass of an extended Kalman filter / smoother — that is, at the cost of computing a single ODE solution. We demonstrate the expressiveness and performance of the algorithm by training, among others, a non-parametric SIRD model on data from the COVID-19 outbreak.
| accept | The authors present a method for inferring latent functions in ODE models from observations. Reviewers widely praised the manuscript for clarity and motivation.
The discussion focussed on a technical aspect: that the method only requires a single pass of an ODE solver. This is achieved through a reworking of the problem of ODE solving as a state-space inference problem. This is an exciting insight that could change how the community thinks about solving these sorts of inverse problems.
One reviewer raised a serious concern about the lack of a societal impact statement: the authors have provided their statement in the rebuttal, with which I am satisfied; this must make it into the camera-ready edition.
"fdXhrZncKMH",
"ZxgsfBY90L",
"b93aoK4icc6",
"BiRVaaMb9e",
"DfwqB76E8yN",
"-IpIvqQ1L4",
"tjsKZiZ1hLn",
"s2YPoPSOjUM",
"nlIbtNah2Pu",
"J3KrLce1go_",
"RSp1soY_22r"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a technique for computing approximate ordinary differential equation (ODE) solution posteriors. As shown in previous work [31], the state solutions of mechanistic ODE systems can be approximated by an extended Kalman filter. The authors further extend this methodology by conditioning the filter... | [
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2
] | [
"nips_2021_7e4FLufwij",
"BiRVaaMb9e",
"nips_2021_7e4FLufwij",
"-IpIvqQ1L4",
"tjsKZiZ1hLn",
"b93aoK4icc6",
"J3KrLce1go_",
"RSp1soY_22r",
"fdXhrZncKMH",
"nips_2021_7e4FLufwij",
"nips_2021_7e4FLufwij"
] |
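A minimal sketch of the machinery the record above builds on: an ODE filter, i.e., an extended Kalman filter with an integrated-Wiener-process prior on (x, x'), where the ODE residual x' - f(x) = 0 is used as a zero-noise pseudo-observation at each step. The record's contributions (conditioning the same pass on data and on a latent force) are not shown; the step size, diffusion, and all names are illustrative assumptions.

```python
# EKF-style ODE filter sketch for x' = f(x), here logistic growth.
import numpy as np

f = lambda x: x * (1.0 - x)          # vector field
df = lambda x: 1.0 - 2.0 * x         # its derivative (for linearization)

h, sigma2 = 0.05, 1.0
A = np.array([[1.0, h], [0.0, 1.0]])                 # IWP(1) transition
Q = sigma2 * np.array([[h**3 / 3, h**2 / 2],
                       [h**2 / 2, h]])               # process noise

z = np.array([0.1, f(0.1)])          # posterior mean of [x, x'] at t = 0
P = 1e-6 * np.eye(2)                 # posterior covariance

xs = [z[0]]
for _ in range(int(2.0 / h)):
    z, P = A @ z, A @ P @ A.T + Q                    # predict
    H = np.array([-df(z[0]), 1.0])                   # Jacobian of residual
    r = z[1] - f(z[0])                               # residual x' - f(x)
    S = H @ P @ H + 1e-12                            # innovation variance
    K = P @ H / S                                    # Kalman gain
    z = z - K * r                                    # condition on residual = 0
    P = P - np.outer(K, H @ P)
    xs.append(z[0])                                  # filtered solution values
```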
nips_2021_Y10GtvGEgR | On Plasticity, Invariance, and Mutually Frozen Weights in Sequential Task Learning | Plastic neural networks have the ability to adapt to new tasks. However, in a continual learning setting, the configuration of parameters learned in previous tasks can severely reduce the adaptability to future tasks. In particular, we show that, when using weight decay, weights in successive layers of a deep network may become "mutually frozen". This has a double effect: on the one hand, it makes the network updates more invariant to nuisance factors, providing a useful bias for future tasks. On the other hand, it can prevent the network from learning new tasks that require significantly different features. In this context, we find that the local input sensitivity of a deep model is correlated with its ability to adapt, thus leading to an intriguing trade-off between adaptability and invariance when training a deep model more than once. We then show that a simple intervention that "resets" the mutually frozen connections can improve transfer learning on a variety of visual classification tasks. The efficacy of "resetting" itself depends on the size of the target dataset and the difference of the pre-training and target domains, allowing us to achieve state-of-the-art results on some datasets.
| accept | This paper studies the learning dynamics in deep networks by making a novel observation regarding weight decay as mutually frozen weights and their role in generalization. The paper initially received reviews that tended towards rejection. The reviewers had difficulty understanding some details and were concerned whether the results will hold true in real-world settings. The authors provided a thoughtful rebuttal that addressed the reviewers' concerns. The paper was discussed and all the reviewers updated their reviews in the post-rebuttal phase. All reviewers switched their score to weak acceptance (note one reviewer still has a score of 4 in the main review, but they have switched to 6 in comments). AC agrees with the reviewers and suggests acceptance. However, the authors are requested to look at reviewers' feedback and incorporate their comments in the camera-ready. | test | [
"vdNaXbyF6kM",
"8VOop25wmR",
"r1WZzrUuha4",
"I6KzS3ZNOHA",
"SBPUPJD3xf5",
"iIvu4vqWW06",
"fbDVGPNEx43",
"95i8W218YT_",
"xN0WhYO2Zm",
"CnxUOpLcm2F"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper studies the \"optimal task tracking\" problem under the training setting where we use the \"weight decay\" regularization.\nMore specifically, the paper studies the relationship between plasticity-stability tradeoff and shows that a consequence of training with (large enough) weight decay is the \"mutual... | [
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_Y10GtvGEgR",
"95i8W218YT_",
"nips_2021_Y10GtvGEgR",
"SBPUPJD3xf5",
"iIvu4vqWW06",
"xN0WhYO2Zm",
"CnxUOpLcm2F",
"vdNaXbyF6kM",
"r1WZzrUuha4",
"nips_2021_Y10GtvGEgR"
] |
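The "resetting" intervention in the record above can be caricatured by a magnitude heuristic: re-initialize weights that weight decay has driven to (near) zero before training on the new task. The paper's actual criterion concerns mutually frozen connections in successive layers; the threshold, scale, and names below are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def reset_collapsed_weights(layers, threshold=1e-3, init_scale=0.05):
    """Re-initialize entries whose magnitude has collapsed (a crude proxy
    for the record's mutually-frozen criterion across successive layers)."""
    reset = []
    for W in layers:
        frozen = np.abs(W) < threshold
        W = W.copy()
        W[frozen] = rng.normal(0.0, init_scale, size=int(frozen.sum()))
        reset.append(W)
    return reset

# usage: take [W1, W2, ...] from a pretrained network, reset, then fine-tune
# the returned weights on the target task.
```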
nips_2021_jdIR6KF-uFW | Provably Efficient Black-Box Action Poisoning Attacks Against Reinforcement Learning | Due to the broad range of applications of reinforcement learning (RL), understanding the effects of adversarial attacks against RL models is essential for the safe application of such models. Prior theoretical works on adversarial attacks against RL mainly focus on either reward poisoning attacks or environment poisoning attacks. In this paper, we introduce a new class of attacks named action poisoning attacks, where an adversary can change the action signal selected by the agent. Compared with existing attack models, the attacker’s ability in the proposed action poisoning attack model is more restricted, which brings some design challenges. We study the action poisoning attack in both white-box and black-box settings. We introduce an adaptive attack scheme called LCB-H, which works for most RL agents in the black-box setting. We prove that the LCB-H attack can force any efficient RL agent, whose dynamic regret scales sublinearly with the total number of steps taken, to choose actions according to a policy selected by the attacker very frequently, with only sublinear cost. In addition, we apply the LCB-H attack against a very popular model-free RL algorithm: UCB-H. We show that, even in the black-box setting, by spending only logarithmic cost, the proposed LCB-H attack scheme can force the UCB-H agent to choose actions according to the policy selected by the attacker very frequently.
| accept | This paper proposes a new, action-poisoning-based attack scheme against tabular episodic RL algorithms. The authors investigate this new attack scheme in both white-box and black-box settings. For the white-box setting, the authors propose a simple attack mechanism. For the black-box case, the authors develop a new attack that is provably approximately as good as the white-box attack.
I agree with most reviewers that the theoretical findings of this paper are very useful to understand the limitations of RL algorithms. It is worth mentioning, though, that the reviewers still have some concerns regarding the scalability of the proposed attacks to non-tabular settings, which are more realistic than the tabular one studied in this paper. Despite this concern, I can conclude that the reviewers did not find any reasons to reject this paper (and neither did I). In fact, all of us are in favour of acceptance. Hence I recommend that this paper be accepted as a poster.
"lSBwQOvCy_",
"RaUatEZdJT",
"fKjysDLs_R",
"BuB9Hm9B9pz",
"iKKQbeVc-EM",
"3gRYSvTf5WN"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper presents a new attack framework on Reinforcement Learning systems that modifies the action taken by the agent without the agent noticing in order to make the agent learn the policy that the adversary wants with high probability. Overall, the paper has good technical content but I have the issues stated ... | [
6,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_jdIR6KF-uFW",
"3gRYSvTf5WN",
"iKKQbeVc-EM",
"lSBwQOvCy_",
"nips_2021_jdIR6KF-uFW",
"nips_2021_jdIR6KF-uFW"
] |
nips_2021_oa1AMhWKrS | Fast Approximation of the Sliced-Wasserstein Distance Using Concentration of Random Projections | The Sliced-Wasserstein distance (SW) is being increasingly used in machine learning applications as an alternative to the Wasserstein distance and offers significant computational and statistical benefits. Since it is defined as an expectation over random projections, SW is commonly approximated by Monte Carlo. We adopt a new perspective to approximate SW by making use of the concentration of measure phenomenon: under mild assumptions, one-dimensional projections of a high-dimensional random vector are approximately Gaussian. Based on this observation, we develop a simple deterministic approximation for SW. Our method does not require sampling a number of random projections, and is therefore both accurate and easy to use compared to the usual Monte Carlo approximation. We derive nonasymptotical guarantees for our approach, and show that the approximation error goes to zero as the dimension increases, under a weak dependence condition on the data distribution. We validate our theoretical findings on synthetic datasets, and illustrate the proposed approximation on a generative modeling problem.
| accept | The focus of the submission is the fast approximation of the sliced-Wasserstein distance (SW). Particularly, the authors present a new scheme to tackle this task relying on specifically designed Gaussian projections as an alternative to the widely-used Monte Carlo approximation (where the projection directions are distributed uniformly on the unit sphere and their number often has to be large to achieve highly-accurate approximation), with consistency guarantees. The efficiency of the approach is demonstrated on synthetic examples and in generative modelling (tuning a neural network for image generation with SW objective).
Estimating the discrepancy of probability measures in R^d is a fundamental problem in statistics and machine learning with numerous applications. The submission is a well-organized, clearly-written work in this direction where the authors deliver important tools (SW estimator) with sound theoretical guarantees as assessed by the reviewers; it can be of definite interest to the ML community. | train | [
"Na0cQZUEZq-",
"vWL6VlNaUpC",
"a4FQBB9X1c_",
"BMPrzg64sGQ",
"I6S092DCazt",
"xvYz40fTum3",
"904jKzawXgj",
"LeLOmLajWr1",
"tjLefHWEHuI",
"SrMVNNp0sdl",
"j5kesoTB9R1",
"0VoBS4yhWm",
"sa12KXVsK8I",
"VnXO6iVzFn4"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces a (possibly) cheaper approximation of Sliced Wasserstein (SW) distance. The proposed technique is based on calculating 1D Wasserstein distance between Gaussian projection directions. This is motivated by the fact that Wasserstein distance between Gaussian distributions admits a closed-form so... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_oa1AMhWKrS",
"904jKzawXgj",
"xvYz40fTum3",
"I6S092DCazt",
"0VoBS4yhWm",
"j5kesoTB9R1",
"SrMVNNp0sdl",
"nips_2021_oa1AMhWKrS",
"VnXO6iVzFn4",
"LeLOmLajWr1",
"sa12KXVsK8I",
"Na0cQZUEZq-",
"nips_2021_oa1AMhWKrS",
"nips_2021_oa1AMhWKrS"
] |
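Two estimators that frame the record above, as a hedged sketch: the usual Monte Carlo sliced-Wasserstein estimate, and a deterministic moment-based Gaussian proxy in the spirit of the concentration argument. The proxy is a plausible reading of the approach, not necessarily the paper's exact estimator; the Monte Carlo version assumes equal sample sizes, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sw2_monte_carlo(X, Y, n_proj=500):
    """Monte Carlo SW_2^2 via random 1D projections (requires len(X) == len(Y))."""
    d = X.shape[1]
    thetas = rng.normal(size=(n_proj, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    px = np.sort(X @ thetas.T, axis=0)       # sorted projections = 1D quantiles
    py = np.sort(Y @ thetas.T, axis=0)
    return ((px - py) ** 2).mean()

def sw2_gaussian_proxy(X, Y):
    """Deterministic proxy: match means and the average per-direction spread,
    then apply the closed-form 1D Gaussian W_2 (a sketch of the concentration
    idea; not claimed to be the paper's exact formula)."""
    d = X.shape[1]
    dm = X.mean(0) - Y.mean(0)
    sx = np.sqrt(((X - X.mean(0)) ** 2).sum(1).mean() / d)
    sy = np.sqrt(((Y - Y.mean(0)) ** 2).sum(1).mean() / d)
    return (dm @ dm) / d + (sx - sy) ** 2

X = rng.normal(0.0, 1.0, size=(1000, 50))
Y = rng.normal(0.2, 1.5, size=(1000, 50))
print(sw2_monte_carlo(X, Y), sw2_gaussian_proxy(X, Y))   # should be close
```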
nips_2021_ckVbQs5zD7_ | Causal Navigation by Continuous-time Neural Networks | Imitation learning enables high-fidelity, vision-based learning of policies within rich, photorealistic environments. However, such techniques often rely on traditional discrete-time neural models and face difficulties in generalizing to domain shifts by failing to account for the causal relationships between the agent and the environment. In this paper, we propose a theoretical and experimental framework for learning causal representations using continuous-time neural networks, specifically over their discrete-time counterparts. We evaluate our method in the context of visual-control learning of drones over a series of complex tasks, ranging from short- and long-term navigation, to chasing static and dynamic objects through photorealistic environments. Our results demonstrate that causal continuous-time deep models can perform robust navigation tasks, where advanced recurrent models fail. These models learn complex causal control representations directly from raw visual inputs and scale to solve a variety of tasks using imitation learning.
| accept | This paper introduces a method for visual navigation using continuous-time (CT) neural networks, specifically liquid time-constant networks (LTC). It claims that this variant is able to learn causal structures, compared to baseline CT models as well as standard deep models like LSTMs, and evaluates these claims on drone navigation tasks.
The paper received 3 expert reviews, which were on the fence. After the discussion phase, two of the reviewers were clearly positive about the paper, and the remaining reviewer was borderline, leaning slightly negative.
Several weaknesses were pointed out by the critical reviewer:
- the limitations of positioning as imitation learning
- novelty of the claim that continuous-time formulations increase robustness wrt to causal confusion
- Simplicity of the tasks
The authors could provide answers to most of these issues, but the remaining weakness was the simplicity of the experiments.
The AC's reading of the paper was positive. He agrees on the simplicity of the tasks, but judges that this paper has sufficient merits and novelty to be of interest to the community.
The paper was then discussed between AC and SAC, who confirms the decision. | train | [
"LUBvIdx0oTF",
"eISpQkikMzL",
"vu5QGRlaz3y",
"Ib3Tv77qMBU",
"fKyAK3HZj8o",
"IHd2y1uwrB"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the positive evaluation of our work and your constructive feedback on our manuscript. Please find our response to your concerns in the following: \n\n**How would a CNN network work in these scenarios?** As requested by the reviewer, we performed an additional experiment in all cases with CNNs. In t... | [
-1,
-1,
-1,
7,
5,
7
] | [
-1,
-1,
-1,
3,
4,
3
] | [
"IHd2y1uwrB",
"fKyAK3HZj8o",
"Ib3Tv77qMBU",
"nips_2021_ckVbQs5zD7_",
"nips_2021_ckVbQs5zD7_",
"nips_2021_ckVbQs5zD7_"
] |
nips_2021_XI72RT3hnnF | Global Convergence of Online Optimization for Nonlinear Model Predictive Control | We study a real-time iteration (RTI) scheme for solving the online optimization problems that appear in nonlinear optimal control. The proposed RTI scheme modifies the existing RTI-based model predictive control (MPC) algorithm, by selecting the stepsize of each Newton step at each sampling time using a differentiable exact augmented Lagrangian. The scheme can adaptively select the penalty parameters of the augmented Lagrangian on the fly, which are shown to be stabilized after a certain number of time periods. We prove under generic assumptions that, by involving stepsize selection instead of always using a full Newton step (as most existing RTIs do), the scheme converges globally: for any initial point, the KKT residuals of the subproblems converge to zero. A key step is to show that the augmented Lagrangian keeps decreasing as the horizon moves forward. We demonstrate the global convergence behavior of the proposed RTI scheme in a numerical experiment.
| accept | Dear authors,
The reviewers reached a consensus and positively evaluated the paper. Hence I recommend acceptance to the SAC.
I encourage you to pay special care to clarifying the points raised during the discussion regarding stability, MPC and RTI when preparing the final version of the paper. | train | [
"CQ6OeIqLZC",
"4nDJm1IkgHv",
"t9d65I5FZ-l",
"dtzDkpmCGLN",
"GSD6fLojjd8",
"SjckndQAjKb",
"9Nbi-jkjXG3",
"6gO2SEUMPxx"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate that the reviewer could re-evaluate the paper and have positive opinion this time. Indeed, in this case both exact MPC and RTI have unstable closed-loop systems. \n\n\nHowever, for RTI, we are more interested in the question that whether RTI iterates could stably track the exact MPC iterates; becaus... | [
-1,
6,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"t9d65I5FZ-l",
"nips_2021_XI72RT3hnnF",
"GSD6fLojjd8",
"6gO2SEUMPxx",
"4nDJm1IkgHv",
"9Nbi-jkjXG3",
"nips_2021_XI72RT3hnnF",
"nips_2021_XI72RT3hnnF"
] |
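The stepsize-selection idea in the record above (damped rather than full Newton steps, accepted against a merit function) can be sketched generically. The record's scheme backtracks on a differentiable exact augmented Lagrangian with adaptive penalty parameters; the sketch below uses the plain objective of an unconstrained toy problem as the merit, so only the line-search mechanics carry over, and all names are hypothetical.

```python
import numpy as np

def damped_newton(x, grad, hess, merit, iters=50, shrink=0.5, c1=1e-4, max_bt=60):
    """Newton direction plus Armijo backtracking on a merit function
    (here merit = objective; the record's RTI scheme instead uses an exact
    augmented Lagrangian merit for the constrained case)."""
    for _ in range(iters):
        g = grad(x)
        p = np.linalg.solve(hess(x), -g)
        t, m0, slope = 1.0, merit(x), g @ p
        for _ in range(max_bt):                      # backtracking line search
            if merit(x + t * p) <= m0 + c1 * t * slope:
                break
            t *= shrink
        x = x + t * p
    return x

# toy convex objective where the *full* Newton step overshoots badly:
f = lambda x: np.log(np.cosh(x)).sum() + 0.05 * (x @ x)
g = lambda x: np.tanh(x) + 0.1 * x
H = lambda x: np.diag(1.0 - np.tanh(x) ** 2 + 0.1)
x_opt = damped_newton(np.array([5.0, -4.0]), g, H, f)  # converges toward 0
```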
nips_2021_6nbpPqUCIi7 | Argmax Flows and Multinomial Diffusion: Learning Categorical Distributions | Generative flows and diffusion models have been predominantly trained on ordinal data, for example natural images. This paper introduces two extensions of flows and diffusion for categorical data such as language or image segmentation: Argmax Flows and Multinomial Diffusion. Argmax Flows are defined by a composition of a continuous distribution (such as a normalizing flow), and an argmax function. To optimize this model, we learn a probabilistic inverse for the argmax that lifts the categorical data to a continuous space. Multinomial Diffusion gradually adds categorical noise in a diffusion process, for which the generative denoising process is learned. We demonstrate that our method outperforms existing dequantization approaches on text modelling and on modelling image segmentation maps, in terms of log-likelihood.
| accept | This paper presents two approaches for learning expressive distributions over discrete variables. Although there are valid criticisms regarding the novelty (especially for the multinomial diffusion), the majority of the reviewers found the introduction of argmax flows an interesting and non-trivial extension of prior works. Given the community's interest in discrete variables, I recommend this paper for acceptance. For the final camera-ready version, I'd appreciate it if the authors could expand the discussion of limitations as suggested by 4fJX (in addition to other promised changes). | train | [
"3T76rGzjtZL",
"_0kAPYm_k2-",
"gEjje2ed0LU",
"FU2KrIo4Xkl",
"xRIGBgQzwWn",
"TeUSOdJdni0",
"b7VXlrGH9uJ",
"A8CCwXwF0W3",
"g6CPTz641S",
"MAuU18lQbNM",
"DH8GXEcXRTP"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We thank the reviewer for their response, and for raising the score from 4 to 5. Regarding the second point, we would like to further address the issues that the reviewer has raised, who is not convinced of the usefulness of argmax flows. \n\nTo summarize, the reviewer has two main arguments: 1) Argmax flows can ... | [
-1,
5,
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
4,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4
] | [
"_0kAPYm_k2-",
"nips_2021_6nbpPqUCIi7",
"nips_2021_6nbpPqUCIi7",
"MAuU18lQbNM",
"nips_2021_6nbpPqUCIi7",
"nips_2021_6nbpPqUCIi7",
"gEjje2ed0LU",
"_0kAPYm_k2-",
"xRIGBgQzwWn",
"DH8GXEcXRTP",
"nips_2021_6nbpPqUCIi7"
] |
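For the multinomial diffusion half of the record above, the forward noising step has a simple closed form: with probability (1 - beta) keep the current category, otherwise resample uniformly over the K classes. A small sketch of that forward process follows; the learned reverse (denoising) model is what the paper trains and is not shown, and the noise schedule below is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def multinomial_diffusion_step(x_onehot, beta):
    """q(x_t | x_{t-1}) = Cat(x_t; (1 - beta) * x_{t-1} + beta / K):
    keep the category with prob. (1 - beta), else resample uniformly."""
    K = x_onehot.shape[-1]
    probs = (1.0 - beta) * x_onehot + beta / K        # each row sums to 1
    idx = np.array([rng.choice(K, p=p) for p in probs])
    return np.eye(K)[idx]

x0 = np.eye(5)[np.array([0, 3, 3, 1])]                # 4 tokens, K = 5 classes
xt = x0
for beta in np.linspace(0.02, 0.5, 20):               # hypothetical schedule
    xt = multinomial_diffusion_step(xt, beta)         # drifts toward uniform
```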
nips_2021_G1jmxFOtY_ | Learning with User-Level Privacy | Daniel Levy, Ziteng Sun, Kareem Amin, Satyen Kale, Alex Kulesza, Mehryar Mohri, Ananda Theertha Suresh | accept | This paper deals with DP in a setting where all users contribute m datapoints to the given dataset, and the curator wishes to maintain DP on a user-level. The main result of the paper is a SGD algorithm (or an iterative aggregator) in which for each user the m datapoints are averaged before an update step takes place w.r.t the avg. The reviewers agree that this is an interesting paper with solid contribution. | train | [
"18St6NK557v",
"nX58SapHjTz",
"MobuXJ1y6Ai",
"e4CWLPMVXaW",
"7Ae0PnnKXfk",
"vPVGOwv_dKu",
"oifY-7UyXvD",
"Ynk0ggLCCtu",
"ri18Y9f8dNj",
"6kcndb0i-Ab"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper studies learning problems under so-called user-level differential privacy: in this setting, each one of n users holds m data points of the training set. This type of privacy then requires that under a change of *any number of data points belonging to a single user*, the distributions over the outputs of ... | [
7,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_G1jmxFOtY_",
"nips_2021_G1jmxFOtY_",
"oifY-7UyXvD",
"nips_2021_G1jmxFOtY_",
"vPVGOwv_dKu",
"e4CWLPMVXaW",
"6kcndb0i-Ab",
"nX58SapHjTz",
"18St6NK557v",
"nips_2021_G1jmxFOtY_"
] |
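The per-user averaging highlighted in the meta-review above can be sketched as one noisy aggregation step: average each user's m gradients, clip the per-user average, then add Gaussian noise scaled to the clip norm divided by the number of users. Exact (epsilon, delta) accounting and the full algorithm are omitted; the noise heuristic and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def user_level_dp_step(per_user_grads, clip=1.0, noise_mult=1.0):
    """per_user_grads: list of (m, d) arrays, one per user. Changing all of
    one user's m points moves the aggregate by at most O(clip / n), so noise
    at that scale targets user-level DP (accounting omitted in this sketch)."""
    avgs = []
    for g in per_user_grads:
        a = g.mean(axis=0)                            # user's averaged gradient
        a = a * min(1.0, clip / (np.linalg.norm(a) + 1e-12))
        avgs.append(a)
    n = len(avgs)
    return np.mean(avgs, axis=0) + rng.normal(
        0.0, noise_mult * clip / n, size=avgs[0].shape)

grads = [rng.normal(size=(16, 10)) for _ in range(50)]  # 50 users, m = 16
update = user_level_dp_step(grads)                      # feed into any SGD step
```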
nips_2021_waWmZSw0mn | Don’t Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence | Although machine learning models trained on massive data have led to breakthroughs in several areas, their deployment in privacy-sensitive domains remains limited due to restricted access to data. Generative models trained with privacy constraints on private data can sidestep this challenge, providing indirect access to private data instead. We propose DP-Sinkhorn, a novel optimal transport-based generative method for learning data distributions from private data with differential privacy. DP-Sinkhorn minimizes the Sinkhorn divergence, a computationally efficient approximation to the exact optimal transport distance, between the model and data in a differentially private manner and uses a novel technique for controlling the bias-variance trade-off of gradient estimates. Unlike existing approaches for training differentially private generative models, which are mostly based on generative adversarial networks, we do not rely on adversarial objectives, which are notoriously difficult to optimize, especially in the presence of noise imposed by privacy constraints. Hence, DP-Sinkhorn is easy to train and deploy. Experimentally, we improve upon the state-of-the-art on multiple image modeling benchmarks and show differentially private synthesis of informative RGB images.
| accept | This paper proposes a DP algorithm for generative modeling, based on Sinkhorn divergence. This approach is non-adversarial, leading to a simpler and more stable optimization problem that is better suited to being made differentially private. On the other hand, empirically, even non-private versions of non-adversarial generative models lead to worse models, and this limits the possible impact of this work.
Papers at the intersection of two active areas of research are usually hard to evaluate and this paper is no exception. The reviewers were divided on this work. The improvement over previous work is small in FID score, though FID is not a particularly good measure. The generated images are reasonably good.
I find the approach promising and the authors might want to evaluate whether this approach leads to more "stable" optimization compared to GAN-based approaches; e.g., whether this algorithm is less sensitive to the choice of hyperparameters. The authors are also encouraged to evaluate this approach on medical datasets, such as those used in https://doi.org/10.1101/159756
On balance, this paper may lead to a broader range of ideas being explored in this space, and therefore I would recommend acceptance. | train | [
"_KDIInkMvoD",
"c1Tqa8eOMsd",
"SaHuE16YiRL",
"KeqgvI67_IP",
"JGDIRVZhrwE",
"kC6dwjt9zfA",
"Xe_MvN3qDgY",
"EUW753B92a",
"FEZPH76Ynof",
"jPw-WlDQU6g",
"I4fu7V9OIu5",
"a7EqVmVlQC"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a new technique for generating differentially-private synthetic data. The approach avoids adversarial training altogether, which has historically given poor model accuracy due to its instability and sensitivity to noise. Instead, the authors minimize Sinkhorn divergence, which is a computationa... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"nips_2021_waWmZSw0mn",
"Xe_MvN3qDgY",
"nips_2021_waWmZSw0mn",
"I4fu7V9OIu5",
"FEZPH76Ynof",
"a7EqVmVlQC",
"_KDIInkMvoD",
"jPw-WlDQU6g",
"nips_2021_waWmZSw0mn",
"nips_2021_waWmZSw0mn",
"nips_2021_waWmZSw0mn",
"nips_2021_waWmZSw0mn"
] |
nips_2021_mfQxdSMWOF | Keeping Your Eye on the Ball: Trajectory Attention in Video Transformers | Mandela Patrick, Dylan Campbell, Yuki Asano, Ishan Misra, Florian Metze, Christoph Feichtenhofer, Andrea Vedaldi, João F. Henriques | accept | This paper presents work on attention in video transformers. In particular, a novel trajectory-focused self-attention approach is developed, which essentially tracks space-time patches. The reviewers appreciated the novel method, clear presentation, and effectiveness in empirical evaluations. The new approach forms a drop-in block that could be used in video architectures. The reviewers were unanimous in recommending acceptance for the paper.
| train | [
"ZI2b_nX_AFi",
"v_NxjbUo64",
"ZcVAKAWr8ku",
"WvR-VmQgErH",
"PPMvVjr9gi3",
"ChsAdsD5uLY",
"C2p5n0WeO_T",
"gpjiiNuUCuw",
"mxqbqWWUQk",
"28XUWif66fJ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors adequately solved my main concerns in the rebuttal. After checking other reviewers' comments and the rebuttal, I will keep my rating as Accept.",
" Hi,\n\nFirst, I would like to thank the authors for the detailed response and the additional experiments they provide in their rebuttal. After reading t... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4,
3
] | [
"PPMvVjr9gi3",
"WvR-VmQgErH",
"C2p5n0WeO_T",
"28XUWif66fJ",
"mxqbqWWUQk",
"gpjiiNuUCuw",
"nips_2021_mfQxdSMWOF",
"nips_2021_mfQxdSMWOF",
"nips_2021_mfQxdSMWOF",
"nips_2021_mfQxdSMWOF"
] |
nips_2021_NtivXxYNhjc | Variational Bayesian Optimistic Sampling | Brendan O'Donoghue, Tor Lattimore | accept | This paper introduces a family of stochastically optimistic policies that achieve favourable regret. All of the reviewers were enthusiastic about this paper, noting that the contribution was "technically sound" and "elegant". I agree it is a valuable contribution, and will therefore recommend acceptance. I want to note two concerns that the authors should make sure to address in the camera ready:
* Please de-emphasize the argument that sampling from the posterior is harder than computing integrals w.r.t. the posterior. Unless I've misunderstood, I'm not sure I agree with this argument.
* Multiple reviewers recommended that the name of the method be changed. While I cannot enforce this, I also believe it is in the best interest of the authors to pick a name that the community finds appropriate. | train | [
"NSNtEZt0HOg",
"kExdHMeUVr",
"1oy11O2usg",
"wma_G_FaFny",
"1B9Bf-Uhu7c",
"KVZVPtnIu-n",
"N3DtPZDjaxG",
"INuarME9U2",
"bLOoji2Qxfk",
"E0IYmWfeAn1",
"ubX879JqZyP",
"gYgvm7DuasD",
"yFf93APEUEd",
"X2ztFsBCdWg",
"TVs6KK6-h2o"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your time and consideration!\n\n* \"One motivation offered in the introduction to look beyond TS is that it may require samples from intractable posteriors over the values for each arm. Could you say a bit more about when VTS is tractable, but sampling from these posteriors is not?\"\n\nThanks for r... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
3,
3,
2
] | [
"TVs6KK6-h2o",
"ubX879JqZyP",
"1B9Bf-Uhu7c",
"bLOoji2Qxfk",
"NSNtEZt0HOg",
"N3DtPZDjaxG",
"INuarME9U2",
"nips_2021_NtivXxYNhjc",
"X2ztFsBCdWg",
"INuarME9U2",
"yFf93APEUEd",
"nips_2021_NtivXxYNhjc",
"nips_2021_NtivXxYNhjc",
"nips_2021_NtivXxYNhjc",
"nips_2021_NtivXxYNhjc"
] |
nips_2021_VLQV2vqjLf3 | Cross-modal Domain Adaptation for Cost-Efficient Visual Reinforcement Learning | In visual-input sim-to-real scenarios, to overcome the reality gap between images rendered in simulators and those from the real world, domain adaptation, i.e., learning an aligned representation space between simulators and the real world, then training and deploying policies in the aligned representation, is a promising direction. Previous methods focus on same-modal domain adaptation. However, those methods require building and running simulators that render high-quality images, which can be difficult and costly. In this paper, we consider a more cost-efficient setting of visual-input sim-to-real where only low-dimensional states are simulated. We first point out that the objective of learning mapping functions in previous methods that align the representation spaces is ill-posed and prone to yielding an incorrect mapping. When the mapping crosses modalities, previous methods are more likely to fail. Our algorithm, Cross-mOdal Domain Adaptation with Sequential structure (CODAS), mitigates the ill-posedness by utilizing the sequential nature of the data sampling process in RL tasks. Experiments on MuJoCo and Hand Manipulation Suite tasks show that the agents deployed with our method achieve performance similar to what they attain in the source domain, while those deployed with previous methods designed for same-modal domain adaptation suffer a larger performance gap.
| accept | This paper tackles the problem of visual-input sim-to-real and tries to overcome the reality gap between images rendered in simulators and those from the real world. It presents an approach that, instead of building and running simulators that render costly high-quality images, generates only low-dimensional states in simulation to learn the policy, and then learns encoders from, and decoders to, observations in a way that handles their sequential structure. This enables training and transferring state dynamics from a source environment to a target environment with only high-dimensional observations. It then proposes a mathematical derivation for the cross-modal unsupervised domain adaptation problem. The method is then evaluated on MuJoCo tasks (OpenAI Gym and robot hand manipulation) and compared to several GAN, CycleGAN and temporally stacked GAN baselines, where it shows better domain adaptation and transfer performance.
Reviewer NDFM praised the paper and their minor comments were addressed. Reviewer cBxr's most negative comment was about the limited impact of the work due to similarity with existing work, but the authors contest this review and justify their contribution as a combination of mathematical derivation of the problem as variational inference and the addition of sequential modeling of the dynamics (as opposed to VAE-GAN). Reviewer 6QJy complained about missing real-world data evaluations to make this work relevant (a claim disputed by the authors and reviewer NDFM), as well as missing comparisons to two recent sim2real techniques that "adapt image and state to hidden state and use that to train the policy" and that modeled sequences of states from observations (the authors provide a clarification).
Review scores are (5, 5, 6, 8, average 6). Reviewer cBxr (score 5) did not respond to the rebuttal and did not update their score, but I believe they could raise it. While waiting on this reviewer, I am therefore willing to challenge their score and promote this paper to acceptance.
| test | [
"xv99E0V2ceL",
"QGE7UXWcUu",
"Fb6_gcddUK_",
"R_to-LC6ncb",
"mycXHGvAb4",
"kY8ZT3l0PaC",
"u_qc1am6aAf",
"pdLeUL1Cd7b",
"8fYX59j1lqA",
"LAUj1vf2O96",
"djd87AHsknX",
"pyA55DM2uK5",
"HkcRErajs7B",
"yBJuxkdeLaS",
"g1H6LrqslQ2",
"_F3wfNSZSo",
"gRpN4_j5XE1"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the in-time response. However, we cannot agree with your comments on the following aspects. We would appreciate it if you could give us a more detailed comment because we are somehow confused by what you have mentioned.\n\n1. The main contributions of our work are explicitly stated in the abstract, in... | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
6
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"QGE7UXWcUu",
"pdLeUL1Cd7b",
"R_to-LC6ncb",
"LAUj1vf2O96",
"nips_2021_VLQV2vqjLf3",
"nips_2021_VLQV2vqjLf3",
"8fYX59j1lqA",
"_F3wfNSZSo",
"HkcRErajs7B",
"mycXHGvAb4",
"g1H6LrqslQ2",
"mycXHGvAb4",
"g1H6LrqslQ2",
"gRpN4_j5XE1",
"nips_2021_VLQV2vqjLf3",
"nips_2021_VLQV2vqjLf3",
"nips_20... |
nips_2021_4vUZPUKZsr5 | D2C: Diffusion-Decoding Models for Few-Shot Conditional Generation | Conditional generative models of high-dimensional images have many applications, but supervision signals from conditions to images can be expensive to acquire. This paper describes Diffusion-Decoding models with Contrastive representations (D2C), a paradigm for training unconditional variational autoencoders (VAE) for few-shot conditional image generation. D2C uses a learned diffusion-based prior over the latent representations to improve generation and contrastive self-supervised learning to improve representation quality. D2C can adapt to novel generation tasks, conditioned on labels or manipulation constraints, by learning from as few as 100 labeled examples. On conditional generation from new labels, D2C achieves superior performance over state-of-the-art VAEs and diffusion models. On conditional image manipulation, D2C generations are two orders of magnitude faster to produce than StyleGAN2 ones and are preferred by 50%-60% of the human evaluators in a double-blind study. We release our code at https://github.com/jiamings/d2c.
| accept | The reviewers unanimously believe that the paper should be accepted. The paper focuses on an important problem, and some interesting ideas such as guided diffusion and the use of self-supervised representations have been proposed. There are complaints about the quality of the baselines and the rigor of the evaluation. The highlighted results do show some artifacts, and the proposed approach sounds like a combination of several disjoint techniques. Hence, I recommend acceptance as a poster.
"_gXZD3EfXW5",
"5HOLKg_kUoL",
"_E5iDLy5bJW",
"_6AaTKAa7Tv",
"nht9o9MVwOw",
"_TGCNuLHD8x",
"v2XLiGrW6VD",
"rUZLFxD2Feq",
"XbC2PqD3suz",
"voydCe4HpWp",
"zBJ92OQurh",
"2Tw1OKNmylg",
"kNBj1N1srUo",
"9iEZ4yS2ZJy",
"HiZoGOGvsjy",
"rTLUJ0YWNHM",
"YFOz8JKixW",
"oqFvtk6IdA6",
"m5T90rb-Z7j... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_re... | [
" Thank you for pointing out this issue about blond males in the samples, which gave us an opportunity to further investigate the \"fairness\" of the model. We will include these additional experimental results in the final version of the paper.",
" Thank you for the comment. We will provide GAN results in the fi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"zBJ92OQurh",
"nht9o9MVwOw",
"_6AaTKAa7Tv",
"v2XLiGrW6VD",
"_TGCNuLHD8x",
"SseVwiHfL1K",
"m5T90rb-Z7j",
"2Tw1OKNmylg",
"oqFvtk6IdA6",
"nips_2021_4vUZPUKZsr5",
"YFOz8JKixW",
"XbC2PqD3suz",
"HiZoGOGvsjy",
"nips_2021_4vUZPUKZsr5",
"nips_2021_4vUZPUKZsr5",
"nips_2021_4vUZPUKZsr5",
"voydC... |
nips_2021_EpL9IFAMa3 | Continual Auxiliary Task Learning | Learning auxiliary tasks, such as multiple predictions about the world, can provide many benefits to reinforcement learning systems. A variety of off-policy learning algorithms have been developed to learn such predictions, but as yet there is little work on how to adapt the behavior to gather useful data for those off-policy predictions. In this work, we investigate a reinforcement learning system designed to learn a collection of auxiliary tasks, with a behavior policy learning to take actions to improve those auxiliary predictions. We highlight the inherent non-stationarity in this continual auxiliary task learning problem, for both prediction learners and the behavior learner. We develop an algorithm based on successor features that facilitates tracking under non-stationary rewards, and prove the separation into learning successor features and rewards provides convergence rate improvements. We conduct an in-depth study into the resulting multi-prediction learning system.
| accept | The reviewers have come to a strong consensus for acceptance of this work. Its strong points include novelty, where the important combination of multi-prediction and non-stationarity is well motivated, as well as pleasing theory and experiments to address the same.
I would ask the authors to carefully consider the suggested writing improvements from GBFf and suggestions for clarity from NevW. I would also like to see additional commentary about the number of tasks and the potential to discover "useful" tasks/skills in the context of your work (these are not things I expect a solution to today, but something where guidance will help those bound to study the topic in the future).
"IiyJKu32Rge",
"pJM6pEX4IBk",
"FA3pGAx32_",
"PomZpmusl3m",
"JkOoATIkXzv",
"DUDu97Ws9t",
"N3pUFs4PxkB",
"WaJHk3lXFpf",
"EpX1IOGC0h8",
"yiouQBub3MD",
"BWcnrtV3vO",
"ozeWAMvGWoi"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarification!",
" Thanks for the clarifications! I've updated my score after reading the authors' response and other reviews.",
"This paper considers the setting where an RL agent has to learn to make auxiliary predictions, and captures each prediction task as an RL task. In particular, the... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
7
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"N3pUFs4PxkB",
"WaJHk3lXFpf",
"nips_2021_EpL9IFAMa3",
"DUDu97Ws9t",
"EpX1IOGC0h8",
"ozeWAMvGWoi",
"BWcnrtV3vO",
"FA3pGAx32_",
"yiouQBub3MD",
"nips_2021_EpL9IFAMa3",
"nips_2021_EpL9IFAMa3",
"nips_2021_EpL9IFAMa3"
] |
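The successor-feature separation the record above relies on (values factorize as successor features psi times reward weights w, learned at different rates so a drifting reward can be tracked without relearning psi) admits a compact tabular sketch. The behavior-learning and theoretical components of the paper are not shown; all names and constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions, n_feats = 6, 2, 3
gamma, alpha_psi, alpha_w = 0.9, 0.1, 0.3
phi = rng.normal(size=(n_states, n_feats))       # fixed state features
psi = np.zeros((n_states, n_actions, n_feats))   # successor features
w = np.zeros(n_feats)                            # tracked reward weights

def sf_td_update(s, a, r, s2, a2):
    """Slow TD on psi (reward-independent) and a fast running least-squares
    step on w, so w can track a non-stationary reward signal."""
    psi[s, a] += alpha_psi * (phi[s] + gamma * psi[s2, a2] - psi[s, a])
    w += alpha_w * (r - phi[s] @ w) * phi[s]

def q_value(s, a):
    return psi[s, a] @ w                         # value = psi . w
```

(The reward model r ~ phi(s)^T w above is one of several feature conventions used in the successor-feature literature.)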
nips_2021_LhBigohtN1R | Constrained Two-step Look-Ahead Bayesian Optimization | Recent advances in computationally efficient non-myopic Bayesian optimization offer improved query efficiency over traditional myopic methods like expected improvement, with only a modest increase in computational cost. These advances have been largely limited to unconstrained BO methods with only a few exceptions which require heavy computation. For instance, one existing multi-step lookahead constrained BO method (Lam & Willcox, 2017) relies on computationally expensive, unreliable, brute-force derivative-free optimization of a Monte Carlo rollout acquisition function. Methods that use the reparameterization trick for more efficient derivative-based optimization of non-myopic acquisition functions in the unconstrained setting, like sample average approximation and infinitesimal perturbation analysis, do not extend: constraints introduce discontinuities in the sampled acquisition function surface. Moreover, we argue here that being non-myopic is even more important in constrained problems because fear of violating constraints pushes myopic methods away from sampling the boundary between feasible and infeasible regions, slowing the discovery of optimal solutions with tight constraints. In this paper, we propose a computationally efficient two-step lookahead constrained Bayesian optimization acquisition function (2-OPT-C) supporting both sequential and batch settings. To enable fast acquisition function optimization, we develop a novel likelihood ratio-based unbiased estimator of the gradient of the two-step optimal acquisition function that does not use the reparameterization trick. In numerical experiments, 2-OPT-C typically improves query efficiency by 2x or more over previous methods, and in some cases by 10x or more.
| accept | This manuscript considers the development of nonmyopic Bayesian optimization policies for constrained optimization. Although nonmyopic policies have repeatedly shown success in unconstrained settings, there are several challenges to overcome in the constrained setting.
Over the course of the discussion phase, the reviewers came to the consensus that this paper was notable, offered a significant contribution, was of interest to the NeurIPS community, and could serve as a springboard for future work.
In preparing the final version of the manuscript, I strongly suggest that the authors take the reviewers' suggestions into account. Further, the reviewers agree that the portfolio optimization example from the discussion should be incorporated into the paper, as it was both relevant and convincing.
"OLtngdNP5rk",
"PYlnJQvc_LR",
"6ZA8m8VJ9eJ",
"gfMKoM1hzO",
"2GCi-NIL_A",
"FdS-6kxexRy",
"X6H5onfBi7R",
"-YIEf-q8pht",
"Q5EXI0A3OT",
"Pfy1ixipzYR",
"JfVC9lsHGqF",
"z5QjNcprBh-",
"HA-5hecmFmq",
"lhW0utj17d5"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
" We are glad that our previous responses clarified some of the reviewer’s concerns and would like to thank the reviewer for raising the rating of our paper. During the post rebuttal period, we also have run additional experiments on the three synthetic functions and also a new real-world experiment (robot pushing ... | [
-1,
-1,
-1,
-1,
6,
-1,
5,
7,
7,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
3,
-1,
4,
3,
4,
-1,
-1,
-1,
-1,
-1
] | [
"-YIEf-q8pht",
"2GCi-NIL_A",
"FdS-6kxexRy",
"nips_2021_LhBigohtN1R",
"nips_2021_LhBigohtN1R",
"X6H5onfBi7R",
"nips_2021_LhBigohtN1R",
"nips_2021_LhBigohtN1R",
"nips_2021_LhBigohtN1R",
"X6H5onfBi7R",
"Pfy1ixipzYR",
"-YIEf-q8pht",
"2GCi-NIL_A",
"Q5EXI0A3OT"
] |
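As a point of contrast for the record above: the standard myopic constrained acquisition is expected improvement times the posterior probability of feasibility, which is exactly the kind of rule the paper argues gets pushed away from constraint boundaries. A sketch of that baseline under independent Gaussian posteriors follows; 2-OPT-C itself, the two-step lookahead method, is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def constrained_ei(mu_f, sd_f, best_feasible, mu_c, sd_c):
    """Myopic baseline: EI on the objective (minimization) weighted by the
    posterior probability that the constraint c(x) <= 0 holds. Inputs are GP
    posterior means/stds at candidate points (arrays broadcast elementwise)."""
    z = (best_feasible - mu_f) / sd_f
    ei = sd_f * (z * norm.cdf(z) + norm.pdf(z))       # expected improvement
    return ei * norm.cdf(-mu_c / sd_c)                # times P(c(x) <= 0)
```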
nips_2021_-1OkHh56c2m | Learning with Labeling Induced Abstentions | Consider a setting where we wish to automate an expensive task with a machine learning algorithm using a limited labeling resource. In such settings, examples routed for labeling are often out of scope for the machine learning algorithm. For example, in a spam detection setting, human reviewers not only provide labeled data but are such high-quality detectors of spam that examples routed to them no longer require machine evaluation. As a consequence, the distribution of examples routed to the machine is intimately tied to the process generating labels. We introduce a formalization of this setting, and give an algorithm that simultaneously learns a model and decides when to request a label by leveraging ideas from both the abstention and active learning literatures. We prove an upper bound on the algorithm's label complexity and a matching lower bound for any algorithm in this setting. We conduct a thorough set of experiments including an ablation study to test different components of our algorithm. We demonstrate the effectiveness of an efficient version of our algorithm over margin sampling on a variety of datasets.
| accept | This work formally proposes a novel learning setup, which, on a high level, can be viewed as a combination of active learning and learning with abstentions. The framework works with two function classes: a function class R that contains regions where labels should be requested, and a (standard) hypothesis class of classifiers H. The goal of the learning is to minimize the classification loss in the regions where labels are not requested, that is, minimize classification loss conditioned on r=0 (in the regions where r=1, labels are assumed to be always requested, so the classifier does not need to get labels correct). To make this setup non-trivial, the region where r=1 is budgeted to probability weight rho.
This paper proposes and analyzes this setup. The submission contains upper and lower bound on the query complexity and this is compared with standard active learning. Additionally, the authors provide some experimental evaluation on an adaptation of IWAL to their setup.
This overall appears to be a solid and rounded piece of work. The reviewers have criticized a lack of proper motivation as well as clarity issues in the presentation. More intuitive explanation and guidance to the readers could have been given. I agree with the reviewers on these aspects, and would therefore rate this work as borderline, with a strong tendency to accept.
"JK66BcfCS3N",
"bllytYqRGZ0",
"oZ9XUQx5ytI",
"t2oS8AXjpAI",
"AgdidQOLSJs",
"o7iRjyGbeW"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your questions and comments. We address them below. \n\n-- Empirical comparison to existing approaches in active learning is missing. Are there methods in active learning that can be adapted and compared to the proposed DPL-IWAL algorithm.\n\nIn our experiments, we compare our method to an active le... | [
-1,
-1,
-1,
6,
5,
6
] | [
-1,
-1,
-1,
2,
3,
2
] | [
"t2oS8AXjpAI",
"o7iRjyGbeW",
"AgdidQOLSJs",
"nips_2021_-1OkHh56c2m",
"nips_2021_-1OkHh56c2m",
"nips_2021_-1OkHh56c2m"
] |
nips_2021_2CQQ_C1i0b | SQALER: Scaling Question Answering by Decoupling Multi-Hop and Logical Reasoning | State-of-the-art approaches to reasoning and question answering over knowledge graphs (KGs) usually scale with the number of edges and can only be applied effectively on small instance-dependent subgraphs. In this paper, we address this issue by showing that multi-hop and more complex logical reasoning can be accomplished separately without losing expressive power. Motivated by this insight, we propose an approach to multi-hop reasoning that scales linearly with the number of relation types in the graph, which is usually significantly smaller than the number of edges or nodes. This produces a set of candidate solutions that can be provably refined to recover the solution to the original problem. Our experiments on knowledge-based question answering show that our approach solves the multi-hop MetaQA dataset, achieves a new state-of-the-art on the more challenging WebQuestionsSP, is orders of magnitude more scalable than competitive approaches, and can achieve compositional generalization out of the training distribution.
| accept | This work proposes a substantially more scalable approach to KBQA that scales with the number of relation types (rather than the number of edges) in a KG, and operates in a coarse-to-fine manner. The overall idea of working first at relation types, and later refining to individual edges, is interesting, and the results in the setting considered are strong.
The detailed discussion during the reviewing + author response period clarified some aspects (e.g., that the main contribution is not traversing a KG for multi-hop reasoning but identifying, in a coarse-to-fine and scalable way, subgraphs that contain the answer), and also included additional positive results such as improved performance even on incomplete KGs, which is what the PullNet and GraftNet baselines were originally designed for.
The paper is close to the fence, with several strengths and some weaknesses:
**Strengths**: Strong empirical performance, intuitive overall idea, and authors' active effort in addressing (and, I assume, incorporating in the revised draft) detailed feedback from the reviewers.
**Weaknesses**: Limited novelty (as many pieces of the overall idea exist), room for clarity of presentation of both technical details (some of which are in the appendix) and broad statements like existing methods scaling linearly in the size of the entire KG (whereas they scale linearly in the size of the *retrieved* subgraph, which is often much smaller for 2-4 hop questions). | train | [
"ZyM7DSb2Vs5",
"q6aJpEs8X4",
"WWWUh5u_VJu",
"q436PGSTYzG",
"w2OiLaBORpR",
"USvT_a29Sk",
"oUI5VcxRkF",
"tYKpxgX0p2E",
"uuwfFz5XYq8",
"29fVRN4hzhv",
"lCpq3Jl6x5_",
"okkzdKnUYP"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer,\n\nthanks again for your work on our manuscript. We are writing to you to point to your attention that we obtained new experimental results on incomplete KGs, in light of the discussion with _Reviewer aN8i_. Unfortunately, we did not receive any reply from the reviewer, though we think that all con... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"oUI5VcxRkF",
"tYKpxgX0p2E",
"q436PGSTYzG",
"w2OiLaBORpR",
"USvT_a29Sk",
"uuwfFz5XYq8",
"okkzdKnUYP",
"lCpq3Jl6x5_",
"29fVRN4hzhv",
"nips_2021_2CQQ_C1i0b",
"nips_2021_2CQQ_C1i0b",
"nips_2021_2CQQ_C1i0b"
] |
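The coarse-to-fine idea described in the SQALER row above lends itself to a short illustration. The following is a minimal, editor-added sketch, not the authors' code: it first searches over relation *types* (whose count is small and independent of the number of edges) and only then executes the chosen relation path on actual edges to refine candidates. The toy KG, the lexical-overlap scorer, and all names are hypothetical placeholders; SQALER itself scores relation paths with a learned model.

```python
# Illustrative sketch of coarse-to-fine multi-hop reasoning (assumptions
# labeled above): coarse search over relation types, fine refinement on edges.
from itertools import product

# Toy KG: a set of (head, relation, tail) triples (hypothetical data).
KG = {
    ("alice", "directed", "film1"),
    ("film1", "starred", "bob"),
    ("carol", "directed", "film2"),
    ("film2", "starred", "dave"),
}
RELATION_TYPES = sorted({r for _, r, _ in KG})  # coarse level: types only

def follow(entities, relation):
    """Fine level: traverse actual KG edges for one relation type."""
    return {t for (h, r, t) in KG if r == relation and h in entities}

def score_path(path, question_tokens):
    """Hypothetical scorer: lexical overlap between relations and question."""
    return len(set(path) & question_tokens)

def answer(question_tokens, seeds, max_hops=2):
    # Coarse step: enumerate relation-type sequences, O(|R|^hops) candidates,
    # independent of the number of edges or nodes in the KG.
    paths = [p for h in range(1, max_hops + 1)
             for p in product(RELATION_TYPES, repeat=h)]
    best = max(paths, key=lambda p: score_path(p, question_tokens))
    # Fine step: execute only the selected path on the KG to get candidates.
    entities = set(seeds)
    for rel in best:
        entities = follow(entities, rel)
    return best, entities

path, candidates = answer({"directed", "starred"}, {"alice"})
print(path, candidates)  # ('directed', 'starred') {'bob'}
```

The design point the abstract makes is visible here: the expensive enumeration happens over relation types rather than edges, and the edge-level traversal touches only the one path selected at the coarse level.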
nips_2021_-h6Ldc0MO- | Out-of-Distribution Generalization in Kernel Regression | In real world applications, the data generating process for training a machine learning model often differs from what the model encounters in the test stage. Understanding how and whether machine learning models generalize under such distributional shifts has been a theoretical challenge. Here, we study generalization in kernel regression when the training and test distributions are different, using methods from statistical physics. Using the replica method, we derive an analytical formula for the out-of-distribution generalization error applicable to any kernel and real datasets. We identify an overlap matrix that quantifies the mismatch between distributions for a given kernel as a key determinant of generalization performance under distribution shift. Using our analytical expressions, we elucidate various generalization phenomena, including possible improvement in generalization when there is a mismatch. We develop procedures for optimizing training and test distributions for a given data budget to find best- and worst-case generalization under the shift. We present applications of our theory to real and synthetic datasets and for many kernels. We compare the results of our theory applied to the Neural Tangent Kernel with simulations of wide networks and show agreement. We analyze linear regression in further depth.
| accept | This paper gives a detailed study of out-of-distribution generalization in kernel regression, going substantially beyond known results for this particular problem; the results are both of independent interest and may serve as a baseline of understanding for similar results in other methods. Some concerns came up in the reviews, both about the framing (these seem to have been generally resolved in the discussion phase, though please make sure they are reflected in the final version of the paper) and in particular about the style of result in Proposition 1, which is worth thinking a little further about. Overall, though, this will be a nice contribution to the conference. | test | [
"I_Ndv5rwN7B",
"vCeiVtQzycc",
"fVaiv24eXep",
"Z20-lfh081K",
"aTZYcdbMxLx",
"vhvRAzR8ZIa",
"pdTztrg7Ubs",
"8HUBk3kFOSL",
"KCK1iLqtJN2",
"MedvvJA-jPY",
"agXw5DibLpg"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The rebuttal to me, as well as other reviewers, clarified most of my questions. \n",
"The paper applies the so-called \"replica method\" to tackle the generalization issue for kernel regression where there is a distribution shift between training data and testing data. It presents the derivation of kernel regre... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"pdTztrg7Ubs",
"nips_2021_-h6Ldc0MO-",
"aTZYcdbMxLx",
"vCeiVtQzycc",
"vCeiVtQzycc",
"MedvvJA-jPY",
"KCK1iLqtJN2",
"agXw5DibLpg",
"nips_2021_-h6Ldc0MO-",
"nips_2021_-h6Ldc0MO-",
"nips_2021_-h6Ldc0MO-"
] |
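To make the "overlap matrix" in the abstract above concrete, here is a minimal editor-added sketch, not the paper's replica-method derivation. It assumes an RBF kernel, a simple Gaussian mean shift between train and test distributions, a Nystrom approximation of the kernel's eigenfunctions, and an arbitrary choice of 10 modes; all of these are assumptions for illustration only.

```python
# Monte Carlo illustration of an overlap matrix between train and test
# distributions for a kernel (assumptions labeled above).
import numpy as np

rng = np.random.default_rng(0)

def rbf(X, Y, bw=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw ** 2))

# Train distribution p = N(0, 1); test distribution q = N(0.5, 1), a shift.
n = 1000
Xp = rng.normal(0.0, 1.0, size=(n, 1))
Xq = rng.normal(0.5, 1.0, size=(n, 1))

# Approximate the kernel's eigenfunctions under p via the Nystrom method:
# eigendecompose K/n on train samples, then extend eigenvectors to new points.
evals, evecs = np.linalg.eigh(rbf(Xp, Xp) / n)
top = np.argsort(evals)[::-1][:10]            # keep the top 10 modes
evals, evecs = evals[top], evecs[:, top]

def eigfuncs(X):
    # phi_i(x); orthonormal under p by construction (Phi_p.T @ Phi_p / n = I).
    return rbf(X, Xp) @ evecs / (np.sqrt(n) * evals)

# Overlap matrix O_ij ~ E_q[phi_i(x) phi_j(x)]; O = I would mean no mismatch.
Phi_q = eigfuncs(Xq)
O = Phi_q.T @ Phi_q / n
print(np.round(np.diag(O), 2))  # diagonal entries deviating from 1 flag shift
```

The intent is only to show the object being measured: with no shift the matrix is (approximately) the identity, and the deviation from the identity is the quantity the paper's theory ties to out-of-distribution generalization error.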
nips_2021_96uH8HeGb9G | FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective | Federated learning (FL) is a popular distributed learning framework that trains a global model through iterative communications between a central server and edge devices. Recent works have demonstrated that FL is vulnerable to model poisoning attacks. Several server-based defense approaches (e.g., robust aggregation) have been proposed to mitigate such attacks. However, we empirically show that under extremely strong attacks, these defensive methods fail to guarantee the robustness of FL. More importantly, we observe that as long as the global model is polluted, the impact of attacks on the global model will remain in subsequent rounds even if there are no subsequent attacks. In this work, we propose a client-based defense, named White Blood Cell for Federated Learning (FL-WBC), which can mitigate model poisoning attacks that have already polluted the global model. The key idea of FL-WBC is to identify the parameter space where the long-lasting attack effect on parameters resides and perturb that space during local training. Furthermore, we derive a certified robustness guarantee against model poisoning attacks and a convergence guarantee to FedAvg after applying our FL-WBC. We conduct experiments on FashionMNIST and CIFAR10 to evaluate the defense against state-of-the-art model poisoning attacks. The results demonstrate that our method can effectively mitigate the impact of model poisoning attacks on the global model within 5 communication rounds with nearly no accuracy drop under both IID and Non-IID settings. Our defense is also complementary to existing server-based robust aggregation approaches and can further improve the robustness of FL under extremely strong attacks.
| accept | This is an interesting paper, and the reviewers find much to recommend it.
Moreover, the post-review rebuttal and discussion seem to have addressed many, if not all, of the reviewers' concerns.
The issue of the attenuation of the attack was discussed. Perhaps the portion of the appendix where this is empirically addressed can be highlighted in the paper.
| val | [
"vqNRNf5FJ2r",
"mR-YGEg1bx6",
"ohNi1EjETDC",
"v9Cj8Xl6QTu",
"hReYS0tfEZc",
"RHcqq5I6Jsj",
"FwkgMb86mHB",
"SHHQttreLfA",
"YsPks0Mxbln",
"woJIBJ33EoW"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer ZHqh,\n\nThanks for your reply. Actually, we have empirical results showing that our theoretical analysis about AEP reconciles with previous works, which are provided in Appendix B. Figure 5 in Appendix B shows that the misclassification loss would increase even without any defense after the first a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"mR-YGEg1bx6",
"FwkgMb86mHB",
"hReYS0tfEZc",
"RHcqq5I6Jsj",
"woJIBJ33EoW",
"YsPks0Mxbln",
"SHHQttreLfA",
"nips_2021_96uH8HeGb9G",
"nips_2021_96uH8HeGb9G",
"nips_2021_96uH8HeGb9G"
] |
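The FL-WBC row above hinges on perturbing the parameter directions where attack effects persist. The PyTorch sketch below is an editor-added illustration of that gist under stated assumptions, not the authors' implementation: the flatness proxy (gradient differences across local steps), the Gaussian noise, the threshold, and all names are hypothetical; the paper derives the relevant parameter space from its own theoretical analysis.

```python
# Illustrative client-side defense in the FL-WBC spirit: during local
# training, perturb coordinates where the loss surface looks flat, since
# attack effects can persist in those directions (assumptions labeled above).
import torch

def local_update_with_wbc(model, loader, loss_fn, lr=0.01, noise_std=1e-3):
    prev_grads = None
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        with torch.no_grad():
            for i, p in enumerate(model.parameters()):
                p -= lr * grads[i]                 # plain local SGD step
                if prev_grads is not None:
                    # Hypothetical flatness proxy: coordinates whose gradient
                    # barely moved between steps (low curvature along them).
                    flat = (grads[i] - prev_grads[i]).abs() < 1e-4
                    # Perturb only the flat coordinates with zero-mean noise,
                    # so the attack effect cannot sit there undisturbed.
                    p += flat * noise_std * torch.randn_like(p)
        prev_grads = grads
    return model
```

In a FedAvg round, a client would run this in place of its usual local update and return the perturbed weights to the server; the abstract's point is that this is complementary to any server-side robust aggregation already in use.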