| paper_id (string) | paper_title (string) | paper_abstract (string) | paper_acceptance (18 classes) | meta_review (string) | label (3 classes) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2021_BfPzZSype5M | RMM: Reinforced Memory Management for Class-Incremental Learning | Class-Incremental Learning (CIL) [38] trains classifiers under a strict memory budget: in each incremental phase, learning is done for new data, most of which is abandoned to free space for the next phase. The preserved data are exemplars used for replaying. However, existing methods use a static and ad hoc strategy for memory allocation, which is often sub-optimal. In this work, we propose a dynamic memory management strategy that is optimized for the incremental phases and different object classes. We call our method reinforced memory management (RMM), leveraging reinforcement learning. RMM training is not naturally compatible with CIL as the past and future data are strictly non-accessible during the incremental phases. We solve this by training the policy function of RMM on pseudo CIL tasks, e.g., the tasks built on the data of the zeroth phase, and then applying it to target tasks. RMM propagates two levels of actions: Level-1 determines how to split the memory between old and new classes, and Level-2 allocates memory for each specific class. In essence, it is an optimizable and general method for memory management that can be used in any replaying-based CIL method. For evaluation, we plug RMM into two top-performing baselines (LUCIR+AANets and POD+AANets [28]) and conduct experiments on three benchmarks (CIFAR-100, ImageNet-Subset, and ImageNet-Full). Our results show clear improvements, e.g., boosting POD+AANets by 3.6%, 4.4%, and 1.9% in the 25-Phase settings of the above benchmarks, respectively. The code is available at https://gitlab.mpi-klsb.mpg.de/yaoyaoliu/rmm/.
| accept | This paper proposes a reinforcement learning-based approach for memory management in Class-Incremental Learning (CIL). The reviewers find the paper well written and the idea novel. They also think that the experiments are thorough and the results positive. There were initially some concerns on the generalizability of the proposed method. The reviewers are generally satisfied by the author responses on the issue. Despite remaining reservations regarding the complexity of the method relative to the performance gains, there is a clear consensus among the reviewers that the paper should be accepted.
| train | [
"xoqrr0cZbSM",
"51o5AESDpOd",
"vTM8ieEx5e1",
"Ud9B3NB6NgP",
"8jGFQ2GYD1",
"5lFHMH2YCq",
"MPue1TLq2LT",
"HnnAH5MzNAY",
"lqiCBrdrfE",
"_b3KKyhe36",
"KsVJYReic7",
"eaALwUyXkSn",
"3xSR5_qf4F",
"9HnlOFMlsgc"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a RL-based memory management policy for CIL problem. They have two-step formulation of policy, which first determines how to allocate the memory to old and new tasks, then determines how many samples to store per each class within the task. A standard RL framework of policy gradient method is us... | [
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
4,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_BfPzZSype5M",
"xoqrr0cZbSM",
"Ud9B3NB6NgP",
"KsVJYReic7",
"nips_2021_BfPzZSype5M",
"8jGFQ2GYD1",
"HnnAH5MzNAY",
"lqiCBrdrfE",
"9HnlOFMlsgc",
"xoqrr0cZbSM",
"3xSR5_qf4F",
"8jGFQ2GYD1",
"nips_2021_BfPzZSype5M",
"nips_2021_BfPzZSype5M"
] |
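
The two-level action space described in the RMM abstract above (Level-1 splits the memory budget between old and new classes; Level-2 allocates it per class) can be made concrete with a small helper. A minimal sketch, assuming a proportional Level-2 rule: the function name, the scores, and all numbers are illustrative assumptions, not the authors' implementation, which learns both levels of actions with a policy network trained on pseudo CIL tasks.

```python
def allocate_exemplars(total_budget, split_ratio, old_scores, new_scores):
    """Toy two-level exemplar allocation in the spirit of RMM (hedged sketch).

    Level-1: `split_ratio` in [0, 1] divides `total_budget` between old and
    new classes. Level-2: each group's budget is spread over its classes in
    proportion to per-class scores. Integer rounding means a few slots may
    go unused; RMM's learned policy replaces the proportional rule here.
    """
    old_budget = int(total_budget * split_ratio)
    new_budget = total_budget - old_budget

    def spread(budget, scores):
        total = sum(scores)
        return [int(budget * s / total) for s in scores]

    return spread(old_budget, old_scores), spread(new_budget, new_scores)

# Example: 2000 exemplar slots, 60% reserved for old classes.
old_alloc, new_alloc = allocate_exemplars(2000, 0.6, [1.0, 2.0, 1.0], [1.0, 1.0])
```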
nips_2021_jE5UVpKhkUG | Learning Compact Representations of Neural Networks using DiscriminAtive Masking (DAM) | A central goal in deep learning is to learn compact representations of features at every layer of a neural network, which is useful for both unsupervised representation learning and structured network pruning. While there is a growing body of work in structured pruning, current state-of-the-art methods suffer from two key limitations: (i) instability during training, and (ii) need for an additional step of fine-tuning, which is resource-intensive. At the core of these limitations is the lack of a systematic approach that jointly prunes and refines weights during training in a single stage, and does not require any fine-tuning upon convergence to achieve state-of-the-art performance. We present a novel single-stage structured pruning method termed DiscriminAtive Masking (DAM). The key intuition behind DAM is to discriminatively prefer some of the neurons to be refined during the training process, while gradually masking out other neurons. We show that our proposed DAM approach has remarkably good performance over a diverse range of applications in representation learning and structured pruning, including dimensionality reduction, recommendation system, graph representation learning, and structured pruning for image classification. We also theoretically show that the learning objective of DAM is directly related to minimizing the L_0 norm of the masking layer. All of our codes and datasets are available https://github.com/jayroxis/dam-pytorch.
| accept | The submission proposes a method for learning a network with sparse parameters. The key idea is to optimize an L0 penalized objective (1). This is non-differentiable, so instead a ReLU is applied to a parametrized tanh scaling (2) which will selectively zero out some fraction of the weights, where the fraction is monotonic in the beta parameter. This leads to the main result of the paper (3), which explicitly demonstrates this relationship. I was initially confused by the comment of Reviewer VSsj who stated that an L0 constraint (non-differentiable) could be trained by gradient descent. One clue is that the "equality" in (3) includes a ceiling function, implicitly showing that there is indeed a relaxation of the L0 constraint, through the tanh function, albeit a different relaxation than is given by e.g. the L1 relaxation.
Overall, a majority of the reviewers feel that this paper is ready to be accepted at NeurIPS. It is an interesting and clearly presented contribution to learned network sparsity. Comparison to baseline methods is on the weak side, but overall this paper is acceptable at NeurIPS. | train | [
"_4yMUmILnRd",
"MquAVXp8l3x",
"iXJJx70ZfTP",
"2rFqV4TCA_",
"ZHoCow6Md2U",
"QNBODfoA2qM",
"3j4-aw4buSm",
"9kBi_ks6KvU",
"ykyhFIs1FUN",
"aiRsMl_6Rs",
"LY-E7RkgpS4"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose a single stage pruning method (DAM) that jointly prunes and refines weights during training. The method uses a monotonically increasing gate function for the neurons/channels in each layer with one trainable parameter. The gate function only discriminates neurons based on the position of them i... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
8,
4
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"nips_2021_jE5UVpKhkUG",
"LY-E7RkgpS4",
"aiRsMl_6Rs",
"ykyhFIs1FUN",
"_4yMUmILnRd",
"9kBi_ks6KvU",
"nips_2021_jE5UVpKhkUG",
"nips_2021_jE5UVpKhkUG",
"nips_2021_jE5UVpKhkUG",
"nips_2021_jE5UVpKhkUG",
"nips_2021_jE5UVpKhkUG"
] |
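
The gating mechanism the DAM meta-review describes (a ReLU applied to a parametrized tanh, with the surviving fraction of units monotone in the beta parameter, and neurons discriminated purely by their position) is simple enough to sketch. This is a hedged reconstruction from the descriptions above; the paper's exact parametrization and constants may differ.

```python
import torch

def dam_style_gate(num_units: int, beta: torch.Tensor) -> torch.Tensor:
    """Position-based gating in the spirit of DAM (hedged sketch).

    Units are ordered by position i/n; a single trainable scalar `beta`
    shifts a tanh ramp, and the ReLU zeroes the tail of units exactly.
    The number of surviving (non-zero) gates is monotone in `beta`,
    matching the meta-review's description of result (3).
    """
    positions = torch.arange(num_units, dtype=torch.float32) / num_units
    return torch.relu(torch.tanh(beta - positions))

gate = dam_style_gate(8, torch.tensor(0.5))  # later units are masked to exactly 0
```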
nips_2021_dZWFBYWp6UY | Neural Auto-Curricula in Two-Player Zero-Sum Games | When solving two-player zero-sum games, multi-agent reinforcement learning (MARL) algorithms often create populations of agents where, at each iteration, a new agent is discovered as the best response to a mixture over the opponent population. Within such a process, the update rules of "who to compete with" (i.e., the opponent mixture) and "how to beat them" (i.e., finding best responses) are underpinned by manually developed game theoretical principles such as fictitious play and Double Oracle. In this paper, we introduce a novel framework—Neural Auto-Curricula (NAC)—that leverages meta-gradient descent to automate the discovery of the learning update rule without explicit human design. Specifically, we parameterise the opponent selection module by neural networks and the best-response module by optimisation subroutines, and update their parameters solely via interaction with the game engine, where both players aim to minimise their exploitability. Surprisingly, even without human design, the discovered MARL algorithms achieve performance competitive with, or even better than, state-of-the-art population-based game solvers (e.g., PSRO) on Games of Skill, differentiable Lotto, non-transitive Mixture Games, Iterated Matching Pennies, and Kuhn Poker. Additionally, we show that NAC is able to generalise from small games to large games, for example training on Kuhn Poker and outperforming PSRO on Leduc Poker. Our work inspires a promising future direction to discover general MARL algorithms solely from data.
| accept | The paper presents a method to use meta-learning to automatically discover curricula (LMAC), in the form of the meta-solver component of the Policy-Space Response Oracles (PSRO) framework of Lanctot et al., 2017. All of the reviewers agree that this is a novel and interesting direction for the game-theoretic approaches to multiagent reinforcement learning in two-player zero-sum games. The reviews also agreed on the clarity of the exposition and presentation. The result showing transfer from Kuhn to Leduc poker (which is roughly one or several hundred times larger depending on the implementation) is especially promising, but it is also between two similarly-structured poker-like games. As pointed out by the reviewers, what remains unclear is precisely what kind of strategies LMAC is learning and more specifically why they help generalization, and any specific reasons for the mixed results across domains. Also, as remarked by 1fYy, the use of exploitability to drive the meta-solver loss is inspired by von Neumann's minimax theorem (or definition of Nash equilibrium) hence qualifies as game-theoretic knowledge, so the authors need to rephrase this claim. The paper could also benefit significantly from several other word changes pointed out in the reviews.
"JCRX5x0FljG",
"W__k5rWl9wM",
"v2rim-xCVZq",
"ymlRgOL9tSe",
"rX1MBtTpfyP",
"C9C4QWkL2_m",
"mHLiGQ6Jbgg",
"lKnCg_CHw4c",
"7Ek1BEAtMmT",
"CPtuM7L5XI",
"3QBaez5cWuB"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Hi, thanks for your response!\n\n> Reviewer: Algorithm has a \"require\" statement but no \"ensure\" statement.\n> Response: Could the reviewer please elaborate, we are unsure about what you mean?\n\nI was just asking for you to add a return statement to the pseudocode. \n\n\n",
" Thank you for addressing the r... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
3
] | [
"C9C4QWkL2_m",
"rX1MBtTpfyP",
"mHLiGQ6Jbgg",
"3QBaez5cWuB",
"CPtuM7L5XI",
"7Ek1BEAtMmT",
"lKnCg_CHw4c",
"nips_2021_dZWFBYWp6UY",
"nips_2021_dZWFBYWp6UY",
"nips_2021_dZWFBYWp6UY",
"nips_2021_dZWFBYWp6UY"
] |
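
Exploitability, the loss both the abstract and the meta-review above revolve around, has a closed form for zero-sum matrix games. A minimal sketch; NAC itself differentiates an estimate of this quantity through the best-response computation, which is not shown here.

```python
import numpy as np

def exploitability(payoff: np.ndarray, x: np.ndarray, y: np.ndarray) -> float:
    """Exploitability of profile (x, y) in a zero-sum game with row payoffs `payoff`.

    Sums both players' best-response improvements; equals
    max_i (A y)_i - min_j (x^T A)_j and is 0 exactly at a Nash equilibrium.
    As the meta-review notes, this quantity encodes game-theoretic knowledge
    via the minimax theorem.
    """
    return float((payoff @ y).max() - (x @ payoff).min())

rps = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])  # rock-paper-scissors
uniform = np.ones(3) / 3
print(exploitability(rps, uniform, uniform))  # 0.0 at the Nash equilibrium
```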
nips_2021_-1AAgrS5FF | ImageBART: Bidirectional Context with Multinomial Diffusion for Autoregressive Image Synthesis | Autoregressive models and their sequential factorization of the data likelihood have recently demonstrated great potential for image representation and synthesis. Nevertheless, they incorporate image context in a linear 1D order by attending only to previously synthesized image patches above or to the left. Not only is this unidirectional, sequential bias of attention unnatural for images, as it disregards large parts of a scene until synthesis is almost complete; it also processes the entire image on a single scale, thus ignoring more global contextual information up to the gist of the entire scene. As a remedy we incorporate a coarse-to-fine hierarchy of context by combining the autoregressive formulation with a multinomial diffusion process: Whereas a multistage diffusion process successively compresses and removes information to coarsen an image, we train a Markov chain to invert this process. In each stage, the resulting autoregressive ImageBART model progressively incorporates context from previous stages in a coarse-to-fine manner. Experiments demonstrate the gain over current autoregressive models, continuous diffusion probabilistic models, and latent variable models. Moreover, the approach makes it possible to control the synthesis process and to trade compression rate against reconstruction accuracy, while still guaranteeing visually plausible results.
| accept | This paper presents several improvements to Taming Transformers through the use of a diffusion instead of autoregressive prior that enable more flexible conditional image generation and improved unconditional image generation and speed. Unlike prior work on diffusion that uses independent distributions for each layer, here autoregressive models for the reverse diffusion process are used, which enables far fewer steps but results in slower sampling. The effectiveness of the approach is demonstrated in several conditional image generation tasks (e.g. arbitrary order inpainting, text-to-image generation, panorama generation). Reviewers were somewhat concerned about the novelty over Taming Transformers and the performance of the method on ImageNet. The authors addressed concerns with ImageNet results in the rebuttal, and provided more extensive comparison to Taming Transformers highlighting the utility of the diffusion process with autoregressive conditional distributions over the larger one-step autoregressive model in that prior work. I'd argue for accepting this paper due to the interesting experimental results and applications, as well as the novelty in the diffusion generative model space of utilizing autoregressive decoders and showing it allows for far fewer steps.
"CCNX7xKz3Ox",
"SZl5ANpXj65",
"2edV9l-UJC2",
"AE-PkyJ5EzW",
"bDdebgT8qv4",
"7hk2Shq3XKx",
"IY34NyBV7K4",
"gTQ4kVI27P8",
"1bKDIFftldS",
"umwIlIghaCI",
"RcffeCWFGX"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposed a coarse-to-fine image synthesis method which aims to incorporate global context into autoregressive models by a diffusion process. The authors divided the whole task into two subtasks: 1) discrete representation learning task 2) learning a Markov chain to reverse a fixed multinomial diffusion p... | [
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
"nips_2021_-1AAgrS5FF",
"IY34NyBV7K4",
"nips_2021_-1AAgrS5FF",
"nips_2021_-1AAgrS5FF",
"1bKDIFftldS",
"umwIlIghaCI",
"CCNX7xKz3Ox",
"RcffeCWFGX",
"AE-PkyJ5EzW",
"nips_2021_-1AAgrS5FF",
"nips_2021_-1AAgrS5FF"
] |
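
The forward (noising) half of a multinomial diffusion over discrete codes, as referenced in the ImageBART abstract above, can be sketched in a few lines. Hedged sketch: ImageBART applies the chain to learned compressed codes and trains autoregressive transformers to invert it; the uniform-resampling corruption and all parameters here are illustrative assumptions.

```python
import torch

def multinomial_diffusion_step(tokens: torch.Tensor, beta: float, vocab_size: int) -> torch.Tensor:
    """One forward (noising) step of a multinomial diffusion over discrete codes.

    With probability `beta`, each code is replaced by a uniformly random one,
    progressively destroying information; the learned Markov chain inverts
    this coarse-to-fine. Shown on raw integer tokens for illustration only.
    """
    resample = torch.rand_like(tokens, dtype=torch.float32) < beta
    random_tokens = torch.randint_like(tokens, high=vocab_size)
    return torch.where(resample, random_tokens, tokens)

codes = torch.randint(0, 1024, (4, 16))           # a mock grid of VQ codes
noised = multinomial_diffusion_step(codes, beta=0.3, vocab_size=1024)
```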
nips_2021_2vyiCxfb6el | From global to local MDI variable importances for random forests and when they are Shapley values | Random forests have been widely used for their ability to provide so-called importance measures, which give insight at a global (per dataset) level on the relevance of input variables to predict a certain output. On the other hand, methods based on Shapley values have been introduced to refine the analysis of feature relevance in tree-based models to a local (per instance) level. In this context, we first show that the global Mean Decrease of Impurity (MDI) variable importance scores correspond to Shapley values under some conditions. Then, we derive a local MDI importance measure of variable relevance, which has a very natural connection with the global MDI measure and can be related to a new notion of local feature relevance. We further link local MDI importances with Shapley values and discuss them in the light of related measures from the literature. The measures are illustrated through experiments on several classification and regression problems.
| accept | This paper presents a previously unknown connection between the global Mean Decrease of Impurity (MDI) variable importance scores and Shapley values under certain conditions. The authors also derive a local MDI feature importance measure and link it with Shapley values. The paper presents novel insights, and the local MDIs are likely practically useful for their simplicity. This paper is clearly written and structured well. Overall, this paper constitutes an important contribution to the field and passes the bar for acceptance to NeurIPS.
"MkZ2jhEsfDc",
"ZtLQ8SxxiE-",
"FSLwhJwnH5D",
"q_60KSLFw2Q",
"0NkNPc6PmU2",
"ml2HMTtFB-",
"L8dYfhLBjh",
"7w9LD0It7v6",
"q9nqafIKPPw",
"t7Pq6TtEbRn"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper demonstrates a connection between Shapley values and a tree-based measure of global feature importance, then proposes a local version of that tree-based feature importance measure. The paper also offers a discussion of the properties satisfied by the proposed method and runs empirical tests of feature i... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"nips_2021_2vyiCxfb6el",
"FSLwhJwnH5D",
"7w9LD0It7v6",
"0NkNPc6PmU2",
"ml2HMTtFB-",
"t7Pq6TtEbRn",
"q9nqafIKPPw",
"MkZ2jhEsfDc",
"nips_2021_2vyiCxfb6el",
"nips_2021_2vyiCxfb6el"
] |
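
For readers wanting to reproduce the global MDI scores this record starts from: scikit-learn's impurity-based `feature_importances_` for forests are exactly these Mean Decrease of Impurity values, averaged over trees. The paper's local MDI measure and its Shapley connection are not part of sklearn; this snippet only shows the global baseline.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global MDI: sklearn's impurity-based importances are the Mean Decrease of
# Impurity scores discussed in the abstract, averaged over all trees.
print(dict(zip(range(X.shape[1]), forest.feature_importances_)))
```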
nips_2021_83A-0x6Pfi_ | Adversarial Robustness of Streaming Algorithms through Importance Sampling | Vladimir Braverman, Avinatan Hasidim, Yossi Matias, Mariano Schain, Sandeep Silwal, Samson Zhou | accept | The submission provides a nice overall message that many streaming algorithms are already adversarially robust. This was deemed an important message. There were some concerns with the presentation, as discussed in the reviews and the follow-ups, which the authors are encouraged to address and correct. | train | [
"diQt7o_l5IJ",
"3rO8ZJBkYu",
"QDL7cwW7sTU",
"L5Ycwqn6p4-",
"N-NTu3HXFzc",
"CZN5kXdc-qe",
"9PObFYnufjW",
"QxO_Ie6jKx",
"WbgO97Ms1Q",
"RCighKC95tv",
"xISYOzzARDi",
"swLp40XdfsZ",
"MIuO_MIcKY",
"i_zNTksYpE9"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper show how streaming algorithms based on importance sampling can achieve adversarial robustness in a specific but quite general adversarial model, where the adversary observes all the intermediate outputs of the algorithm and may even learn past random bits, but has no access to the \"new\" random bits us... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_83A-0x6Pfi_",
"N-NTu3HXFzc",
"CZN5kXdc-qe",
"diQt7o_l5IJ",
"diQt7o_l5IJ",
"QxO_Ie6jKx",
"WbgO97Ms1Q",
"RCighKC95tv",
"diQt7o_l5IJ",
"i_zNTksYpE9",
"nips_2021_83A-0x6Pfi_",
"MIuO_MIcKY",
"nips_2021_83A-0x6Pfi_",
"nips_2021_83A-0x6Pfi_"
] |
nips_2021_W9oywyjO8VN | Tractable Regularization of Probabilistic Circuits | Probabilistic Circuits (PCs) are a promising avenue for probabilistic modeling. They combine advantages of probabilistic graphical models (PGMs) with those of neural networks (NNs). Crucially, however, they are tractable probabilistic models, supporting efficient and exact computation of many probabilistic inference queries, such as marginals and MAP. Further, since PCs are structured computation graphs, they can take advantage of deep-learning-style parameter updates, which greatly improves their scalability. However, this innovation also makes PCs prone to overfitting, which has been observed in many standard benchmarks. Despite the existence of abundant regularization techniques for both PGMs and NNs, they are not effective enough when applied to PCs. Instead, we re-think regularization for PCs and propose two intuitive techniques, data softening and entropy regularization, that both take advantage of PCs' tractability and still have an efficient implementation as a computation graph. Specifically, data softening provides a principled way to add uncertainty in datasets in closed form, which implicitly regularizes PC parameters. To learn parameters from a softened dataset, PCs only need linear time by virtue of their tractability. In entropy regularization, the exact entropy of the distribution encoded by a PC can be regularized directly, which is again infeasible for most other density estimation models. We show that both methods consistently improve the generalization performance of a wide variety of PCs. Moreover, when paired with a simple PC structure, we achieved state-of-the-art results on 10 out of 20 standard discrete density estimation benchmarks. Open-source code and experiments are available at https://github.com/UCLA-StarAI/Tractable-PC-Regularization.
| accept | Thanks for submitting your work to NeurIPS. Getting probabilistic circuits (PCs) to scale well is important. Here, the paper makes two important contributions to the regularization of PCs, namely, data softening and entropy regularization. While both methods are infeasible for many machine learning models, the paper proves that they can be efficiently implemented for PCs. All reviewers agree that this is a very solid and important contribution. And I fully agree with this sentiment.
"begonc2u0Hc",
"OlRein1azWN",
"z6S69fHYhAH",
"cw40_Gp9sjT",
"zhY4V9YxOuv",
"7mjmiU86p34",
"TiIx2yZVILx",
"O_T_Qfi711W"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their time spent reviewing our work and for evaluating it to be well-written and technically sound.\n\n - Clarification of Lines 48-49:\n\nThank you for spotting this presentation issue. As formally stated in Thm. 3, the linear run-time of data softening is achievable when the PC is dete... | [
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
3,
5,
3,
4
] | [
"O_T_Qfi711W",
"TiIx2yZVILx",
"7mjmiU86p34",
"zhY4V9YxOuv",
"nips_2021_W9oywyjO8VN",
"nips_2021_W9oywyjO8VN",
"nips_2021_W9oywyjO8VN",
"nips_2021_W9oywyjO8VN"
] |
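
One plausible reading of the data softening idea in the abstract above, shown by materializing the injected uncertainty through sampling. This is a loose illustration only: the paper's point is precisely that PCs can handle the softened (weighted) dataset in closed form, in linear time, without any such sampling, and the flip-probability parametrization here is an assumption.

```python
import numpy as np

def soften(data: np.ndarray, beta: float, rng=None) -> np.ndarray:
    """Hedged sketch of data softening on a binary dataset.

    Each observed bit is treated as correct only with confidence `beta`;
    here that uncertainty is materialized by flipping bits with probability
    1 - beta. The paper instead processes the softened dataset exactly,
    exploiting PC tractability, rather than by Monte Carlo.
    """
    rng = np.random.default_rng() if rng is None else rng
    flips = rng.random(data.shape) < (1.0 - beta)
    return np.where(flips, 1 - data, data)

softened = soften(np.array([[0, 1, 1], [1, 0, 1]]), beta=0.9)
```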
nips_2021_LOHyqjfyra | On Interaction Between Augmentations and Corruptions in Natural Corruption Robustness | Invariance to a broad array of image corruptions, such as warping, noise, or color shifts, is an important aspect of building robust models in computer vision. Recently, several new data augmentations have been proposed that significantly improve performance on ImageNet-C, a benchmark of such corruptions. However, there is still a lack of basic understanding of the relationship between data augmentations and test-time corruptions. To this end, we develop a feature space for image transforms, and then use a new measure in this space between augmentations and corruptions, called the Minimal Sample Distance, to demonstrate there is a strong correlation between similarity and performance. We then investigate recent data augmentations and observe a significant degradation in corruption robustness when the test-time corruptions are sampled to be perceptually dissimilar from ImageNet-C in this feature space. Our results suggest that test error can be improved by training on perceptually similar augmentations, and data augmentations may not generalize well beyond the existing benchmark. We hope our results and tools will allow for more robust progress towards improving robustness to image corruptions. We provide code at https://github.com/facebookresearch/augmentation-corruption.
| accept | This is an interesting paper, and there was clear agreement among the reviewers that this marks an interesting contribution. | train | [
"WkuhQ-mTLUs",
"fZ6IJ7K2zjn",
"m982qpKguQ_",
"u_HS0GnoQk",
"M6aXRAd4CNQ",
"nFLUwZ4GwdW"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper study the similarities between the augmentations/augmentation schemes with the corruptions and reason that the perceptual similarity between them is a strong predictor for the robustness improvements. The authors propose a new metric, Minimal Sample Distance (MSD), to evaluate the perceptual similarity b... | [
7,
-1,
-1,
-1,
7,
7
] | [
4,
-1,
-1,
-1,
4,
5
] | [
"nips_2021_LOHyqjfyra",
"WkuhQ-mTLUs",
"nFLUwZ4GwdW",
"M6aXRAd4CNQ",
"nips_2021_LOHyqjfyra",
"nips_2021_LOHyqjfyra"
] |
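
A hedged guess at the shape of the Minimal Sample Distance from the abstract above: the distance from a corruption's embedding to the nearest sampled augmentation in the learned transform feature space. The Euclidean metric and this aggregation are assumptions; consult the paper for the actual feature space and distance.

```python
import numpy as np

def minimal_sample_distance(aug_features: np.ndarray, corruption_feature: np.ndarray) -> float:
    """One plausible reading of Minimal Sample Distance (hedged sketch).

    `aug_features` holds feature-space embeddings of transforms sampled from
    an augmentation scheme (shape: n_samples x dim); MSD is the distance from
    the corruption's embedding to the closest sampled augmentation.
    """
    dists = np.linalg.norm(aug_features - corruption_feature, axis=1)
    return float(dists.min())

msd = minimal_sample_distance(np.random.randn(100, 16), np.random.randn(16))
```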
nips_2021__4VxORHq-0g | Dynamic Distillation Network for Cross-Domain Few-Shot Recognition with Unlabeled Data | Most existing works in few-shot learning rely on meta-learning the network on a large base dataset which is typically from the same domain as the target dataset. We tackle the problem of cross-domain few-shot learning where there is a large shift between the base and target domain. The problem of cross-domain few-shot recognition with unlabeled target data is largely unaddressed in the literature. STARTUP was the first method that tackles this problem using self-training. However, it uses a fixed teacher pretrained on a labeled base dataset to create soft labels for the unlabeled target samples. As the base dataset and unlabeled dataset are from different domains, projecting the target images in the class-domain of the base dataset with a fixed pretrained model might be sub-optimal. We propose a simple dynamic distillation-based approach to facilitate unlabeled images from the novel/base dataset. We impose consistency regularization by calculating predictions from the weakly-augmented versions of the unlabeled images from a teacher network and matching it with the strongly augmented versions of the same images from a student network. The parameters of the teacher network are updated as an exponential moving average of the parameters of the student network. We show that the proposed network learns representation that can be easily adapted to the target domain even though it has not been trained with target-specific classes during the pretraining phase. Our model outperforms the current state-of-the-art method by 4.4% for 1-shot and 3.6% for 5-shot classification in the BSCD-FSL benchmark, and also shows competitive performance on traditional in-domain few-shot learning task.
| accept | The paper received mixed ratings, with three reviewers recommending acceptance and one rejection. The reviewers' main concerns include the novelty of the method, its motivation, missing comparison to some baselines, and the clarity of some parts of the paper. The paper went through several rounds of questions/answers, and the authors' feedback addressed the concerns of most reviewers. The most negative reviewer did not update their score, but we believe that the latest feedback from the authors, on which the reviewer did not comment, better addressed some of the concerns raised by the reviewer. As such, we believe that the paper can be accepted to NeurIPS but strongly recommend the authors to incorporate their feedback in the final version.
"h1oG3iuPjB",
"q2FHqz9jqXp",
"LiJceDcHtTy",
"plBQdP7sLPc",
"vJCrWFv_dvN",
"lYm8-R7Kjur",
"3ScrdOyplhX",
"1ZKxO-u8Jt",
"KeyadZBN13R",
"zLm2oot6aFU",
"t2zAVsJdpIj",
"xnf1nfig3lb",
"nBCRzpPYWIr",
"i0KDxMwMtkT",
"nNkxt9L9rVh",
"_1OH2VM99Ew",
"Ifk09SjtAuc",
"MRRrVKaHyvR",
"Je1va04ToLl... | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_r... | [
" Dear Reviewer pL1o,\n\nThank you for your constructive comments and suggestions. As the discussion phase is nearing its end, we wondered if you might still have any concerns that we could address. We believe our response addressed all your questions/concerns, and hope that the work's impact and results are better... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"vJCrWFv_dvN",
"t2zAVsJdpIj",
"3ScrdOyplhX",
"xnf1nfig3lb",
"lYm8-R7Kjur",
"nBCRzpPYWIr",
"1ZKxO-u8Jt",
"KeyadZBN13R",
"zLm2oot6aFU",
"_1OH2VM99Ew",
"nNkxt9L9rVh",
"Ifk09SjtAuc",
"i0KDxMwMtkT",
"ORwKmDFi06A",
"Z6kxJ5ht5jd",
"MRRrVKaHyvR",
"Je1va04ToLl",
"nips_2021__4VxORHq-0g",
"... |
nips_2021_85h_DhXf3v | Hypergraph Propagation and Community Selection for Objects Retrieval | Spatial verification is a crucial technique for particular object retrieval. It utilizes spatial information for the accurate detection of true positive images. However, existing query expansion and diffusion methods cannot efficiently propagate the spatial information in an ordinary graph with scalar edge weights, resulting in low recall or precision. To tackle these problems, we propose a novel hypergraph-based framework that efficiently propagates spatial information in query time and retrieves an object in the database accurately. Additionally, we propose using the image graph's structure information through a community selection technique, to measure the accuracy of the initial search result and to provide correct starting points for hypergraph propagation without heavy spatial verification computations. Experimental results on ROxford and RParis show that our method significantly outperforms the existing query expansion and diffusion methods.
| accept | This seems like a good approach, which reviewers considered well motivated and supported by promising results. Some weak points were identified, such as the lack of theoretical guarantees and minor weaknesses/lack of clarity in the presentation. Reviewers largely appreciated the author response, and it seems that if the changes are made the manuscript can be of good value to the NeurIPS audience.
"v3Ew1FgrSoQ",
"5z_K78ohZm",
"nYJRYFUEYxE",
"C3uMMfGNJCt",
"TWAzzihRpc",
"8l9l0_rQaBt",
"uxxiTFostV",
"W8jvhpw-iRZ",
"Ri_YGYsZ9fN"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper addresses the object retrieval problem from images. Traditional query expansion methods can only utilize very similar images, resulting in low recall. On the other hand, diffusion propagation methods on a image graph often ambiguity problem of propagation. Instead, spatial verification has to be used for... | [
6,
5,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"nips_2021_85h_DhXf3v",
"nips_2021_85h_DhXf3v",
"5z_K78ohZm",
"Ri_YGYsZ9fN",
"C3uMMfGNJCt",
"W8jvhpw-iRZ",
"v3Ew1FgrSoQ",
"nips_2021_85h_DhXf3v",
"nips_2021_85h_DhXf3v"
] |
nips_2021_uSQQH7Fj5U | Deep learning is adaptive to intrinsic dimensionality of model smoothness in anisotropic Besov space | Deep learning has exhibited superior performance for various tasks, especially for high-dimensional datasets, such as images. To understand this property, we investigate the approximation and estimation ability of deep learning on anisotropic Besov spaces. The anisotropic Besov space is characterized by direction-dependent smoothness and includes several function classes that have been investigated thus far. We demonstrate that the approximation error and estimation error of deep learning only depend on the average value of the smoothness parameters in all directions. Consequently, the curse of dimensionality can be avoided if the smoothness of the target function is highly anisotropic. Unlike existing studies, our analysis does not require a low-dimensional structure of the input data. We also investigate the minimax optimality of deep learning and compare its performance with that of the kernel method (more generally, linear estimators). The results show that deep learning has better dependence on the input dimensionality if the target function possesses anisotropic smoothness, and it achieves an adaptive rate for functions with spatially inhomogeneous smoothness.
| accept | This is a theory paper that analyzes deep neural networks through the lens of anisotropic Besov space. The paper provides a comprehensive study of adaptivity of neural networks to anisotropic (=direction-dependent) smoothness. The paper is clearly written and the authors have carefully positioned their work relative to prior works. | train | [
"B-NQdDykfjA",
"7u0wStwKTBf",
"cKB_ubTfcfi",
"MohfFJAxqb3",
"mm97_UwUnLv",
"YX35xIbwl62",
"1vd11rr2XxP"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"In this paper, the authors investigate the approximation and estimation error of deep neural networks on anisotropic Besov spaces. It has been shown that Holder and Sobolev function spaces are plagued by an unavoidable curse of dimensionality, and a line of research has focused on showing how deep neural network c... | [
7,
7,
-1,
-1,
-1,
-1,
7
] | [
3,
4,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_uSQQH7Fj5U",
"nips_2021_uSQQH7Fj5U",
"YX35xIbwl62",
"7u0wStwKTBf",
"B-NQdDykfjA",
"1vd11rr2XxP",
"nips_2021_uSQQH7Fj5U"
] |
nips_2021_Yowoe1scJOD | QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning | Traditionally, federated learning (FL) aims to train a single global model while collaboratively using multiple clients and a server. Two natural challenges that FL algorithms face are heterogeneity in data across clients and collaboration of clients with diverse resources. In this work, we introduce a quantized and personalized FL algorithm QuPeD that facilitates collective (personalized model compression) training via knowledge distillation (KD) among clients who have access to heterogeneous data and resources. For personalization, we allow clients to learn compressed personalized models with different quantization parameters and model dimensions/structures. Towards this, first we propose an algorithm for learning quantized models through a relaxed optimization problem, where quantization values are also optimized over. When each client participating in the (federated) learning process has different requirements of the compressed model (both in model dimension and precision), we formulate a compressed personalization framework by introducing knowledge distillation loss for local client objectives collaborating through a global model. We develop an alternating proximal gradient update for solving this compressed personalization problem, and analyze its convergence properties. Numerically, we validate that QuPeD outperforms competing personalized FL methods, FedAvg, and local training of clients in various heterogeneous settings.
| accept | The reviewers had a few concerns that were addressed in the authors' response. The majority of the reviewers agreed that the paper provides an interesting contribution to FL. The paper is missing a lot of references, on the quantization side as well as other topics, and I recommend the authors to carefully review the literature and cite the related work properly. | train | [
"36IgirLE-j_",
"epAYobzhPT1",
"5d4DWwpebyK",
"H7uJ4SGQbxx",
"-ZqLZMyfpNm",
"imcnEH8xXf",
"o19hiWW3EoU",
"DycuUuXHT6D",
"YnHuBAyL6N5",
"HEXSZkr3Ihr",
"KyVCutMKLwr",
"U7Lf-Mtmj2V",
"1WAA0feAgS4"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We hope that we have addressed all the concerns raised earlier, and we are happy to clarify any further questions you may have. We would like to update you on some additional numerical results we have regarding client sampling, as described below.\n\n**Client sampling with 0.1 sampling ratio:** We did additional ... | [
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
5,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"DycuUuXHT6D",
"YnHuBAyL6N5",
"H7uJ4SGQbxx",
"HEXSZkr3Ihr",
"o19hiWW3EoU",
"nips_2021_Yowoe1scJOD",
"imcnEH8xXf",
"1WAA0feAgS4",
"KyVCutMKLwr",
"U7Lf-Mtmj2V",
"nips_2021_Yowoe1scJOD",
"nips_2021_Yowoe1scJOD",
"nips_2021_Yowoe1scJOD"
] |
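
The quantization target in the QuPeD abstract above, learning quantized models where the quantization values themselves are optimized, can be made concrete with the hard assignment it relaxes. Hedged sketch: the paper trains through a relaxed objective with alternating proximal updates rather than this snap-to-nearest rule, which is shown only to make the target of the relaxation explicit.

```python
import torch

def nearest_center_quantize(weights: torch.Tensor, centers: torch.Tensor) -> torch.Tensor:
    """Hard quantization with learnable quantization values (hedged sketch).

    Each weight snaps to its nearest entry of `centers`, the per-client
    quantization values that QuPeD also optimizes over alongside the model.
    """
    idx = torch.argmin((weights.unsqueeze(-1) - centers).abs(), dim=-1)
    return centers[idx]

w = torch.randn(5)
print(nearest_center_quantize(w, torch.tensor([-0.5, 0.0, 0.5])))
```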
nips_2021_0zXJRJecC_ | Model Adaptation: Historical Contrastive Learning for Unsupervised Domain Adaptation without Source Data | Unsupervised domain adaptation aims to align a labeled source domain and an unlabeled target domain, but it requires access to the source data, which often raises concerns in data privacy, data portability and data transmission efficiency. We study unsupervised model adaptation (UMA), also called Unsupervised Domain Adaptation without Source Data, an alternative setting that aims to adapt source-trained models towards target distributions without accessing source data. To this end, we design an innovative historical contrastive learning (HCL) technique that exploits the historical source hypothesis to make up for the absence of source data in UMA. HCL addresses the UMA challenge from two perspectives. First, it introduces historical contrastive instance discrimination (HCID) that learns from target samples by contrasting their embeddings which are generated by the currently adapted model and the historical models. With the historical models, HCID encourages UMA to learn instance-discriminative target representations while preserving the source hypothesis. Second, it introduces historical contrastive category discrimination (HCCD) that pseudo-labels target samples to learn category-discriminative target representations. Specifically, HCCD re-weights pseudo labels according to their prediction consistency across the current and historical models. Extensive experiments show that HCL outperforms state-of-the-art methods consistently across a variety of visual tasks and setups.
| accept | The paper tackles a challenging problem of unsupervised domain adaptation (UDA) without access to source data while assuming access to a pretrained source model. The proposed method takes inspiration from contrastive losses (such as InfoNCE) and tailors these for the UDA setting. Reviewers appreciated the significance of the problem tackled by the paper (source-free UDA) and the experiments on a variety of vision tasks that show clear improvement over existing methods. Authors should revise the paper taking into account the reviewers' comments, including a discussion of the prior work pointed out in the reviews on the source-free UDA and universal DA problems. | test | [
"tmchCH5JCEh",
"fo7vpc2fy12",
"8bWFxFolQyM",
"0UrdNTqvB1k",
"wZwuP6jzqiF",
"jzNuiwqssP3",
"SuEGIbpgcXv",
"PCJmjLM1s9N",
"YYN_Y0hd-W"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The authors address unsupervised model adaptation (UMA). While unsupervised domain adaptation (UDA) lets the model access the source domain data during an adaptation to a target domain, such access could raise multiple concerns such as privacy and portability. UMA does not allow the model to access the source doma... | [
8,
7,
-1,
6,
7,
-1,
-1,
-1,
-1
] | [
5,
4,
-1,
4,
4,
-1,
-1,
-1,
-1
] | [
"nips_2021_0zXJRJecC_",
"nips_2021_0zXJRJecC_",
"jzNuiwqssP3",
"nips_2021_0zXJRJecC_",
"nips_2021_0zXJRJecC_",
"0UrdNTqvB1k",
"tmchCH5JCEh",
"fo7vpc2fy12",
"wZwuP6jzqiF"
] |
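
The HCID component described in the abstract above contrasts embeddings of the same target samples from the current model against those from frozen historical models. A minimal sketch as plain InfoNCE with in-batch negatives; the paper's weighting, memory details, and historical-model selection are omitted, and the temperature is an assumption.

```python
import torch
import torch.nn.functional as F

def hcid_loss(current_emb: torch.Tensor, historical_emb: torch.Tensor, temperature: float = 0.07):
    """Hedged sketch of historical contrastive instance discrimination.

    Embeddings of the same batch from the currently adapted model
    (`current_emb`) and from a frozen historical model (`historical_emb`)
    form positive pairs; other samples in the batch act as negatives.
    """
    q = F.normalize(current_emb, dim=-1)
    k = F.normalize(historical_emb, dim=-1)
    logits = q @ k.t() / temperature            # (B, B); diagonal entries are positives
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)
```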
nips_2021_HCrp4pdk2i | The Out-of-Distribution Problem in Explainability and Search Methods for Feature Importance Explanations | Feature importance (FI) estimates are a popular form of explanation, and they are commonly created and evaluated by computing the change in model confidence caused by removing certain input features at test time. For example, in the standard Sufficiency metric, only the top-k most important tokens are kept. In this paper, we study several under-explored dimensions of FI explanations, providing conceptual and empirical improvements for this form of explanation. First, we advance a new argument for why it can be problematic to remove features from an input when creating or evaluating explanations: the fact that these counterfactual inputs are out-of-distribution (OOD) to models implies that the resulting explanations are socially misaligned. The crux of the problem is that the model prior and random weight initialization influence the explanations (and explanation metrics) in unintended ways. To resolve this issue, we propose a simple alteration to the model training process, which results in more socially aligned explanations and metrics. Second, we compare among five approaches for removing features from model inputs. We find that some methods produce more OOD counterfactuals than others, and we make recommendations for selecting a feature-replacement function. Finally, we introduce four search-based methods for identifying FI explanations and compare them to strong baselines, including LIME, Anchors, and Integrated Gradients. Through experiments with six diverse text classification datasets, we find that the only method that consistently outperforms random search is a Parallel Local Search (PLS) that we introduce. Improvements over the second best method are as large as 5.4 points for Sufficiency and 17 points for Comprehensiveness.
| accept | The reviews for this paper were mostly borderline, with one strongly negative review. While the reviewers thought that the paper addresses an important problem in a novel way, and provides interesting insights, the author response did not lead them to change their scores.
Despite this lukewarm assessment, I believe the paper makes contributions that likely merit acceptance: (1) Tackling an important problem in generating and evaluating explanations in NLP and ML more broadly (the OODness of feature importance attribution methods that are based on masking, ablation, etc., since they produce counterfactual/OOD examples); (2) providing a clear approach to mitigate this problem (adding counterfactuals at training time), which can work well (little drop compared to standard models); (3) evaluating various methods to create explanations and finding that some work well in the new counterfactually trained models (achieve good sufficiency/comprehensiveness scores, which are standard).
Given these contributions, and considering that no paper is perfect, I recommend acceptance. Since my recommendation goes somewhat against the general sense of the reviews, I provide here a detailed discussion of the main issues raised by the reviewers, how the authors responses, and my opinion of the issue. In any case, I urge the authors to take these issues into account in their next revision.
**Comments by Reviewer NMtf**
- argument on social alignment doesn't add much. Authors: it's another perspective on why FI OODness is problematic, and may help. AC: natural application of the social alignment argument from Jacovi and Goldberg to another class of explanations.
- Using accuracy to assess distribution shift is misleading, can conflate OODness and information removal. Authors: remove the same information in all conditions, difference is due to OODness. AC: agree that accuracy change between iid and ood data is a reasonable metric of OODness.
- Not convinced about sufficiency and comprehensiveness metrics. Authors: These are standard in the literature. AC: agree, these are standard faithfulness metrics. What's missing here is an evaluation of plausibility, for example by comparing with human-annotated explanations (highlights, rationales).
- Baselines do terribly. Authors: LIME does well, IG is likely a poor method. AC: It's tricky to compare when the benchmark is not common. It could have been better to use standard datasets from explanations in NLP, where prior results are available. But LIME results are indeed quite good.
**Reviewer Lhz2**
- The contributions do not form a consistent story line. Authors disagree and explain that the part on local attributions depends on fixing the OOD problem of explanations. AC: agree, the first part is a nice contribution in itself and it makes the rest possible.
- Counterfactual is an incorrect term here, since a counterfactual explanation means something different. Authors: did not use counterfactual explanation. AC: best to clarify that counterfactual input is not to be confused with the technical term of counterfactual explanation.
- questions the sufficiency/comprehensiveness metrics and prefers more meaningful metrics like SHAP. Here I agree with the authors that it is useful to apply these empirical metrics, which are standard in the literature, even if they do not have some nice theoretical properties like SHAP.
- Changing the training distribution is questionable. Authors: to use FI explanations, have to align training and test distributions, so if we want to keep test time metrics, must change training distribution. AC: indeed, I was also concerned by changing the training distribution and hence the model, but was pleased with the minor drop in accuracy, at least for some of the methods. However, it's possible that a model trained on the original distribution arrives at decisions in a different way than a model trained on the counterfactually-modified distribution, but they have similar performance. This is a hard problem to get around, but one that should be discussed.
- Search methods provide binary importance while existing methods provide scalar scores. Authors argue that their method still brings useful rankings (15 out of 24 vs LIME) and that for selecting features it works best. AC: not convinced by the author response, as 15/24 seems weak, and also as scalar weights can indeed be useful, both for interpretation (users might prefer them) and for downstream tasks (training with explanations, etc.).
- Sufficiency and comprehensiveness are often worse when using the proposed counterfactual training. Authors explain they recommend using CT regardless of these metrics, because using CT guards again OODness. AC: while I agree with the idea in principle, I'm unconvinced by the argument. If using a standard model leads to better explanations according to the metrics in question, then the argument for using CT grows much weaker.
**Reviewer 2JSj**
- Had some questions that seemed to have been answered.
- Not clear that the explanations will truly be more socially aligned. AC: did not see a clear answer to this. The answer to the issue of social alignment and how different models react to OOD counterfactuals is part of the story, but probably not all.
**Reviewer abDP**
- there are existing method to deal with the problem of estimating effects in the case of counterfactuals (mentioning specific studies: CausaLM, INLP, CaCE). Authors: don't see how these particular studies lessen the contribution, since they don't discuss how OODness affects social alignment. Also say that the mentioned methods are not meant to use with individual data points. AC: agree. Those should probably be discussed, but are not at the core of the current work.
- Counterfactual training can be misleading in terms of evaluation. Authors argue that in their case it makes sense, because the CT models work better on OOD data, and they're not interested in different kinds of OODness. AC: agree, this actually seems reasonable for the present use case of evaluating explanations, if not for robustness more broadly. The authors should clarify their goals in this evaluation and distinguish from general model robustness.
| test | [
"YM1yIKcMxph",
"jHzGdLrs7u2",
"0qXOgSCuLPU",
"BT9YkAvDYAm",
"5SsQ_RNiY29",
"HHS46Nzr6PN",
"5T5c2LJQv4o",
"lc5-SiFTu8Q",
"XJUldUx5Nf1"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper addresses a core limitation of existing feature importance methods, where importance scores are generated by comparing original examples to unrealistic counterfactuals. To solve this issue, the authors propose a solution where counterfactuals used to compute feature importance are generated by manipulat... | [
6,
5,
6,
-1,
-1,
-1,
-1,
-1,
3
] | [
4,
3,
4,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_HCrp4pdk2i",
"nips_2021_HCrp4pdk2i",
"nips_2021_HCrp4pdk2i",
"YM1yIKcMxph",
"0qXOgSCuLPU",
"XJUldUx5Nf1",
"jHzGdLrs7u2",
"nips_2021_HCrp4pdk2i",
"nips_2021_HCrp4pdk2i"
] |
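
The Sufficiency and Comprehensiveness metrics discussed throughout this record follow the standard definitions in this literature. A minimal sketch for a single example; the `[MASK]` replacement is just one of the feature-replacement functions the paper compares, and is the root of the OOD problem it analyzes.

```python
def fi_metrics(predict, tokens, importance, k, mask_token="[MASK]"):
    """Sufficiency / comprehensiveness for one example (hedged sketch).

    `predict` maps a token list to the probability of the originally
    predicted class. Sufficiency keeps only the top-k tokens (lower drop is
    better); comprehensiveness removes them instead (higher drop is better).
    """
    order = sorted(range(len(tokens)), key=lambda i: -importance[i])
    topk = set(order[:k])
    keep_topk = [t if i in topk else mask_token for i, t in enumerate(tokens)]
    drop_topk = [mask_token if i in topk else t for i, t in enumerate(tokens)]
    p_full = predict(tokens)
    return {
        "sufficiency": p_full - predict(keep_topk),
        "comprehensiveness": p_full - predict(drop_topk),
    }

toy_predict = lambda toks: 0.9 if "good" in toks else 0.2
print(fi_metrics(toy_predict, ["a", "good", "movie"], [0.1, 0.8, 0.1], k=1))
```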
nips_2021_e9_UPqMNfi | Control Variates for Slate Off-Policy Evaluation | We study the problem of off-policy evaluation from batched contextual bandit data with multidimensional actions, often termed slates. The problem is common to recommender systems and user-interface optimization, and it is particularly challenging because of the combinatorially-sized action space. Swaminathan et al. (2017) have proposed the pseudoinverse (PI) estimator under the assumption that the conditional mean rewards are additive in actions. Using control variates, we consider a large class of unbiased estimators that includes as specific cases the PI estimator and (asymptotically) its self-normalized variant. By optimizing over this class, we obtain new estimators with risk improvement guarantees over both the PI and the self-normalized PI estimators. Experiments with real-world recommender data as well as synthetic data validate these improvements in practice.
| accept | The paper studies off-policy evaluation with combinatorial action spaces, referred to as "slate recommendation," and develops control-variate approaches that trade off bias and variance and provide a more complete story of the estimator landscape in this space. In addition, experimental results are provided, which help bridge the gap between theory and practice.
Overall, the paper is clearly written and well executed and the reviewers found few weaknesses. As such, we recommend acceptance. Congrats! | train | [
"q-CUqF4BZdU",
"N8JjD8Pim-w",
"I7KxuVB41uK",
"mTCWavWZp0X",
"fpOYGKicLKM",
"fEa5Rb_aEOH",
"q0YZTBGBld",
"Gknq7RcPG81",
"_0QYnKHMqAn"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a new off-policy evaluation estimator for slate recommendation. The main idea focuses on a specific setup of Swaminathan et al, 2017, where the target / behavior policy are factorized. The new estimator generalizes the pseudo-inverse estimator by Swaminathan et al, 2017 with control variates, di... | [
6,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"nips_2021_e9_UPqMNfi",
"fEa5Rb_aEOH",
"_0QYnKHMqAn",
"q-CUqF4BZdU",
"Gknq7RcPG81",
"q0YZTBGBld",
"nips_2021_e9_UPqMNfi",
"nips_2021_e9_UPqMNfi",
"nips_2021_e9_UPqMNfi"
] |
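
The control-variate construction at the heart of the abstract above is the classical Monte Carlo one, sketched here in its generic form rather than as the pseudoinverse estimator itself. Hedged sketch: the plug-in coefficient makes the correction only asymptotically exact, and the mock data is purely illustrative.

```python
import numpy as np

def cv_estimate(estimates: np.ndarray, control: np.ndarray, control_mean: float) -> float:
    """Generic control-variate correction (hedged sketch, not the PI estimator).

    `estimates` are unbiased per-sample value estimates (e.g. importance-
    weighted slate rewards); `control` is any statistic with known
    expectation `control_mean` (importance weights qualify, with mean 1).
    The optimal coefficient is Cov(estimate, control) / Var(control); with
    the weights as control variate this asymptotically recovers the
    self-normalized estimator mentioned in the abstract.
    """
    cov = np.cov(estimates, control)
    c = cov[0, 1] / cov[1, 1]
    return float(estimates.mean() - c * (control.mean() - control_mean))

rng = np.random.default_rng(0)
w = rng.lognormal(mean=-0.125, sigma=0.5, size=1000)  # mock weights with E[w] = 1
r = rng.random(1000)                                  # mock rewards
print(cv_estimate(w * r, w, control_mean=1.0))
```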
nips_2021_zQvxc8ul2rR | Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation | While agents trained by Reinforcement Learning (RL) can solve increasingly challenging tasks directly from visual observations, generalizing learned skills to novel environments remains very challenging. Extensive use of data augmentation is a promising technique for improving generalization in RL, but it is often found to decrease sample efficiency and can even lead to divergence. In this paper, we investigate causes of instability when using data augmentation in common off-policy RL algorithms. We identify two problems, both rooted in high-variance Q-targets. Based on our findings, we propose a simple yet effective technique for stabilizing this class of algorithms under augmentation. We perform extensive empirical evaluation of image-based RL using both ConvNets and Vision Transformers (ViT) on a family of benchmarks based on DeepMind Control Suite, as well as in robotic manipulation tasks. Our method greatly improves stability and sample efficiency of ConvNets under augmentation, and achieves generalization results competitive with state-of-the-art methods for image-based RL in environments with unseen visuals. We further show that our method scales to RL with ViT-based architectures, and that data augmentation may be especially important in this setting.
| accept | This paper investigates the problems related to the high variance in the targets that arise in TD-learning when the network is trained with data augmentation. The paper proposes a simple and effective way to address this problem by applying the data augmentation only on the online Q-network, not on the target Qs, and a modified objective function. The paper presents results on a range of tasks.
The paper is well-written and it is well-motivated. The authors investigate the proposed method on a range of interesting tasks and provide a very simple but seemingly effective solution.
The authors have provided an extensive rebuttal during the discussion period. However, some of the reviewers' complaints are not completely addressed, and I would appreciate it if they can be addressed in the camera-ready version of this paper:
1) The theoretical justification provided in this paper is not rigorous enough. There is no guarantee that the proposed approach would work for any type of data augmentation. It would be nice if the authors were more careful about this and highlighted that in the paper in order to avoid over-claiming.
2) The Reviewer MseA, xDk5 and UP5u asked several questions about the hyperparameters of the model. It would be nice if the authors can provide more analysis on the impact of the different hyperparameters used for different baselines. For example Reviewer UP5u pointed the different hyperparameters of DrQ and suggested more experiments to compare against.
3) More clear descriptions of the experimental setup along with a table of all hyperparameters used for all models (potentially in the appendix).
4) Adding error bars to the figures where applicable.
In addition to those points, I would recommend the authors to address all the other concerns by the reviewers as much as possible. | train | [
"TAJ3pE4xjjQ",
"YwdiW1ZkZKR",
"v9370WMLcg_",
"GsgAgUtP08",
"rkh4eGM1zzd",
"GkfaVwVzFoD",
"t6_cR-L9t8",
"JioO1mU6D2b",
"E5Wri82e8b",
"j0ZB1RhB1d",
"U0pN4EETJhb",
"SsnCSOo29O8",
"Gb8m7TXP42Q",
"kH_8QIb005l"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"(Auxiliary Review only -- the authors need not respond to this).\n\nThis work proposes issues with data augmentation in RL and proposes a couple of techniques to address those issues. It applies augmentation only on the current state and not future state to address erroneous bootstrapping. It also proposes a Q obj... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
4
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
5
] | [
"nips_2021_zQvxc8ul2rR",
"kH_8QIb005l",
"GsgAgUtP08",
"JioO1mU6D2b",
"j0ZB1RhB1d",
"nips_2021_zQvxc8ul2rR",
"kH_8QIb005l",
"Gb8m7TXP42Q",
"SsnCSOo29O8",
"U0pN4EETJhb",
"nips_2021_zQvxc8ul2rR",
"nips_2021_zQvxc8ul2rR",
"nips_2021_zQvxc8ul2rR",
"nips_2021_zQvxc8ul2rR"
] |
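
The stabilization the meta-review above summarizes, augment only the online network's input and bootstrap from unaugmented observations, can be sketched for a discrete-action Q-learner. Hedged sketch: the paper works with actor-critic agents on continuous control and its exact objective differs; function names and the 0.5 weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def stabilized_q_loss(q_net, target_net, obs, aug_obs, action, reward, next_obs, done, gamma=0.99):
    """Hedged sketch of stabilizing Q-learning under data augmentation.

    The bootstrap target is computed from *unaugmented* next observations
    through the frozen target network, while the online Q-network is trained
    on both the clean and the augmented view of the current observation,
    keeping the high-variance augmentation out of the targets.
    """
    with torch.no_grad():
        target_q = reward + gamma * (1.0 - done) * target_net(next_obs).max(dim=-1).values

    def td_loss(o):
        q = q_net(o).gather(-1, action.unsqueeze(-1)).squeeze(-1)
        return F.mse_loss(q, target_q)

    return 0.5 * (td_loss(obs) + td_loss(aug_obs))
```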
nips_2021_z36cUrI0jKJ | On Effective Scheduling of Model-based Reinforcement Learning | Model-based reinforcement learning has attracted wide attention due to its superior sample efficiency. Despite its impressive success so far, it is still unclear how to appropriately schedule the important hyperparameters to achieve adequate performance, such as the real data ratio for policy optimization in Dyna-style model-based algorithms. In this paper, we first theoretically analyze the role of real data in policy training, which suggests that gradually increasing the ratio of real data yields better performance. Inspired by the analysis, we propose a framework named AutoMBPO to automatically schedule the real data ratio as well as other hyperparameters in training model-based policy optimization (MBPO) algorithm, a representative running case of model-based methods. On several continuous control tasks, the MBPO instance trained with hyperparameters scheduled by AutoMBPO can significantly surpass the original one, and the real data ratio schedule found by AutoMBPO shows consistency with our theoretical analysis.
| accept | Reviewers agree that the paper is well-motivated and well-written, and that the proposed method, motivated by theoretical analysis, shows convincing experimental results on hyperparameter tuning of model-based RL, a hard yet important problem to address. On the other hand, minor concerns remain, mainly about whether the theory connects well with the practice and whether another outer RL loop over model-based RL is "too heavy" in practice and requires high sample complexity. The authors have addressed many of the concerns raised by the reviewers and I am happy to accept this paper.
| val | [
"IpeyuwuMTfj",
"7YpW6j_zjeu",
"CPpJ5wb43fB",
"KvnNha4Oe-E",
"F5kie_7YVXF",
"Y0aEJeGV6M0",
"KeRA2SRlvhD",
"Tf-cymhYJE",
"HKWkrmq87cV",
"F-iCNIGRPXN",
"uC5bra7GQUn",
"c4Gk6yhl3M",
"ef4d_VUgf1z"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your reply. The hyper-parameter of the outer loop is also one concern of Reviewer SPME, and we have answered the question (see A3) in the first response to him/her. \n\nWe claim finding the hyper-parameter of the outer loop did not take much effort for the following two reasons:\n1. According to the hy... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"7YpW6j_zjeu",
"uC5bra7GQUn",
"c4Gk6yhl3M",
"nips_2021_z36cUrI0jKJ",
"KvnNha4Oe-E",
"c4Gk6yhl3M",
"KvnNha4Oe-E",
"KvnNha4Oe-E",
"F-iCNIGRPXN",
"ef4d_VUgf1z",
"c4Gk6yhl3M",
"nips_2021_z36cUrI0jKJ",
"nips_2021_z36cUrI0jKJ"
] |
nips_2021_lVmIjQiJJSr | Removing Inter-Experimental Variability from Functional Data in Systems Neuroscience | Integrating data from multiple experiments is common practice in systems neuroscience but it requires inter-experimental variability to be negligible compared to the biological signal of interest. This requirement is rarely fulfilled; systematic changes between experiments can drastically affect the outcome of complex analysis pipelines. Modern machine learning approaches designed to adapt models across multiple data domains offer flexible ways of removing inter-experimental variability where classical statistical methods often fail. While applications of these methods have been mostly limited to single-cell genomics, in this work, we develop a theoretical framework for domain adaptation in systems neuroscience. We implement this in an adversarial optimization scheme that removes inter-experimental variability while preserving the biological signal. We compare our method to previous approaches on a large-scale dataset of two-photon imaging recordings of retinal bipolar cell responses to visual stimuli. This dataset provides a unique benchmark as it contains biological signal from well-defined cell types that is obscured by large inter-experimental variability. In a supervised setting, we compare the generalization performance of cell type classifiers across experiments, which we validate with anatomical cell type distributions from electron microscopy data. In an unsupervised setting, we remove inter-experimental variability from the data which can then be fed into arbitrary downstream analyses. In both settings, we find that our method achieves the best trade-off between removing inter-experimental variability and preserving biological signal. Thus, we offer a flexible approach to remove inter-experimental variability and integrate datasets across experiments in systems neuroscience. Code available at https://github.com/eulerlab/rave.
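For readers unfamiliar with the adversarial optimization scheme the abstract refers to, the generic recipe is an encoder trained against a domain discriminator through a gradient-reversal layer. This is a standard sketch of that recipe, not the authors' actual model (the linked repository has that); the layer sizes and `n_domains` are made up.

```python
import torch
import torch.nn as nn

n_domains = 3  # hypothetical number of experiments/domains

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; negated gradient in the backward pass.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16))
domain_clf = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, n_domains))

def domain_adversarial_loss(x, domain_labels, lam=1.0):
    z = encoder(x)  # shared representation of the recordings
    logits = domain_clf(GradReverse.apply(z, lam))
    # Minimizing this trains the discriminator to identify the experiment,
    # while the reversed gradient pushes the encoder to strip
    # inter-experimental variability from z.
    return nn.functional.cross_entropy(logits, domain_labels)
```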
| accept | This paper provides a method for removing inter-experimental variability from functional datasets arising in systems neuroscience. All reviewers agreed that the results are significant and the paper was well written and executed. | train | [
"yKjbJlbU_AN",
"M5z5qFhfxch",
"6Bs6GBYucsg",
"6It3TaZs11C",
"7FRIe7CVXS7",
"5qUWcg4Rk12",
"XPazL88yicp",
"NKXagXmMlFZ",
"8Drfmrptfb",
"wHtdJGqPfF",
"iabngGegWH0",
"izVCyCLLG8"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" No problem! It was my pleasure to read your work. Your additions look great. It is fine to keep in the related works section. That would be the more natural fit.",
" Thank you for the thoughtful response to my comments.",
" It is good to hear that the authors will release the datasets and the code upon public... | [
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
8,
7,
8
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"8Drfmrptfb",
"NKXagXmMlFZ",
"XPazL88yicp",
"nips_2021_lVmIjQiJJSr",
"5qUWcg4Rk12",
"6It3TaZs11C",
"iabngGegWH0",
"izVCyCLLG8",
"wHtdJGqPfF",
"nips_2021_lVmIjQiJJSr",
"nips_2021_lVmIjQiJJSr",
"nips_2021_lVmIjQiJJSr"
] |
nips_2021_o24k_XfIe6_ | Learning Knowledge Graph-based World Models of Textual Environments | World models improve a learning agent's ability to efficiently operate in interactive and situated environments. This work focuses on the task of building world models of text-based game environments. Text-based games, or interactive narratives, are reinforcement learning environments in which agents perceive and interact with the world using textual natural language. These environments contain long, multi-step puzzles or quests woven through a world that is filled with hundreds of characters, locations, and objects. Our world model learns to simultaneously: (1) predict changes in the world caused by an agent's actions when representing the world as a knowledge graph; and (2) generate the set of contextually relevant natural language actions required to operate in the world. We frame this task as a Set of Sequences generation problem by exploiting the inherent structure of knowledge graphs and actions and introduce both a transformer-based multi-task architecture and a loss function to train it. A zero-shot ablation study on never-before-seen textual worlds shows that our methodology significantly outperforms existing textual world modeling techniques, and demonstrates the importance of each of our contributions.
| accept | This paper jointly learns a policy (action distribution) and world model (updates to a graph representation of the environment) in a text adventure. While interesting, I'm not seeing the obvious application of this (despite having read the ethics impact section of the paper), and have some concerns about text adventure games as a domain — however the meta-review stage of the review process is not an appropriate point for me to bring these concerns to the authors' attention, so I will happily discount them and focus on the reviews and discussion.
The reviewer consensus leans towards acceptance, after some discussion. I would have liked to see a follow-up response from reviewer FHsa to the author rebuttal, and have no evidence that they have read it or considered updating their score despite my prompts, which is a bit disappointing. Less crucially, it would have been nice to get an author response to the follow-on comments by reviewer cdHj, but this is not as important since that reviewer updated their score to recommend acceptance.
Ultimately, when discounting the score of reviewer FHsa, the median and mean scores are in acceptance territory. I confess I do not fully understand reviewer FHsa's argument against the paper, and felt the author response was detailed. In the absence of this reviewer's willingness to defend their appraisal, I recommend acceptance. | train | [
"-XZLMJeuEcp",
"h9ghy8DPepB",
"_3pHrgupgI5",
"U26nVZ6KrS",
"VzuSAV-lbNy",
"tIxm5BlarnJ",
"NAWiOPZdmxq",
"YuNEwMFaqA",
"JjFIwcd0pWi",
"4dJxwncikMl",
"K4tiEfkTni9",
"_DvX0kirY-r",
"wpQ0HmAizGP",
"IOnWcVOAgfe"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank for providing answers to my questions. The rebuttal has answered most of my questions. I will keep my score of 7. ",
"This paper presents a method for modelling state (via graph prediction) and action distributions in text adventure games. To do the former, this work proposes predicting incremental graph... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
7
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"YuNEwMFaqA",
"nips_2021_o24k_XfIe6_",
"JjFIwcd0pWi",
"NAWiOPZdmxq",
"NAWiOPZdmxq",
"NAWiOPZdmxq",
"nips_2021_o24k_XfIe6_",
"IOnWcVOAgfe",
"h9ghy8DPepB",
"_DvX0kirY-r",
"wpQ0HmAizGP",
"nips_2021_o24k_XfIe6_",
"nips_2021_o24k_XfIe6_",
"nips_2021_o24k_XfIe6_"
] |
nips_2021_giEMdtueyZn | Damped Anderson Mixing for Deep Reinforcement Learning: Acceleration, Convergence, and Stabilization | Anderson mixing has been heuristically applied to reinforcement learning (RL) algorithms for accelerating convergence and improving the sampling efficiency of deep RL. Despite its heuristic improvement of convergence, a rigorous mathematical justification for the benefits of Anderson mixing in RL has not yet been put forward. In this paper, we provide deeper insights into a class of acceleration schemes built on Anderson mixing that improve the convergence of deep RL algorithms. Our main results establish a connection between Anderson mixing and quasi-Newton methods and prove that Anderson mixing increases the convergence radius of policy iteration schemes by an extra contraction factor. The key focus of the analysis roots in the fixed-point iteration nature of RL. We further propose a stabilization strategy by introducing a stable regularization term in Anderson mixing and a differentiable, non-expansive MellowMax operator that can allow both faster convergence and more stable behavior. Extensive experiments demonstrate that our proposed method enhances the convergence, stability, and performance of RL algorithms.
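To make the acceleration scheme concrete, here is a minimal NumPy sketch of one regularized Anderson-mixing step for a generic fixed-point map g. The Tikhonov term `reg` plays the stabilizing role the abstract alludes to, but this is a generic textbook-style sketch, not the paper's exact damped scheme or its MellowMax-based operator.

```python
import numpy as np

def anderson_step(xs, gs, reg=1e-3):
    # xs, gs: lists of the past iterates x_k and map values g(x_k).
    # Solve a regularized least-squares problem over the residuals
    # r_k = g(x_k) - x_k, subject to the coefficients summing to one.
    R = np.stack([g - x for x, g in zip(xs, gs)], axis=1)
    m = R.shape[1]
    A = R.T @ R + reg * np.eye(m)       # reg > 0 damps/stabilizes the mix
    a = np.linalg.solve(A, np.ones(m))
    a /= a.sum()
    return sum(ai * gi for ai, gi in zip(a, gs))

# Usage: x_next = anderson_step(past_xs, [g(x) for x in past_xs])
```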
| accept | The paper is a nice mix of theoretical and experimental results. It analyzes Anderson mixing in RL, finding a link with quasi-Newton methods and providing a convergence rate. The assumptions of the analysis hold for the MellowMax operator (introduced in published work). Fairly thorough experiments on Atari with ablations provide evidence for the usefulness of stable Anderson acceleration and the MellowMax operator. | train | [
"Bdb5QeMMrXZ",
"6Vxn7vsWiaQ",
"Kt6KEe8cJsh",
"C30zBptayA6",
"IZQnqCRtPMJ",
"P979by4ul5",
"HWeVjpG402V"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the authors for their response. I remain convinced that the work is suitable for publication. Thanks for the clarifying comments.",
" Thank you for your constructive feedback and valuable comment. Please kindly find the detailed responses below.\n\n$\\textbf{Q1. About extension to TD3}$\n\nWe apprecia... | [
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"Kt6KEe8cJsh",
"IZQnqCRtPMJ",
"HWeVjpG402V",
"P979by4ul5",
"nips_2021_giEMdtueyZn",
"nips_2021_giEMdtueyZn",
"nips_2021_giEMdtueyZn"
] |
nips_2021_tKMheNMoi2Y | Approximate Decomposable Submodular Function Minimization for Cardinality-Based Components | Minimizing a sum of simple submodular functions of limited support is a special case of general submodular function minimization that has seen numerous applications in machine learning. We develop faster techniques for instances where components in the sum are cardinality-based, meaning they depend only on the size of the input set. This variant is one of the most widely applied in practice, encompassing, e.g., common energy functions arising in image segmentation and recent generalized hypergraph cut functions. We develop the first approximation algorithms for this problem, where the approximations can be quickly computed via reduction to a sparse graph cut problem, with graph sparsity controlled by the desired approximation factor. Our method relies on a new connection between sparse graph reduction techniques and piecewise linear approximations to concave functions. Our sparse reduction technique leads to significant improvements in theoretical runtimes, as well as substantial practical gains in problems ranging from benchmark image segmentation tasks to hypergraph clustering problems.
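The connection to piecewise linear approximations can be illustrated with a small sketch: choose breakpoints so that chord interpolation of a concave cardinality function g (chords of a concave function underestimate it) never drops below g/(1+eps). The greedy routine below only illustrates the accuracy-versus-number-of-pieces trade-off that controls graph sparsity; it is not the paper's actual construction.

```python
def pl_breakpoints(g, k, eps):
    # Greedily extend each linear piece while the chord from the current
    # breakpoint stays within a (1 + eps) factor below the concave g.
    bps, i = [0], 0
    while i < k:
        j = i + 1
        while j < k:
            nxt = j + 1
            ok = all(
                g(i) + (g(nxt) - g(i)) * (t - i) / (nxt - i) >= g(t) / (1 + eps)
                for t in range(i + 1, nxt)
            )
            if not ok:
                break
            j = nxt
        bps.append(j)
        i = j
    return bps

import math
print(pl_breakpoints(lambda t: math.sqrt(t), 64, 0.1))  # few pieces suffice
```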
| accept | The reviewers appreciate the problem and the technical contribution of the paper. | train | [
"XdCSnIQH1wW",
"iz-Y7FD1y_N",
"VtWKPZtOnZR",
"YYfUVuRGVyy",
"UNKRAwpp_t8",
"_O4I3LH8DrI",
"iebkTH1sBYb",
"h6phZYPxeK6"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed review and the comments! We address your two questions in turn:\n\n\"But in practice are there large e's? I understand there are a few of them as in the hypergraph used in the experiment on local clustering, but can't we just ignore those large hyperedges and quickly get a good approxim... | [
-1,
-1,
-1,
-1,
7,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
2,
3,
4,
4
] | [
"h6phZYPxeK6",
"iebkTH1sBYb",
"_O4I3LH8DrI",
"UNKRAwpp_t8",
"nips_2021_tKMheNMoi2Y",
"nips_2021_tKMheNMoi2Y",
"nips_2021_tKMheNMoi2Y",
"nips_2021_tKMheNMoi2Y"
] |
nips_2021_YDGJ5YExiw6 | Episodic Multi-agent Reinforcement Learning with Curiosity-driven Exploration | Efficient exploration in deep cooperative multi-agent reinforcement learning (MARL) still remains challenging in complex coordination problems. In this paper, we introduce a novel Episodic Multi-agent reinforcement learning with Curiosity-driven exploration, called EMC. We leverage an insight of popular factorized MARL algorithms that the ``induced" individual Q-values, i.e., the individual utility functions used for local execution, are the embeddings of local action-observation histories, and can capture the interaction between agents due to reward backpropagation during centralized training. Therefore, we use prediction errors of individual Q-values as intrinsic rewards for coordinated exploration and utilize episodic memory to exploit explored informative experience to boost policy training. As the dynamics of an agent's individual Q-value function captures the novelty of states and the influence from other agents, our intrinsic reward can induce coordinated exploration to new or promising states. We illustrate the advantages of our method by didactic examples, and demonstrate its significant outperformance over state-of-the-art MARL baselines on challenging tasks in the StarCraft II micromanagement benchmark.
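The intrinsic reward can be pictured with a small, heavily simplified torch sketch: a predictor is regressed onto the current individual Q-values, and its prediction error is the exploration bonus. The network shapes are placeholders and all multi-agent bookkeeping (per-agent utilities, target networks, episodic memory) is compressed away, so treat this as an illustration only.

```python
import torch
import torch.nn as nn

obs_dim, n_actions = 32, 5  # placeholder sizes
predictor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                          nn.Linear(64, n_actions))

def curiosity(obs, q_net):
    # Error in predicting the individual utilities Q_i(obs, .): large when
    # a state is novel *or* when other agents' learning has shifted Q_i.
    with torch.no_grad():
        q_now = q_net(obs)
    err = (predictor(obs) - q_now).pow(2).mean(dim=-1)
    return err.detach(), err.mean()  # (intrinsic reward, predictor loss)
```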
| accept | This paper proposes a new exploration algorithm specifically for multi-agent RL with a factorized and centralized value function (critic). The idea is to use each individual agent's value prediction errors as intrinsic rewards, based on the intuition that these errors capture the influence from other agents thanks to the factorized/centralized aspect. In addition, the paper proposes an episodic memory that exploits past good experiences to further improve sample efficiency.
Improving exploration in the context of multi-agent RL is also an important problem. All of the reviewers appreciated that the proposed idea is insightful and well-motivated, and the results are also strong. There is still a remaining concern regarding the effectiveness of the episodic memory, especially in stochastic environments. During the rebuttal period, the authors added a new result showing that the proposed method remains effective under some degree of stochasticity, and the reviewers are generally satisfied with this result. Therefore, I recommend accepting this paper and suggest that the authors include this result in the camera-ready version. | val | [
"fDXsAsYqz8W",
"1hFD-UCyNjL",
"2QRynYexGFA",
"Y79grQuRb0_",
"VplA9RydK6h",
"tTZ1T8s79q",
"2-HTik9Mkcg",
"ahHUV8b6lI",
"Ejan7Aj1jp3",
"jk05Zao-qqe",
"768cWqh0i4Y",
"IekhBfmSuP",
"7olfH0PwsXC",
"HNoH9lCzh1",
"z3qDzalj8QU"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We would like to thank the reviewer for the thoughtful comments. The episodic memory plays an important role in EMC, which can effectively accelerate learning process by making the best use of the trajectories collected by the curiosity exploration, as discussed in the ablation study in Section 5.4 in detail. In ... | [
-1,
-1,
-1,
5,
-1,
-1,
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
4,
-1,
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3
] | [
"1hFD-UCyNjL",
"2QRynYexGFA",
"768cWqh0i4Y",
"nips_2021_YDGJ5YExiw6",
"ahHUV8b6lI",
"HNoH9lCzh1",
"nips_2021_YDGJ5YExiw6",
"7olfH0PwsXC",
"nips_2021_YDGJ5YExiw6",
"z3qDzalj8QU",
"Y79grQuRb0_",
"jk05Zao-qqe",
"2-HTik9Mkcg",
"Ejan7Aj1jp3",
"nips_2021_YDGJ5YExiw6"
] |
nips_2021_Gi6SHsbxkgY | Two Sides of Meta-Learning Evaluation: In vs. Out of Distribution | Amrith Setlur, Oscar Li, Virginia Smith | accept | The submission discusses the properties of in-distribution (ID, i.e. same task distribution) and out-of-distribution (OOD, i.e. subject to a task distribution shift) meta-learning evaluation. It contends that current few-shot classification benchmarks reflect OOD performance, and that approaches which perform well in the OOD setting may not necessarily perform well in the ID setting. Finally, it highlights model selection pitfalls and issues with the consistency of meta-learning performance comparisons.
Reviewers found that the submission raises interesting questions, as evidenced by their engagement in the discussions. They agree that the discussion surrounding ID and OOD evaluation for few-shot classification is insightful and of value to the research community, especially since a lot of existing theoretical work makes an ID assumption. Other claimed contributions are less appealing to some reviewers:
- The model selection results appear inconclusive.
- The research community is already moving towards larger and more diverse benchmarks, which lessens the impact of the observation that small test sets are unreliable.
- According to some reviewers, the 6 common ID datasets, 4 common OOD datasets, and the number of approaches investigated remain too small a sample to support the claims made in the paper.
Reviewers weigh these strengths and weaknesses differently, and as a result their opinion is split on acceptance. I read the paper to form an independent opinion and think that it raises interesting questions, even if imperfectly. In any case, I don't believe any reviewer is _indifferent_ to it, which to me is a good sign that its publication would have an impact on the few-shot learning community. I therefore recommend acceptance. | val | [
"jAGfHl3-dqT",
"F8k7tAozyba",
"VccahVoP-Xz",
"mXzGu_IWWq",
"Mnu6ErV2DZ_",
"Gkd-40OuEkP",
"PU7c-_dfd0E",
"ZH6FpsUqfBm",
"ZkBjDcRAw-s",
"8SadbEyrno8",
"T5r227pZOgU",
"7Pw_oG7LD6X",
"20vwm_d3FEg",
"1jufe27fwS",
"clGACW64hej",
"RGp4KZfi0Y5",
"qFcT-g-vENd",
"BTOo-XhxWND"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"- They explain that in standard meta-learning datasets the evaluation is OOD - the test tasks are not sampled from the same distribution as train tasks. They explain that ID meta-learning is important too.\n- They consider 2 ID meta-learning datasets and 1 OOD meta-learning dataset (Table 1). They show the ranks o... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
5
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"nips_2021_Gi6SHsbxkgY",
"VccahVoP-Xz",
"Mnu6ErV2DZ_",
"Gkd-40OuEkP",
"20vwm_d3FEg",
"PU7c-_dfd0E",
"ZkBjDcRAw-s",
"8SadbEyrno8",
"7Pw_oG7LD6X",
"T5r227pZOgU",
"BTOo-XhxWND",
"qFcT-g-vENd",
"jAGfHl3-dqT",
"nips_2021_Gi6SHsbxkgY",
"RGp4KZfi0Y5",
"nips_2021_Gi6SHsbxkgY",
"nips_2021_Gi6... |
nips_2021_Z4ry59PVMq8 | Debiased Visual Question Answering from Feature and Sample Perspectives | Visual question answering (VQA) is designed to examine the visual-textual reasoning ability of an intelligent agent. However, recent observations show that many VQA models may only capture the biases between questions and answers in a dataset rather than showing real reasoning abilities. For example, given a question, some VQA models tend to output the answer that occurs frequently in the dataset and ignore the images. To reduce this tendency, existing methods focus on weakening the language bias. Meanwhile, only a few works also consider vision bias implicitly. However, these methods introduce additional annotations or show unsatisfactory performance. Moreover, not all biases are harmful to the models. Some “biases” learnt from datasets represent natural rules of the world and can help limit the range of answers. Thus, how to filter and remove the true negative biases in language and vision modalities remains a major challenge. In this paper, we propose a method named D-VQA to alleviate the above challenges from the feature and sample perspectives. Specifically, from the feature perspective, we build a question-to-answer branch and a vision-to-answer branch to capture the language and vision biases, respectively. Next, we apply two unimodal bias detection modules to explicitly recognise and remove the negative biases. From the sample perspective, we construct two types of negative samples to assist the training of the models, without introducing additional annotations. Extensive experiments on the VQA-CP v2 and VQA v2 datasets demonstrate the effectiveness of our D-VQA method.
| accept | The paper proposed a new method for debiasing VQA models. A consensus for acceptance emerged very quickly in the discussion phase, helped by a swift rebuttal by the authors and additional experiments on new datasets.
The AC concurs. | test | [
"1p9s08-Drk7",
"KgmpSbfzAQ",
"l-BZ-xtdVRO",
"H_aARHT_J_",
"ZXyCBcEK20G",
"NK1CPyFJJfY",
"k4rTQNlfba8",
"ZKV8Huj7480",
"HDhfHFti2YH",
"E4oL_scp9s"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your reminder. We carefully re-check up the official GitHub repository of the LXMERT, and find that the pre-training process of the LXMERT would use part of the validation set. However, the result of 78.8% is obtained by evaluating our D-VQA + LXMERT on all of the validation set, and thus our D-VQA + L... | [
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"l-BZ-xtdVRO",
"nips_2021_Z4ry59PVMq8",
"ZKV8Huj7480",
"nips_2021_Z4ry59PVMq8",
"E4oL_scp9s",
"HDhfHFti2YH",
"H_aARHT_J_",
"KgmpSbfzAQ",
"nips_2021_Z4ry59PVMq8",
"nips_2021_Z4ry59PVMq8"
] |
nips_2021_fMaIxda5Y6K | Towards a Unified Game-Theoretic View of Adversarial Perturbations and Robustness | This paper provides a unified view to explain different adversarial attacks and defense methods, i.e. the view of multi-order interactions between input variables of DNNs. Based on the multi-order interaction, we discover that adversarial attacks mainly affect high-order interactions to fool the DNN. Furthermore, we find that the robustness of adversarially trained DNNs comes from category-specific low-order interactions. Our findings provide a potential method to unify adversarial perturbations and robustness, which can explain the existing robustness-boosting methods in a principled way. Besides, our findings also revise the previous inaccurate understanding of the shape bias of adversarially learned features. Our code is available online at https://github.com/Jie-Ren/A-Unified-Game-Theoretic-Interpretation-of-Adversarial-Robustness.
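For reference, the multi-order interaction that this line of work builds on is usually defined, in the related interpretability literature (stated here from that literature rather than from this paper's own notation), as

```latex
\Delta f(i,j,S) = f(S \cup \{i,j\}) - f(S \cup \{i\}) - f(S \cup \{j\}) + f(S),
\qquad
I^{(m)}(i,j) = \mathbb{E}_{S \subseteq N \setminus \{i,j\},\; |S| = m}\big[\Delta f(i,j,S)\big],
```

so small orders m capture local (low-order) interactions and orders near |N| capture global (high-order) ones, matching the abstract's claim that attacks mainly perturb the high-order terms.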
| accept | This paper focuses on a new perspective for explaining different adversarial attacks and adversarial defenses in a unified way. The philosophy behind it sounds interesting to me, namely, exploiting the multi-order interactions between inputs to analyze the robustness of DNNs. This philosophy leads to the representation of high-order and low-order interactions and to some interesting conclusions that I have not seen before.
The clarity and novelty are marginally above the bar of NeurIPS. However, there are three key issues raised by Reviewer T63V that should be addressed in your next version. First, the authors reuse existing techniques to prove the decomposition of the network output, which brings limited technical novelty to the community; please try to explain more and revise that part. Second, due to pretty much the same formulation, the proofs of linearity, nullity, commutativity, and symmetry follow straightforwardly. Third, the equivalence between the multi-order interaction and mutual information seems straightforward as well. Also, please make your updated version clearer and more accessible, because the reviewer still fails to understand how to compute the proposed measure given its seemingly combinatorial complexity.
While all reviewers had some concerns about the significance, the authors did a particularly good job in their rebuttal. Thus, most of us have agreed to accept this paper for publication. Please include the additional explanation and experimental results (especially for Reviewer T63V) in the next version. | train | [
"m4hBLdIuRy",
"WCx-f0CMNeG",
"ayqN9gR1mXh",
"tih0Z3f_BQr",
"HcVn82SLzfY",
"j67O-rQC-2",
"Ued0Cmp1h22",
"2QwrhZxkj75",
"hJse5ZuAOf-",
"RfpaAGV1Zc",
"UxIHPTcxEMY",
"2Y5SWzm0D6",
"w4ITKXLMvx",
"Ey88CeNvAYk"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper provides a new perspective to explain different adversarial attacks and adversarial defenses in a \"unified\" way. Specifically, the authors exploit the multi-order interactions between inputs to analyze the robustness of DNNs. Based on the multi-order interaction, this paper further investigates the re... | [
7,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
5,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_fMaIxda5Y6K",
"j67O-rQC-2",
"2QwrhZxkj75",
"hJse5ZuAOf-",
"j67O-rQC-2",
"UxIHPTcxEMY",
"nips_2021_fMaIxda5Y6K",
"Ey88CeNvAYk",
"w4ITKXLMvx",
"Ued0Cmp1h22",
"Ued0Cmp1h22",
"m4hBLdIuRy",
"nips_2021_fMaIxda5Y6K",
"nips_2021_fMaIxda5Y6K"
] |
nips_2021_q1yLPNF0UFV | On the Out-of-distribution Generalization of Probabilistic Image Modelling | Out-of-distribution (OOD) detection and lossless compression constitute two problems that can be solved by the training of probabilistic models on a first dataset with subsequent likelihood evaluation on a second dataset, where data distributions differ. By defining the generalization of probabilistic models in terms of likelihood we show that, in the case of image models, the OOD generalization ability is dominated by local features. This motivates our proposal of a Local Autoregressive model that exclusively models local image features towards improving OOD performance. We apply the proposed model to OOD detection tasks and achieve state-of-the-art unsupervised OOD detection performance without the introduction of additional data. Additionally, we employ our model to build a new lossless image compressor: NeLLoC (Neural Local Lossless Compressor) and report state-of-the-art compression rates and model size.
| accept | The main concerns about this work shared by the reviewers were around novelty and presentation. One reviewer felt that a key idea underlying the work has already featured in several other works in recent years, and thus the novelty of the work is limited. I am inclined to agree with this; however, I believe this work could be a useful reference to help more people understand the intricacies of OOD detection and the biases of likelihood-based models (especially autoregressive models), and the practical demonstration that a tiny local model is all you need for efficient lossless image compression is valuable in itself.
Therefore, I have decided to recommend acceptance. This is of course conditional on the authors including in the manuscript all the additional results they have provided, and reworking the parts that were deemed unclear as set out in the authors' responses. | train | [
"wu5R3og72Sk",
"ZjxpiGpJEo8",
"rsNrcNb-ix",
"s9nrCT-mF2q",
"wQzvMLdzCLb",
"T0G702LVtgH",
"d7bE7_2xX1M",
"Qs5ql60zY6W",
"NOcR1nTbULf",
"SyAs8xjPzCd",
"hc-vn9AXeQd",
"5PXY9aNeuPz",
"Uw5oW225wo6",
"030G4q_R4qR"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"The paper discussed the counterintuitive phenomenon in OOD detection using generative models, and proposed a new method called non-local feature density which fixed the issue and achieved the SOTA performance, because non-local features are shown to contain more semantic information. Further, the authors proposed ... | [
6,
-1,
-1,
-1,
6,
5,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
-1,
-1,
-1,
3,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"nips_2021_q1yLPNF0UFV",
"nips_2021_q1yLPNF0UFV",
"nips_2021_q1yLPNF0UFV",
"T0G702LVtgH",
"nips_2021_q1yLPNF0UFV",
"nips_2021_q1yLPNF0UFV",
"T0G702LVtgH",
"nips_2021_q1yLPNF0UFV",
"T0G702LVtgH",
"nips_2021_q1yLPNF0UFV",
"wQzvMLdzCLb",
"T0G702LVtgH",
"wu5R3og72Sk",
"Qs5ql60zY6W"
] |
nips_2021_zJynVlnoObx | Exploiting Local Convergence of Quasi-Newton Methods Globally: Adaptive Sample Size Approach | In this paper, we study the application of quasi-Newton methods for solving empirical risk minimization (ERM) problems defined over a large dataset. Traditional deterministic and stochastic quasi-Newton methods can be executed to solve such problems; however, it is known that their global convergence rate may not be better than that of first-order methods, and their local superlinear convergence only appears towards the end of the learning process. In this paper, we use an adaptive sample size scheme that exploits the superlinear convergence of quasi-Newton methods globally and throughout the entire learning process. The main idea of the proposed adaptive sample size algorithms is to start with a small subset of data points and solve their corresponding ERM problem within its statistical accuracy, and then enlarge the sample size geometrically and use the optimal solution of the problem corresponding to the smaller set as an initial point for solving the subsequent ERM problem with more samples. We show that if the initial sample size is sufficiently large and we use quasi-Newton methods to solve each subproblem, the subproblems can be solved superlinearly fast (after at most three iterations), as we guarantee that the iterates always stay within a neighborhood in which quasi-Newton methods converge superlinearly. Numerical experiments on various datasets confirm our theoretical results and demonstrate the computational advantages of our method.
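The adaptive sample size loop is simple enough to sketch. The following is a generic illustration using SciPy's BFGS with warm starts and geometric sample growth; it is not the paper's exact AdaQN, which prescribes the initial size m0 and a statistical-accuracy stopping rule theoretically. `loss_and_grad(w, X, y)` is a caller-supplied ERM objective returning (value, gradient).

```python
import numpy as np
from scipy.optimize import minimize

def adaptive_bfgs(loss_and_grad, X, y, m0, inner_iters=3):
    n, d = X.shape
    w = np.zeros(d)
    m = min(m0, n)
    while True:
        # Solve the ERM subproblem on the first m samples, warm-started
        # from the previous stage so BFGS stays in its fast local regime.
        stage = lambda w, Xs=X[:m], ys=y[:m]: loss_and_grad(w, Xs, ys)
        w = minimize(stage, w, jac=True, method="BFGS",
                     options={"maxiter": inner_iters}).x
        if m == n:
            return w
        m = min(2 * m, n)  # geometric growth of the sample size
```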
| accept | This paper focuses on characterizing rates of convergence for quasi-Newton methods applied to empirical risk minimization. Traditional stochastic quasi-Newton methods have not enjoyed guarantees that are faster than their first-order counterparts. This paper uses an adaptive sampling scheme to achieve global superlinear convergence. All reviewers thought the idea of combining adaptive sampling with quasi-Newton methods is interesting and appreciated the thorough global convergence analysis. The reviewers did however raise a few concerns, including: (1) one reviewer had concerns about the claim that the proposed approach “exploits the superlinear convergence” of quasi-Newton methods, (2) lack of discussion of some related literature, (3) mismatch in m0 between theory and practice, and (4) a variety of other technical discussions. The authors provided a thorough response and the reviewers had a lively discussion with the authors and with each other. As a result some of the above issues were resolved and some reviewers raised their scores. Issue (1) however remained, and that reviewer increased their score contingent on this issue being revised in the final version. I agree for the most part with the reviewers that the paper is interesting and clearly written with nice results, but I also concur with them about the technical issues and that claims of exploiting superlinearity are a bit misleading. Therefore I recommend acceptance with the requirement that a clear discussion of superlinearity be highlighted in the paper. | train | [
"nCMbRif0Cv",
"-KtAxVzPy0b",
"nGcvuQ3mAPi",
"mLR17h9Vgjs",
"esEYHoN7Z1v",
"zlkpuzs6ZmY",
"iOSM0OGC6eB",
"D2cdPzPccaO",
"XsL8ZQhUzP8",
"Izgv6Mj5-Jd",
"ajL505KjA3l",
"AXVXOUCIh4W",
"RX6ABfoopZK"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper \n- introduces a new quasi-Newton (QN) algorithm called AdaQN to solve the ERM problem. This method is adaptive since it increases the sample size after few (3) runs of BFGS on a small ERM subproblem. \n- gives its corresponding convergence analysis using a Matrix Bernstein bound\n- explains that AdaQN ... | [
7,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
5,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_zJynVlnoObx",
"nips_2021_zJynVlnoObx",
"mLR17h9Vgjs",
"esEYHoN7Z1v",
"zlkpuzs6ZmY",
"AXVXOUCIh4W",
"D2cdPzPccaO",
"XsL8ZQhUzP8",
"Izgv6Mj5-Jd",
"nCMbRif0Cv",
"RX6ABfoopZK",
"-KtAxVzPy0b",
"nips_2021_zJynVlnoObx"
] |
nips_2021_wWtk6GxJB2x | PDE-GCN: Novel Architectures for Graph Neural Networks Motivated by Partial Differential Equations | Graph neural networks are increasingly becoming the go-to approach in various fields such as computer vision, computational biology and chemistry, where data are naturally explained by graphs. However, unlike traditional convolutional neural networks, deep graph networks do not necessarily yield better performance than shallow graph networks. This behavior usually stems from the over-smoothing phenomenon. In this work, we propose a family of architectures to control this behavior by design. Our networks are motivated by numerical methods for solving Partial Differential Equations (PDEs) on manifolds, and as such, their behavior can be explained by similar analysis. Moreover, as we demonstrate using an extensive set of experiments, our PDE-motivated networks can generalize and be effective for various types of problems from different fields. Our architectures obtain results that are better than or on par with the current state of the art for problems that are typically approached using different architectures.
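To unpack the PDE viewpoint for readers: its simplest instance is an explicit Euler step of graph diffusion, sketched below. The paper's learned architectures are richer (including non-diffusive variants designed to avoid over-smoothing), so this is only the motivating special case.

```python
import numpy as np

def diffusion_step(X, A, h=0.1):
    # One explicit Euler step of the graph heat equation dX/dt = -L X,
    # with L the combinatorial Laplacian of the adjacency matrix A.
    # Stacking such steps gives a deep GNN; the step size h and the type
    # of PDE govern how quickly node features over-smooth.
    L = np.diag(A.sum(axis=1)) - A
    return X - h * (L @ X)
```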
| accept | All ratings were (weak) "accept".
Generally the work is novel and timely. A related paper appeared on arXiv in late June (GRAND), which one reviewer pointed to, but obviously this paper was submitted before that appeared, so the related paper doesn't factor into this paper's evaluation (as the reviewer appropriately recognized). To me this supports the relevance and importance of this work. My sense was that the reviewers' ratings were low given the feedback they provided, and the authors addressed the reviewers thoroughly and appropriately, causing one to raise their score.
If accepted, my suggestion to the authors is that they do a careful revision of the text to emphasize the broader scope and applicability of this work, in case some of the reviewers' 6 ratings were due to not clearly seeing connections to other areas. | train | [
"H9u80rJFKMk",
"-Y-lgZhXXs4",
"z5eA-16QFQ5",
"lsWSWsCvNb8",
"8Xp9fBbDHiE",
"ks0k5H7mq1a",
"PcPjIoJWI1",
"LMqcZ4Jns1",
"uqsIYBlFk9b",
"b5rber2jRg"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"Motivated by the connections between PDEs and conventional convolutional networks, this paper proposes the view of graph convolutional networks (GCNs) as discretizations of PDEs on manifolds. Using this view, the authors propose a a family of graph convolutional architectures that utilize discrete graph operators ... | [
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_wWtk6GxJB2x",
"uqsIYBlFk9b",
"H9u80rJFKMk",
"nips_2021_wWtk6GxJB2x",
"PcPjIoJWI1",
"LMqcZ4Jns1",
"lsWSWsCvNb8",
"b5rber2jRg",
"H9u80rJFKMk",
"nips_2021_wWtk6GxJB2x"
] |
nips_2021_t5-Mszu1UkO | Information Directed Reward Learning for Reinforcement Learning | For many reinforcement learning (RL) applications, specifying a reward is difficult. In this paper, we consider an RL setting where the agent can obtain information about the reward only by querying an expert that can, for example, evaluate individual states or provide binary preferences over trajectories. From such expensive feedback, we aim to learn a model of the reward function that allows standard RL algorithms to achieve high expected return with as few expert queries as possible. For this purpose, we propose Information Directed Reward Learning (IDRL), which uses a Bayesian model of the reward function and selects queries that maximize the information gain about the difference in return between potentially optimal policies. In contrast to prior active reward learning methods designed for specific types of queries, IDRL naturally accommodates different query types. Moreover, by shifting the focus from reducing the reward approximation error to improving the policy induced by the reward model, it achieves similar or better performance with significantly fewer queries. We support our findings with extensive evaluations in multiple environments and with different types of queries.
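The query criterion can be made concrete with a Bayesian linear reward model standing in for the paper's Gaussian-process model (a simplification, not IDRL itself). Here `d` is the difference of two candidate policies' expected feature vectors, `Sigma` the posterior covariance of the reward weights, and `candidates` the feature vectors of the available expert queries; all names are this sketch's.

```python
import numpy as np

def info_gain(Sigma, d, x, noise_var=0.1):
    # Gain (in nats) about the return difference g = d @ w from a noisy
    # query y = x @ w + eps, with reward weights w ~ N(mu, Sigma).
    var_g = d @ Sigma @ d
    var_post = var_g - (d @ Sigma @ x) ** 2 / (x @ Sigma @ x + noise_var)
    return 0.5 * np.log(var_g / var_post)

def select_query(Sigma, d, candidates, noise_var=0.1):
    return max(candidates, key=lambda x: info_gain(Sigma, d, x, noise_var))
```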
| accept | This paper proposes a new active reward learning algorithm, where the agent is required to reason about an unknown reward function by querying an expert. The main idea is to select queries that maximize the information gain about the difference in return between policies whose return difference is most uncertain. The results show that the proposed method can achieve similar or better performance with fewer queries and, unlike the prior work, can work with different types of queries.
All of the reviewers appreciated that the proposed objective, which focuses on improving the policy as opposed to reducing the reward-model error as done in prior approaches, is novel and technically sound. The paper also presents both a theoretically exact implementation and a scalable approximation for deep RL. The results look solid in that the proposed method is much more query-efficient and can work with more types of queries compared to the baseline approaches. Overall, this is a solid and neat paper. Therefore, I recommend accepting the paper. | train | [
"kbDYCWBEcO",
"zKLxJ4aoCCR",
"AHYLBQZBZN",
"LQpoG0PMy4q",
"X67SlgC0QfU",
"37NUWaoikQ0",
"4vjbC0TOvt4",
"O3IS8QA563h",
"rnhyZcElErg"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a new active reward learning method called IDRL, designed to select the most informative query for identifying an optimal policy among a set of plausibly optimal policies. First, IDRL selects two candidate policies that maximize the entropy of the difference in expected returns given past queri... | [
6,
9,
-1,
-1,
-1,
-1,
-1,
8,
7
] | [
3,
2,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_t5-Mszu1UkO",
"nips_2021_t5-Mszu1UkO",
"LQpoG0PMy4q",
"rnhyZcElErg",
"O3IS8QA563h",
"zKLxJ4aoCCR",
"kbDYCWBEcO",
"nips_2021_t5-Mszu1UkO",
"nips_2021_t5-Mszu1UkO"
] |
nips_2021_AqprMSXI1Wn | SSMF: Shifting Seasonal Matrix Factorization | Given taxi-ride counts information between departure and destination locations, how can we forecast their future demands? In general, given a data stream of events with seasonal patterns that innovate over time, how can we effectively and efficiently forecast future events? In this paper, we propose Shifting Seasonal Matrix Factorization approach, namely SSMF, that can adaptively learn multiple seasonal patterns (called regimes), as well as switching between them. Our proposed method has the following properties: (a) it accurately forecasts future events by detecting regime shifts in seasonal patterns as the data stream evolves; (b) it works in an online setting, i.e., processes each observation in constant time and memory; (c) it effectively realizes regime shifts without human intervention by using a lossless data compression scheme. We demonstrate that our algorithm outperforms state-of-the-art baseline methods by accurately forecasting upcoming events on three real-world data streams.
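The "constant time and memory per observation" ingredient boils down to one SGD step on a seasonal low-rank model per incoming matrix. The sketch below illustrates just that step and omits the paper's regime dictionary and compression-based regime switching; the parameterization and learning rate are this sketch's assumptions.

```python
import numpy as np

def ssmf_update(U, V, S, X_t, t, period, lr=0.01):
    # Approximate the count matrix at time t as U diag(S[t % period]) V^T
    # and take one SGD step on the squared reconstruction error.
    s = S[t % period]
    E = X_t - U @ np.diag(s) @ V.T       # reconstruction error
    dU = E @ V @ np.diag(s)
    dV = E.T @ U @ np.diag(s)
    ds = np.diag(U.T @ E @ V)
    U += lr * dU
    V += lr * dV
    S[t % period] = s + lr * ds
    return U, V, S
```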
| accept | This paper proposes a matrix factorization approach for time-series forecasting for streaming data. The reviewers identified novelty as a main concern; indeed there is substantial overlap with a paper that has already been published. In addition, there were concerns about the experimental analysis. | train | [
"IpwRuC-ujxV",
"DIxXSadQuGE",
"fU5X0EEpV-"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors propose a matrix factorization method when the matrices evolve over time. For each matrix at time (t), the authors factorize it as a product of factors as well as a \"seasonality tensor\" that captures specifics about the time series at time (t). The proposed method is online, and can be... | [
5,
5,
4
] | [
3,
5,
4
] | [
"nips_2021_AqprMSXI1Wn",
"nips_2021_AqprMSXI1Wn",
"nips_2021_AqprMSXI1Wn"
] |
nips_2021_VuzPO_TZHPc | Associative Memories via Predictive Coding | Associative memories in the brain receive and store patterns of activity registered by the sensory neurons, and are able to retrieve them when necessary. Due to their importance in human intelligence, computational models of associative memories have been developed for several decades now. In this paper, we present a novel neural model for realizing associative memories, which is based on a hierarchical generative network that receives external stimuli via sensory neurons. It is trained using predictive coding, an error-based learning algorithm inspired by information processing in the cortex. To test the model's capabilities, we perform multiple retrieval experiments from both corrupted and incomplete data points. In an extensive comparison, we show that this new model outperforms in retrieval accuracy and robustness popular associative memory models, such as autoencoders trained via backpropagation, and modern Hopfield networks. In particular, in completing partial data points, our model achieves remarkable results on natural image datasets, such as ImageNet, with a surprisingly high accuracy, even when only a tiny fraction of pixels of the original images is presented. Our model provides a plausible framework to study learning and retrieval of memories in the brain, as it closely mimics the behavior of the hippocampus as a memory index and generative model.
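The retrieval mechanism can be caricatured in a few lines: clamp the observed pixels and relax the remaining activity by descending the squared prediction-error energy. A single linear generative layer is used for brevity; the paper's model is hierarchical and nonlinear, so this is only a toy with hypothetical names.

```python
import numpy as np

def retrieve(x_obs, mask, W, steps=200, lr=0.1):
    # mask: 1 where x_obs is known, 0 where the pattern must be completed.
    # Latent z generates the sensory layer via W @ z; we descend the
    # prediction-error energy E = 0.5 * ||x - W z||^2.
    z = np.zeros(W.shape[1])
    x = x_obs.copy()
    for _ in range(steps):
        eps = x - W @ z                          # sensory prediction error
        z += lr * (W.T @ eps)                    # error-driven latent update
        x = mask * x_obs + (1 - mask) * (W @ z)  # clamp known pixels
    return x
```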
| accept | - The paper tackles an interesting direction of realizing an associative memory via predictive coding, which is quite significant considering the importance of both associative memory and predictive coding in ML and neuroscience.
- The argument about biological plausibility is controversial and rather subjective. So, I don't take this aspect into account.
- The reconstruction quality seems impressive compared to the past results.
- I think the rebuttal addressed many concerns well enough including Hopfield Net experiments, baselines, etc. The baseline seems reasonable for this specific line of work (I agree with the argument of the authors in the rebuttal.) although the experiment and discussion can be improved based on the reviewer's feedback.
- The topic matches very well to the interest of NeurIPS.
- This is a kind of very fundamental work for which we would like to see more interesting ideas, even if the experimental results are not comparable to those of very well engineered large-scale systems.
- The points raised by the reviewers are important and thus should be discussed in depth in the revision. In particular, discussing the limitations more clearly would make the paper stronger.
- I like the fact that the method is simple!
| train | [
"JedBP9T0zZJ",
"CUu66gWylBW",
"GNhUeHzIjow",
"SiMrV3HsVv",
"y2QEVifRMoU",
"Oq3Sjoo2N4S",
"T1YP82pczg9",
"CLrFSFjiOZR",
"uZgKlANIhfT",
"b7HAWUV9eNo",
"9nXAHS94OaH",
"fgTIEhvyBlc",
"VkD4psP6LM",
"xU47IG_QSow",
"T1b9DWmmtSm",
"JpX9orhCNG_",
"63yLrHWmfHk",
"Idac6SqpE_5",
"senHV3Rn-JZ... | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_re... | [
" Here are some pointers that show how relevant PC is in modern neuroscience to study and to understand the functioning of the brain. \n\n1) The prediction on responses to prediction errors has been directly confirmed, see [Attinger17]. Here, the authors record the activity of genetically identified interneurons in... | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"uZgKlANIhfT",
"T1b9DWmmtSm",
"nips_2021_VuzPO_TZHPc",
"Oq3Sjoo2N4S",
"b7HAWUV9eNo",
"T1YP82pczg9",
"CLrFSFjiOZR",
"xU47IG_QSow",
"VkD4psP6LM",
"nips_2021_VuzPO_TZHPc",
"fgTIEhvyBlc",
"JpX9orhCNG_",
"Idac6SqpE_5",
"GNhUeHzIjow",
"senHV3Rn-JZ",
"63yLrHWmfHk",
"nips_2021_VuzPO_TZHPc",
... |
nips_2021_CuQoImkKkIj | Robust and differentially private mean estimation | In statistical learning and analysis from shared data, which is increasingly widely adopted in platforms such as federated learning and meta-learning, there are two major concerns: privacy and robustness. Each participating individual should be able to contribute without the fear of leaking one's sensitive information. At the same time, the system should be robust in the presence of malicious participants inserting corrupted data. Recent algorithmic advances in learning from shared data focus on either one of these threats, leaving the system vulnerable to the other. We bridge this gap for the canonical problem of estimating the mean from i.i.d.~samples. We introduce PRIME, which is the first efficient algorithm that achieves both privacy and robustness for a wide range of distributions. We further complement this result with a novel exponential time algorithm that improves the sample complexity of PRIME, achieving a near-optimal guarantee and matching that of a known lower bound for (non-robust) private mean estimation. This proves that there is no extra statistical cost to simultaneously guaranteeing privacy and robustness.
| accept | The reviewers agreed that this is a strong and interesting technical work. The authors are advised to address the dependence on the range parameter R, since this is often important in DP estimation tasks, and to discuss the related work mentioned by reviewers. Some weaknesses identified by reviewers include the experiments (which are not the main contribution of the paper) and the suboptimal sample complexity of the efficient algorithm (which seems like a significant open problem); both can be overlooked. | train | [
"Di4vfQNJ_nb",
"YlLAh4gf1nt",
"XQBlzKSSnYm",
"YznhK4sYj2N",
"ebhi_uacve",
"xPc4EpYcF3N",
"3trdMQkLbsD",
"Ua9pznze5h8",
"HFUkNaSVRq6"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Review remains unaltered.",
"This paper shows results on robust and differentially private mean estimation of sub-Gaussian and covariance bounded distributions. So far, work has mostly been done in the direction of either robustness or of differentially private statistics, but this paper puts both of them toget... | [
-1,
7,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"ebhi_uacve",
"nips_2021_CuQoImkKkIj",
"HFUkNaSVRq6",
"YlLAh4gf1nt",
"Ua9pznze5h8",
"3trdMQkLbsD",
"nips_2021_CuQoImkKkIj",
"nips_2021_CuQoImkKkIj",
"nips_2021_CuQoImkKkIj"
] |
nips_2021_73FeFxePGc | Adaptable Agent Populations via a Generative Model of Policies | In the natural world, life has found innumerable ways to survive and often thrive. Between and even within species, each individual is in some manner unique, and this diversity lends adaptability and robustness to life. In this work, we aim to learn a space of diverse and high-reward policies in a given environment. To this end, we introduce a generative model of policies for reinforcement learning, which maps a low-dimensional latent space to an agent policy space. Our method enables learning an entire population of agent policies, without requiring the use of separate policy parameters. Just as real world populations can adapt and evolve via natural selection, our method is able to adapt to changes in our environment solely by selecting for policies in latent space. We test our generative model’s capabilities in a variety of environments, including an open-ended grid-world and a two-player soccer environment. Code, visualizations, and additional experiments can be found at https://kennyderek.github.io/adap/.
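The "adapt purely by selection in latent space" step is essentially a low-dimensional search, e.g. the random-search sketch below. `rollout_return` is a caller-supplied, hypothetical helper that runs one episode with the latent-conditioned policy pi(a | s, z) and returns the episode return.

```python
import numpy as np

def adapt_in_latent_space(rollout_return, n_candidates=64, z_dim=3,
                          episodes=3):
    # No gradient updates: since the generator maps latents z to policies,
    # reacting to a changed environment reduces to searching over z.
    best_z, best_ret = None, -np.inf
    for _ in range(n_candidates):
        z = np.random.randn(z_dim)
        ret = np.mean([rollout_return(z) for _ in range(episodes)])
        if ret > best_ret:
            best_z, best_ret = z, ret
    return best_z
```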
| accept | Inspired by the diversity produced by evolution, this work presents a method for learning a latent space of policies, optimized with an RL algorithm like PPO. The authors show that their model can learn a diverse set of policies, and since the latent space is not too high-dimensional, it is easy to adapt to changes in the environment. They demonstrate the model in a grid-world “farm” environment and a two-player soccer environment (I also downloaded the zip file from the supplementary materials, and I found the web page useful for understanding some results from the animations).
The strengths of the paper (best summarized by reviewer Pf7s):
1. The proposed method integrates the goals of quality diversity into deep RL by simulating an entire population of agents via a generative model of policies.
2. The authors evaluated this method using three different experiments and showed that this method was able to learn a more multi-modal and effective policy space than any of the other baselines.
There were issues with the writing and clarity, raised by reviewer Q4DY. The authors have responded to the questions, and have pledged to clarify issues with the paper discussed in the thread.
Reviewer xPri, who wrote a critical, but fair and balanced review, also highlighted a list of important points in their review related to not only improving presentation of the results, but also filling in key missing experiments, and improving statistical confidence in the results. The authors performed several additional studies, and after a long and detailed dialog between reviewer and author (this is where Open Review really shines), an understanding is established and the reviewer is satisfied, improving their score to 6 (even to 7 if additional seeds can be run on remaining experiments).
In summary, this paper presents a simple and promising idea for creating diverse adaptable agent populations in RL, a work that would be of interest to the NeurIPS community. I believe the review process has helped strengthen the paper to a state where we are happy to publish it at NeurIPS. For these reasons I'm recommending acceptance of the work as a poster.
| val | [
"gGnG3F0cNHH",
"dvsYXskmXBq",
"H8cRaic6zA7",
"9Jmae2LEV5n",
"tZdbO7Gm3fE",
"dsqR8RC2yN3",
"m_6RQ2JXNlS",
"jwzeZmpuiZ9",
"JSp5MJz8mZa",
"ggbSqDLYk8s",
"6_H6wB10yH7",
"bCQ-JaX-C4Z",
"b89cwEOY7W6",
"CfdjB29FgR",
"xrpJwUINvAJ",
"unWvsYGGpxS",
"waX7Egx4UFq",
"eXYEbIAR8Ed",
"DCf0CWSjgl... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"of... | [
"The authors proposed a generative model of policies, which maps a low-dimensional latent space to an agent policy space to learn a space of diverse and high-reward policies on any given environment (without requiring the use of separate policy parameters). The proposed method is able to adapt to changes in our env... | [
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"nips_2021_73FeFxePGc",
"nDwyvSkQs7V",
"nips_2021_73FeFxePGc",
"jwzeZmpuiZ9",
"bCQ-JaX-C4Z",
"m_6RQ2JXNlS",
"JSp5MJz8mZa",
"H8cRaic6zA7",
"eXYEbIAR8Ed",
"CfdjB29FgR",
"b89cwEOY7W6",
"unWvsYGGpxS",
"DCf0CWSjglR",
"waX7Egx4UFq",
"waX7Egx4UFq",
"RodDk2Ko6cv",
"eXYEbIAR8Ed",
"aSrSMbegt... |
nips_2021_twz1QqzU0Hp | A No-go Theorem for Robust Acceleration in the Hyperbolic Plane | In recent years there has been significant effort to adapt the key tools and ideas in convex optimization to the Riemannian setting. One key challenge has remained: Is there a Nesterov-like accelerated gradient method for geodesically convex functions on a Riemannian manifold? Recent work has given partial answers and the hope was that this ought to be possible. Here we prove that in a noisy setting, there is no analogue of accelerated gradient descent for geodesically convex functions on the hyperbolic plane. Our results apply even when the noise is exponentially small. The key intuition behind our proof is short and simple: In negatively curved spaces, the volume of a ball grows so fast that information about the past gradients is not useful in the future.
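The volume-growth intuition corresponds to standard hyperbolic-plane facts (textbook formulas, not taken from the paper): for curvature -1,

```latex
\operatorname{len}(\partial B_r) = 2\pi \sinh r,
\qquad
\operatorname{area}(B_r) = 2\pi(\cosh r - 1) \sim \pi e^{r} \quad (r \to \infty),
```

versus $2\pi r$ and $\pi r^2$ in the Euclidean plane, so information gathered from past gradients constrains an exponentially vanishing fraction of the space at distance $r$.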
| accept | While reviewers felt the intuition behind the paper is relatively straightforward, ultimately they agreed that the results could prove useful for practical applications. | train | [
"EVrwvC_WxdV",
"q-j8eRFShL",
"HBirawLHHGU",
"DtcjeiUBt12",
"OgjW74WMdCO",
"ec8Fgiygkeg",
"Jw7RrPCJV-Q",
"8DxFLdadM21",
"-8_EVFdUDJ",
"oM_u2GPZfdQ",
"I0QC5Wsqav-",
"2SaCfhwoKW6",
"waZxYYd3nVf"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the additional (patient) clarification. I take your points, and agree that including this kind of discussion in the revision will go a long way to emphasizing the precise novelty/implications of the result to a broad audience (especially the discussion in the final paragraph of your previous comment... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
5
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"HBirawLHHGU",
"nips_2021_twz1QqzU0Hp",
"DtcjeiUBt12",
"oM_u2GPZfdQ",
"ec8Fgiygkeg",
"8DxFLdadM21",
"2SaCfhwoKW6",
"I0QC5Wsqav-",
"waZxYYd3nVf",
"q-j8eRFShL",
"nips_2021_twz1QqzU0Hp",
"nips_2021_twz1QqzU0Hp",
"nips_2021_twz1QqzU0Hp"
] |
nips_2021_QbYS4dXH0dD | Privately Learning Mixtures of Axis-Aligned Gaussians | Ishaq Aden-Ali, Hassan Ashtiani, Christopher Liaw | accept | This paper makes progress on a fundamental problem: Learning mixtures of axis aligned Gaussian distributions under (eps, delta) differential privacy. The paper is a nice combination of two ideas: reduction to list decodable learning, and a private algorithm for list decodable learning. These ideas (particularly the second) should find applications in other places. It would be nice if the authors add a comment about the optimal sample complexity (the extra k factor). | train | [
"HvSkfhww3c_",
"2GUx36GC2k5",
"7g-pvA9oobo",
"Lmat4oa0FOo",
"13LkC4PPPhW",
"zRtrXNmw4cT"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"As the title suggests, this paper gives algorithms for learning mixtures of $k$ Gaussians to within total variation distance $\\alpha$. It uses a reduction to private list decodable learners from privately learning mixtures of distributions. The number of samples used is linear in $d$ and quadratic in $k$. The flo... | [
7,
-1,
-1,
-1,
6,
7
] | [
4,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_QbYS4dXH0dD",
"HvSkfhww3c_",
"zRtrXNmw4cT",
"13LkC4PPPhW",
"nips_2021_QbYS4dXH0dD",
"nips_2021_QbYS4dXH0dD"
] |
nips_2021_R6nFQy2vwQq | Deep Self-Dissimilarities as Powerful Visual Fingerprints | Features extracted from deep layers of classification networks are widely used as image descriptors. Here, we exploit an unexplored property of these features: their internal dissimilarity. While small image patches are known to have similar statistics across image scales, it turns out that the internal distribution of deep features varies distinctively between scales. We show how this deep self-dissimilarity (DSD) property can be used as a powerful visual fingerprint. Particularly, we illustrate that full-reference and no-reference image quality measures derived from DSD are highly correlated with human preference. In addition, incorporating DSD as a loss function in the training of image restoration networks leads to results that are at least as photo-realistic as those obtained by GAN-based methods, while not requiring adversarial training.
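Readers can probe the scale-dissimilarity phenomenon with a few lines of torchvision code. The statistic below (distance between channel-wise means of VGG16 conv features at two scales, with ImageNet normalization omitted for brevity) is an illustrative choice, not the paper's exact DSD measure.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

feats = vgg16(weights="DEFAULT").features[:16].eval()  # up to conv3_3

@torch.no_grad()
def self_dissimilarity(img):
    # img: (1, 3, H, W) in [0, 1]. Compare pooled feature statistics of
    # the image and its 2x-downscaled version.
    small = F.interpolate(img, scale_factor=0.5, mode="bilinear",
                          align_corners=False)
    f1 = feats(img).mean(dim=(2, 3))
    f2 = feats(small).mean(dim=(2, 3))
    return torch.norm(f1 - f2, dim=1)
```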
| accept | This paper addresses a new image descriptor, referred to as deep self-dissimilarity (DSD), which measures the dissimilarity between deep features from the same image presented at different scales. Reviewers agree that the paper presents a novel idea, which suggests the scale-wise dissimilarity of deep features as a descriptive measure of images. The paper is well written and the key idea is appreciated. The rebuttal addressed most of the concerns raised by reviewers, and two of the reviewers raised their scores during the discussion period. I believe that the paper deserves to be presented at the conference.
| test | [
"QLcWnjieih",
"Y2DgogIaE9j",
"n7yogkl_hXf",
"4n2-cRq5Sv2",
"pvLvWTjI_Di",
"d0RaIaz2Qd9",
"3NXidBXwxqR",
"zcwBqyJU1nR",
"xQPE0HSFDm",
"hrw5F2REfWT"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a new image descriptor, called DSD (Deep Self-Dissimilarities). The authors observe that the very same image may happen to be classified in different ways by deep neural networks when viewed at different scales. For example (Figure 1) an image is classified by the same network as \"Jean\" at fu... | [
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"nips_2021_R6nFQy2vwQq",
"nips_2021_R6nFQy2vwQq",
"pvLvWTjI_Di",
"zcwBqyJU1nR",
"Y2DgogIaE9j",
"hrw5F2REfWT",
"QLcWnjieih",
"xQPE0HSFDm",
"nips_2021_R6nFQy2vwQq",
"nips_2021_R6nFQy2vwQq"
] |
nips_2021_715E7e6j4gU | Invariant Causal Imitation Learning for Generalizable Policies | Consider learning an imitation policy on the basis of demonstrated behavior from multiple environments, with an eye towards deployment in an unseen environment. Since the observable features from each setting may be different, directly learning individual policies as mappings from features to actions is prone to spurious correlations---and may not generalize well. However, the expert’s policy is often a function of a shared latent structure underlying those observable features that is invariant across settings. By leveraging data from multiple environments, we propose Invariant Causal Imitation Learning (ICIL), a novel technique in which we learn a feature representation that is invariant across domains, on the basis of which we learn an imitation policy that matches expert behavior. To cope with transition dynamics mismatch, ICIL learns a shared representation of causal features (for all training environments) that is disentangled from the specific representations of noise variables (for each of those environments). Moreover, to ensure that the learned policy matches the observation distribution of the expert's policy, ICIL estimates the energy of the expert's observations and uses a regularization term that minimizes the imitator policy's next-state energy. Experimentally, we compare our method against several benchmarks on control and healthcare tasks and show its effectiveness in learning imitation policies capable of generalizing to unseen environments.
| accept | This paper proposes a method that exploits principles of causal invariance in order to develop an imitation learning algorithm that is more robust to spurious/distractor features in the expert trajectories. It shows the improved performance of this method on OpenAI Gym tasks and a healthcare dataset. The reviewers were unanimous in recommending acceptance, and no significant argument made by the lone weak accept has made me doubt this paper is worth publishing at NeurIPS. Without much difficulty, I can also recommend acceptance, and hope to see the minor improvements suggested by the reviewers in the final version of the paper. | train | [
"G0EqIIjvVlg",
"hf2dtZ_gJzX",
"pROQGd4wQ76",
"FAqSdXCocjN",
"SRg883xopuQ",
"t_beUBAxAfN",
"He5Fz87V3D-",
"onN9zAyl14y",
"6N0Q8e_AmL_",
"dGmk20xSydQ",
"bAI04TNkNKK",
"xjRx5cSdO5p",
"AgjTBw1RxZa",
"qoL2Pwodluy",
"xz9b_o1f8g",
"cJ9M0EKFVB"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed response !\n\nAppreciate your input on clarifying the concerns as well as conducting ablation studies to ground the claims. Looking forward to seeing more of the experiments in the final version. It would be useful to comment that the studied setting is closest to \"domain generalizatio... | [
-1,
7,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
3,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"6N0Q8e_AmL_",
"nips_2021_715E7e6j4gU",
"xjRx5cSdO5p",
"hf2dtZ_gJzX",
"bAI04TNkNKK",
"nips_2021_715E7e6j4gU",
"dGmk20xSydQ",
"cJ9M0EKFVB",
"cJ9M0EKFVB",
"t_beUBAxAfN",
"xz9b_o1f8g",
"hf2dtZ_gJzX",
"hf2dtZ_gJzX",
"hf2dtZ_gJzX",
"nips_2021_715E7e6j4gU",
"nips_2021_715E7e6j4gU"
] |
nips_2021_dUk5Foj5CLf | CoAtNet: Marrying Convolution and Attention for All Data Sizes | Transformers have attracted increasing interest in computer vision, but they still fall behind state-of-the-art convolutional networks. In this work, we show that while Transformers tend to have larger model capacity, their generalization can be worse than that of convolutional networks due to the lack of the right inductive bias. To effectively combine the strengths of both architectures, we present CoAtNets (pronounced "coat" nets), a family of hybrid models built from two key insights: (1) depthwise Convolution and self-Attention can be naturally unified via simple relative attention; (2) vertically stacking convolution layers and attention layers in a principled way is surprisingly effective in improving generalization, capacity, and efficiency. Experiments show that our CoAtNets achieve state-of-the-art performance under different resource constraints across various datasets: without extra data, CoAtNet achieves 86.0% ImageNet top-1 accuracy; when pre-trained with 13M images from ImageNet-21K, our CoAtNet achieves 88.56% top-1 accuracy, matching ViT-huge pre-trained with 300M images from JFT-300M while using 23x less data; notably, when we further scale up CoAtNet with JFT-3B, it achieves 90.88% top-1 accuracy on ImageNet, establishing a new state-of-the-art result.
| accept | The rebuttal addressed all of the reviewers concerns, and all reviewers recommend acceptance. The AC agrees with this recommendation. | train | [
"Lse7AVkoMFq",
"84MjZLnAQG4",
"XwC5FXIAMOt",
"z2lYn-yuO0S",
"T-dtMJprmbG",
"mOgHB_Jtsrd",
"1t3GAGy-NdP",
"pbGTmPFalDA",
"DFb6E4HnTji"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" After reading the rebuttal, I still recommend accepting the paper due to its excellent performance and comprehensive experiments. But please note I still think the novelty of the paper is limited and the authors did not address my concerns.",
" \n> The authors mentioned the limitation is that they only focus on... | [
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"84MjZLnAQG4",
"DFb6E4HnTji",
"pbGTmPFalDA",
"1t3GAGy-NdP",
"mOgHB_Jtsrd",
"nips_2021_dUk5Foj5CLf",
"nips_2021_dUk5Foj5CLf",
"nips_2021_dUk5Foj5CLf",
"nips_2021_dUk5Foj5CLf"
] |
nips_2021_QXDePagJ1X3 | Mixed Supervised Object Detection by Transferring Mask Prior and Semantic Similarity | Object detection has achieved promising success, but requires large-scale fully-annotated data, which is time-consuming and labor-intensive. Therefore, we consider object detection with mixed supervision, which learns novel object categories using weak annotations with the help of full annotations of existing base object categories. Previous works using mixed supervision mainly learn the class-agnostic objectness from fully-annotated categories, which can be transferred to upgrade the weak annotations to pseudo full annotations for novel categories. In this paper, we further transfer mask prior and semantic similarity to bridge the gap between novel categories and base categories. Specifically, the ability to use the mask prior for detecting objects is learned from base categories and transferred to novel categories. Moreover, the semantic similarity between objects learned from base categories is transferred to denoise the pseudo full annotations for novel categories. Experimental results on three benchmark datasets demonstrate the effectiveness of our method over existing methods. Codes are available at https://github.com/bcmi/TraMaS-Weak-Shot-Object-Detection.
| accept | The reviewers have discussed the paper and are generally positive. In particular, some of the concerns raised in the reviews were resolved, which resulted in improved scores. I would like to encourage the authors to incorporate the suggestions into the final version, and I recommend accepting the paper. | train | [
"omy9r4JNpMC",
"Ag4oW8ZvPFt",
"upTLI8Q7HwG",
"bgPFV_wFUYj",
"z7Yeie67cw7",
"o2HZSZ09NAl",
"mYF1lJ8ymyC",
"3kXyaDe9N1R",
"jtraOEpFSzT",
"gvnJrr41sUu",
"Z-UJKj55SjS",
"MH3L6BP9JE"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies mixed supervised object detection where some object categories are fully supervised and the rest are only labeled with class tags. It introduces a (1) weakly-supervised segmentation module and (2) a semantic similarity learning module to the existing framework to improve the performance. Experim... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nips_2021_QXDePagJ1X3",
"jtraOEpFSzT",
"3kXyaDe9N1R",
"mYF1lJ8ymyC",
"o2HZSZ09NAl",
"Z-UJKj55SjS",
"gvnJrr41sUu",
"MH3L6BP9JE",
"omy9r4JNpMC",
"nips_2021_QXDePagJ1X3",
"nips_2021_QXDePagJ1X3",
"nips_2021_QXDePagJ1X3"
] |
nips_2021_CO87OIEOGU8 | Celebrating Diversity in Shared Multi-Agent Reinforcement Learning | Recently, deep multi-agent reinforcement learning (MARL) has shown promise in solving complex cooperative tasks. Its success is partly because of parameter sharing among agents. However, such sharing may lead agents to behave similarly and limit their coordination capacity. In this paper, we aim to introduce diversity in both optimization and representation of shared multi-agent reinforcement learning. Specifically, we propose an information-theoretical regularization to maximize the mutual information between agents' identities and their trajectories, encouraging extensive exploration and diverse individualized behaviors. In representation, we incorporate agent-specific modules in the shared neural network architecture, which are regularized by an L1 norm to promote the sharing of learning among agents while keeping the necessary diversity. Empirical results show that our method achieves state-of-the-art performance on Google Research Football and super hard StarCraft II micromanagement tasks.
| accept | The authors present a novel approach to cooperative multi-agent reinforcement learning that focuses on balancing diverse behaviors with shared information. The approach incorporates several new insights (mutual information losses to encourage action and observation diversity, shared and local architecture components, and a regularization term) and conceptually and empirically demonstrates how each contributes to the algorithm's performance. Empirical validation is extensive and includes challenging environments, often showing dramatic performance improvements compared to baselines. Detailed ablations and analysis provide further insights into how the proposed approach works.
Initial reviews were positive, highlighting the novel insights and strong empirical results achieved in the paper. The paper was assessed as well written and largely clear. Open questions included experimental details (e.g., the procedure for hyperparameter tuning and the impact of hyperparameters on performance), positioning of the paper relative to related work (e.g., DIAYN, and the general question of how this work relates to previous approaches that encourage diversity), and conceptual questions (e.g., potential conflicts between diverse behavior and the need for coordinated exploration, the role of L1 regularization over the independent Q functions).
The authors addressed reviewer questions and suggestions in detail, providing clarifications as well as additional empirical results. As a result, several reviewers increased their scores and all indicated that their concerns and questions had been addressed. The AC agrees with this consensus. | train | [
"4tf2vd6mju",
"2TgnC81O7IW",
"OFDJvqytwkB",
"AAb7Kp8LPXy",
"XSNFLc64lNV",
"5V1lD9kIhZO",
"lUjLlT_etgW",
"7MlawUdEzFM",
"B70iDkIPA62",
"QkuNiu8G8g",
"3uGHdkt3J2u",
"uVdqPh19VVy",
"ib2wtoQoTHH",
"BDXnQUTpsMe"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your positive and thoughtful comments. We will incorporate your suggestions into our next revision.",
" Thank you very much for the detailed responses. I have no more concerns about the manuscript. Please incorporate these responses selectively into the final version of the manuscript.",... | [
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
7
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"2TgnC81O7IW",
"B70iDkIPA62",
"5V1lD9kIhZO",
"lUjLlT_etgW",
"nips_2021_CO87OIEOGU8",
"7MlawUdEzFM",
"QkuNiu8G8g",
"XSNFLc64lNV",
"ib2wtoQoTHH",
"uVdqPh19VVy",
"BDXnQUTpsMe",
"nips_2021_CO87OIEOGU8",
"nips_2021_CO87OIEOGU8",
"nips_2021_CO87OIEOGU8"
] |
nips_2021_pHCuidXEinv | Rebounding Bandits for Modeling Satiation Effects | Psychological research shows that enjoyment of many goods is subject to satiation, with short-term satisfaction declining after repeated exposures to the same item. Nevertheless, proposed algorithms for powering recommender systems seldom model these dynamics, instead proceeding as though user preferences were fixed in time. In this work, we introduce rebounding bandits, a multi-armed bandit setup, where satiation dynamics are modeled as time-invariant linear dynamical systems. Expected rewards for each arm decline monotonically with consecutive exposures to it and rebound towards the initial reward whenever that arm is not pulled. Unlike classical bandit settings, methods for tackling rebounding bandits must plan ahead and model-based methods rely on estimating the parameters of the satiation dynamics. We characterize the planning problem, showing that the greedy policy is optimal when the arms exhibit identical deterministic dynamics. To address stochastic satiation dynamics with unknown parameters, we propose Explore-Estimate-Plan (EEP), an algorithm that pulls arms methodically, estimates the system dynamics, and then plans accordingly.
| accept | The rebounding bandits model proposed and analyzed in the paper was appreciated by all the reviewers as an interesting and relevant problem, and by and large the algorithm, the analysis, and the presentation were considered to be a decent contribution. The reviewers raised a number of questions in their reviews, and the authors did a reasonable job of addressing most of them. It will be critical that the authors revise the paper to ensure that the issues raised are addressed and to prevent some of the misunderstandings from persisting with the general reader. Two specific issues lingered with the reviewers: (1) The issue of negative rewards generated some discussion. I do think the authors’ elaboration in the discussion with one of the reviewers is legitimate, but this point should be included/clarified in the paper. (2) The empirical results are quite weak (specifically, the number of arms and number of runs in the results, and the baseline (e.g., $w=T$)). The author response to all questions about these matters is that they will be addressed “in future manuscripts of the paper.” I’m not sure what this means; in fact, I would almost consider it to be a condition of acceptance that these be included in the revised paper. | train | [
"yV_rhtmWle",
"31TFTLnVKz",
"GLMhVSdGss",
"MBgEh3vi1v",
"j4c7lv8amm",
"5U6mDhyU3-t",
"Ayt_MDc-6bK",
"uipc2wwQRN",
"HxBeJhuvurK",
"g5NQQNLBYNo",
"dF34kg4F3hA",
"hJ6khh-ByuS"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer GJkC,\n\nThanks again for taking the time to engage with us. To recap, we are glad to know that you are satisfied with our answers to comments #2, #3, and (somewhat) #4. We wanted to reach out to see if our subsequent response to your question on #1 addressed your concerns. If so, would you consider... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"31TFTLnVKz",
"GLMhVSdGss",
"Ayt_MDc-6bK",
"hJ6khh-ByuS",
"dF34kg4F3hA",
"g5NQQNLBYNo",
"HxBeJhuvurK",
"nips_2021_pHCuidXEinv",
"nips_2021_pHCuidXEinv",
"nips_2021_pHCuidXEinv",
"nips_2021_pHCuidXEinv",
"nips_2021_pHCuidXEinv"
] |
nips_2021__OPHJ7nkZoC | Sample Complexity of Tree Search Configuration: Cutting Planes and Beyond | Cutting-plane methods have enabled remarkable successes in integer programming over the last few decades. State-of-the-art solvers integrate a myriad of cutting-plane techniques to speed up the underlying tree-search algorithm used to find optimal solutions. In this paper, we provide sample complexity bounds for cut selection in branch-and-cut (B&C). Given a training set of integer programs sampled from an application-specific input distribution and a family of cut selection policies, these guarantees bound the number of samples sufficient to ensure that, using any policy in the family, the size of the tree B&C builds on average over the training set is close to the expected size of the tree B&C builds. We first bound the sample complexity of learning cutting planes from the canonical family of Chvátal-Gomory cuts. Our bounds handle any number of waves of any number of cuts and are fine-tuned to the magnitudes of the constraint coefficients. Next, we prove sample complexity bounds for more sophisticated cut selection policies that use a combination of scoring rules to choose from a family of cuts. Finally, beyond the realm of cutting planes for integer programming, we develop a general abstraction of tree search that captures key components such as node selection and variable selection. For this abstraction, we bound the sample complexity of learning a good policy for building the search tree.
| accept | The paper shows sampling bounds for the problem of identifying good cuts in branch-and-bound search using cutting planes. The reviewers appreciated the importance of the problem and the contribution of the paper. Some reviewers were concerned that sampling bounds by themselves do not provide efficient algorithms for finding good cuts, but these concerns were mostly resolved during the rebuttal phase. | train | [
"WO6C5t0u_mC",
"bOcOA1yrfe",
"wrjb5hUEu-n",
"ngtZkDAJ0cv",
"dSXVokpXSBa",
"betGRm9mO7",
"AjtXoQ5r_MI",
"BaibT0FMh3U",
"LDf8QepKLFV"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper focuses on the sample complexity of learning to select Chvatal-Gomory cuts for integer linear programming. We assume that there is an unknown distribution that generates ILP instances. CG cuts are parametrized by a set of weights, one per constraint. How large should the set of training instances be for ... | [
7,
-1,
7,
-1,
-1,
-1,
-1,
7,
7
] | [
4,
-1,
2,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021__OPHJ7nkZoC",
"dSXVokpXSBa",
"nips_2021__OPHJ7nkZoC",
"wrjb5hUEu-n",
"WO6C5t0u_mC",
"LDf8QepKLFV",
"BaibT0FMh3U",
"nips_2021__OPHJ7nkZoC",
"nips_2021__OPHJ7nkZoC"
] |
nips_2021_Aeo-xqtb5p | IQ-Learn: Inverse soft-Q Learning for Imitation | In many sequential decision-making problems (e.g., robotics control, game playing, sequential prediction), human or expert data is available containing useful information about the task. However, imitation learning (IL) from a small amount of expert data can be challenging in high-dimensional environments with complex dynamics. Behavioral cloning is a simple method that is widely used due to its simplicity of implementation and stable convergence but doesn't utilize any information involving the environment’s dynamics. Many existing methods that exploit dynamics information are difficult to train in practice due to an adversarial optimization process over reward and policy approximators or biased, high-variance gradient estimators. We introduce a method for dynamics-aware IL which avoids adversarial training by learning a single Q-function, implicitly representing both reward and policy. On standard benchmarks, the implicitly learned rewards show a high positive correlation with the ground-truth rewards, illustrating our method can also be used for inverse reinforcement learning (IRL). Our method, Inverse soft-Q learning (IQ-Learn), obtains state-of-the-art results in offline and online imitation learning settings, significantly outperforming existing methods both in the number of required environment interactions and scalability in high-dimensional spaces, often by more than 3x.
| accept | This paper received very positive reviews initially. The reviewers liked the originality of the work, its theoretical soundness, and the good performance of the algorithm. They had some concerns and wanted to see more about generalization capacities, stronger baselines, and a larger set of test environments. The discussion was rich enough to convince the reviewers to raise their scores. The authors ran additional experiments and addressed all the concerns of the reviewers. | test | [
"s79uNiUeA8",
"mrNf-vGClYY",
"B9BCgZugYkI",
"ienAKCNKh3",
"7tK8oTCE5fa",
"noKc_A6hlf_",
"qopBTkpTbUF",
"Ng7ljXOjfM_",
"JvFqJFPddcF",
"mNGHdCRhhh7",
"ely3iozJQWl",
"SU9VwUegmF",
"FwkLps5vtC",
"bCY9U_usDXQ",
"4rQzNccBZO"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper suggests an alternative approach, equivalent to adversarial IRL (that learns both the policy and the reward in an adversarial fashion), but in a direct way (learning the Q-function that encompasses both the reward and the policy). As in adversarial methods (GAIL, AIRL etc) they use the maximum entropy a... | [
8,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
9
] | [
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"nips_2021_Aeo-xqtb5p",
"Ng7ljXOjfM_",
"ely3iozJQWl",
"qopBTkpTbUF",
"SU9VwUegmF",
"nips_2021_Aeo-xqtb5p",
"JvFqJFPddcF",
"s79uNiUeA8",
"bCY9U_usDXQ",
"noKc_A6hlf_",
"noKc_A6hlf_",
"4rQzNccBZO",
"bCY9U_usDXQ",
"nips_2021_Aeo-xqtb5p",
"nips_2021_Aeo-xqtb5p"
] |
nips_2021_4orlVaC95Bo | Task-Agnostic Undesirable Feature Deactivation Using Out-of-Distribution Data | A deep neural network (DNN) has achieved great success in many machine learning tasks by virtue of its high expressive power. However, its prediction can be easily biased to undesirable features, which are not essential for solving the target task and are even imperceptible to a human, thereby resulting in poor generalization. Leveraging the plentiful undesirable features in out-of-distribution (OOD) examples has emerged as a potential solution for de-biasing such features, and a recent study shows that softmax-level calibration of OOD examples can successfully remove the contribution of undesirable features to the last fully-connected layer of a classifier. However, its applicability is confined to the classification task, and its impact on a DNN feature extractor is not properly investigated. In this paper, we propose Taufe, a novel regularizer that deactivates many undesirable features using OOD examples in the feature extraction layer and thus removes the dependency on the task-specific softmax layer. To show the task-agnostic nature of Taufe, we rigorously validate its performance on three tasks (classification, regression, and a mix of them) on the CIFAR-10, CIFAR-100, ImageNet, CUB200, and CAR datasets. The results demonstrate that Taufe consistently outperforms the state-of-the-art method as well as the baselines without regularization.
| accept | This paper proposes a simple regularisation that encourages features to be close to zero on a pre-specified OOD training set.
Reviewers think the proposed approach is novel, simple, and effectively demonstrated in a variety of experiments. An ethical issue was raised about the 80mTiny dataset used in the experiments, which the authors promise to remove. So it comes down to the question of whether the remaining results are significant enough to support the claims, which the reviewers do not see as an issue.
Because of this issue with the 80mTiny dataset, the paper is being marked as a conditional accept.
Reviewers raise another concern about the missing comparison to semi-supervised learning approaches, and the authors agree to add experiments in the revision. Label/logit smoothing and feature denoising have also been extensively discussed in the adversarial robustness literature; it would improve the paper if the authors could evaluate the proposed approach on adversarial examples.
Also, I would suggest openly discussing the limitations of the proposed approach in the main text rather than in the appendix.
----
UPDATE: Upon reviewing the revision, the decision has been updated to accept. | train | [
"RTh1sT9u1ku",
"NjbXS93ghch",
"5A-483qFUL",
"95i1OkbQQii",
"XE999zYGJo0",
"PzyxE-0BGNK",
"28nmt5T7-zg",
"q0Dq5xlg57M",
"4B9c6h4UD0j",
"vcyOHarecC_",
"aGKtBRGj7-J",
"EGWngUKwG5a"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a simple regularization that encourages logits to be close to zero on a designated out-distribution during training. The authors show that this improves accuracy across various tasks and datasets. Strengths:\nThe proposed method is quite simple and the results in the paper indicate that it can ... | [
6,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
7,
7
] | [
4,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
4,
5
] | [
"nips_2021_4orlVaC95Bo",
"XE999zYGJo0",
"95i1OkbQQii",
"nips_2021_4orlVaC95Bo",
"28nmt5T7-zg",
"nips_2021_4orlVaC95Bo",
"RTh1sT9u1ku",
"aGKtBRGj7-J",
"EGWngUKwG5a",
"PzyxE-0BGNK",
"nips_2021_4orlVaC95Bo",
"nips_2021_4orlVaC95Bo"
] |
nips_2021_-16dlERMZkO | Private Non-smooth ERM and SCO in Subquadratic Steps | Janardhan Kulkarni, Yin Tat Lee, Daogao Liu | accept | This paper proposes a new algorithm for differentially private stochastic convex optimization and ERM with non-smooth convex losses. Previous algorithms for the problem required a quadratic number of gradient computations. This work uses a combination of a new idea and existing techniques in optimization to reduce the number of steps to essentially $N^{11/8}$. This is significant progress on a fairly basic question in DP optimization. Therefore, I recommend acceptance. | train | [
"BNa8VYjUMFs",
"HNI4YngZ_bw",
"02kSg9YVQsI",
"1WzWgzq42Sw",
"0arfl4-rZ4f",
"8vfVZeyrG_"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the problem of differentially private ERM and SCO for non-smooth function, aiming mainly to improve the gradient query complexity of previous algorithms. The authors develop algorithms for DP-SCO and ERM that achieve the optimal rates with a subquadratic qradient query complexity which improves ... | [
8,
-1,
-1,
-1,
7,
7
] | [
4,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_-16dlERMZkO",
"02kSg9YVQsI",
"8vfVZeyrG_",
"0arfl4-rZ4f",
"nips_2021_-16dlERMZkO",
"nips_2021_-16dlERMZkO"
] |
nips_2021_1XwPDFrJObw | Towards Instance-Optimal Offline Reinforcement Learning with Pessimism | Ming Yin, Yu-Xiang Wang | accept | This paper presents adaptive sample complexity bounds for offline RL. While the technique is not entirely novel, the results are new and interesting in the literature and thus the AC recommends acceptance. The authors are encouraged to incorporate comments from the reviewers. | val | [
"4xJabM2Heph",
"f87J4-vXexO",
"3yDGi4Ll4gB",
"rx-DdE7wD2H",
"K8KRyKaJqc8",
"NQdl4wt5NC",
"WdvYiyzfEFh",
"s-u9rJrJBgz",
"VBuUA9WRXP7",
"4o9QL9WSJ2",
"jWz8nSLw4JS"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes and analyzes the adaptive pessimistic value iteration algorithm for offline reinforcement learning. The authors study the sample complexity of the proposed algorithm and prove matching upper and lower bound. The paper also studies the regime where no assumption about the behaviour policy is made... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
5
] | [
"nips_2021_1XwPDFrJObw",
"3yDGi4Ll4gB",
"s-u9rJrJBgz",
"K8KRyKaJqc8",
"NQdl4wt5NC",
"WdvYiyzfEFh",
"4xJabM2Heph",
"4o9QL9WSJ2",
"jWz8nSLw4JS",
"nips_2021_1XwPDFrJObw",
"nips_2021_1XwPDFrJObw"
] |
nips_2021_8V2hZW0d2aS | Speedy Performance Estimation for Neural Architecture Search | Reliable yet efficient evaluation of generalisation performance of a proposed architecture is crucial to the success of neural architecture search (NAS). Traditional approaches face a variety of limitations: training each architecture to completion is prohibitively expensive, early-stopped validation accuracy may correlate poorly with fully trained performance, and model-based estimators require large training sets. We instead propose to estimate the final test performance based on a simple measure of training speed. Our estimator is theoretically motivated by the connection between generalisation and training speed, and is also inspired by the reformulation of a PAC-Bayes bound under the Bayesian setting. Our model-free estimator is simple, efficient, and cheap to implement, and does not require hyperparameter tuning or surrogate training before deployment. We demonstrate on various NAS search spaces that our estimator consistently outperforms other alternatives in achieving better correlation with the true test performance rankings. We further show that our estimator can be easily incorporated into both query-based and one-shot NAS methods to improve the speed or quality of the search.
| accept | This is a strong paper in a somewhat crowded research area, performance prediction for NAS. Even within such a crowded area, this paper stands out for the thoughtfulness of its experiments and the grounding of the approach proposed. The reviewers asked for more depth in the related work section and for some additional experiments, but overall have no remaining concerns regarding the paper. | train | [
"Z88HmPaJbCN",
"T9WqE7X6wyF",
"6zzmz2P1NqV",
"bmt1Dh2b9P",
"V6tF0UEmuKQ",
"3YbytolMt4m",
"6YDsxmL3LlN",
"x8cVLoKVkW-",
"29x_lyDOO8A",
"4sXRfQDCzef",
"mqHIHGL-Fy",
"So_r6Repy8L",
"b0fw2s7hXaO",
"Vpw0Vrtd88p"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for answering my questions and running the addition comparison with LGBoost and XGBoost. I believe it is a strong pape.",
" I apologize for the delay in my responses.\n1) The author's responses are informative (In particular the answer to Comment 1) and addressed my questions.\... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
4
] | [
"4sXRfQDCzef",
"x8cVLoKVkW-",
"V6tF0UEmuKQ",
"3YbytolMt4m",
"29x_lyDOO8A",
"6YDsxmL3LlN",
"So_r6Repy8L",
"Vpw0Vrtd88p",
"mqHIHGL-Fy",
"b0fw2s7hXaO",
"nips_2021_8V2hZW0d2aS",
"nips_2021_8V2hZW0d2aS",
"nips_2021_8V2hZW0d2aS",
"nips_2021_8V2hZW0d2aS"
] |
nips_2021_jV5m8NAWb0E | How Tight Can PAC-Bayes be in the Small Data Regime? | Andrew Foong, Wessel Bruinsma, David Burt, Richard Turner | accept | This work deepens our understanding of PAC-Bayes bounds. While the study is mostly exploratory, and perhaps unconventional for a NeurIPS paper, the reviewers believe that there is clear value to the community, as do I. The connections studied in this work are interesting and have pedagogical value, and it is quite likely that interesting discussions will come out of this work. While the open questions/problems are not yet resolved, the understanding gained may lead us closer to their being resolved. In more detail, Theorem 4 is quite interesting (and one reviewer mentioned that the proof checks out). Especially after the discussion phase, all reviewers support acceptance of this paper. This paper will make for an interesting contribution to the NeurIPS proceedings, and will, I hope, serve as a great reference for future works as well. | val | [
"K7gdhYcRcQ7",
"24m5K8lKz-Y",
"V2m19JDl5yS",
"PdU0l4rwxzj",
"DOjKzbJbGTp",
"yPBPxEQsaIb",
"hNEcHs2LgG",
"O2zu4-xG_LS",
"lF6RsgGHbhw",
"uVf3Qaek8n",
"QzqsHOCDCzm",
"5BNEmId-03H",
"Slc193W8yi5",
"VTx157nMovC",
"ia9fWjQmwSB",
"CvTTHPZ9QQJ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"As far as I see, this submission is motivated by the desire to understand the ability of PAC-Bayes bounds to give tight numerical values such that they could be used as numerical measures intended to certify the post-training performance of randomised classifiers. The submission leverages the so-called generic PAC... | [
8,
-1,
-1,
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
5,
-1,
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_jV5m8NAWb0E",
"K7gdhYcRcQ7",
"DOjKzbJbGTp",
"nips_2021_jV5m8NAWb0E",
"VTx157nMovC",
"nips_2021_jV5m8NAWb0E",
"O2zu4-xG_LS",
"yPBPxEQsaIb",
"CvTTHPZ9QQJ",
"K7gdhYcRcQ7",
"K7gdhYcRcQ7",
"K7gdhYcRcQ7",
"K7gdhYcRcQ7",
"PdU0l4rwxzj",
"nips_2021_jV5m8NAWb0E",
"nips_2021_jV5m8NAWb0... |
nips_2021_Joy2imuk604 | Deep Synoptic Monte-Carlo Planning in Reconnaissance Blind Chess | This paper introduces deep synoptic Monte Carlo planning (DSMCP) for large imperfect information games. The algorithm constructs a belief state with an unweighted particle filter and plans via playouts that start at samples drawn from the belief state. The algorithm accounts for uncertainty by performing inference on "synopses," a novel stochastic abstraction of information states. DSMCP is the basis of the program Penumbra, which won the official 2020 reconnaissance blind chess competition versus 33 other programs. This paper also evaluates algorithm variants that incorporate caution, paranoia, and a novel bandit algorithm. Furthermore, it audits the synopsis features used in Penumbra with per-bit saliency statistics.
| accept | The reviewers had several concerns about this work, but after discussion these were generally agreed to be fixable issues - with one exception. One reviewer felt very strongly that RBC would not be sufficient to demonstrate the generality of the contribution. I suspect that some of the disagreement here may be an issue of different sub-communities having different expectations. While RBC may be a familiar and accepted benchmark in one sub-community, it may not be familiar in others - even some that aren't that far away.
The potential narrowness of the appeal of this paper indicates that a poster may be the more appropriate route.
I would also encourage the authors to find some way to make their contribution appeal more broadly, if it is practical to (convincingly) make the case that it has broader application than to a single benchmark. | train | [
"WZG8cXE6dCM",
"fVOrpvT4V5z",
"0sOoR7PScUv",
"lPy2uRXxCBH",
"Zz1vD5_EddO",
"qu1XKq3KKE1",
"hCa7vhSwF9A",
"Z2TwqJ_JOCG",
"dHRWiOM0Gkb"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response! I plan to keep my score. I hope the clarifications can be addressed in a revised version.",
"This paper presents Deep Synoptic Monte Carlo Planning, the technique behind the 2020 champion Reconnaissance Blind Chess (RBC) program Penumbra. This method samples possible world states to ap... | [
-1,
6,
-1,
-1,
-1,
-1,
7,
7,
5
] | [
-1,
3,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"lPy2uRXxCBH",
"nips_2021_Joy2imuk604",
"dHRWiOM0Gkb",
"Z2TwqJ_JOCG",
"fVOrpvT4V5z",
"hCa7vhSwF9A",
"nips_2021_Joy2imuk604",
"nips_2021_Joy2imuk604",
"nips_2021_Joy2imuk604"
] |
nips_2021_u14Kuxl8fN | Dynamic Analysis of Higher-Order Coordination in Neuronal Assemblies via De-Sparsified Orthogonal Matching Pursuit | Coordinated ensemble spiking activity is widely observable in neural recordings and central in the study of population codes, with hypothesized roles including robust stimulus representation, interareal communication of neural information, and learning and memory formation. Model-free measures of synchrony characterize the coherence of pairwise activity, but not higher-order interactions; this limitation is transcended by statistical models of ensemble spiking activity. However, existing model-based analyses often impose assumptions about the relevance of higher-order interactions and require multiple repeated trials in order to characterize dynamics in the correlational structure of ensemble activity. To address these shortcomings, we propose an adaptive greedy filtering algorithm based on a discretized mark point-process model of ensemble spiking and a corresponding precise statistical inference framework to identify significant coordinated higher-order spiking activity. In the course of developing the statistical inference procedures, we also show that confidence intervals can be constructed for greedily estimated parameters. We demonstrate the utility of our proposed methods on simulated neuronal assemblies. Applied to multi-electrode recordings of human cortical ensembles, our proposed methods provide new insights into the dynamics underlying localized population activity during transitions between brain states.
| accept | This paper presents a model for multi-neuronal spike trains called the mGLM. The observation in time bin $t$ is a binary vector of length $N$, where $N$ is the number of neurons, and it is modeled as a categorical random variable that can take on one of $2^N$ values. The categorical probabilities are modeled via a GLM, with past spiking and external inputs as covariates. The authors then allow for time-varying parameters and propose an OMP-based fitting procedure.
The reviewers were generally favorable of this paper. However, as Reviewer KgmN pointed out, the core idea of a multinomial GLM to capture dependencies in instantaneous counts is one that has been explored previously (Ba et al, 2014), so the main technical advance here is in the time-varying parameters and the OMP fitting procedure. I'd also note that many have recognized the limitations of conditional independence assumptions in standard GLMs for neural data; indeed this is a main motivation for latent variable models like the Poisson LDS (see Macke et al, NeurIPS 2011, e.g.) and the recurrent linear model (Pachitariu, NeurIPS 2011). I'm also skeptical of the exponential complexity in model parameters in the mGLM, and the time-varying aspect seems to exacerbate the problem.
Despite my reservations, I will go along with the reviewers and recommend acceptance. | val | [
"S5ti1ZupQV",
"rLRozKiT9q",
"xuYijwGr8Dy",
"_V5gDn_bDyS",
"28pjIF-FJ-9",
"Yyux9ooMg-b",
"Kl0vp6u181C",
"TL8-ziGYCLA"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your follow-up inquiry about our modeling motivations. First, regarding literature on sparse network interactions, we would like to point out the following representative references\n- From statistical modeling and signal processing: GLMs with sparse history coefficients are suitable for explaining ... | [
-1,
-1,
-1,
-1,
-1,
7,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"rLRozKiT9q",
"xuYijwGr8Dy",
"Kl0vp6u181C",
"TL8-ziGYCLA",
"Yyux9ooMg-b",
"nips_2021_u14Kuxl8fN",
"nips_2021_u14Kuxl8fN",
"nips_2021_u14Kuxl8fN"
] |
nips_2021_824xC-SgWgU | Efficient Training of Retrieval Models using Negative Cache | Factorized models, such as two-tower neural network models, are widely used for scoring (query, document) pairs in information retrieval tasks. These models are typically trained by optimizing the model parameters to score relevant "positive" pairs higher than the irrelevant "negative" ones. While a large set of negatives typically improves the model performance, limited computation and memory budgets place constraints on the number of negatives used during training. In this paper, we develop a novel negative sampling technique for accelerating training with softmax cross-entropy loss. By using cached (possibly stale) item embeddings, our technique enables training with a large pool of negatives with reduced memory and computation. We also develop a streaming variant of our algorithm geared towards very large datasets. Furthermore, we establish a theoretical basis for our approach by showing that updating a very small fraction of the cache at each iteration can still ensure fast convergence. Finally, we experimentally validate our approach and show that it is efficient and compares favorably with more complex, state-of-the-art approaches.
| accept | This paper presents a novel and more efficient strategy for approximate negative sampling in IR system training. Theoretical analysis of convergence is included, and empirical results show efficiency gains over the state-of-the-art. All reviewers recommend acceptance, with the majority voting strongly for acceptance. Generally, reviewers praise clarity and motivation, as well as theoretical and empirical results. One reviewer initially had concerns about generalization to additional datasets. This concern was adequately addressed by author response. Overall I agree with reviewers and recommend acceptance. | train | [
"LyBapISVn7q",
"l2d76m-STmu",
"Fkgc5hSlDE",
"2ha2r_qk_4U",
"QH2lnO7kPVR",
"BNL03RxUkj",
"HgKiDUfHDWq",
"6a_h7m-7QB",
"Wv8fZdHzWs",
"jU-o7CMFSok",
"CUBI_W0CCbe",
"C8PVLJqJVkz",
"jK-Ln3-XdKC",
"umDkeF11RHF",
"hcK4rWPUBRb",
"6_9wUi3YZ72",
"1bUcF2vZpPB",
"p4xbWFMRM7r"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" For the contribution and your answer, this is a valuable paper for the IR/NLP community I think",
" It would be good to have these new results included in the paper",
"This paper is about finding good negative samples for contrastive training. The authors take a caching approach, and show how to make it work ... | [
-1,
-1,
6,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"jK-Ln3-XdKC",
"HgKiDUfHDWq",
"nips_2021_824xC-SgWgU",
"QH2lnO7kPVR",
"C8PVLJqJVkz",
"nips_2021_824xC-SgWgU",
"1bUcF2vZpPB",
"nips_2021_824xC-SgWgU",
"BNL03RxUkj",
"Fkgc5hSlDE",
"hcK4rWPUBRb",
"umDkeF11RHF",
"1bUcF2vZpPB",
"Fkgc5hSlDE",
"p4xbWFMRM7r",
"BNL03RxUkj",
"nips_2021_824xC-S... |
nips_2021_d9FjReQr-q- | Understanding Partial Multi-Label Learning via Mutual Information | To deal with ambiguities in partial multilabel learning (PML), state-of-the-art methods perform disambiguation by identifying ground-truth labels directly. However, there is an essential question: "Can the ground-truth labels be identified precisely?" If yes, "How can the ground-truth labels be found?" This paper provides affirmative answers to these questions. Instead of adopting a hand-made heuristic strategy, we propose a novel Mutual Information Label Identification for Partial Multilabel Learning (MILI-PML) method, which is derived from a clear probabilistic formulation, can be easily interpreted theoretically from the mutual information perspective, and naturally incorporates feature/label relevancy considerations. Extensive experiments on synthetic and real-world datasets clearly demonstrate the superiority of the proposed MILI-PML.
| accept | The author response clarified the main concerns regarding the paper, notably regarding the assumptions and the presentation. The reviewers agree that the paper makes a strong contribution to the PML setting, and that the experiments convincingly support the approach. | train | [
"a8uI-gD3uWl",
"pHcSZa0ZdK_",
"0N8I-N8acL-",
"v851TJ0_wRs",
"LU_VYIhH7Xx",
"ickSDwRtBtx",
"-U0i5vmb1s",
"WmpkcDTV8Z"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper addresses the problem setting of partial multi-label learning (PML). The task is to select the ground-truth labels from a set of label candidates and disregard false-positive or noisy labels that may occur in the labelling process. The authors propose MILI-PML that exploits dependencies between labels an... | [
6,
-1,
-1,
-1,
-1,
7,
8,
8
] | [
3,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nips_2021_d9FjReQr-q-",
"a8uI-gD3uWl",
"WmpkcDTV8Z",
"-U0i5vmb1s",
"ickSDwRtBtx",
"nips_2021_d9FjReQr-q-",
"nips_2021_d9FjReQr-q-",
"nips_2021_d9FjReQr-q-"
] |
nips_2021_CeByDMy0YTL | Environment Generation for Zero-Shot Compositional Reinforcement Learning | Many real-world problems are compositional – solving them requires completing interdependent sub-tasks, either in series or in parallel, that can be represented as a dependency graph. Deep reinforcement learning (RL) agents often struggle to learn such complex tasks due to the long time horizons and sparse rewards. To address this problem, we present Compositional Design of Environments (CoDE), which trains a Generator agent to automatically build a series of compositional tasks tailored to the RL agent’s current skill level. This automatic curriculum not only enables the agent to learn more complex tasks than it could have otherwise, but also selects tasks where the agent’s performance is weak, enhancing its robustness and ability to generalize zero-shot to unseen tasks at test-time. We analyze why current environment generation techniques are insufficient for the problem of generating compositional tasks, and propose a new algorithm that addresses these issues. Our results assess learning and generalization across multiple compositional tasks, including the real-world problem of learning to navigate and interact with web pages. We learn to generate environments composed of multiple pages or rooms, and train RL agents capable of completing a wide range of complex tasks in those environments. We contribute two new benchmark frameworks for generating compositional tasks, compositional MiniGrid and gMiniWoB for web navigation. CoDE yields a 4x higher success rate than the strongest baseline, and demonstrates strong performance on real websites learned from 3500 primitive tasks.
| accept | This paper makes several solid contributions — 1) a method to decompose tasks in a compositional manner, so as to enable RL agents to generalize to unseen, complex test scenarios, and 2) new open-source benchmarks for testing compositional generalization in RL agents, on gridworld and web-based domains. The reviewers appreciated the novelty of ideas presented and agree that the paper is solving an important problem. The authors in their responses promise to add in ablation studies and additional clarifications on the framework (state space, actions, etc.), better comparison to related work on curriculum/self-supervised learning, and adding more complex environments to the open source benchmarks. With these changes, I think this paper will be quite valuable to the community and enable future research on compositional generalization in RL. | train | [
"KY6atilQwoC",
"I7CW1r2ZcWW",
"ds8EItboH-7",
"pTqk8kN-ZX3",
"vgoEQcFcDOh",
"thG86RzTdLM",
"POqYINhuPpM",
"d6FL7YwNQYK",
"DCBh0zSHOX",
"srUKXqu0eat",
"_Kjzx3d2TUJ",
"mnpGJM0hRgo",
"JtLdYhjFqQT",
"S8prBssoBVl"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for this valuable and encouraging feedback!\n\nWe will work on adding more complex environments with more domains to the open source benchmark that we will release. We would like to also remind that using a much larger real primitive database in our framework, we see a bigger challenge and a drop in per... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"ds8EItboH-7",
"pTqk8kN-ZX3",
"mnpGJM0hRgo",
"d6FL7YwNQYK",
"nips_2021_CeByDMy0YTL",
"vgoEQcFcDOh",
"srUKXqu0eat",
"_Kjzx3d2TUJ",
"nips_2021_CeByDMy0YTL",
"S8prBssoBVl",
"vgoEQcFcDOh",
"JtLdYhjFqQT",
"nips_2021_CeByDMy0YTL",
"nips_2021_CeByDMy0YTL"
] |
nips_2021_Tc6Uk03Te7g | Optimizing Conditional Value-At-Risk of Black-Box Functions | This paper presents two Bayesian optimization (BO) algorithms with theoretical performance guarantees to maximize the conditional value-at-risk (CVaR) of a black-box function: CV-UCB and CV-TS, which are based on the well-established principle of optimism in the face of uncertainty and Thompson sampling, respectively. To achieve this, we develop an upper confidence bound of CVaR and prove the no-regret guarantee of CV-UCB by utilizing an interesting connection between CVaR and value-at-risk (VaR). For CV-TS, though it is straightforwardly performed with Thompson sampling, bounding its Bayesian regret is non-trivial because it requires a tail expectation bound for the distribution of CVaR of a black-box function, which has not been shown in the literature. The performances of both CV-UCB and CV-TS are empirically evaluated in optimizing the CVaR of synthetic benchmark functions and simulated real-world optimization problems.
| accept | This paper addresses Bayesian optimization to search for a maximizer of the CVaR of a black-box objective function. Two variants, inspired by UCB and Thompson sampling, are presented. For CV-UCB, a cumulative regret bound is proved, and for CV-TS, an upper bound on the Bayesian cumulative regret is proved. The paper is well written and the topic is interesting. A few reviewers pointed out the weakness of the empirical evaluations. The authors promised to include an additional experiment whose preliminary results are reported in the rebuttal. The rebuttal was not fully satisfactory, but the effort in providing preliminary results was appreciated. During the committee discussion period, two of the reviewers raised their scores. Some concerns about the limited novelty over previous work still remain. However, the paper presents a solid theoretical guarantee, which might not be obviously derived from known results.
| train | [
"HZJz7X81fFa",
"pdpgjKHrRG",
"LbuewDX2dv1",
"gWnVyIwB5qw",
"k0T7D69ZG9D",
"iLO1mVBeg2H",
"UqTCN9kwuRY",
"gTrmWW33Ux3",
"iHRnjayj4oC",
"nYL3iJXnV1K",
"wJxGzARpPX_",
"KBd69bWadh",
"ymA54d-TakD",
"v4_FVKPs0H2",
"C5sG5e1A1D",
"ULyjGmLde7h",
"rd0PhZUCuac",
"ZyYM9XQHFEQ"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your time and effort in providing us constructive responses and helpful suggestions. We will consider them seriously in the revised paper and release the code.",
" Dear authors,\n\nThank you for conducting this new experiment. I still think that your empirical evaluation requires some polishing bu... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"pdpgjKHrRG",
"iLO1mVBeg2H",
"nips_2021_Tc6Uk03Te7g",
"k0T7D69ZG9D",
"v4_FVKPs0H2",
"gTrmWW33Ux3",
"KBd69bWadh",
"iHRnjayj4oC",
"nYL3iJXnV1K",
"ULyjGmLde7h",
"nips_2021_Tc6Uk03Te7g",
"C5sG5e1A1D",
"ZyYM9XQHFEQ",
"rd0PhZUCuac",
"wJxGzARpPX_",
"LbuewDX2dv1",
"nips_2021_Tc6Uk03Te7g",
... |
nips_2021_N5hQI_RowVA | E(n) Equivariant Normalizing Flows | This paper introduces a generative model equivariant to Euclidean symmetries: E(n) Equivariant Normalizing Flows (E-NFs). To construct E-NFs, we take the discriminative E(n) graph neural networks and integrate them as a differential equation to obtain an invertible equivariant function: a continuous-time normalizing flow. We demonstrate that E-NFs considerably outperform baselines and existing methods from the literature on particle systems such as DW4 and LJ13, and on molecules from QM9 in terms of log-likelihood. To the best of our knowledge, this is the first flow that jointly generates molecule features and positions in 3D.
| accept | All reviewers agree on acceptance. This work makes a highly significant contribution by introducing equivariant graph neural networks into the (equivariant) continuous normalizing flow framework to obtain invertible equivariant functions. The authors show that their flow beats prior equivariant baselines and allows the sampling of molecular configurations with positions, atom types, and charges. A clear acceptance decision.
Pros:
- High significant contribution using equivariant graph neural networks and continuous normalizing flows to obtain invertible equivariant functions.
- The model can learn distributions over molecules, considering positions as well as atom types, a clear improvement over previous related normalizing flows, which were focused on positional data.
- The reported molecular generation experiments serve as a clear demonstration for the benefits of their method over other normalizing flows.
- The theory of equivariant flows is heavily based on the derivations of Köhler et al. (2019, 2020) and Rezende et al. (2019), but the authors contribute a proof of the equivalence of the Jacobian on the used subspace and on the whole space.
- Sets a new baseline for molecular structure generation.
- Very well-written paper. The authors motivate this work well, describe all important details of their method, and properly discuss limitations and impact of their work.
Limitations:
- Remains unclear how it performs in comparison to other, nonflow, generative approaches.
- Energies of the generated molecules are not reported.
The above list of pros and the few existing limitations make this work highly relevant and worth an oral presentation. | train | [
"oAFw02xPefF",
"JAArzTgpG8X",
"82RMhb6We_9",
"FnjEPgqBj9D",
"dLPMxBYSJ47",
"9Hl7gkQQzq",
"uyUJX8gTq92",
"onOvuHvktx"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for including the additional information about the Hutchinson’s estimator and especially the stability measure in the final paper. \nI agree that it is difficult to find a suitable evaluation metric, and I am looking forward to the detailed explanation of your method in the final paper.\n\nMy rating rem... | [
-1,
7,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"9Hl7gkQQzq",
"nips_2021_N5hQI_RowVA",
"dLPMxBYSJ47",
"JAArzTgpG8X",
"onOvuHvktx",
"uyUJX8gTq92",
"nips_2021_N5hQI_RowVA",
"nips_2021_N5hQI_RowVA"
] |
nips_2021_sRojdWhXJx | Revitalizing CNN Attention via Transformers in Self-Supervised Visual Representation Learning | Studies on self-supervised visual representation learning (SSL) improve encoder backbones to discriminate training samples without labels. While CNN encoders via SSL achieve comparable recognition performance to those via supervised learning, their network attention is under-explored for further improvement. Motivated by the transformers that explore visual attention effectively in recognition scenarios, we propose a CNN Attention REvitalization (CARE) framework to train attentive CNN encoders guided by transformers in SSL. The proposed CARE framework consists of a CNN stream (C-stream) and a transformer stream (T-stream), where each stream contains two branches. C-stream follows an existing SSL framework with two CNN encoders, two projectors, and a predictor. T-stream contains two transformers, two projectors, and a predictor. T-stream connects to the CNN encoders and is in parallel to the remaining C-stream. During training, we perform SSL in both streams simultaneously and use the T-stream output to supervise C-stream. The features from CNN encoders are modulated in T-stream for visual attention enhancement and become suitable for the SSL scenario. We use these modulated features to supervise C-stream for learning attentive CNN encoders. To this end, we revitalize CNN attention by using transformers as guidance. Experiments on several standard visual recognition benchmarks, including image classification, object detection, and semantic segmentation, show that the proposed CARE framework improves CNN encoder backbones to state-of-the-art performance.
| accept | The rebuttal addressed all of the reviewers' concerns, and all reviewers recommend acceptance. The AC agrees with this recommendation. | train | [
"3dFQxGqTaFx",
"RvZAG2Tc7Mm",
"a_e6WlRhzz",
"V8JgSNgQrtq",
"9TaFvsFN2yn",
"ZGS6IxXfzPR",
"eUoy5PHTHl",
"tmr19zLz9Zd",
"2uZstnAeJ3N"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"In this paper, a self-supervised visual representation learning approach is developed to improve CNN backbone encoders. The transformers are introduced to improve visual attention abilities of transformers. The evaluation on the experiments has shown the proposed approach achieves favorable performance. The main ... | [
8,
7,
8,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_sRojdWhXJx",
"nips_2021_sRojdWhXJx",
"nips_2021_sRojdWhXJx",
"nips_2021_sRojdWhXJx",
"3dFQxGqTaFx",
"2uZstnAeJ3N",
"a_e6WlRhzz",
"RvZAG2Tc7Mm",
"nips_2021_sRojdWhXJx"
] |
nips_2021_vU96vWPrWL | A Critical Look at the Consistency of Causal Estimation with Deep Latent Variable Models | Using deep latent variable models in causal inference has attracted considerable interest recently, but an essential open question is their ability to yield consistent causal estimates. While they have demonstrated promising results and theory exists on some simple model formulations, we also know that causal effects are not even identifiable in general with latent variables. We investigate this gap between theory and empirical results with analytical considerations and extensive experiments under multiple synthetic and real-world data sets, using the causal effect variational autoencoder (CEVAE) as a case study. While CEVAE seems to work reliably under some simple scenarios, it does not estimate the causal effect correctly with a misspecified latent variable or a complex data distribution, as opposed to its original motivation. Hence, our results show that more attention should be paid to ensuring the correctness of causal estimates with deep latent variable models.
| accept | The paper takes a deep dive into issues of consistency and so-called "model identifiability" of CEVAE, a method for causal inference with proxy variables. The authors outline multiple possible failure modes of CEVAE, which could also extend to other existing deep learning based methods for proxy-based effect estimation and possibly even more generally to other latent variable models. These findings are validated over a set of generally well-designed experiments. This is one of only a few papers which look into the assumptions underlying the use of flexible general-purpose latent variable models such as VAEs for learning tasks which go beyond prediction. Further, it is the first to specifically address the problem of these assumptions when working at the intersection of VAEs and causal inference, thus highlighting several points which should be of interest to all future work in the sub-field.
The reviewers brought up several points relating to the clarity of presentation which I trust the authors will address in the final version; in addition, I would appreciate a discussion of methods other than CEVAE for which some of the findings might be relevant.
| train | [
"dE72Qbwbkp",
"1qYJnyTpfDm",
"AavkgLLi5l",
"LqtXAnvTNp",
"2M2VRxeGUEx",
"UxWmmfAH6ne",
"SCBwMdfUXAr",
"dyOMPc9b-hB"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper investigated the consistency property of CEVAE, mainly under unknown parametric form of generative model (p(x|z),p(t|z),p(y|t,z)), the unknown distribution of latent variables, and correlated proxies. Theoretical analysis under some special cases is given. Most conclusions are based on empirical observa... | [
6,
-1,
-1,
-1,
-1,
6,
6,
8
] | [
3,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"nips_2021_vU96vWPrWL",
"UxWmmfAH6ne",
"SCBwMdfUXAr",
"dyOMPc9b-hB",
"dE72Qbwbkp",
"nips_2021_vU96vWPrWL",
"nips_2021_vU96vWPrWL",
"nips_2021_vU96vWPrWL"
] |
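Editorial aside: the CEVAE setting discussed above assumes a latent confounder that is observed only through noisy proxies. A minimal, hypothetical simulation of such a data-generating process is sketched below; all coefficients and noise scales are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)                                        # latent confounder
x = z[:, None] + 0.5 * rng.normal(size=(n, 3))                # noisy proxies of z
t = (rng.random(n) < 1.0 / (1.0 + np.exp(-z))).astype(float)  # treatment depends on z
y = 2.0 * t + z + rng.normal(size=n)                          # outcome; true effect is 2.0

# The naive difference of means is confounded by z and overestimates the effect:
print(y[t == 1].mean() - y[t == 0].mean())
```

A consistent method must recover the effect of 2.0 by deconfounding through the proxies, which is precisely the ability the paper stress-tests.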
nips_2021_0NXUSlb6oEu | Improving Robustness using Generated Data | Sven Gowal, Sylvestre-Alvise Rebuffi, Olivia Wiles, Florian Stimberg, Dan Andrei Calian, Timothy A. Mann | accept | All reviewers agree that the paper should be accepted. In addition, the empirical results are strong (substantial robustness gains on CIFAR-10).
It was discovered late in the review process that this paper makes use of the 80 million tiny images dataset, which has been retracted (https://groups.csail.mit.edu/vision/TinyImages/). Following the NeurIPS ethical guidelines (https://neurips.cc/public/EthicsGuidelines), this dataset should not be used. For this reason, the paper is being conditionally accepted.
UPDATE: The authors have revised the paper to satisfy the conditions and it has now been officially accepted. | train | [
"vRscI7c55dc",
"YzVhuen8bHd",
"aJls8haSM_C",
"KXBCwSGKGje",
"KDQ_76STn5B",
"7_7dHW0swNA",
"IMHaJOnaNP7",
"0sHLbvYwrt2",
"9OmSyHL2HbM",
"b3QI7Tlhshi",
"pfCL8dpiT1u",
"ryWhilG5NzF",
"XATIexA2n5Z",
"YVU1iWRY3k",
"KKTxAGoZ3oD",
"wB2hYNg8I-O"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This work shows that data augmentation using good generative models helps improve adversarial robustness. Various experiments are conducted to show the importance of the quality of the generated images. The proposed method achieves state-of-the-art adversarial accuracy on CIFAR-10 and CIFAR-100 datasets, while the... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_0NXUSlb6oEu",
"9OmSyHL2HbM",
"pfCL8dpiT1u",
"7_7dHW0swNA",
"0sHLbvYwrt2",
"YVU1iWRY3k",
"nips_2021_0NXUSlb6oEu",
"KKTxAGoZ3oD",
"ryWhilG5NzF",
"nips_2021_0NXUSlb6oEu",
"XATIexA2n5Z",
"wB2hYNg8I-O",
"b3QI7Tlhshi",
"vRscI7c55dc",
"IMHaJOnaNP7",
"nips_2021_0NXUSlb6oEu"
] |
nips_2021_nZnYVf0k0yY | An Analysis of Constant Step Size SGD in the Non-convex Regime: Asymptotic Normality and Bias | Structured non-convex learning problems, for which critical points have favorable statistical properties, arise frequently in statistical machine learning. Algorithmic convergence and statistical estimation rates are well-understood for such problems. However, quantifying the uncertainty associated with the underlying training algorithm is not well-studied in the non-convex setting. In order to address this shortcoming, in this work, we establish an asymptotic normality result for the constant step size stochastic gradient descent (SGD) algorithm---a widely used algorithm in practice. Specifically, based on the relationship between SGD and Markov Chains [DDB19], we show that the average of SGD iterates is asymptotically normally distributed around the expected value of their unique invariant distribution, as long as the non-convex and non-smooth objective function satisfies a dissipativity property. We also characterize the bias between this expected value and the critical points of the objective function under various local regularity conditions. Together, the above two results could be leveraged to construct confidence intervals for non-convex problems that are trained using the SGD algorithm.
| accept | This paper studies the asymptotic normality and the bias of the constant step size stochastic gradient descent (SGD) algorithm in the non-convex and non-smooth setting. It shows that if the non-convex and non-smooth objective function satisfies a dissipativity property, the average of the SGD iterates is asymptotically normally distributed around the expected value of their unique invariant distribution. It also characterizes the bias between this expected value and the critical points of the objective function under different local regularity conditions. The paper is well-written and studies an important problem. The authors are encouraged to incorporate the feedback from the reviewers in the revision. | train | [
"TFuhN1hZbF",
"d7JR6uSDOyW",
"oHeW7doYaDe",
"Twx2dN2DwCO",
"KG_TAJpoiTF",
"vB04mLaFZ37",
"7rc-aznfSM",
"XVvKxYD6LkB",
"4rMtjXzuGLw"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for the detailed rebuttal. They already addressed my concerns and I am happy with the current paper. The authors can try to incorporate my comments and other reviewers' comments in their revision.",
" We thank the reviewer for their positive feedback.\n\n1. We agree that a sub... | [
-1,
-1,
-1,
-1,
-1,
4,
8,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"oHeW7doYaDe",
"4rMtjXzuGLw",
"XVvKxYD6LkB",
"7rc-aznfSM",
"vB04mLaFZ37",
"nips_2021_nZnYVf0k0yY",
"nips_2021_nZnYVf0k0yY",
"nips_2021_nZnYVf0k0yY",
"nips_2021_nZnYVf0k0yY"
] |
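Editorial aside: the result above concerns averages of constant step size SGD iterates. The following toy illustration runs constant-step SGD on a noisy quadratic and averages the iterates; the objective, step size, and horizon are illustrative choices, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, steps = 0.1, 50_000
x, running_sum = 0.0, 0.0

for _ in range(steps):
    grad = x + rng.normal()   # stochastic gradient of f(x) = 0.5 * x**2
    x -= gamma * grad         # constant step size update
    running_sum += x          # accumulate for iterate averaging

print("averaged iterate:", running_sum / steps)  # concentrates near the minimizer 0
```

In this convex toy case the average is essentially unbiased; the paper's contribution is to quantify normality and bias in non-convex, non-smooth settings.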
nips_2021_ZqabiikWeyt | Learning to Learn Graph Topologies | Xingyue Pu, Tianyue Cao, Xiaoyun Zhang, Xiaowen Dong, Siheng Chen | accept | The paper contains much novelty in the formulation of topological constraints and the method of unrolling the iterative optimization procedure, and demonstrates favorable empirical results. Some reviewers raised unclear points in the paper, but the authors' feedback clarified them in detail. We judge the paper acceptable for NeurIPS, but we hope the feedback will be reflected in the final version. | train | [
"DmVCe5U0eWc",
"QnXKTQ_vt3B",
"XvuQbGjwpNP",
"-lAmGCkqfi",
"jYYQgtuhxkL",
"pcaIL_hGAOd",
"_ZXJKLrTxz",
"smMRTfPa56O",
"yBwXkfYkCHN"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I have read the authors' response, and would like to thank the authors for the detailed information. It answered my questions raised in the initial review.",
"The authors study graph structure learning. The paper's main contribution is to improve the unrolling primal-dual iterative algorithm used to solve equat... | [
-1,
6,
6,
-1,
-1,
-1,
-1,
8,
7
] | [
-1,
3,
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"pcaIL_hGAOd",
"nips_2021_ZqabiikWeyt",
"nips_2021_ZqabiikWeyt",
"XvuQbGjwpNP",
"QnXKTQ_vt3B",
"yBwXkfYkCHN",
"smMRTfPa56O",
"nips_2021_ZqabiikWeyt",
"nips_2021_ZqabiikWeyt"
] |
nips_2021_tvDBe6K8L5o | Invertible Tabular GANs: Killing Two Birds with One Stone for Tabular Data Synthesis | Tabular data synthesis has received wide attention in the literature. This is because available data is often limited, incomplete, or cannot be obtained easily, and data privacy is becoming increasingly important. In this work, we present a generalized GAN framework for tabular synthesis, which combines the adversarial training of GANs and the negative log-density regularization of invertible neural networks. The proposed framework can be used for two distinctive objectives. First, we can further improve the synthesis quality by decreasing the negative log-density of real records in the process of adversarial training. On the other hand, by increasing the negative log-density of real records, realistic fake records can be synthesized in a way that they are not too close to real records, reducing the chance of potential information leakage. We conduct experiments with real-world datasets for classification, regression, and privacy attacks. In general, the proposed method demonstrates the best synthesis quality (in terms of task-oriented evaluation metrics, e.g., F1) when decreasing the negative log-density during the adversarial training. When increasing the negative log-density, our experimental results show that the distance between real and fake records increases, enhancing robustness against privacy attacks.
| accept | The reviewers have come to an agreement that the paper is original in content and provides a thorough experimental development of the technique, with adequate comparisons to strong baselines.
Even though the paper is already polished, I would still recommend polishing the narrative a bit further with regard to the discussion with RvTau and your reply to 1ZTc. The approach is sophisticated and could benefit from it.
AC.
| train | [
"iVJjXjiqMHT",
"dVwl9BQ3BZ",
"SHIDCrz1nUc",
"GGyfaV0lZ7A",
"I1qRRt5N9eR",
"DDe8B3ZaQ7",
"bmDuAIrEqLV",
"EHfdXa9Z1YG",
"ntnjWgvEqD"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks you for your response.\nI am happy with your reply. Thank you for pointing out differences between your approaches. I somehow missed the specification of IT-GAN.\n\nI'll be updating my score accordingly.",
"The paper proposes a new architecture for tabular data synthesis. In particular, the proposed arch... | [
-1,
7,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"I1qRRt5N9eR",
"nips_2021_tvDBe6K8L5o",
"bmDuAIrEqLV",
"ntnjWgvEqD",
"dVwl9BQ3BZ",
"EHfdXa9Z1YG",
"nips_2021_tvDBe6K8L5o",
"nips_2021_tvDBe6K8L5o",
"nips_2021_tvDBe6K8L5o"
] |
nips_2021_DYpstddnfN | Reducing Collision Checking for Sampling-Based Motion Planning Using Graph Neural Networks | Sampling-based motion planning is a popular approach in robotics for finding paths in continuous configuration spaces. Checking collisions with obstacles is the major computational bottleneck in this process. We propose new learning-based methods for reducing collision checking to accelerate motion planning by training graph neural networks (GNNs) that perform path exploration and path smoothing. Given random geometric graphs (RGGs) generated from batch sampling, the path exploration component iteratively predicts collision-free edges to prioritize their exploration. The path smoothing component then optimizes paths obtained from the exploration stage. The methods benefit from the ability of GNNs to capture geometric patterns from RGGs through batch sampling and generalize better to unseen environments. Experimental results show that the learned components can significantly reduce collision checking and improve overall planning efficiency in challenging high-dimensional motion planning tasks.
| accept | The paper seeks to improve the computational efficiency of sample-based motion planning by reducing the number of collision checks. The paper proposes an algorithm that learns to prioritize the order in which edges in a sample-based planning graph are checked for collision. Following this prioritization, a path is generated between the start and goal nodes and then smoothed to reduce cost. Results on a variety of different planning domains demonstrate that the proposed approach is more computationally efficient than contemporary methods, while still providing paths with similar cost.
The paper considers an important problem in sample-based robot motion planning, namely reducing the computational cost associated with collision checking. The reviewers agree that the work is well motivated and that the proposal to use graph neural networks to guide edge evaluations is technically sound, novel, and principled. The experiments effectively convey the advantages of this strategy. The reviewers raised some initial concerns, notably those related to the merits relative to lazy motion planning strategies, most if not all of which were adequately addressed in the author response and discussion phase. The authors are encouraged to make sure that the next revision of the paper reflects this discussion. | train | [
"sdsBRfPrHQ_",
"w6STCWWkXG",
"7PCOkJuLBUp",
"7icriwRkTzo",
"S8zXiQbN6AE",
"vXT9p68VxBe",
"GczG2mv8eZs",
"wRhk8JDZayS",
"V58vRaYEbU",
"zsCOIDeu-do"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" We thank the reviewer for the comments and for reading our response. We will be sure to update the paper according to the suggestions.",
"At a high-level, this work proposes GNNs as models for biasing the sampling of edges and nodes to connect in a random geometric graph. The chief goal is to reduce the amount ... | [
-1,
6,
-1,
-1,
-1,
7,
-1,
-1,
-1,
7
] | [
-1,
3,
-1,
-1,
-1,
5,
-1,
-1,
-1,
4
] | [
"7PCOkJuLBUp",
"nips_2021_DYpstddnfN",
"GczG2mv8eZs",
"S8zXiQbN6AE",
"V58vRaYEbU",
"nips_2021_DYpstddnfN",
"w6STCWWkXG",
"zsCOIDeu-do",
"vXT9p68VxBe",
"nips_2021_DYpstddnfN"
] |
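Editorial aside: the method above operates on random geometric graphs (RGGs) and tries to spend as few collision checks as possible. Below is a minimal sketch of RGG construction in a 2D toy workspace with a counter standing in for collision checks; the connection radius, the disk obstacle, and the deliberately crude midpoint-only edge test are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
points = rng.random((200, 2))   # batch-sampled configurations in [0, 1]^2
radius = 0.15
collision_checks = 0

def edge_in_collision(p, q):
    # Toy test: only the edge midpoint is checked against a disk obstacle
    # of radius 0.2 centered at (0.5, 0.5). Real planners check densely.
    mid = 0.5 * (p + q)
    return np.linalg.norm(mid - 0.5) < 0.2

free_edges = []
for i in range(len(points)):
    for j in range(i + 1, len(points)):
        if np.linalg.norm(points[i] - points[j]) < radius:
            collision_checks += 1
            if not edge_in_collision(points[i], points[j]):
                free_edges.append((i, j))

print(len(free_edges), "free edges found with", collision_checks, "collision checks")
```

The paper's GNN replaces the exhaustive loop above with a learned prioritization so that far fewer edges ever reach the collision checker.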
nips_2021__gG02Imo9jO | Sample Complexity Bounds for Active Ranking from Multi-wise Comparisons | Wenbo Ren, Jia Liu, Ness Shroff | accept | This paper considers the problem of ranking from multi-wise comparisons, where the goal is either to find the set of top-k items or to find a complete ranking over the items. The paper considers a model for multi-wise comparisons where the feedback is either (1) deterministic, where the correct result is returned, or (2) probabilistic, where the feedback is correct with probability larger than 1/2. The paper assumes that there is an underlying ordering over items and considers two models of feedback: (1) winner feedback, where the best item in a set is returned; (2) full-ranking feedback, where a complete ordering over the items is returned. The paper proposes lower and upper bounds on sample complexity for each of these 8 problem settings and outlines the reduction in sample complexity due to the use of multi-wise comparisons. There is a near consensus amongst reviewers that this is a "borderline" work. | train | [
"pyBOtdeYxcF",
"fc8fvE86Arn",
"Dqxf2lL39Fv",
"vis2zK1OBgL",
"R6S5okv22gY",
"-sii72zaCyV",
"s630BiJVmW",
"kjmIrm8iLAk",
"l8QWvgyEJOr"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer Qykc,\n\nBased on your comments on modeling assumptions and the technical novelty of our paper, we have tried our best to address your concerns. Please see our response last time.\n\nSince there are only a few days left in the discussion stage, could you kindly check and see whether our responses ha... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"s630BiJVmW",
"vis2zK1OBgL",
"R6S5okv22gY",
"l8QWvgyEJOr",
"kjmIrm8iLAk",
"s630BiJVmW",
"nips_2021__gG02Imo9jO",
"nips_2021__gG02Imo9jO",
"nips_2021__gG02Imo9jO"
] |
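Editorial aside: the feedback models in the meta-review above are easy to simulate. A hypothetical winner-feedback oracle, correct with probability p > 1/2, is sketched below; the fixed ranking over items and the value of p are illustrative, not taken from the paper.

```python
import random

random.seed(3)
true_rank = {item: r for r, item in enumerate("abcdefgh")}  # lower rank = better

def winner_feedback(query_set, p=0.8):
    """Return the best item of query_set with probability p; otherwise return
    a uniformly random other item (one simple probabilistic feedback model)."""
    best = min(query_set, key=true_rank.get)
    if len(query_set) == 1 or random.random() < p:
        return best
    return random.choice([i for i in query_set if i != best])

print([winner_feedback(["a", "d", "g"]) for _ in range(5)])
```

An active ranking algorithm chooses which sets to query; the sample complexity bounds count how many such oracle calls are needed.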
nips_2021_fWLDGNIOhYU | Efficient Bayesian network structure learning via local Markov boundary search | We analyze the complexity of learning directed acyclic graphical models from observational data in general settings without specific distributional assumptions. Our approach is information-theoretic and uses a local Markov boundary search procedure in order to recursively construct ancestral sets in the underlying graphical model. Perhaps surprisingly, we show that for certain graph ensembles, a simple forward greedy search algorithm (i.e. without a backward pruning phase) suffices to learn the Markov boundary of each node. This substantially improves the sample complexity, which we show is at most polynomial in the number of nodes. This is then applied to learn the entire graph under a novel identifiability condition that generalizes existing conditions from the literature. As a matter of independent interest, we establish finite-sample guarantees for the problem of recovering Markov boundaries from data. Moreover, we apply our results to the special case of polytrees, for which the assumptions simplify, and provide explicit conditions under which polytrees are identifiable and learnable in polynomial time. We further illustrate the performance of the algorithm, which is easy to implement, in a simulation study. Our approach is general, works for discrete or continuous distributions without distributional assumptions, and as such sheds light on the minimal assumptions required to efficiently learn the structure of directed graphical models from data.
| accept | The final reviewer scores are borderline, with the merits being valuable contributions to the ongoing literature on Bayesian network learning (e.g., generalising the "equal variance" condition, which was a recent success case), but also some concerns. After carefully discussing and considering these concerns, my overall recommendation is "weak accept". The reasons are as follows:
- One concern was that Condition 2 is too strong. However, this was largely a misunderstanding, and the authors did indeed already point out in the paper that the condition is *sufficient* for Condition 1, but that the latter is more general. Addressing this potential confusion should only be a minor change, e.g., Condition 2 could be labeled with (Sufficient Condition for Condition 1), or changed to regular text (instead of bold-headed environment), etc.
- Another concern was that the d^2 dependence seems high and there are no lower bounds to compare against. However, this dependence appears even in simpler settings (e.g., assuming linearity), and lower bounds are largely open, so insisting on these may be asking too much. Overall, this setup appears to be much more of a work in progress compared to learning undirected Markov Random Fields, so it is natural that more gaps still remain. Having said this, the authors should carefully compare to the most related works (as they partly did in the discussion) in the revised paper, as currently the related work seems to be written without enough emphasis on the relevant sample complexities and assumptions. Please also highlight the relevant limitations/gaps/open problems carefully.
Beyond these specific points, I ask the authors to carefully consider all reviewer comments when editing the paper. | train | [
"tv5HGeCHOLd",
"R6DoEhLIsYA",
"UrFW0nn6Cne",
"NQHbPggzRux",
"BwM3hs8A-jn",
"XXPnwF3eXF5",
"DXauYAUewc8",
"-m0hffB1M4",
"mjUJCGggjie",
"ISqbEso0lwz",
"_ZEAYUU8Vw",
"4Lkh14WxiBV",
"tAbXqG0UAR",
"Hew0AptLot7",
"ux7qGxKby-Y",
"L2tS8XnjFgs"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper considers learning DAGs from iid sampled data without specific distributional assumptions. The sample complexity is derived and is shown to be polynomial in the number of nodes. One advantage is that the faithfulness condition that is often assumed in this line of work is no longer required. Strengths... | [
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2
] | [
"nips_2021_fWLDGNIOhYU",
"nips_2021_fWLDGNIOhYU",
"R6DoEhLIsYA",
"nips_2021_fWLDGNIOhYU",
"tAbXqG0UAR",
"DXauYAUewc8",
"-m0hffB1M4",
"mjUJCGggjie",
"ISqbEso0lwz",
"_ZEAYUU8Vw",
"tv5HGeCHOLd",
"R6DoEhLIsYA",
"L2tS8XnjFgs",
"ux7qGxKby-Y",
"nips_2021_fWLDGNIOhYU",
"nips_2021_fWLDGNIOhYU"
... |
nips_2021_X7GEA3KiJiH | Learning Dynamic Graph Representation of Brain Connectome with Spatio-Temporal Attention | Functional connectivity (FC) between regions of the brain can be assessed by the degree of temporal correlation measured with functional neuroimaging modalities. Based on the fact that these connectivities build a network, graph-based approaches for analyzing the brain connectome have provided insights into the functions of the human brain. The development of graph neural networks (GNNs) capable of learning representations from graph-structured data has led to increased interest in learning the graph representation of the brain connectome. Although recent attempts to apply GNNs to the FC network have shown promising results, there is still a common limitation that they usually do not incorporate the dynamic characteristics of the FC network, which fluctuates over time. In addition, a few studies that have attempted to use dynamic FC as an input for the GNN reported a reduction in performance compared to static FC methods, and did not provide temporal explainability. Here, we propose STAGIN, a method for learning dynamic graph representation of the brain connectome with spatio-temporal attention. Specifically, a temporal sequence of brain graphs is input to STAGIN to obtain the dynamic graph representation, while novel READOUT functions and the Transformer encoder provide spatial and temporal explainability with attention, respectively. Experiments on the HCP-Rest and the HCP-Task datasets demonstrate exceptional performance of our proposed method. Analysis of the spatio-temporal attention also provides an interpretation consistent with neuroscientific knowledge, which further validates our method. Code is available at https://github.com/egyptdj/stagin
| accept | The paper introduces a method for learning dynamic graph representation of brain connectome that achieves state-of-the-art performance while providing spatio-temporal explainability.
The work was appreciated by reviewers, as it put together novel tools (scaled dot attention / squeeze excitation, self attention) with a graph neural network model to analyze dynamic graph data, specifically 4D fMRI. The application to the brain imaging data analysis was thought to be convincing. The reviewers also consider that the paper is well-written.
However, it was found that
• not enough baselines had been used in applications, questioning the practical usefulness of the method;
• some technical details were still somewhat arbitrary, questioning the generalizability of the method;
• comparisons against alternatives, in particular attention-based methods, were not sufficient.
Overall, the authors did a good job addressing the comments raised, in particular regarding comparisons/benchmarks against the state of the art. However, identifying more difficult benchmarks would indeed make the paper more convincing overall.
The final consensus is thus a weak accept. | train | [
"qEsW9E_zm6b",
"bnxSzflgayN",
"nTBVunpSdyQ",
"7hwP9mA5SSQ",
"z6Eekv4ZyYg",
"zw7r09F38zH",
"6BXKtsASUcp",
"txvbpgbfnoV",
"rWf-oEdwMcq",
"iosqeJvF-hJ",
"ag_NLsqhOl",
"3f_deHiQr5",
"6tC9eo-xS1j",
"nYIc_7di55",
"0IepzLrhyV3",
"UJwsZQXk-CK",
"0zaSasLYf5P",
"I2l-cvl-D3_",
"JsTzhL8J57",... | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
... | [
" A new study was recently published as a preprint which attempted to apply MS-G3D to the resting-state fMRI data [1].\nMS-G3D is an extension of the ST-GCN model, developed for action recognition [2].\nWe have re-experimented the model with the code provided by the authors (https://github.com/metrics-lab/ST-fMRI/)... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
3,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
5,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"fpOmmZrnX2p",
"6tC9eo-xS1j",
"ag_NLsqhOl",
"6tC9eo-xS1j",
"ag_NLsqhOl",
"fpOmmZrnX2p",
"txvbpgbfnoV",
"ukEHG4vcWl-",
"nYIc_7di55",
"3f_deHiQr5",
"nips_2021_X7GEA3KiJiH",
"0IepzLrhyV3",
"nips_2021_X7GEA3KiJiH",
"5ZmNKwqT0mr",
"G76PtovCNig",
"nips_2021_X7GEA3KiJiH",
"UJwsZQXk-CK",
"... |
nips_2021_d87PBvj7LA7 | Understanding the Generalization Benefit of Model Invariance from a Data Perspective | Machine learning models that are developed to be invariant under certain types of data transformations have shown improved generalization in practice. However, a principled understanding of why invariance benefits generalization is limited. Given a dataset, there is often no principled way to select "suitable" data transformations under which model invariance guarantees better generalization. This paper studies the generalization benefit of model invariance by introducing the sample cover induced by transformations, i.e., a representative subset of a dataset that can approximately recover the whole dataset using transformations. For any data transformations, we provide refined generalization bounds for invariant models based on the sample cover. We also characterize the "suitability" of a set of data transformations by the sample covering number induced by transformations, i.e., the smallest size of its induced sample covers. We show that we may tighten the generalization bounds for "suitable" transformations that have a small sample covering number. In addition, our proposed sample covering number can be empirically evaluated and thus provides a guidance for selecting transformations to develop model invariance for better generalization. In experiments on multiple datasets, we evaluate sample covering numbers for some commonly used transformations and show that the smaller sample covering number for a set of transformations (e.g., the 3D-view transformation) indicates a smaller gap between the test and training error for invariant models, which verifies our propositions.
| accept | The reviewers have come to a consensus in favor of this paper being accepted. I agree with this consensus. While there are still some issues described in the reviewers' comments, I expect that these can be addressed satisfactorily in the camera ready, as described in the author response. | train | [
"Ybh0oEtnMmB",
"r7deJ4o8Byu",
"q3vieTWj_kw",
"nJsDs3Oefpp",
"v9UB294uMxT",
"LE7kuAcQ9XD",
"LIu6FArfDL",
"ycwEowHbpm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the thoughtful response.\n\nI am optimistic that a future iteration of this paper will be accepted for publication. Without considering the effects on empirical risk more fully, the story is incomplete.",
"The authors tried to understand the generalization benefit based on the invariant transforma... | [
-1,
6,
-1,
-1,
-1,
-1,
8,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
3,
4
] | [
"v9UB294uMxT",
"nips_2021_d87PBvj7LA7",
"nJsDs3Oefpp",
"r7deJ4o8Byu",
"ycwEowHbpm",
"LIu6FArfDL",
"nips_2021_d87PBvj7LA7",
"nips_2021_d87PBvj7LA7"
] |
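Editorial aside: the central quantity in the paper above is the sample covering number induced by a set of transformations. A naive greedy estimate for 1-D data under sign flips is sketched below; the transformation set and tolerance are toy stand-ins for the image transformations studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(size=100)
transforms = [lambda x: x, lambda x: -x]   # illustrative transformation set
eps = 0.1

def covers(center, point):
    return any(abs(t(center) - point) <= eps for t in transforms)

cover, uncovered = [], list(data)
while uncovered:
    c = uncovered[0]   # greedily add an uncovered point to the cover
    cover.append(c)
    uncovered = [p for p in uncovered if not covers(c, p)]

print("greedy sample covering number estimate:", len(cover))
```

A small cover means the transformations can approximately reconstruct most of the dataset from few representatives, which is what tightens the generalization bound.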
nips_2021_DMkdzO--w24 | Improved Variance-Aware Confidence Sets for Linear Bandits and Linear Mixture MDP | Zihan Zhang, Jiaqi Yang, Xiangyang Ji, Simon S. Du | accept | This paper considers the problem of adapting to noise variance in linear bandits and reinforcement learning (linear mixture MDPs). The reviewers agree that the main technical result in the paper (a new confidence set construction which is adaptive to variance) is interesting and highly non-trivial. This is a timely result, and I am confident this technique will find broader use. In addition, there are many interesting directions (e.g., improving computational efficiency) for future work.
While the reviewers felt that the paper is generally well-written, the authors are encouraged to incorporate their suggestions to improve the clarity and organization of the main body and appendix. | test | [
"V-83XDCLzh",
"LrvypxSb8o",
"TEqS7yey3G",
"ekkTkh_3ber",
"LFnjpFa2j1",
"wBZ1F88fDv",
"DqTMWWbNu4",
"fczbIwCa9Z9",
"ISl749R5G8_",
"JGZa3PpxHD-",
"8ScR5zc16jK"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper discusses an interesting topic, the variance awareness approach in online learning and decision making (e.g., bandit and linear MDP). Two algorithms VOFUL (Variance-awareness OFUL) and VARLin (Variance-awareness RL with linear model) are proposed. By adaptive shrink the feasible parameter set \\Theta_{... | [
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_DMkdzO--w24",
"nips_2021_DMkdzO--w24",
"ekkTkh_3ber",
"LFnjpFa2j1",
"V-83XDCLzh",
"8ScR5zc16jK",
"LrvypxSb8o",
"JGZa3PpxHD-",
"V-83XDCLzh",
"nips_2021_DMkdzO--w24",
"nips_2021_DMkdzO--w24"
] |
nips_2021_RErCy8dT7u | How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness? | The fine-tuning of pre-trained language models has achieved great success in many NLP fields. Yet, it is strikingly vulnerable to adversarial examples, e.g., word substitution attacks using only synonyms can easily fool a BERT-based sentiment analysis model. In this paper, we demonstrate that adversarial training, the prevalent defense technique, does not directly fit a conventional fine-tuning scenario, because it suffers severely from catastrophic forgetting: failing to retain the generic and robust linguistic features that have already been captured by the pre-trained model. In this light, we propose Robust Informative Fine-Tuning (RIFT), a novel adversarial fine-tuning method from an information-theoretical perspective. In particular, RIFT encourages an objective model to retain the features learned from the pre-trained model throughout the entire fine-tuning process, whereas a conventional one only uses the pre-trained weights for initialization. Experimental results show that RIFT consistently outperforms state-of-the-art methods on two popular NLP tasks: sentiment analysis and natural language inference, under different attacks across various pre-trained language models.
| accept | This work provides a well-motivated approach to fine-tune a model for robustness while retaining features from pretrained weights. Adversarial robustness in language is an important problem since, in general, it is trivial for an attacker to slightly change the input. The paper is well-written and the analysis is interesting. The method is well supported theoretically and there is a good set of experiments; however, the improvements in robustness have limited significance while there is still a drop in accuracy on the vanilla evaluation. The rebuttal was well-written and answered the reviewers' questions. | train | [
"exhyi9Fy_t",
"JKGSs1FcuZz",
"Gg7W2ZJV7es",
"sZDp5mC7zzf",
"W1WDsoCKEd1",
"KcgZB0og1b",
"-z-5UhteLC",
"W9Br0Ese4Ni",
"MblRO8TQSTQ",
"8D9JBiK8nMf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response!",
" Thanks for the response. All my questions are properly resolved.",
"This paper studies the catastrophic forgetting problem when fine-tuning pre-trained language models towards adversarial robustness. They argue that existing robust training methods, such as adversarial trainin... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"KcgZB0og1b",
"W1WDsoCKEd1",
"nips_2021_RErCy8dT7u",
"W9Br0Ese4Ni",
"Gg7W2ZJV7es",
"8D9JBiK8nMf",
"MblRO8TQSTQ",
"nips_2021_RErCy8dT7u",
"nips_2021_RErCy8dT7u",
"nips_2021_RErCy8dT7u"
] |
nips_2021_qdphcA9jEbJ | Recursive Bayesian Networks: Generalising and Unifying Probabilistic Context-Free Grammars and Dynamic Bayesian Networks | Probabilistic context-free grammars (PCFGs) and dynamic Bayesian networks (DBNs) are widely used sequence models with complementary strengths and limitations. While PCFGs allow for nested hierarchical dependencies (tree structures), their latent variables (non-terminal symbols) have to be discrete. In contrast, DBNs allow for continuous latent variables, but the dependencies are strictly sequential (chain structure). Therefore, neither can be applied if the latent variables are assumed to be continuous and also to have a nested hierarchical dependency structure. In this paper, we present Recursive Bayesian Networks (RBNs), which generalise and unify PCFGs and DBNs, combining their strengths and containing both as special cases. RBNs define a joint distribution over tree-structured Bayesian networks with discrete or continuous latent variables. The main challenge lies in performing joint inference over the exponential number of possible structures and the continuous variables. We provide two solutions: 1) For arbitrary RBNs, we generalise inside and outside probabilities from PCFGs to the mixed discrete-continuous case, which allows for maximum posterior estimates of the continuous latent variables via gradient descent, while marginalising over network structures. 2) For Gaussian RBNs, we additionally derive an analytic approximation of the marginal data likelihood (evidence) and marginal posterior distribution, allowing for robust parameter optimisation and Bayesian inference. The capacity and diverse applications of RBNs are illustrated on two examples: In a quantitative evaluation on synthetic data, we demonstrate and discuss the advantage of RBNs for segmentation and tree induction from noisy sequences, compared to change point detection and hierarchical clustering. In an application to musical data, we approach the unsolved problem of hierarchical music analysis from the raw note level and compare our results to expert annotations.
| accept | There have been mixed opinions about the manuscript, even after the short discussion. The paper describes an interesting model, some properties, and algorithms. I share some concerns of a reviewer who is unsatisfied with how the work is presented, as it can give the impression of doing more than it actually does. Indeed there are many directions here that could be worth investigating, and the presentation and choices could be better. It is unclear to me how tractable and how general things really are: they do not seem as efficient/tractable/general as the beginning of the paper suggests. On the other hand, the remaining reviewers are positive, in particular because there is potential for RBNs to be an important model, and the steps described here may be enough to justify publication at NeurIPS 2021. | train | [
"HJRz74BqmX",
"G-VIKeclOsq",
"GupXCkqIRrO",
"TMZc3Q0IKHU",
"gdrI2Ic2SM",
"zMfOJ1YO-6k",
"2vfai_weUmr",
"SQkWhRkmeOg",
"76eT9ojFbq",
"v7Zquvu7bM",
"RBs_eNLw7uX",
"6CXgSdmSjoX",
"E_EoFsBvb1X",
"KvQMW7kQng",
"CFVDney9dO"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your comments. Some additions for completeness:\n\n- We _do_ describe how MAP inference in general RBNs can be achieved via gradient descent (Section 2.2 Inference: Maximum Posterior Inference) but also point out that the problem is highly non-convex. Also note that full-posterior inference in general ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"G-VIKeclOsq",
"TMZc3Q0IKHU",
"2vfai_weUmr",
"zMfOJ1YO-6k",
"SQkWhRkmeOg",
"v7Zquvu7bM",
"CFVDney9dO",
"KvQMW7kQng",
"E_EoFsBvb1X",
"6CXgSdmSjoX",
"nips_2021_qdphcA9jEbJ",
"nips_2021_qdphcA9jEbJ",
"nips_2021_qdphcA9jEbJ",
"nips_2021_qdphcA9jEbJ",
"nips_2021_qdphcA9jEbJ"
] |
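Editorial aside: RBNs generalise the inside probabilities of PCFGs to continuous latent variables, so a compact reference implementation of the classic discrete inside algorithm may help situate the contribution. The toy grammar below is an illustrative assumption, not from the paper.

```python
from collections import defaultdict

# Toy PCFG in Chomsky normal form:
# S -> A B (1.0); A -> A A (0.4) | 'a' (0.6); B -> 'b' (1.0).
binary = {("S", ("A", "B")): 1.0, ("A", ("A", "A")): 0.4}
unary = {("A", "a"): 0.6, ("B", "b"): 1.0}

def inside(sentence):
    n = len(sentence)
    beta = defaultdict(float)   # beta[(i, j, X)] = P(X derives words i..j)
    for i, word in enumerate(sentence):
        for (lhs, rhs), p in unary.items():
            if rhs == word:
                beta[(i, i, lhs)] += p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):   # split point between the two children
                for (lhs, (left, right)), p in binary.items():
                    beta[(i, j, lhs)] += p * beta[(i, k, left)] * beta[(k + 1, j, right)]
    return beta[(0, n - 1, "S")]

print(inside(["a", "a", "b"]))   # marginal probability of the sentence, 0.144
```

The paper's first contribution can be read as replacing the discrete sums in `beta` with integrals over continuous latent variables.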
nips_2021_8ygF02Zm51q | EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback | Peter Richtarik, Igor Sokolov, Ilyas Fatkhullin | accept | The paper's strength and appeal lie in proposing a simple modification to the Error Feedback mechanism that has both theoretical and empirical benefits. The idea and mechanism proposed in the paper could have significant implications for important problems including distributed optimization and optimization under various constraints (communication restricted, memory restricted, privacy restricted, etc.). The proposed EF21 mechanism is also nicely and clearly motivated and presented. For these reasons, I think the paper can make for an interesting oral presentation.
The ACs and I were disappointed by the authors choosing to restrict the scope of the paper to presenting the core idea only, without many applications and extensions. It is much more useful to the community, and makes for a much stronger paper, to include these in the paper, so as to justify the importance of the core idea. While the paper is certainly "acceptable" without them, in order to be a milestone paper and a foundational reference, these further applications and extensions are appropriate. This is certainly much preferred to splitting the contribution into multiple papers, each with 1 MPU (minimum publishable unit) worth of content. In a sense the recommendation for an oral presentation is on credit, based on the expectation that this will urge the authors to make the paper more comprehensive and complete.
I should also say that I found the comment regarding reviewer scores at a workshop entirely inappropriate. While I appreciate the authors' frustration with scores they disagree with, the author response is meant to point out factual issues with the reviews and address reviewer questions, and the authors also have the option of commenting on review quality. But the paper should speak for its own merit, and including recommendation letters for the paper, let alone vague and anonymous scores without justification, is not part of the reviewing process. Furthermore, the criteria in a broad conference such as NeurIPS and in a technical workshop are different. In particular, for a broad conference, it would be appropriate to include a better discussion and examples of applications and extensions so as to help show the significance of the results to the reviewers, as representatives of the general audience. | val | [
"5R74FbnEw3G",
"Lx8KBm2Smw3",
"Eix05e9HUfC",
"SQ62uVpwE1",
"tXTvQvkvplF",
"uyv_jjSuioV",
"coCcwiB7E2Y",
"IpA40a-jHIR",
"fSi9DvQ4yYE",
"AYL4DhFakQ1",
"23FgCwUyuRU",
"6u7Bd9nFsXy",
"V_DwIGLZac6",
"8ZyjTMxyfAt",
"gqTREloI_cV",
"C17qFTNGBme",
"dCWReDynRa9",
"4yGA1FgAd2rN",
"FI13B9BJg... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",... | [
" Dear Reviewer EY42,\n\nYou have raised some concerns regarding the applicability of EF21. You mentioned/hinted/noticed that \n- the simple **stochastic approximation** extension we outlined in the appendix is not formalized, \n- we do not consider **partial participation** that is important in federated learning,... | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"tXTvQvkvplF",
"SQ62uVpwE1",
"coCcwiB7E2Y",
"uyv_jjSuioV",
"nips_2021_8ygF02Zm51q",
"tXTvQvkvplF",
"IpA40a-jHIR",
"fSi9DvQ4yYE",
"23FgCwUyuRU",
"23FgCwUyuRU",
"tXTvQvkvplF",
"tXTvQvkvplF",
"tXTvQvkvplF",
"tXTvQvkvplF",
"tXTvQvkvplF",
"nips_2021_8ygF02Zm51q",
"I-_EvWs6zqi",
"I-_EvWs... |
nips_2021_4lXCXb0Ru04 | Mixture weights optimisation for Alpha-Divergence Variational Inference | Kamélia Daudel, randal douc | accept | Variational approximations are approximations of the posterior in Bayesian statistics, obtained by minimizing a divergence between the approximation and the posterior, where the approximation is constrained to belong to a given set (for example, the set of Gaussian distributions, or the set of mixtures of Gaussian distributions). The divergence chosen is usually the Kullback-Leibler divergence (KL). However, some recent works highlighted the possible benefits of using alpha-divergences instead (note that KL is recovered as the special case alpha=1). [17] proposed an iterative scheme, referred to as the Power Descent algorithm, to minimize the alpha-divergence in practice. A nice property of this scheme is that it leads to a sequence of approximations such that the divergence with respect to the posterior is decreasing.
This paper extends the analysis of the Power Descent algorithm initiated in [17] to approximations by mixture models. The author(s) then study the convergence of the sequence of approximations to the minimizer of the divergence. In the case alpha>1, this is a direct consequence of Theorem 1 (stated in the paper, but already proven in [17]). The convergence in the case alpha<1 is given by Theorem 2 (this is a new and difficult result). Finally, they analyze the case alpha=1 by taking the limit of the algorithm updates. They refer to this algorithm as "Renyi descent", for which they prove a convergence rate of 1/N (N being the number of iterations; this is Theorem 3). They compare their approach with Entropic Mirror Descent in a set of numerical experiments.
Three reviewers think that while this is strongly based on [17], there are useful and nontrivial extensions, and I agree with them. One of the members of the Committee already reviewed an earlier version of this paper, and was satisfied to see many of his/her comments taken into account in this version of the paper. One of them points out some weaknesses in the empirical evaluation, but it is clear to me that the main contribution here is theoretical. I will therefore recommend accepting the paper.
It would be nice if you could improve the writing of the paper before sending the camera-ready version. Reviewer ssFh wrote that the paper is too dense, Reviewer M13e found some undefined notation, and Reviewer B46Q asked you to rephrase parts of Section 2, which explain the context of [17] but are too close to the original. | train | [
"jm553UrJl-e",
"5mz55tFKrpg",
"Vayr7o92YVY",
"tbxxQp-WeHV",
"iIaMGviwbG5",
"PgzKmpa_2GV",
"kJ4SglpYT_g",
"GP4tCiQCFN",
"hfNU08JeI4N",
"9t4rcKvizEl",
"5ys5Itoc0nZ",
"VrLnZlz_F-h",
"ZrbD76R96o1"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper extends a previous work [17] in providing a convergence rate for the case of \\alpha<1. My main concern is that this work is too close to the previous work [17]. The case of \\alpha<1 has been discussed in the experiment of [17] in a similar setting. The proof technique is also similar. More importantl... | [
4,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
3,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_4lXCXb0Ru04",
"5ys5Itoc0nZ",
"iIaMGviwbG5",
"nips_2021_4lXCXb0Ru04",
"GP4tCiQCFN",
"kJ4SglpYT_g",
"hfNU08JeI4N",
"tbxxQp-WeHV",
"ZrbD76R96o1",
"VrLnZlz_F-h",
"jm553UrJl-e",
"nips_2021_4lXCXb0Ru04",
"nips_2021_4lXCXb0Ru04"
] |
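Editorial aside: for readers new to alpha-divergences, a numerical evaluation under one common convention (which recovers KL as alpha -> 1) is sketched below; this convention is an assumption for illustration, since several parameterizations of the alpha-divergence exist in the literature.

```python
import numpy as np

def alpha_divergence(p, q, alpha, dx):
    """D_alpha(p || q) on a grid with spacing dx, under one common convention."""
    if abs(alpha - 1.0) < 1e-8:
        return np.sum(p * np.log(p / q)) * dx   # KL(p || q) in the limit
    integral = np.sum(p**alpha * q**(1.0 - alpha)) * dx
    return (integral - 1.0) / (alpha * (alpha - 1.0))

xs = np.linspace(-10.0, 10.0, 20_001)
dx = xs[1] - xs[0]
p = np.exp(-0.5 * xs**2) / np.sqrt(2.0 * np.pi)             # N(0, 1)
q = np.exp(-0.5 * (xs - 1.0) ** 2) / np.sqrt(2.0 * np.pi)   # N(1, 1)

print(alpha_divergence(p, q, alpha=0.5, dx=dx))
```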
nips_2021_d20KTY2VrNk | Instance-dependent Label-noise Learning under a Structural Causal Model | Yu Yao, Tongliang Liu, Mingming Gong, Bo Han, Gang Niu, Kun Zhang | accept | The paper considers the problem of label noise, but with the additional assumption that there is a causal relationship from the label to the instance (for example in image classification). By using a structural causal model, the authors propose an instance-dependent noise model that shows good performance on benchmarks.
Three reviewers considered the paper, and all agreed that the paper addresses an important unsolved problem (instance-dependent label noise) and provides a novel solution (via a structural causal model). The reviewers had several concerns which the authors satisfactorily addressed in their rebuttal. While one score was lower, the reviewer indicated in comments that they would support acceptance after the author rebuttal. A brief post-rebuttal discussion among the reviewers resulted in all agreeing that the paper should be accepted. Therefore, it is with great pleasure that I recommend the paper for NeurIPS. | train | [
"mlWSuwWUnWF",
"ZoAtDid9KQ",
"Wes8KOtIGfc",
"BTvz-fpwTvH",
"y0HZT3BVMoe",
"t3C_hb3UfL4",
"_4J6BNAYMy",
"fcH6-RPMArF"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear All,\n\nThanks for your constructive comments that help us improve this paper. We hope our rebuttal has cleared up the concerns you might have had. Please do not hesitate to let us know if there are any other concerns. \n\nMany Thanks,\n\nPaper5377 Authors",
" Dear Reviewer 38W1,\n\nWe have tried our best ... | [
-1,
-1,
-1,
-1,
-1,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"nips_2021_d20KTY2VrNk",
"t3C_hb3UfL4",
"fcH6-RPMArF",
"_4J6BNAYMy",
"t3C_hb3UfL4",
"nips_2021_d20KTY2VrNk",
"nips_2021_d20KTY2VrNk",
"nips_2021_d20KTY2VrNk"
] |
nips_2021_Pkzvd9ONEPr | Combining Human Predictions with Model Probabilities via Confusion Matrices and Calibration | An increasingly common use case for machine learning models is augmenting the abilities of human decision makers. For classification tasks where neither the human nor the model is perfectly accurate, a key step in obtaining high performance is combining their individual predictions in a manner that leverages their relative strengths. In this work, we develop a set of algorithms that combine the probabilistic output of a model with the class-level output of a human. We show theoretically that the accuracy of our combination model is driven not only by the individual human and model accuracies, but also by the model's confidence. Empirical results on image classification with CIFAR-10 and a subset of ImageNet demonstrate that such human-model combinations consistently have higher accuracies than the model or human alone, and that the parameters of the combination method can be estimated effectively with as few as ten labeled datapoints.
| accept | Strengths:
- Simple, well-executed approach
- Clever combination of machine and human strengths (probabilistic vs categorical predictions)
- Clear idea, clear writing
Weaknesses:
- Some lack of clarity regarding the robustness of the empirical results
- Needs discussion regarding individual vs. collective human-labeling behavior
- Authors should be more precise about limitations, and address them
Summary:
After some helpful discussions between the reviewers and the authors (including detailed responses by the authors, which the reviewers appreciated), reviewers are mostly in agreement that this is a strong submission which should be of interest to the community. Concerns mostly regarded the experimental setup. Here, reviewers were concerned about whether the empirical results would prove robust under reasonable variations of the setup. Relatedly, reviewers also pointed out that results could be specific to how the authors chose to select labelers (at random), and asked the authors to clarify their reasoning behind this and the way in which they present their results in light of this experimental design decision. The authors are encouraged to clarify the issues regarding diversity in individual user behavior vs. collective user behavior, and how it relates to their experiments.
Finally, two reviewers were glad to see that the authors acknowledged some of the limitations of their approach, but at the same time, urged the authors to discuss them with more precision, and – more importantly – take steps to address them in the final version of the paper.
| train | [
"QVr82zXCdFK",
"2RsFHHh2-m",
"vLQK5dDUbBV",
"i2GtoVE1TE",
"FH7xrv-KYjh",
"Qc5UseFEu_L",
"R-wNAzaFYMG",
"pG0qZnQ4M0t",
"CulHYVHD6MV",
"9pnsPGNMHXJ",
"WcFJ9br9a33"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" After reading the other reviews and responses, I am still in favor of acceptance.",
" I wish to thank the authors for the extremely insightful response and for the additional experiments conducted. \nMost of my concerns were addressed and I would be happy to see the paper in the main conference.",
" I would l... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2,
3
] | [
"FH7xrv-KYjh",
"i2GtoVE1TE",
"Qc5UseFEu_L",
"pG0qZnQ4M0t",
"CulHYVHD6MV",
"WcFJ9br9a33",
"9pnsPGNMHXJ",
"nips_2021_Pkzvd9ONEPr",
"nips_2021_Pkzvd9ONEPr",
"nips_2021_Pkzvd9ONEPr",
"nips_2021_Pkzvd9ONEPr"
] |
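Editorial aside: one natural instantiation of the combination described above is a Bayes update of the model's class probabilities by the human's estimated confusion matrix. The sketch below is a plausible reading of that idea with illustrative numbers, not necessarily the paper's exact estimator (which also involves calibration).

```python
import numpy as np

def combine(model_probs, human_label, confusion):
    """Posterior over classes given model probabilities and a human's hard label.
    confusion[y, h] = P(human predicts h | true class is y), estimated from data."""
    likelihood = confusion[:, human_label]   # P(human said h | y), for every y
    posterior = model_probs * likelihood     # assumes human and model errors are independent
    return posterior / posterior.sum()

model_probs = np.array([0.6, 0.3, 0.1])
confusion = np.array([[0.8, 0.1, 0.1],
                      [0.2, 0.7, 0.1],
                      [0.1, 0.1, 0.8]])
print(combine(model_probs, human_label=1, confusion=confusion))
```

Because the confusion matrix has only a handful of parameters per class, it can plausibly be estimated from very few labeled points, consistent with the paper's ten-datapoint finding.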
nips_2021_yrmvFIh_e5o | $\texttt{LeadCache}$: Regret-Optimal Caching in Networks | Debjit Paria, Abhishek Sinha | accept | Overall, this is a quality submission. The paper addresses an important problem of interest to the community. The authors provide new algorithmic and analytical insights into bounding regret for this problem. The reviewers felt the paper was well written and would be of interest. | train | [
"ZLqI5s7RMR9",
"2RuWVpZ9ZSV",
"n4uKiEz2yFh",
"ndqGNxY7Pyk",
"vafF0K1Xlom",
"lVDHxR_JWt",
"miQDOzqQeY",
"ov3dUrxf0a0"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies the Bipartite Caching problem, from an online learning perspective. A bipartite graph describes how n users are connected to m caches, each of which supports C files. At each time t, N files are distributed across m caches (same file can be placed in multiple caches), and each user requests one f... | [
7,
8,
-1,
-1,
-1,
-1,
5,
6
] | [
3,
3,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_yrmvFIh_e5o",
"nips_2021_yrmvFIh_e5o",
"ov3dUrxf0a0",
"ZLqI5s7RMR9",
"miQDOzqQeY",
"2RuWVpZ9ZSV",
"nips_2021_yrmvFIh_e5o",
"nips_2021_yrmvFIh_e5o"
] |
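Editorial aside: regret-optimal caching policies in this line of work are typically perturbation-based. The sketch below is a single-cache Follow-the-Perturbed-Leader baseline in that spirit, with illustrative sizes and a synthetic request stream; LeadCache itself addresses the harder bipartite-network version of the problem.

```python
import numpy as np

rng = np.random.default_rng(7)
n_files, cache_size, horizon = 50, 5, 10_000
counts = np.zeros(n_files)   # cumulative request counts per file
hits = 0

for t in range(1, horizon + 1):
    noise = rng.normal(scale=np.sqrt(t), size=n_files)  # fresh Gaussian perturbation
    cached = np.argsort(counts + noise)[-cache_size:]   # cache the perturbed top-C files
    request = rng.zipf(1.5) % n_files                   # skewed (Zipf-like) request stream
    hits += int(request in cached)
    counts[request] += 1

print("hit rate:", hits / horizon)
```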
nips_2021_JpDlWGTBHB | Probabilistic Attention for Interactive Segmentation | Prasad Gabbur, Manjot Bilkhu, Javier Movellan | accept | This paper builds and analyzes a probabilistic mixture model for which softmax attention mechanisms are a special case. Reviewers find the construction enlightening, educational, and of high potential impact. They strongly recommend improving the presentation to make the "story" and motivation much clearer upfront: the reasons for the construction don't reveal themselves until the connections to attention. The constructions seem different, but the authors should discuss the relationship with other probabilistic attention work (e.g., variational attention by Deng et al., but also Heo et al., NeurIPS 19, and Bahuleyan et al., Coling 19). The font sizes and line markers/colors in the plots should be changed to improve readability and skimmability of the paper. | train | [
"6YJ-gTs7CxZ",
"h2QfGeZQxqK",
"WKukDrF3Bdg",
"RIYZ6NUd1qN",
"HxQZd7C9Z6A",
"7aWNS1vy9fQ",
"fYbiCf8o0u",
"-qYO1Ju30P",
"G_BnryUP19y",
"OkWztLNdmyk"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for addressing my questions, and I will keep my rating.",
" I would like to thank the authors for the response, which answers my questions. I keep my score and would like to see the paper at NeurIPS. ",
" Our sincere thanks for taking time to review our paper and provide feedback. Please find our re... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
3,
3
] | [
"WKukDrF3Bdg",
"7aWNS1vy9fQ",
"OkWztLNdmyk",
"G_BnryUP19y",
"-qYO1Ju30P",
"fYbiCf8o0u",
"nips_2021_JpDlWGTBHB",
"nips_2021_JpDlWGTBHB",
"nips_2021_JpDlWGTBHB",
"nips_2021_JpDlWGTBHB"
] |
nips_2021_FYDE3I9fev0 | Influence Patterns for Explaining Information Flow in BERT | While "attention is all you need" may be proving true, we do not know why: attention-based transformer models such as BERT are superior, but how information flows from input tokens to output predictions is unclear. We introduce influence patterns, abstractions of sets of paths through a transformer model. Patterns quantify and localize the flow of information to paths passing through a sequence of model nodes. Experimentally, we find that a significant portion of information flow in BERT goes through skip connections instead of attention heads. We further show that consistency of patterns across instances is an indicator of BERT’s performance. Finally, we demonstrate that patterns account for far more model performance than previous attention-based and layer-based methods.
| accept | All reviewers agree that this paper proposes a novel method to understand the functioning of Transformer models through influence patterns. Three of the four reviewers found the contributions interesting and substantial and voted for acceptance. One reviewer raised concerns about the purpose of such techniques and how they can help in designing better architectures. The authors responded by saying such understanding is important for building trust and debugging. I completely agree with the authors on this, think such analysis studies are very important, and am glad to recommend acceptance. I encourage the authors to include the suggested changes/new comparisons brought up by reviewers in the final version. | train | [
"RVaWU0NS3T",
"-Xwl5XEBzp1",
"cgQerHKebMp",
"fACUZkkz-Ow",
"Dd5ynOyQZ6A",
"G2Gitii7P9Z",
"8kNDpT-HOt9",
"yRYXEW8KwMJ",
"xRdi4UOMCY7",
"VHNNWYyweZ1"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces the concept of influence patterns which provides a novel way to quantify and analyse the flow of information through a sequence of model nodes in a neural network. Using influence patterns, the paper shows the end to end flow of information in a BERT architecture and experiments also show tha... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nips_2021_FYDE3I9fev0",
"G2Gitii7P9Z",
"8kNDpT-HOt9",
"VHNNWYyweZ1",
"yRYXEW8KwMJ",
"RVaWU0NS3T",
"xRdi4UOMCY7",
"nips_2021_FYDE3I9fev0",
"nips_2021_FYDE3I9fev0",
"nips_2021_FYDE3I9fev0"
] |
nips_2021_pu6loAVvBZb | Robust Regression Revisited: Acceleration and Improved Estimation Rates | We study fast algorithms for statistical regression problems under the strong contamination model, where the goal is to approximately optimize a generalized linear model (GLM) given adversarially corrupted samples. Prior works in this line of research were based on the \emph{robust gradient descent} framework of \cite{PrasadSBR20}, a first-order method using biased gradient queries, or the \emph{Sever} framework of \cite{DiakonikolasKK019}, an iterative outlier-removal method calling a stationary point finder. We present nearly-linear time algorithms for robust regression problems with improved runtime or estimation guarantees compared to the state-of-the-art. For the general case of smooth GLMs (e.g.\ logistic regression), we show that the robust gradient descent framework of \cite{PrasadSBR20} can be \emph{accelerated}, and show our algorithm extends to optimizing the Moreau envelopes of Lipschitz GLMs (e.g.\ support vector machines), answering several open questions in the literature. For the well-studied case of robust linear regression, we present an alternative approach obtaining improved estimation rates over prior nearly-linear time algorithms. Interestingly, our algorithm starts with an identifiability proof introduced in the context of the sum-of-squares algorithm of \cite{BakshiP21}, which achieved optimal error rates while requiring large polynomial runtime and sample complexity. We reinterpret their proof within the Sever framework and obtain a dramatically faster and more sample-efficient algorithm under fewer distributional assumptions.
| accept | The reviewers all agree that this paper makes solid contributions to the robust estimation problem and improves upon existing work in several aspects. The authors should incorporate the reviewers' suggestions into the final version of the paper. | train | [
"_ywFdzY2a33",
"6ZugU_l4JFG",
"Pz1Ubt3mvxy",
"jG6uJ9Jqmxq",
"BrQUhxqVl3f",
"7RD43xt6d_w",
"fMpvKF8trW1",
"r2McGEyR6G8",
"XLRz_9Km6z"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper consider the robust regression problem with adversarial corruption, including linear regression, generalized linear models (GLMs) and Moreau envelopes of Lipschitz GLMs. Leveraging recent advances in robust mean estimation and proximal point methods, they provide improved algorithms in terms of optimiza... | [
7,
7,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
4,
2
] | [
"nips_2021_pu6loAVvBZb",
"nips_2021_pu6loAVvBZb",
"XLRz_9Km6z",
"r2McGEyR6G8",
"6ZugU_l4JFG",
"_ywFdzY2a33",
"nips_2021_pu6loAVvBZb",
"nips_2021_pu6loAVvBZb",
"nips_2021_pu6loAVvBZb"
] |
nips_2021_KCd-3Pz8VjM | Automatic Unsupervised Outlier Model Selection | Given an unsupervised outlier detection task on a new dataset, how can we automatically select a good outlier detection algorithm and its hyperparameter(s) (collectively called a model)? In this work, we tackle the unsupervised outlier model selection (UOMS) problem, and propose MetaOD, a principled, data-driven approach to UOMS based on meta-learning. The UOMS problem is notoriously challenging, as compared to model selection for classification and clustering, since (i) model evaluation is infeasible due to the lack of hold-out data with labels, and (ii) model comparison is infeasible due to the lack of a universal objective function. MetaOD capitalizes on the performances of a large body of detection models on historical outlier detection benchmark datasets, and carries over this prior experience to automatically select an effective model to be employed on a new dataset without any labels, model evaluations or model comparisons. To capture task similarity within our meta-learning framework, we introduce specialized meta-features that quantify outlying characteristics of a dataset. Extensive experiments show that selecting a model by MetaOD significantly outperforms no model selection (e.g. always using the same popular model or the ensemble of many) as well as other meta-learning techniques that we tailored for UOMS. Moreover, upon (meta-)training, MetaOD is extremely efficient at test time; selecting from a large pool of 300+ models takes less than 1 second for a new task. We open-source MetaOD and our meta-learning database for practical use and to foster further research on the UOMS problem.
| accept | All reviewers have praised the importance of selecting outlier detectors in an automatic way and recognized the proposed meta-learning solution as a novel contribution.
Important points were raised during the reviewing and discussion period. They concern:
- the influence of the distribution similarity between train and test to make the meta-learning succeed,
- the effect of some hyperparameters involved and
- some rewriting of the presentation to ease reading (notation + English grammar typos) and to better motivate the use of a collaborative filtering approach.
The paper is accepted subject to the above points being addressed in the camera-ready. | val | [
"G-U8h7lr6RD",
"mKluN8Xo2Hp",
"BaN9-iwNUqb",
"8myVaz9fuGs",
"GNSzzfChAOe",
"z2OTSAPSea",
"Pft9cRjpSE",
"QQkrKuNKsc",
"k2CBuB9TCwA",
"ft0ccHqn_t"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the detailed and helpful response. I believe the work has its novelty and practical value, but at the same time, there are rooms for improvement. So I'd keep my score unchanged.",
" **(1)** >> The features and measures proposed are heuristics, and they need justification. The statistic features are v... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
4
] | [
"GNSzzfChAOe",
"ft0ccHqn_t",
"k2CBuB9TCwA",
"QQkrKuNKsc",
"Pft9cRjpSE",
"nips_2021_KCd-3Pz8VjM",
"nips_2021_KCd-3Pz8VjM",
"nips_2021_KCd-3Pz8VjM",
"nips_2021_KCd-3Pz8VjM",
"nips_2021_KCd-3Pz8VjM"
] |
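A note on the MetaOD record above: its abstract describes carrying historical detector performance over to a new, unlabeled dataset via meta-features. The sketch below is a deliberately simplified stand-in for that idea, not MetaOD itself (which factorizes the dataset-by-model performance matrix); the regressor choice and all names here are my own assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_selector(meta_features, perf_matrix):
    """Fit one regressor per candidate detector: meta-features -> historical
    performance. meta_features: (n_datasets, d); perf_matrix: (n_datasets, n_models)."""
    return [
        RandomForestRegressor(n_estimators=100, random_state=0)
        .fit(meta_features, perf_matrix[:, j])
        for j in range(perf_matrix.shape[1])
    ]

def select_model(regressors, new_meta_features):
    """Predict each detector's performance on the new dataset and return the
    argmax; no labels or model evaluations on the new dataset are needed."""
    preds = [r.predict(new_meta_features.reshape(1, -1))[0] for r in regressors]
    return int(np.argmax(preds))
```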
nips_2021_QCPY2eMXYs | Pruning Randomly Initialized Neural Networks with Iterative Randomization | Pruning the weights of randomly initialized neural networks plays an important role in the context of the lottery ticket hypothesis. Ramanujan et al. (2020) empirically showed that pruning the weights alone, without optimizing the weight values, can achieve remarkable performance. However, to achieve the same level of performance as the weight optimization, the pruning approach requires more parameters in the networks before pruning and thus more memory space. To overcome this parameter inefficiency, we introduce a novel framework to prune randomly initialized neural networks by iteratively randomizing weight values (IteRand). Theoretically, we prove an approximation theorem in our framework, which indicates that the randomizing operations are provably effective in reducing the required number of parameters. We also empirically demonstrate the parameter efficiency in multiple experiments on CIFAR-10 and ImageNet.
| accept | The paper presents a new algorithm for learning by pruning. In the original work by Ramanujan et al. the random weights of a network could be pruned away so that the resulting network could achieve arbitrary accuracy for a given task, at the cost of slight overparameterization. The authors in the current paper show that the performance of such learning by pruning schemes can be improved by further rerandomizing the weights at subsequent pruning iterations. This results in a reduced overparameterization to achieve the same accuracy compared to past results. The results are novel and interesting with regard to the related literature. The reviewers have posted several points on which the work can be improved, all of which are or can be addressed in the final version of the paper. | train | [
"8UhlMNB2S1p",
"tZ6UIK3iUVx",
"1cfsRTfSUUS",
"3iapkYMLSbB",
"kvKYuENvGj",
"afBpjgrAM_D",
"jqH3qqd3bV",
"JRJxwn_Igbx",
"K0HIdfhMgt7",
"rJlSL3ReYux",
"u-dSyzeh8Mv"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This work proposes an improved method to find a performative subnetwork within a randomly weighted neural network. In contrast to prior work, a fraction of unused weights are periodically reset. This is a novel idea with empirical improvements and theoretical justification. Overall the paper is clear to follow an... | [
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
3,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_QCPY2eMXYs",
"nips_2021_QCPY2eMXYs",
"afBpjgrAM_D",
"jqH3qqd3bV",
"rJlSL3ReYux",
"u-dSyzeh8Mv",
"8UhlMNB2S1p",
"nips_2021_QCPY2eMXYs",
"tZ6UIK3iUVx",
"nips_2021_QCPY2eMXYs",
"nips_2021_QCPY2eMXYs"
] |
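To make the pruning-with-rerandomization idea in the IteRand record above concrete, here is a minimal PyTorch sketch: frozen random weights, learned pruning scores with a straight-through top-k mask (edge-popup style, after Ramanujan et al. 2020), and a rerandomization step of the kind the abstract describes. This is my own simplified reading rather than the authors' code; the initialization scale, keep fraction, and rerandomization schedule are assumptions.

```python
import torch
import torch.nn as nn

class TopKMask(torch.autograd.Function):
    """Binary mask keeping the top-k fraction of scores; straight-through gradient."""
    @staticmethod
    def forward(ctx, scores, keep_frac):
        mask = torch.zeros_like(scores)
        _, idx = scores.flatten().topk(int(scores.numel() * keep_frac))
        mask.flatten()[idx] = 1.0
        return mask

    @staticmethod
    def backward(ctx, grad):
        return grad, None  # straight-through estimator for the scores

class PrunableLinear(nn.Module):
    def __init__(self, d_in, d_out, keep_frac=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in) / d_in ** 0.5,
                                   requires_grad=False)   # frozen random weights
        self.scores = nn.Parameter(0.01 * torch.randn(d_out, d_in))  # learned
        self.keep_frac = keep_frac

    def forward(self, x):
        mask = TopKMask.apply(self.scores, self.keep_frac)
        return x @ (self.weight * mask).t()

    @torch.no_grad()
    def rerandomize(self):
        """IteRand-style step (simplified): redraw the currently pruned weights
        from the initialization distribution; kept weights are untouched."""
        mask = TopKMask.apply(self.scores, self.keep_frac).bool()
        fresh = torch.randn_like(self.weight) / self.weight.shape[1] ** 0.5
        self.weight.copy_(torch.where(mask, self.weight, fresh))
```

In training, only `scores` would receive gradients, and `rerandomize()` would be invoked every fixed number of optimizer steps.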
nips_2021_e0nZIFEpmYh | Probing Inter-modality: Visual Parsing with Self-Attention for Vision-and-Language Pre-training | Vision-Language Pre-training (VLP) aims to learn multi-modal representations from image-text pairs and serves downstream vision-language tasks in a fine-tuning fashion. The dominant VLP models adopt a CNN-Transformer architecture, which embeds images with a CNN, and then aligns images and text with a Transformer. The visual relationship between visual contents plays an important role in image understanding and is the basis for inter-modal alignment learning. However, CNNs have limitations in visual relation learning due to the local receptive field's weakness in modeling long-range dependencies. Thus the two objectives of learning visual relation and inter-modal alignment are encapsulated in the same Transformer network. Such a design might restrict the inter-modal alignment learning in the Transformer by ignoring the specialized characteristic of each objective. To tackle this, we propose a fully Transformer visual embedding for VLP to better learn visual relation and further promote inter-modal alignment. Specifically, we propose a metric named Inter-Modality Flow (IMF) to measure the interaction between vision and language modalities (i.e., inter-modality). We also design a novel masking optimization mechanism named Masked Feature Regression (MFR) in Transformer to further promote the inter-modality learning. To the best of our knowledge, this is the first study to explore the benefit of Transformer for visual feature learning in VLP. We verify our method on a wide range of vision-language tasks, including Visual Question Answering (VQA), Visual Entailment and Visual Reasoning. Our approach not only outperforms the state-of-the-art VLP performance, but also shows benefits on the IMF metric.
| accept | The paper initially received divergent recommendations, but after the detailed author response all reviewers lean to accept the paper.
I recommend accept under the expectation that authors will revise the paper according to the detailed author response, addressing the reviewers' concerns. This includes, but is not limited to the following points
1) clarifications & delineations, including to prior work.
2) Additional results and ablations (tables) provided in the author response
3) details on model size in FLOPs
[some can go to supplement if they don't fit in the page limit] | val | [
"xQgl2qvAyW",
"73yQD-okFte",
"U7jvt8YV7_9",
"K_bcmMQeUP",
"-1Wl9NW5z7D",
"JtJgGyjZvq",
"Zb5A4WUTbUq",
"Sk2YCfnIwbP",
"Iq1-AdmN75f"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank the authors for clarifying my concerns. The authors addressed my technical questions in the rebuttal, so I would raise my score from borderline reject to borderline accept.",
"This paper introduces Swin Transformer in the visual domain for vision-language pretraining. Previous VL pretraining models feed i... | [
-1,
6,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"-1Wl9NW5z7D",
"nips_2021_e0nZIFEpmYh",
"Iq1-AdmN75f",
"Sk2YCfnIwbP",
"73yQD-okFte",
"Zb5A4WUTbUq",
"nips_2021_e0nZIFEpmYh",
"nips_2021_e0nZIFEpmYh",
"nips_2021_e0nZIFEpmYh"
] |
nips_2021_PvWYUN7t4Tb | Stability and Generalization of Bilevel Programming in Hyperparameter Optimization | The (gradient-based) bilevel programming framework is widely used in hyperparameter optimization and has achieved excellent performance empirically. Previous theoretical work mainly focuses on its optimization properties, while leaving the analysis on generalization largely open. This paper attempts to address the issue by presenting an expectation bound w.r.t. the validation set based on uniform stability. Our results can explain some mysterious behaviours of the bilevel programming in practice, for instance, overfitting to the validation set. We also present an expectation bound for the classical cross-validation algorithm. Our results suggest that gradient-based algorithms can be better than cross-validation under certain conditions from a theoretical perspective. Furthermore, we prove that regularization terms in both the outer and inner levels can relieve the overfitting problem in gradient-based algorithms. In experiments on feature learning and data reweighting for noisy labels, we corroborate our theoretical findings.
| accept | The paper is among the early theoretical work on studying the stability and generalization of the hyperparameter optimization (HO) task, in contrast to the optimization properties widely studied in the literature. It uses a notion of uniform stability for validation to get the rate of $O({T^\kappa\over m})$ in comparison to $O({\sqrt{\log T}\over m})$ using CV approaches. There are various interesting discussions on the implications of the comparisons for the possible superior performance of UD in the HO framework. Experiments validate the theoretical findings. The reviewers' concerns/questions on the tightness of the exponential dependence on K, the possible improvement of the bounds if the inner loss is convex or strongly convex, and other questions on the experiments are well addressed in the rebuttal. | train | [
"jTP2-gCmYQ",
"Yvq9Ywc3OBE",
"hnwFULuADI-",
"0HNqEdt6rOu",
"3gqSdDVhXcG",
"jHEqStkknDZ",
"1Lw6gkTfFB",
"K9lE3QdZKv",
"PvmhunYDfh",
"AufsnKcL1M7",
"Z4EkIQmbI3",
"l0AohLcmDkn",
"o6cXFIlJhoT",
"QNBL411guhs",
"X9W8Zsq05Lb",
"SgHoHCYkd4h",
"c5_bysa57QJ",
"P5m5tx9gTAH",
"9qE_nRXaRj9"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"In this paper, the authors study bilevel programming for hyperparameter optimization, which involves an outer level optimization on hyperparameter and an inner optimization on hypothesis. Unlike most of existing studies focusing on optimization, the authors study the generalization by using the tool of algorithmic... | [
6,
-1,
-1,
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
-1,
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
2
] | [
"nips_2021_PvWYUN7t4Tb",
"P5m5tx9gTAH",
"3gqSdDVhXcG",
"nips_2021_PvWYUN7t4Tb",
"1Lw6gkTfFB",
"nips_2021_PvWYUN7t4Tb",
"K9lE3QdZKv",
"c5_bysa57QJ",
"nips_2021_PvWYUN7t4Tb",
"Z4EkIQmbI3",
"l0AohLcmDkn",
"nips_2021_PvWYUN7t4Tb",
"jHEqStkknDZ",
"nips_2021_PvWYUN7t4Tb",
"jTP2-gCmYQ",
"l0Ao... |
nips_2021_3stG49d5VA | Regime Switching Bandits | Xiang Zhou, Yi Xiong, Ningyuan Chen, Xuefeng GAO | accept | The reviewers agreed that the paper provides a nice contribution by presenting a learning algorithm for the problem which is competitive with a much stronger baseline than what was previously used in the literature. On the other hand, several issues were raised, most importantly that (i) the algorithm design with the forced exploration part seems suboptimal and the induced separation makes combining existing results relatively easy; and (ii) the algorithm can't be implemented in an efficient way yet supposedly does not achieve the optimal rate.
Nevertheless, the reviewers felt that the positives outweigh the negatives, hence I recommend acceptance of the paper. In the revised version, the authors should address the several problems discussed in the review process, and provide a better and more transparent comparison to Azizzadenesheli et al. (2016) (how the algorithm relates to their method), as well as include new baselines in the experiments, such as a modified version of the algorithm of Azizzadenesheli et al., and some oracle baselines which allow the experimental analysis of different parts of the algorithm. | train | [
"d0jmrmlHbnN",
"BIVFIPWz3Q",
"YXtb3Y0CNfH",
"3h6TbBxr4fF",
"_GjFIFywLxS",
"jajr2Gy27_9",
"k5aoIhIWnvL",
"HTAkuXoDSMk",
"YpQ3LIyveWM",
"zYPdx_afHKQ",
"ptOvfaebA9"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for the detailed comments to my different remarks.\nI think that storing all the previous choices and rewards is an important aspect of the algorithm but assuming that the authors clarify this in the revised version, in my opinion the paper is above the acceptance threshold.",
" Thank you fo... | [
-1,
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
3,
3
] | [
"jajr2Gy27_9",
"HTAkuXoDSMk",
"nips_2021_3stG49d5VA",
"k5aoIhIWnvL",
"nips_2021_3stG49d5VA",
"zYPdx_afHKQ",
"_GjFIFywLxS",
"YXtb3Y0CNfH",
"ptOvfaebA9",
"nips_2021_3stG49d5VA",
"nips_2021_3stG49d5VA"
] |
nips_2021_NrEwQwhPODl | MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps | Deep neural networks are susceptible to adversarially crafted, small, and imperceptible changes in the natural inputs. The most effective defense mechanism against these examples is adversarial training which constructs adversarial examples during training by iterative maximization of loss. The model is then trained to minimize the loss on these constructed examples. This min-max optimization requires more data, larger capacity models, and additional computing resources. It also degrades the standard generalization performance of a model. Can we achieve robustness more efficiently? In this work, we explore this question from the perspective of knowledge transfer. First, we theoretically show the transferability of robustness from an adversarially trained teacher model to a student model with the help of mixup augmentation. Second, we propose a novel robustness transfer method called Mixup-Based Activated Channel Maps (MixACM) Transfer. MixACM transfers robustness from a robust teacher to a student by matching activated channel maps generated without expensive adversarial perturbations. Finally, extensive experiments on multiple datasets and different learning scenarios show our method can transfer robustness while also improving generalization on natural images.
| accept | This paper describes a method (MixACM) for transferring robustness from a teacher model to a student model using activated channel map matching and mix-up, obviating the need for expensive adversarial training.
A key concern raised by reviewers was that, while the individual components of the method are well motivated and the overall combination of them appears to be highly effective, these individual components are already known, and their simple combination is not as novel or theoretically motivated as one might hope. Reviewers also expressed concerns about quality of writing and the overall clarity of the paper.
This is something of a borderline case, but given the strong results produced by this method, I nonetheless recommend that the work be accepted.
| test | [
"aSvWk2YikE8",
"5ezfpsJC4Xe",
"6GX1aRF3RTQ",
"f5xgFhSxfI",
"0l_KVYlkIbb",
"aN3QIUoNtJA",
"m-CpKBDE9r",
"NMF53w_mXUV",
"MSdXW_3KnUZ",
"TGLxcQEEvm7",
"GwzykVB3yx-",
"zGlhHwaWIG_",
"6h29HqA6l-B",
"pHR4lfDz5c",
"8r-nca61QXz"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your response. I suggest that the authors can add this discussion in the final version. I leave my score unchanged. ",
" Thanks for the responses. Concerns can be solved by improving the writing parts. I will keep the score unchanged as 6. ",
" Thank you very much for increasing the s... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"GwzykVB3yx-",
"pHR4lfDz5c",
"m-CpKBDE9r",
"aN3QIUoNtJA",
"nips_2021_NrEwQwhPODl",
"6h29HqA6l-B",
"TGLxcQEEvm7",
"nips_2021_NrEwQwhPODl",
"nips_2021_NrEwQwhPODl",
"NMF53w_mXUV",
"8r-nca61QXz",
"pHR4lfDz5c",
"0l_KVYlkIbb",
"nips_2021_NrEwQwhPODl",
"nips_2021_NrEwQwhPODl"
] |
nips_2021_JRM0Umk6mdC | Localization, Convexity, and Star Aggregation | Suhas Vijaykumar | accept | The paper provides a nice unifying method to obtain fast rates for exp-concave losses, uniformly convex losses, etc., via an offset Rademacher complexity-type term. While the rates covered in the paper are not new, the unifying analysis is simple and clean.
The reviewers are positive about the contributions of the paper and think that the paper is worth publishing. I concur with the reviews and find the contributions of the paper clean and neat. Overall I recommend acceptance. | train | [
"86BbA57thcB",
"vJ55hRDccVa",
"5ZlQWgDTx07",
"GB6auyO62_3",
"SrLkN7VxOkr",
"7jrJC0bQ5PI",
"wwfzdD_sHSz",
"yKIrdmGOJE",
"0GjPrcRG4Cu",
"Mdl7XkLhw9J"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper is concerned with the excess risk analysis of empirical risk minimization (ERM) over convex function classes, as well as Audibert's 'improper' Star-shaped aggregation procedure over nonconvex classes (which reduces to ERM in the convex setting), in statistical learning.\nSharp bounds have been obtained ... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"nips_2021_JRM0Umk6mdC",
"GB6auyO62_3",
"yKIrdmGOJE",
"wwfzdD_sHSz",
"0GjPrcRG4Cu",
"86BbA57thcB",
"Mdl7XkLhw9J",
"nips_2021_JRM0Umk6mdC",
"nips_2021_JRM0Umk6mdC",
"nips_2021_JRM0Umk6mdC"
] |
nips_2021_rsNBA9gtDf4 | Aligning Silhouette Topology for Self-Adaptive 3D Human Pose Recovery | Articulation-centric 2D/3D pose supervision forms the core training objective in most existing 3D human pose estimation techniques. Except for synthetic source environments, acquiring such rich supervision for each real target domain at deployment is highly inconvenient. However, we realize that standard foreground silhouette estimation techniques (on static camera feeds) remain unaffected by domain-shifts. Motivated by this, we propose a novel target adaptation framework that relies only on silhouette supervision to adapt a source-trained model-based regressor. However, in the absence of any auxiliary cue (multi-view, depth, or 2D pose), an isolated silhouette loss fails to provide a reliable pose-specific gradient and needs to be employed in tandem with a topology-centric loss. To this end, we develop a series of convolution-friendly spatial transformations in order to disentangle a topological-skeleton representation from the raw silhouette. Such a design paves the way to devise a Chamfer-inspired spatial topological-alignment loss via distance field computation, while effectively avoiding any gradient-hindering spatial-to-pointset mapping. Experimental results demonstrate our superiority against prior arts in self-adapting a source-trained model to diverse unlabeled target domains, such as a) in-the-wild datasets, b) low-resolution image domains, and c) adversarially perturbed image domains (via UAP).
| accept | This submission has received 4 positive final ratings: 6, 7, 6, 7.
The reviewers appreciated the overall novelty, extensive experiments, strong empirical performance, and clear presentation.
The remaining minor questions and concerns were addressed by the authors in the rebuttal, as acknowledged by the reviewers.
As a result, the final recommendation is to accept for poster presentation. | train | [
"0jnigqnYG2E",
"qbOxzbSKyHF",
"Bk69c1sKGRu",
"sA8Zxpbtqsf",
"5ayW3o0t8y4",
"TjaaphdFP4",
"Ukyogy0Sbmy",
"xkLi9ZvU4jC",
"BTE2k9WtmA",
"HZ11b62oR29",
"LugOuwInK4P",
"Rj2kVI8qspn",
"wmlNU0i3yo2",
"mt-mXLk3AWZ",
"KIF8kS7EK_1",
"YZNdQwU0m8",
"rMfCj-N5SwC",
"_b41lYDoGoP"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for going through our rebuttal. Below we clarify the concerns.\n\n> How many frames were discarded? I don't understand how this went unmentioned. Did you exclude the same frames for other methods in Table 3?\n\n**1.** We would like to clarify that our evaluation procedure (inference on test-set) is exac... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5,
4
] | [
"xkLi9ZvU4jC",
"Bk69c1sKGRu",
"0jnigqnYG2E",
"HZ11b62oR29",
"Ukyogy0Sbmy",
"BTE2k9WtmA",
"LugOuwInK4P",
"mt-mXLk3AWZ",
"wmlNU0i3yo2",
"Rj2kVI8qspn",
"rMfCj-N5SwC",
"_b41lYDoGoP",
"YZNdQwU0m8",
"KIF8kS7EK_1",
"nips_2021_rsNBA9gtDf4",
"nips_2021_rsNBA9gtDf4",
"nips_2021_rsNBA9gtDf4",
... |
nips_2021_swur4c3YSyF | Self-Adaptable Point Processes with Nonparametric Time Decays | Zhimeng Pan, Zheng Wang, Jeff M. Phillips, Shandian Zhe | accept | The paper proposes a new marked temporal point process model for continuous-time event sequences. While the standard neural stochastic point process models (e.g., Neural Hawkes Processes) parameterize the time-varying intensity functions using recurrent neural networks, these models are often black boxes and do not allow estimating the influence (inhibition or excitation) that event types have on future ones. In contrast, the proposed work uses interpretable Gaussian processes and achieves better performance while maintaining interpretability and scalability. In this sense, the approach is able to both flexibly and transparently estimate various decaying influences between every pair of events.
Aside from a few parts of the model/training algorithm that could be presented more clearly, the paper is overall well-written and mathematically sound. Despite its sophisticated inference scheme, it provides a valuable contribution to machine learning and statistics.
Please run a grammar check over the paper for the final version since many small mistakes are present. | test | [
"e-ToKSjWTLM",
"2EwkzZXte2r",
"08X2fQfk0gY",
"RW5GLaXbIlU",
"ijbYayrBuel",
"ZwQuqkM0IO",
"WjguR7KZdlE",
"am5ERF7w16H",
"Ky7wBB9DFuO"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a multi-variate self-adaptable point process with nonparametric time decays. First, to flexibly represent arbitrary types of influences between events, they introduce an embedding to represent each event type, and model the influence between any two events as an unknown function of the event ty... | [
6,
-1,
-1,
-1,
-1,
-1,
8,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"nips_2021_swur4c3YSyF",
"RW5GLaXbIlU",
"Ky7wBB9DFuO",
"e-ToKSjWTLM",
"am5ERF7w16H",
"WjguR7KZdlE",
"nips_2021_swur4c3YSyF",
"nips_2021_swur4c3YSyF",
"nips_2021_swur4c3YSyF"
] |
nips_2021_IBdEfhLveS | Offline Meta Reinforcement Learning -- Identifiability Challenges and Effective Data Collection Strategies | Ron Dorfman, Idan Shenfeld, Aviv Tamar | accept | This paper introduces an interesting problem, where the meta-agent is required to quickly learn unseen tasks given some offline data for training tasks. In addition, the paper also formally introduces the MDP ambiguity problem induced by the new problem setup. Although the proposed solution (an off-policy version of VariBAD + reward relabelling) is a little bit straightforward, it is a reasonable solution to the problem considered in this paper, and the empirical results are also solid. During the post-rebuttal discussion period, the majority of the reviewers agreed that the formal description of the MDP ambiguity problem and the Bayesian RL perspective are interesting enough to be presented at NeurIPS. Therefore, I recommend accepting this paper. | val | [
"dn6-JrmWl2M",
"YZRsBA7qqBg",
"vMjy05ATy30",
"jvnMzVPTuQd",
"duc4-7r-k1q",
"7tVTxxh8GAA",
"lX-j3or1MTb",
"r-HluFOjwO",
"x-kLLiJp7Qt",
"fA2-lFQIoG5",
"p3NJTGJ8Dd"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper addresses the problem of learning Bayes-Optimal Exploration strategies from offline data.\nThe authors discuss MDP ambiguity problems that may arise in this context and propose BoRel an adaptation\nof the online, on-policy VariBad algorithm that addresses these issues via policy replaying and reward rela... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_IBdEfhLveS",
"nips_2021_IBdEfhLveS",
"jvnMzVPTuQd",
"x-kLLiJp7Qt",
"7tVTxxh8GAA",
"r-HluFOjwO",
"nips_2021_IBdEfhLveS",
"YZRsBA7qqBg",
"dn6-JrmWl2M",
"p3NJTGJ8Dd",
"nips_2021_IBdEfhLveS"
] |
nips_2021_VH0TRmnqUc | RoMA: Robust Model Adaptation for Offline Model-based Optimization | We consider the problem of searching for an input maximizing a black-box objective function given a static dataset of input-output queries. A popular approach to solving this problem is maintaining a proxy model, e.g., a deep neural network (DNN), that approximates the true objective function. Here, the main challenge is how to avoid adversarially optimized inputs during the search, i.e., the inputs where the DNN highly overestimates the true objective function. To handle the issue, we propose a new framework, coined robust model adaptation (RoMA), based on gradient-based optimization of inputs over the DNN. Specifically, it consists of two steps: (a) a pre-training strategy to robustly train the proxy model and (b) a novel adaptation procedure of the proxy model to have robust estimates for a specific set of candidate solutions. At a high level, our scheme utilizes the local smoothness prior to overcome the brittleness of the DNN. Experiments under various tasks show the effectiveness of RoMA compared with previous methods, obtaining state-of-the-art results, e.g., RoMA outperforms all baselines on 4 out of 6 tasks and achieves runner-up results on the remaining tasks.
| accept | The paper makes a somewhat incremental, yet solid and effective contribution to the area of model-based optimization by robust pre-training and a test-time robustification process. I concur with the reviewers and recommend the paper to be accepted at NeurIPS. | train | [
"NgFsLpGusmI",
"rhMuGzpbu85",
"FxxMY51wW6I",
"J9Kcnak8N1R",
"sZTHLs3flrP",
"mOAwhtJxmlb",
"m-2QeTnmB8t",
"Gwg5JC9vajl",
"JN62_9KYVkL",
"kDYdGenvKRn",
"ca83jYZaY0",
"0l7-mCrwwde",
"lqKjzX32zu",
"HTjtsaSKcg-",
"BboF4MdKXU"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We are happy to hear that our rebuttal addressed your concerns well.\n\nWe will explicitly state the scope of our work to avoid such confusion. We will also add the additional experimental results as you suggested.\n\nWe admit that we are very much excited about our idea, as this is completely a new approach/idea... | [
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"mOAwhtJxmlb",
"J9Kcnak8N1R",
"nips_2021_VH0TRmnqUc",
"JN62_9KYVkL",
"nips_2021_VH0TRmnqUc",
"Gwg5JC9vajl",
"kDYdGenvKRn",
"0l7-mCrwwde",
"lqKjzX32zu",
"HTjtsaSKcg-",
"nips_2021_VH0TRmnqUc",
"sZTHLs3flrP",
"FxxMY51wW6I",
"BboF4MdKXU",
"nips_2021_VH0TRmnqUc"
] |
nips_2021_L5vbEVIePyb | Flexible Option Learning | Temporal abstraction in reinforcement learning (RL) offers the promise of improving generalization and knowledge transfer in complex environments, by propagating information more efficiently over time. Although option learning was initially formulated in a way that allows updating many options simultaneously, using off-policy, intra-option learning (Sutton, Precup & Singh, 1999), many of the recent hierarchical reinforcement learning approaches only update a single option at a time: the option currently executing. We revisit and extend intra-option learning in the context of deep reinforcement learning, in order to enable updating all options consistent with current primitive action choices, without introducing any additional estimates. Our method can therefore be naturally adopted in most hierarchical RL frameworks. When we combine our approach with the option-critic algorithm for option discovery, we obtain significant improvements in performance and data-efficiency across a wide variety of domains.
| accept | This paper presents an intuitive approach to updating multiple options together while learning in a hierarchical deep reinforcement learning setting. The approach seems principled, and performs well empirically.
All reviewers have advocated for acceptance, with the expectation that the requested clarifications in the paper are made. | val | [
"l9lTbyzCYO",
"qmxzAE-I6di",
"A7rrRQGYGDc",
"JCVHW1kdGBq",
"WMndD9nNyXa",
"RGT841GxUwR",
"iOD-E7qicOw"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"- This paper examines the possibility of improving sample efficiency in option learning by updating all options simultaneously, rather than the single option chosen for execution. The authors that posit that such an all-options update can improve sample efficiency both when a set of pre-computed options are provid... | [
6,
-1,
-1,
-1,
-1,
8,
7
] | [
4,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_L5vbEVIePyb",
"RGT841GxUwR",
"l9lTbyzCYO",
"A7rrRQGYGDc",
"iOD-E7qicOw",
"nips_2021_L5vbEVIePyb",
"nips_2021_L5vbEVIePyb"
] |
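For the Flexible Option Learning record above, the classic rule being revisited is intra-option Q-learning (Sutton, Precup & Singh, 1999): after one primitive action, every option consistent with that action is updated. Below is a tabular sketch of that rule; weighting each update by the option policy's action probability is my simplification, and the paper's deep-RL extension is not reproduced.

```python
import numpy as np

def intra_option_q_update(Q, s, a, r, s_next, options, alpha=0.1, gamma=0.99):
    """One intra-option Q-learning step. Q: (n_states, n_options) array.
    options: list of (pi, beta) pairs, where pi(s) returns the option's action
    probabilities in state s and beta(s) its termination probability."""
    for o, (pi, beta) in enumerate(options):
        w = pi(s)[a]                 # prob. this option would have chosen a in s
        if w == 0.0:
            continue                 # option inconsistent with the observed action
        b = beta(s_next)
        # value upon arrival: continue the option, or terminate and pick the best
        u = (1.0 - b) * Q[s_next, o] + b * Q[s_next].max()
        Q[s, o] += alpha * w * (r + gamma * u - Q[s, o])
```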
nips_2021_Q9hZdUBTC9S | Faster Directional Convergence of Linear Neural Networks under Spherically Symmetric Data | In this paper, we study gradient methods for training deep linear neural networks with binary cross-entropy loss. In particular, we show global directional convergence guarantees from a polynomial rate to a linear rate for (deep) linear networks with spherically symmetric data distribution, which can be viewed as a specific zero-margin dataset. Our results do not require the assumptions in other works such as small initial loss, presumed convergence of weight direction, or overparameterization. We also characterize our findings in experiments.
| accept | This paper is borderline. Reviewers generally agreed that the theoretical analysis (main contribution) is technically solid and delivers strong results, but on the other hand is limited to a very restricted setting of spherically symmetric data distributions. In addition, several reviewers felt that the empirical section on non-linear neural networks is somewhat disconnected from the theory (which applies to linear neural networks), and the value it adds to the paper is limited. While I share the aforementioned concerns, I believe the state of deep learning theory is such that strong analyses of restricted settings are still of interest, and therefore recommend publication. I encourage the authors to reconsider the necessity of some of their experiments and defer to the appendix those that do not significantly contribute to the main message of the paper.
| train | [
"NHXZmp04WR",
"qORvJUuCvev",
"9WhUdPZ0iD",
"gPoi4A90MTm",
"q50sf5W4oIz",
"X7f44r6sV1",
"r2kCsRgbk0h",
"mH-qdcoKPgq",
"lA_9DwrdAR8",
"_rj7xSr4mMD",
"eq9Lp2acIxV"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper analyzes the setting of shallow and deep linear neural networks under gradient flow and gradient descent and deduces a superpolynomial directional convergence guarantee.\nThe result is proven under one assumption: spherically symmetric data distribution. Additionally, in the deep linear networks case, it... | [
6,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
4,
6
] | [
4,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
2,
2
] | [
"nips_2021_Q9hZdUBTC9S",
"q50sf5W4oIz",
"mH-qdcoKPgq",
"X7f44r6sV1",
"_rj7xSr4mMD",
"nips_2021_Q9hZdUBTC9S",
"gPoi4A90MTm",
"NHXZmp04WR",
"eq9Lp2acIxV",
"nips_2021_Q9hZdUBTC9S",
"nips_2021_Q9hZdUBTC9S"
] |
nips_2021_A9HVNx1J8Pc | Online Facility Location with Multiple Advice | Clustering is a central topic in unsupervised learning and its online formulation has received a lot of attention in recent years. In this paper, we study the classic facility location problem in the presence of multiple machine-learned advice. We design an algorithm with provable performance guarantees such that, if the advice is good, it outperforms the best-known online algorithms for the problem, and if it is bad, it still matches their performance. We complement our theoretical analysis with an in-depth study of the performance of our algorithm, showing its effectiveness on synthetic and real-world data sets.
| accept | The paper presents a learning-augmented algorithm for online facility location. The competitive ratio of the algorithm depends on the quality of the prediction, and the size of the advice; it is never worse than the competitive ratio of the best worst-case algorithm.
There was a substantial debate about (i) the advice model - how natural/realistic it is and (ii) the approximation guarantees, esp. the possible discrepancy between OPT and OPT(P,S). Overall, however, the reviewers felt that these issues do not significantly reduce the value of the paper.
| test | [
"Wzxm4uKWgvO",
"rOr8DZ99mTy",
"fngmiZfabZ",
"oldHPx4p8lK",
"f0taeZ6nIY7",
"Xbh1FNEXuFR",
"cXGYHlnRcYm",
"e1GIIBSvHi"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper is about online algorithms augmented with predictions of the future input. In particular, the authors examine the problem of online facility location. In the classical version of this problem, there is a sequence of points in a metric space that arrive in an online fashion and represent the location of ... | [
5,
-1,
-1,
-1,
-1,
6,
7,
8
] | [
4,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nips_2021_A9HVNx1J8Pc",
"cXGYHlnRcYm",
"Xbh1FNEXuFR",
"e1GIIBSvHi",
"Wzxm4uKWgvO",
"nips_2021_A9HVNx1J8Pc",
"nips_2021_A9HVNx1J8Pc",
"nips_2021_A9HVNx1J8Pc"
] |
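Background for the online facility location record above: the classic randomized online rule (Meyerson, 2001), which advice-augmented algorithms such as this paper's refine, opens a facility at an arriving point with probability proportional to its connection cost. A sketch of that baseline only; the uniform facility cost `f` and the `dist` callable are assumptions.

```python
import random

def meyerson_online_fl(points, f, dist):
    """Classic online facility location: on arrival, open a facility at the
    point with probability min(d/f, 1), where d is the distance to the nearest
    open facility; otherwise connect the point and pay d."""
    facilities, total_cost = [], 0.0
    for p in points:
        d = min((dist(p, c) for c in facilities), default=float("inf"))
        if random.random() < min(d / f, 1.0):
            facilities.append(p)     # pay the opening cost
            total_cost += f
        else:
            total_cost += d          # pay the connection cost
    return facilities, total_cost
```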
nips_2021_REXvo_lsQS9 | Credit Assignment in Neural Networks through Deep Feedback Control | The success of deep learning sparked interest in whether the brain learns by using similar techniques for assigning credit to each synaptic weight for its contribution to the network output. However, the majority of current attempts at biologically-plausible learning methods are either non-local in time, require highly specific connectivity motifs, or have no clear link to any known mathematical optimization method. Here, we introduce Deep Feedback Control (DFC), a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target and whose control signal can be used for credit assignment. The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of feedback connectivity patterns. To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing. By combining dynamical system theory with mathematical optimization theory, we provide a strong theoretical foundation for DFC that we corroborate with detailed results on toy experiments and standard computer-vision benchmarks.
| accept | The paper introduces a new framework for biological plausible credit assignment algorithms in neural networks. The main idea consists in sending a top-down 'feedback' signal that drives the network to output the desired values, and update weights to make the feedback correction unneeded. They show a connection between the resulting algorithm and Gauss-Newton optimization. The work has connection to previous biological CA literature, in particular work on target propagation and feedback alignment.
Reviewers thought the contribution was insightful, and well written. Initial reviews brought about some concerns regarding novelty and what the precise contributions of some components were (whether they were important or needed at all, like the PI controller), which were extensively discussed in the rebuttal period. Please adjust the final version of the paper by taking these discussions into account.
A final improvement could include a more formal, clear theorem connecting DFC to GN: at the moment, it feels that one needs to connect Theorems 1 and 2 to fully get the connection between the two. The relation between DFC and GN would ideally be made clear and rigorous in a single theorem, without having to connect the dots.
| train | [
"9Z7lrqgpPrq",
"ovsRBhKSHT",
"efQoxc1DrGq",
"sZMH-xfTJsq",
"x6yd6wQPaUG",
"sW0M4uF-S0e",
"CD7yPzwMHEg",
"0T3cKoVrTG2",
"EtqOZkWP_-",
"EOoFBFoUdMc",
"4suJvJA67qf",
"CxToU1q2NF5",
"9PLjZCa36Zp",
"s0bOcP5LAcT",
"qB4Frhivy8g"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thanks for the response. The authors have addressed most of my concerns. I have increased my score from 6 to 7.",
"This paper proposes a deep feedback control (DFC) method for deep networks, which is biological plausibility, and the control signal can be used for credit assignment. Experiments on MNIST and Fash... | [
-1,
7,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
9
] | [
-1,
2,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"9PLjZCa36Zp",
"nips_2021_REXvo_lsQS9",
"CxToU1q2NF5",
"nips_2021_REXvo_lsQS9",
"EOoFBFoUdMc",
"CD7yPzwMHEg",
"0T3cKoVrTG2",
"EtqOZkWP_-",
"x6yd6wQPaUG",
"4suJvJA67qf",
"CxToU1q2NF5",
"sZMH-xfTJsq",
"ovsRBhKSHT",
"qB4Frhivy8g",
"nips_2021_REXvo_lsQS9"
] |
nips_2021_6Gn6-oMRvu3 | Robust Online Correlation Clustering | In correlation clustering we are given a set of points along with recommendations whether each pair of points should be placed in the same cluster or into separate clusters. The goal is to cluster the points to minimize disagreements with the recommendations. We study the correlation clustering problem in the online setting, where points arrive one at a time, and upon arrival the algorithm must make an irrevocable cluster assignment decision. While the online version is natural, there is a simple lower bound that rules out any algorithm with a non-trivial competitive ratio. In this work we go beyond worst case analysis, and show that the celebrated Pivot algorithm performs well when given access to a small number of random samples from the input. Moreover, we prove that Pivot is robust to additional adversarial perturbations of the sample set in this setting. We conclude with an empirical analysis validating our theoretical findings.
| accept | The paper considers an online version of correlation clustering and shows that the well-known Pivot algorithm performs well in the setting. The result is not that surprising, but I think this is an important contribution nonetheless and warrants acceptance. | train | [
"-Zy61hYyx0c",
"5OTS8XS8kHb",
"INeqf291JtS",
"idYJbcJGKyt",
"4uRkRcrw7E",
"YEGiU1X4NFk",
"7dKjkByRnv",
"SD3-jkHo09"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for the clarifications. I have read the other reviews and the author's responses.\nI maintain my evaluation.",
" - What happens if we change the order of items 1 and 2 in the robust model? Do we know that it becomes significantly harder? What is its status.\n\nThis was asked by the prior rev... | [
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"INeqf291JtS",
"YEGiU1X4NFk",
"4uRkRcrw7E",
"SD3-jkHo09",
"nips_2021_6Gn6-oMRvu3",
"nips_2021_6Gn6-oMRvu3",
"nips_2021_6Gn6-oMRvu3",
"nips_2021_6Gn6-oMRvu3"
] |
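Since the record above centers on the Pivot algorithm, here is the standard offline version for reference (the classic randomized 3-approximation); as I read the abstract, the paper's contribution concerns running Pivot online with a small random sample, which changes how the ordering is obtained rather than this core loop. Data-structure choices below are mine.

```python
import random

def pivot(n, positive_pairs):
    """Correlation clustering via Pivot. positive_pairs: set of (i, j) tuples
    with i < j, i.e. pairs recommended to be in the same cluster."""
    def same(i, j):
        return (min(i, j), max(i, j)) in positive_pairs

    order = list(range(n))
    random.shuffle(order)            # random pivot order drives the 3-approx guarantee
    cluster_of = {}
    for p in order:
        if p in cluster_of:
            continue
        cluster_of[p] = p            # p becomes a new pivot
        for q in order:
            if q not in cluster_of and same(p, q):
                cluster_of[q] = p    # grab all unclustered '+' neighbors of the pivot
    return cluster_of
```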
nips_2021_wHkKTW2wrmm | Neural Additive Models: Interpretable Machine Learning with Neural Nets | Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks. However, their accuracy comes at the cost of intelligibility: it is usually unclear how they make their decisions. This hinders their applicability to high stakes decision-making domains such as healthcare. We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models. NAMs learn a linear combination of neural networks that each attend to a single input feature. These networks are trained jointly and can learn arbitrarily complex relationships between their input feature and the output. Our experiments on regression and classification datasets show that NAMs are more accurate than widely used intelligible models such as logistic regression and shallow decision trees. They perform similarly to existing state-of-the-art generalized additive models in accuracy, but are more flexible because they are based on neural nets instead of boosted trees. To demonstrate this, we show how NAMs can be used for multitask learning on synthetic data and on the COMPAS recidivism data due to their composability, and demonstrate that the differentiability of NAMs allows them to train more complex interpretable models for COVID-19.
| accept | The reviewers were overall very enthusiastic about this paper, highlighting its novelty and clarity in terms of writing and visualizations. There are a few changes that I expect you will make based on your responses to the reviewers. | train | [
"1oqmnoO1aU3",
"6X1mm9FQA0",
"cVjMYkmUG6C",
"IaQ4CAZ7SCQ",
"UzEWxpT-FT",
"Uh-5wh-565",
"1Rx3qCNraw",
"bEXNkDvjvZE"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose the neural additive models which have an additive structure and which are more explainable than general neural networks. They apply this model for various tasks and applications and show their interpretability through visualizations. Its a good contribution and a reasonably well-written paper. ... | [
7,
-1,
-1,
-1,
-1,
5,
8,
7
] | [
2,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"nips_2021_wHkKTW2wrmm",
"bEXNkDvjvZE",
"Uh-5wh-565",
"1Rx3qCNraw",
"1oqmnoO1aU3",
"nips_2021_wHkKTW2wrmm",
"nips_2021_wHkKTW2wrmm",
"nips_2021_wHkKTW2wrmm"
] |
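The Neural Additive Model in the record above has a simple core structure: one small network per input feature, summed. A minimal PyTorch sketch of that structure follows; the paper's ExU units, feature dropout, and regularizers are not shown, and the hidden sizes here are arbitrary.

```python
import torch
import torch.nn as nn

class NAM(nn.Module):
    """f(x) = bias + sum_i f_i(x_i): each feature gets its own MLP, so each
    feature's learned shape function can be plotted and inspected directly."""
    def __init__(self, num_features, hidden=64):
        super().__init__()
        self.feature_nets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                          nn.Linear(hidden, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))
            for _ in range(num_features)
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):                       # x: (batch, num_features)
        contribs = torch.cat(
            [net(x[:, i:i + 1]) for i, net in enumerate(self.feature_nets)],
            dim=1,
        )                                       # (batch, num_features)
        return contribs.sum(dim=1) + self.bias  # logits / regression output

# Interpretability: evaluating feature_nets[i] on a 1-D grid traces that
# feature's contribution curve, which is the kind of plot the paper shows.
```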
nips_2021_LAwuz_L9U9j | Representation Learning for Event-based Visuomotor Policies | Event-based cameras are dynamic vision sensors that provide asynchronous measurements of changes in per-pixel brightness at a microsecond level. This makes them significantly faster than conventional frame-based cameras, and an appealing choice for high-speed robot navigation. While an interesting sensor modality, this asynchronously streamed event data poses a challenge for machine learning based computer vision techniques that are more suited for synchronous, frame-based data. In this paper, we present an event variational autoencoder through which compact representations can be learnt directly from asynchronous spatiotemporal event data. Furthermore, we show that such pretrained representations can be used for event-based reinforcement learning instead of end-to-end reward driven perception. We validate this framework of learning event-based visuomotor policies by applying it to an obstacle avoidance scenario in simulation. Compared to techniques that treat event data as images, we show that representations learnt from event streams result in faster policy training, adapt to different control capacities, and demonstrate a higher degree of robustness to environmental changes and sensor noise.
| accept | This paper introduces a new type of variational auto-encoder that can handle streams of high-frequency event-based spatiotemporal data (t, x, y, p) in order to decode dense images from a small set of N events, and then combines a pre-trained eVAE with traditional reinforcement learning (PPO) to learn visuomotor policies to control (and fly while avoiding obstacles) a quadcopter in the AirSim simulator.
Reviewers have praised the idea, the application of RL to event-based data streams, the writing of the paper, and the extent of experiments and comparisons to non-event-based VAE + RL. Reviewers had some questions about equations, about specifics of some ablations, about rendering time for event-based simulation, about showing the simulator, etc., that were all satisfactorily answered by the authors.
After careful consideration of the paper and given review scores of (6, 6, 7, 7, mean 6.5), it appears that the paper should be accepted. The authors are invited to carry out all the changes they promised to the reviewers, including open-sourcing their code.
| val | [
"dZwXs7iaRy-",
"x6bQcnIgNiR",
"MyYdikSR-H4",
"oElxlMsWCx-",
"iW5zwT6RLR",
"92IHIa1k4GB",
"1jpLs_jF4UT",
"ACw0HMbFEw",
"hPwdoCb-hlu",
"9oafL4Edyhb"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The author proposed an event based VAE for high speed robot navigation useful for reinforcement learning. They then compare to other image based RL approaches in AirSim and show they outperform other event based RL baselines. They demonstrate that the representation is useful for training visuomotor policies. They... | [
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_LAwuz_L9U9j",
"1jpLs_jF4UT",
"nips_2021_LAwuz_L9U9j",
"9oafL4Edyhb",
"hPwdoCb-hlu",
"dZwXs7iaRy-",
"MyYdikSR-H4",
"dZwXs7iaRy-",
"nips_2021_LAwuz_L9U9j",
"nips_2021_LAwuz_L9U9j"
] |
nips_2021_zDtFO9vohmF | Kernel Functional Optimisation | Traditional methods for kernel selection rely on parametric kernel functions or a combination thereof and although the kernel hyperparameters are tuned, these methods often provide sub-optimal results due to the limitations induced by the parametric forms. In this paper, we propose a novel formulation for kernel selection using efficient Bayesian optimisation to find the best fitting non-parametric kernel. The kernel is expressed using a linear combination of functions sampled from a prior Gaussian Process (GP) defined by a hyperkernel. We also provide a mechanism to ensure the positive definiteness of the Gram matrix constructed using the resultant kernels. Our experimental results on GP regression and Support Vector Machine (SVM) classification tasks involving both synthetic functions and several real-world datasets show the superiority of our approach over the state-of-the-art.
| accept | The paper proposes a zero-order optimization method where the optimized variable is a kernel function in a Hyper-RKHS induced by a selected hyper-kernel. The algorithm is applied to the optimization of kernel functions for C-SVM and GP regression. Experiments show significant improvement over existing methods. The reviewers consider the paper technically sound and clearly written, with a potentially relevant impact on the field. We encourage the authors to improve the clarity of the mathematical part following the discussion that emerged during the review and the rebuttal period.
| train | [
"V11510lTNHO",
"j52iL35p5jI",
"IQ4LFo7etJM",
"8VZJTOdQvTs",
"wEryYYMHLl2",
"gpTyNGt-CV5",
"Ueas9mffuJB",
"ClEl_FYToDy",
"5uHT6it0Qf",
"dKNFGPed9D7",
"SCfVCrsVrN",
"pLlSERI6SAo",
"UKUW83RUtd",
"aYsHpQe6hE-",
"Rvw8_MSK8g",
"lpzP9SzcWs3"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for acknowledging our rebuttal. We will incorporate the reviewer suggestions into the paper if it gets accepted.",
"This work is interested in kernel selection. In contrast with most of the literature they consider non-parametric class of kernels using hyper-kernels. Nevertheless, some pre... | [
-1,
5,
-1,
8,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8
] | [
-1,
3,
-1,
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"IQ4LFo7etJM",
"nips_2021_zDtFO9vohmF",
"aYsHpQe6hE-",
"nips_2021_zDtFO9vohmF",
"ClEl_FYToDy",
"5uHT6it0Qf",
"nips_2021_zDtFO9vohmF",
"dKNFGPed9D7",
"UKUW83RUtd",
"8VZJTOdQvTs",
"lpzP9SzcWs3",
"Rvw8_MSK8g",
"Ueas9mffuJB",
"j52iL35p5jI",
"nips_2021_zDtFO9vohmF",
"nips_2021_zDtFO9vohmF"
... |
nips_2021_L9JM-pxQOl | Generalized Shape Metrics on Neural Representations | Understanding the operation of biological and artificial networks remains a difficult and important challenge. To identify general principles, researchers are increasingly interested in surveying large collections of networks that are trained on, or biologically adapted to, similar tasks. A standardized set of analysis tools is now needed to identify how network-level covariates---such as architecture, anatomical brain region, and model organism---impact neural representations (hidden layer activations). Here, we provide a rigorous foundation for these analyses by defining a broad family of metric spaces that quantify representational dissimilarity. Using this framework, we modify existing representational similarity measures based on canonical correlation analysis and centered kernel alignment to satisfy the triangle inequality, formulate a novel metric that respects the inductive biases in convolutional layers, and identify approximate Euclidean embeddings that enable network representations to be incorporated into essentially any off-the-shelf machine learning method. We demonstrate these methods on large-scale datasets from biology (Allen Institute Brain Observatory) and deep learning (NAS-Bench-101). In doing so, we identify relationships between neural representations that are interpretable in terms of anatomical features and model performance.
| accept | This paper introduces a new similarity measure for representations, which is timely given the wide recent use of comparing representations both in analyzing neural networks and in analyzing brain activity. There was enthusiasm from the reviewers about the fact that the metrics introduced are formally grounded; however, there was a discussion about whether the paper should include a more formal topological analysis.
A consensus was reached by having the authors discuss their specific assumptions, the potential connections to topological analysis and what could be addressed in future work. Given that the authors have promised to implement these changes, I recommend acceptance.
| train | [
"GCGwBJrS4ci",
"wok2iShPNDM",
"-dGdLE0t3y9",
"70CkUvNRcMT",
"5ndF-DiCImv",
"O3Icmso5v3q",
"dV4hCF66nf",
"s_JAHbIKqmb",
"_ZRF339va3q",
"H4bHwc8NBL8",
"bqMiJDYFgw",
"tSrCoS_rydB",
"a7ZP2YKHCS9",
"jmUEvyPVfxF"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper develops metrics for comparing different (yet presumably related) representations. The emphasis is on representations generated by artificial or biological neural networks. The paper relates this development to previously proposed approaches. Experimental results on recordings of neuronal activity in th... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
6,
9
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_L9JM-pxQOl",
"-dGdLE0t3y9",
"70CkUvNRcMT",
"5ndF-DiCImv",
"dV4hCF66nf",
"s_JAHbIKqmb",
"_ZRF339va3q",
"nips_2021_L9JM-pxQOl",
"GCGwBJrS4ci",
"jmUEvyPVfxF",
"a7ZP2YKHCS9",
"s_JAHbIKqmb",
"nips_2021_L9JM-pxQOl",
"nips_2021_L9JM-pxQOl"
] |
nips_2021_4jPVcKEYpSZ | Diverse Message Passing for Attribute with Heterophily | Most of the existing GNNs can be modeled via the Uniform Message Passing framework. This framework considers all the attributes of each node in their entirety, shares the uniform propagation weights along each edge, and focuses on the uniform weight learning. The design of this framework rests on two prerequisites: the simplification of homophily and heterophily to a node-level property, and the neglect of attribute differences. Unfortunately, different attributes possess diverse characteristics. In this paper, the network homophily rate defined with respect to the node labels is extended to an attribute homophily rate by taking the attributes as weak labels. Based on this attribute homophily rate, we propose a Diverse Message Passing (DMP) framework, which specifies every attribute propagation weight on each edge. Besides, we propose two specific strategies to significantly reduce the computational complexity of DMP and prevent the overfitting issue. By investigating the spectral characteristics, we show that existing spectral GNNs are actually equivalent to a degenerate version of DMP. From the perspective of numerical optimization, we provide a theoretical analysis to demonstrate DMP's powerful representation ability and its ability to alleviate the over-smoothing issue. Evaluations on various real networks demonstrate the superiority of our DMP in handling networks with heterophily and alleviating the over-smoothing issue, compared to the existing state-of-the-art methods.
| accept | This paper proposes a message-passing scheme for graph neural networks (GNNs) derived from homophily. The problem is clearly motivated. The distinctness of the work vis-a-vis existing GNN models has been elaborated in both the spatial and spectral domains. One of the reviewers appreciated the theoretical insights, which help in understanding the shortcomings of the existing literature. However, the main concern seems to be that the comparisons with the state of the art need to be improved.
| train | [
"mrnNSQuOWTf",
"sYQIpd5EfEJ",
"9f2-GXffU-",
"8sGpbXpUUri",
"6C-ItQqRaS",
"2hSLnqQZdG",
"OQyOGmex1Y9",
"heixjKp8ZAO"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper focused on improving graph neural networks from homophily and over-smoothing issues by introducing a diverse message passing (DMP) framework. Experiments on 9 public datasets with a variety of homophily scores were provided to validate the effectiveness of the proposed method. Pros:\n-\tFollowing prev... | [
7,
-1,
-1,
-1,
-1,
6,
4,
7
] | [
4,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nips_2021_4jPVcKEYpSZ",
"OQyOGmex1Y9",
"mrnNSQuOWTf",
"heixjKp8ZAO",
"2hSLnqQZdG",
"nips_2021_4jPVcKEYpSZ",
"nips_2021_4jPVcKEYpSZ",
"nips_2021_4jPVcKEYpSZ"
] |
nips_2021_ySFGlFjgIfN | Towards Robust Bisimulation Metric Learning | Learned representations in deep reinforcement learning (DRL) have to extract task-relevant information from complex observations, balancing between robustness to distraction and informativeness to the policy. Such stable and rich representations, often learned via modern function approximation techniques, can enable practical application of the policy improvement theorem, even in high-dimensional continuous state-action spaces. Bisimulation metrics offer one solution to this representation learning problem, by collapsing functionally similar states together in representation space, which promotes invariance to noise and distractors. In this work, we generalize value function approximation bounds for on-policy bisimulation metrics to non-optimal policies and approximate environment dynamics. Our theoretical results help us identify embedding pathologies that may occur in practical use. In particular, we find that these issues stem from an underconstrained dynamics model and an unstable dependence of the embedding norm on the reward signal in environments with sparse rewards. Further, we propose a set of practical remedies: (i) a norm constraint on the representation space, and (ii) an extension of prior approaches with intrinsic rewards and latent space regularization. Finally, we provide evidence that the resulting method is not only more robust to sparse reward functions, but also able to solve challenging continuous control tasks with observational distractions, where prior methods fail.
| accept | Three knowledgeable reviewers recommended acceptance of the paper (2x accept, 1x weak accept) and one reviewer recommended (weak) rejection of the paper. The authors addressed most of the reviewers' concerns in their rebuttal, but some concerns were not resolved. In the discussion about the paper, we came to the conclusion that the paper can provide several interesting insights but needs to address several of the reviewer concerns in the camera-ready version. I am therefore recommending acceptance of the paper and at the same time strongly advise the authors to carefully adjust the paper to address the remaining reviewers' concerns (in particular, reviewer pEeB's concerns regarding clarity and theoretical statements). | train | [
"mzhYCQxRL-C",
"oq6coHfwLXY",
"8PnB2CRmi24",
"gbENPRwOa-9",
"5x1NbeibPkN",
"XUCrkRuHvKn",
"mzdlomHk0n1",
"V8UcgHcomxH",
"VXCRwpoCGT2",
"SZKIGUy4SwH"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper considers the problem of learning good representations for reinforcement learning problems. It approaches this from the perspective of bisimulation metrics, extending earlier work on on-policy bisimulation for control (DBC). The paper identifies issues with DBC that also appear in other types of metric ... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
2
] | [
"nips_2021_ySFGlFjgIfN",
"mzdlomHk0n1",
"5x1NbeibPkN",
"VXCRwpoCGT2",
"mzhYCQxRL-C",
"SZKIGUy4SwH",
"V8UcgHcomxH",
"nips_2021_ySFGlFjgIfN",
"nips_2021_ySFGlFjgIfN",
"nips_2021_ySFGlFjgIfN"
] |
nips_2021_DbxKZvfOIhu | Beyond BatchNorm: Towards a Unified Understanding of Normalization in Deep Learning | Inspired by BatchNorm, there has been an explosion of normalization layers in deep learning. Recent works have identified a multitude of beneficial properties in BatchNorm to explain its success. However, given the pursuit of alternative normalization layers, these properties need to be generalized so that any given layer's success/failure can be accurately predicted. In this work, we take a first step towards this goal by extending known properties of BatchNorm in randomly initialized deep neural networks (DNNs) to several recently proposed normalization layers. Our primary findings follow: (i) similar to BatchNorm, activations-based normalization layers can prevent exponential growth of activations in ResNets, but parametric techniques require explicit remedies; (ii) use of GroupNorm can ensure an informative forward propagation, with different samples being assigned dissimilar activations, but increasing group size results in increasingly indistinguishable activations for different samples, explaining slow convergence speed in models with LayerNorm; and (iii) small group sizes result in large gradient norm in earlier layers, hence explaining training instability issues in Instance Normalization and illustrating a speed-stability tradeoff in GroupNorm. Overall, our analysis reveals a unified set of mechanisms that underpin the success of normalization methods in deep learning, providing us with a compass to systematically explore the vast design space of DNN normalization layers.
| accept | This paper performs extensive empirical scans, comparing various aspects of normalization methods. The reviews are all positive, but the discussion focused on novelty and clarity issues. Regarding novelty, some results seem less surprising and read like a validation of existing understanding (sections 3 and 5), while other results seem more interesting (section 4). Regarding clarity, the paper had a lot of confusing typos (especially in the more interesting section 4), but those were cleared up in the discussion. Still, I feel some parts are missing details, and I recommend the authors improve them (e.g., I couldn't follow the "proof idea" of conjecture 1, or the parts with the harmonic mean in section 5). | train | [
"ytBng3Lppf_",
"kiLzsKaZBy2",
"zh0Gt2_Mk2x",
"ft9ImOT3w6W",
"rjmC4rUy0o",
"ajWLYIKR26f",
"WpSuKUJm7B",
"hJomMejQDXh",
"OAoHzllgIXy",
"LyLGKV5TIha",
"Ctrvpg83Jtg",
"TL8P5ypx3S6",
"KcwUPSHKf63",
"KbQXECnsR7L"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
" Following GroupNorm and using G for number of groups is definitely a justified suggestion and we will make sure to follow it. We have also fixed typos in Section 4 (both figure 7 and text) to improve clarity of the section. Thank you very much for pointing those out! These changes will be reflected in the final v... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
-1,
7,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
-1,
4,
-1,
-1,
-1,
-1
] | [
"kiLzsKaZBy2",
"zh0Gt2_Mk2x",
"ft9ImOT3w6W",
"rjmC4rUy0o",
"ajWLYIKR26f",
"KbQXECnsR7L",
"nips_2021_DbxKZvfOIhu",
"nips_2021_DbxKZvfOIhu",
"WpSuKUJm7B",
"nips_2021_DbxKZvfOIhu",
"LyLGKV5TIha",
"KcwUPSHKf63",
"hJomMejQDXh",
"Ctrvpg83Jtg"
] |
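
The review_* columns in each row above appear to be parallel lists: entry i of review_writers, review_ratings, review_confidences, and review_reply_tos all describe the post review_ids[i], with -1 serving as a "no score" sentinel for posts (typically author replies) that carry no rating or confidence. Below is a minimal sketch of consuming one record under that assumption; the dict abridges the nips_2021_ySFGlFjgIfN row above, and the helper function is illustrative rather than part of the dataset.

```python
# Illustrative only: pair the parallel review lists of one record and keep
# the entries that carry a score. The record abridges the
# nips_2021_ySFGlFjgIfN row above; -1 marks posts with no rating.
record = {
    "paper_id": "nips_2021_ySFGlFjgIfN",
    "review_ids": ["mzhYCQxRL-C", "oq6coHfwLXY", "V8UcgHcomxH"],
    "review_writers": ["official_reviewer", "author", "official_reviewer"],
    "review_ratings": [7, -1, 6],
    "review_confidences": [3, -1, 1],
}

def scored_reviews(rec):
    """Yield (review_id, rating, confidence) for posts that carry a score."""
    for rid, writer, rating, conf in zip(
        rec["review_ids"],
        rec["review_writers"],
        rec["review_ratings"],
        rec["review_confidences"],
    ):
        # Author replies and other non-review posts use the -1 sentinel.
        if writer == "official_reviewer" and rating != -1:
            yield rid, rating, conf

ratings = [rating for _, rating, _ in scored_reviews(record)]
print(sum(ratings) / len(ratings))  # mean official rating: 6.5 here
```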