paper_id: string (19–21 chars)
paper_title: string (8–170 chars)
paper_abstract: string (8–5.01k chars)
paper_acceptance: string (18 classes)
meta_review: string (29–10k chars)
label: string (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
nips_2021_6vWuYzkp8d
Discovering and Achieving Goals via World Models
How can artificial agents learn to solve many diverse tasks in complex visual environments without any supervision? We decompose this question into two challenges: discovering new goals and learning to reliably achieve them. Our proposed agent, Latent Explorer Achiever (LEXA), addresses both challenges by learning a world model from image inputs and using it to train an explorer and an achiever policy via imagined rollouts. Unlike prior methods that explore by reaching previously visited states, the explorer plans to discover unseen surprising states through foresight, which are then used as diverse targets for the achiever to practice. After the unsupervised phase, LEXA solves tasks specified as goal images zero-shot without any additional learning. LEXA substantially outperforms previous approaches to unsupervised goal reaching, both on prior benchmarks and on a new challenging benchmark with 40 test tasks spanning across four robotic manipulation and locomotion domains. LEXA further achieves goals that require interacting with multiple objects in sequence. Project page: https://orybkin.github.io/lexa/
accept
This paper proposes to learn a useful goal-conditioned policy in the absence of reward by jointly training an explorer that learns to visit uncertain states and an achiever that learns to reach a given goal state. All of the reviewers agreed that this is a novel combination of prior work on self-supervised goal-conditioned policy learning and model-based RL, though no individual component is entirely new. The only initial concern was that the proposed method was not evaluated on domains where the baselines were evaluated. However, the authors provided an additional result during the rebuttal period, and the majority of the reviewers are satisfied with it. Therefore, I recommend accepting this paper and suggest that the authors include the new result in the camera-ready version.
train
[ "dtZYTVgxseu", "25qs01REUCa", "wrrqDZdGA7", "ZmXShJVCCks", "Eg5NyOxQ-jn", "kcoJUhwRUVQ", "4Ez6G2u7P9-", "XaTIGtR8mDh", "cU3RjLq7b0H", "EffLImRCe-o", "8gBheNgsSBV", "nkuEh3TBktx", "frVPNyF0CAO", "Sdq5xIr2WUV", "b4BMegNzJJ", "RvOIafdzBc_", "J5QTPv6aCFp", "L683va9aqQy", "78i_1wDWfZ"...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ " Thank you for the additional experiments, they address my main concern with the paper. I have raised my score accordingly. ", "This paper proposes a method to learn goal-conditioned policies by combining ideas from goal-conditioned RL and model-based exploration. It operates by training an exploration policy wi...
[ -1, 6, 6, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, 4, 4, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "ZmXShJVCCks", "nips_2021_6vWuYzkp8d", "nips_2021_6vWuYzkp8d", "b4BMegNzJJ", "kcoJUhwRUVQ", "XaTIGtR8mDh", "8gBheNgsSBV", "4Ez6G2u7P9-", "nips_2021_6vWuYzkp8d", "78i_1wDWfZ", "nkuEh3TBktx", "wrrqDZdGA7", "cU3RjLq7b0H", "25qs01REUCa", "25qs01REUCa", "lqT9oqVaRAQ", "wrrqDZdGA7", "-6Z...
nips_2021_KbV-UZRKb3g
Understanding and Improving Early Stopping for Learning with Noisy Labels
The memorization effect of deep neural network (DNN) plays a pivotal role in many state-of-the-art label-noise learning methods. To exploit this property, the early stopping trick, which stops the optimization at the early stage of training, is usually adopted. Current methods generally decide the early stopping point by considering a DNN as a whole. However, a DNN can be considered as a composition of a series of layers, and we find that the latter layers in a DNN are much more sensitive to label noise, while their former counterparts are quite robust. Therefore, selecting a stopping point for the whole network may make different DNN layers antagonistically affect each other, thus degrading the final performance. In this paper, we propose to separate a DNN into different parts and progressively train them to address this problem. Instead of the early stopping which trains a whole DNN all at once, we initially train former DNN layers by optimizing the DNN with a relatively large number of epochs. During training, we progressively train the latter DNN layers by using a smaller number of epochs with the preceding layers fixed to counteract the impact of noisy labels. We term the proposed method as progressive early stopping (PES). Despite its simplicity, compared with the traditional early stopping, PES can help to obtain more promising and stable results. Furthermore, by combining PES with existing approaches on noisy label training, we achieve state-of-the-art performance on image classification benchmarks. The code is made public at https://github.com/tmllab/PES.
accept
The paper proposes a variant of early stopping for learning with noisy labels. The method starts with the hypothesis that early layers are less prone to be adversely affected by label noise than later layers. The authors empirically validate this hypothesis and then propose a progressive layerwise training method for tackling label noise. This is then used to identify noisy examples, which are treated as unlabeled data, and existing semi-supervised methods are used to get further empirical gains. Reviewers have expressed some concerns about the heuristic components in the method (use of Adam for last layers vs. SGD for earlier layers), the several metaparameters (the number of epochs for each subblock), and the increased training time of the method due to progressive/multi-stage training, but they have found the motivation behind the paper convincing and have been overall positive about the paper. I believe the paper is above the acceptance threshold for publication.
train
[ "4cfl_kS-hq-", "-GU3zYdFA7y", "-hgu3wKT8eL", "8NlxeNWZSVf", "IrQQVO45Aaz", "0e0TMaYylzN", "zRbKmiZkBzu", "q6uA2CrYXjC", "GZECk5fb2LP", "N4NAlauc3In", "E_Wdm6zHNZs", "6fK7pDLca3", "RppfOiyb6RS", "umyThMd0-za" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I have carefully read the response. The authors have addressed my concerns well. Thus, I am glad to keep my score for acceptance. Thanks.", " I thank the authors for the detailed comments and clarifications! I have carefully read the authors' response, and my concerns were addressed. Thus, I will keep my accept...
[ -1, -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "N4NAlauc3In", "GZECk5fb2LP", "nips_2021_KbV-UZRKb3g", "6fK7pDLca3", "-hgu3wKT8eL", "nips_2021_KbV-UZRKb3g", "E_Wdm6zHNZs", "nips_2021_KbV-UZRKb3g", "umyThMd0-za", "RppfOiyb6RS", "0e0TMaYylzN", "-hgu3wKT8eL", "nips_2021_KbV-UZRKb3g", "nips_2021_KbV-UZRKb3g" ]
nips_2021_PJEPtZmw-SQ
Distributionally Robust Imitation Learning
We consider the imitation learning problem of learning a policy in a Markov Decision Process (MDP) setting where the reward function is not given, but demonstrations from experts are available. Although the goal of imitation learning is to learn a policy that produces behaviors nearly as good as the experts’ for a desired task, assumptions of consistent optimality for demonstrated behaviors are often violated in practice. Finding a policy that is distributionally robust against noisy demonstrations based on an adversarial construction potentially solves this problem by avoiding optimistic generalizations of the demonstrated data. This paper studies Distributionally Robust Imitation Learning (DRoIL) and establishes a close connection between DRoIL and Maximum Entropy Inverse Reinforcement Learning. We show that DRoIL can be seen as a framework that maximizes a generalized concept of entropy. We develop a novel approach to transform the objective function into a convex optimization problem over a polynomial number of variables for a class of loss functions that are additive over state and action spaces. Our approach lets us optimize both stationary and non-stationary policies and, unlike prevalent previous methods, it does not require repeatedly solving an inner reinforcement learning problem. We experimentally show the significant benefits of DRoIL’s new optimization method on synthetic data and a highway driving environment.
accept
This paper is somewhat borderline. The paper shows an interesting connection between distributional robustness and MaxEnt IRL, showing that MaxEnt IRL is a special case of distributionally robust IL. Using DRO in IRL is interesting & underexplored. The paper also has a minor contribution of making distributionally robust IL more efficient. The main downsides of the paper are as follows. First, the approach is limited to discrete state and action spaces, severely limiting its practical relevance. As a result, the experimental comparisons are also outdated. Second, the connection between DRO and MaxEnt IRL has limited technical difficulty since it simply invokes a lemma about exponential families (though simplicity can also be viewed as a good thing). Overall, this is a scenario where the paper is correct and generally well-executed but has potentially limited expected impact and limited technical difficulty. Given that impact is subjective and at least half of the reviewers found the ideas to be interesting, it seems like others in the community would also get value out of the paper.
train
[ "HV2XOtlxI83", "tbScFkjqzbg", "qV8dMhP2IBw", "S8Re0DUPXU", "Rg_SuLVRP-f", "KVNp4SsKhzW", "GGk6AqjSahO", "2TlEzsvDQ98", "rjPGzrBiUHf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for the clarifications. I stick to my initial assessment.", "This paper studies Distributionally Robust Imitation Learning (DIRL) and reveals that Maximum Entropy (MaxEnt) can be a special case of DIRL. In the case of an \"additive\" loss function over state and actions, it also proposes a new algorit...
[ -1, 5, 6, -1, -1, -1, -1, 7, 6 ]
[ -1, 4, 3, -1, -1, -1, -1, 4, 3 ]
[ "Rg_SuLVRP-f", "nips_2021_PJEPtZmw-SQ", "nips_2021_PJEPtZmw-SQ", "tbScFkjqzbg", "2TlEzsvDQ98", "qV8dMhP2IBw", "rjPGzrBiUHf", "nips_2021_PJEPtZmw-SQ", "nips_2021_PJEPtZmw-SQ" ]
nips_2021_OrPraBRj45z
On the Power of Edge Independent Graph Models
Sudhanshu Chanpuriya, Cameron Musco, Konstantinos Sotiropoulos, Charalampos Tsourakakis
accept
The reviewers are generally leaning toward accepting the paper. They unanimously appreciate the theoretical contribution of the paper regarding edge-independent graph models and consider the paper well written. Most concerns have been very well addressed by the authors. Possible extensions of the paper to edge-dependent models are interesting but beyond the scope of the paper at hand. Reviewers disagree on the potential impact of the paper and on the relevance of the theoretical results in practical applications. It was mentioned that the paper is highly valuable for research on graph generative models and may have further impact on other research regarding learning on graphs.
train
[ "veC2EHR1QFo", "6iAQEU3BRR0", "qITLoRMBmw-", "UnF_pL4sv6Z", "LAjdXtBfq8m", "5dX80oaZId", "URKi2n8lEHr", "zK39q-h8tlb", "YEuVW0fRD-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Dear authors, \nThank you for your response to my queries/comments as well as the other reviewers. I really appreciate your detailed responses and taking the time to add them.\n\nI do not have any further questions at present. I will review the other reviewers' comments as well as all your responses to them to fi...
[ -1, 6, 7, 6, -1, -1, -1, -1, 5 ]
[ -1, 4, 4, 3, -1, -1, -1, -1, 4 ]
[ "URKi2n8lEHr", "nips_2021_OrPraBRj45z", "nips_2021_OrPraBRj45z", "nips_2021_OrPraBRj45z", "qITLoRMBmw-", "UnF_pL4sv6Z", "6iAQEU3BRR0", "YEuVW0fRD-", "nips_2021_OrPraBRj45z" ]
nips_2021_W6e384Lkjbw
Stochastic Online Linear Regression: the Forward Algorithm to Replace Ridge
reda ouhamma, Odalric-Ambrym Maillard, Vianney Perchet
accept
The paper concerns online linear regression algorithms -- ridge regression (RR) and the Forward algorithm, a.k.a. the Vovk-Azoury-Warmuth forecaster -- in the stochastic setting. Both algorithms work in the adversarial setting but require the range of labels to be bounded (additionally, RR needs to know the range in order to clip its predictions). The authors consider a stochastic well-specified model $y = \langle \theta^*, x \rangle + \xi$, with the noise $\xi$ being sub-Gaussian. They show that RR and Forward work in this setting (high-probability regret bound) even though labels are potentially unbounded, with a better scaling of the bound with respect to the regularization and the range of inputs. It is also shown that RR is more sensitive than Forward to the right choice of the regularization for small sample sizes. Applications of their results to stochastic linear bandits are given (better regret). Overall, the reviewers liked the paper and found these results to be novel and interesting. However, they also pointed at two issues, which led most of them to evaluate the paper below the acceptance threshold: 1. Limited significance. The paper does not introduce any new algorithms, and the proof techniques applied to RR and Forward were not found to be novel or surprising. Two of the reviewers pointed out that due to sub-Gaussianity of the noise, the labels are w.h.p. bounded in the range $O(\sqrt{\log T})$ (for a bounded comparator), so the proposed bound might not have a significant advantage over the existing one. Furthermore, the upper bounds have $O(d^2)$ dependence on the dimension as opposed to $O(d)$ for the adversarial case (the authors claim, though, that they can get down to $O(d)$ by an improved analysis in the revision). 2. 
Reviewers reported problems with the presentation and suggested that the paper be improved and more discussion be added (two of the reviewers were confused about the motivation and the actual contribution, especially around the discussion on tuning the regularization). This is why I think the paper is not yet ready to be published. However, I encourage the authors to resubmit their paper, taking into account the reviewers' remarks, in particular: to improve the presentation, extend the discussion of the main results, discuss the missing references to past work, and provide clear motivation around the scaling of the regularization coefficient.
train
[ "aoa2ANE5QzL", "eIMx_LiZ2LB", "y_kZitvXkEW", "BNPVX3ZLHuq", "t6ucpNiI01", "IyP4JHEoWf1", "dSbItBzr57x", "gi0ydJZWDwH", "gxD8l9N0wEK", "37Ol5yZQ_YQ", "vw4hFqdCrCI" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper offers a new analysis of the forward algorithm in the setting of stochastic online linear regression. Traditional analysis of this method under adversarial setting suffers from assumptions of boundedness of the output variable. These assumptions are not satisfied in the stochastic setting. The authors co...
[ 5, 6, -1, -1, 5, -1, 5, -1, -1, -1, -1 ]
[ 3, 4, -1, -1, 3, -1, 4, -1, -1, -1, -1 ]
[ "nips_2021_W6e384Lkjbw", "nips_2021_W6e384Lkjbw", "BNPVX3ZLHuq", "IyP4JHEoWf1", "nips_2021_W6e384Lkjbw", "vw4hFqdCrCI", "nips_2021_W6e384Lkjbw", "aoa2ANE5QzL", "t6ucpNiI01", "eIMx_LiZ2LB", "dSbItBzr57x" ]
nips_2021_xVLzpMOexqo
Dr Jekyll & Mr Hyde: the strange case of off-policy policy updates
The policy gradient theorem states that the policy should only be updated in states that are visited by the current policy, which leads to insufficient planning in the off-policy states, and thus to convergence to suboptimal policies. We tackle this planning issue by extending the policy gradient theory to policy updates with respect to any state density. Under these generalized policy updates, we show convergence to optimality under a necessary and sufficient condition on the updates’ state densities, and thereby solve the aforementioned planning issue. We also prove asymptotic convergence rates that significantly improve those in the policy gradient literature. To implement the principles prescribed by our theory, we propose an agent, Dr Jekyll & Mr Hyde (J&H), with a double personality: Dr Jekyll purely exploits while Mr Hyde purely explores. J&H’s independent policies allow to record two separate replay buffers: one on-policy (Dr Jekyll’s) and one off-policy (Mr Hyde’s), and therefore to update J&H’s models with a mixture of on-policy and off-policy updates. More than an algorithm, J&H defines principles for actor-critic algorithms to satisfy the requirements we identify in our analysis. We extensively test on finite MDPs where J&H demonstrates a superior ability to recover from converging to a suboptimal policy without impairing its speed of convergence. We also implement a deep version of the algorithm and test it on a simple problem where it shows promising results.
accept
The reviewers discussed this paper and came to a consensus that it is a useful contribution and should be accepted. There were concerns about the fact that this work is primarily in the tabular setting, and about its novelty relative to existing work. The restriction to the tabular setting is quite limiting, in that convergence behavior is known to be quite different for (biased) policy gradient methods in the tabular setting (namely that convergence is still guaranteed, as opposed to under function approximation, where there are known counterexamples). An issue raised by a reviewer is the omission of previous work on off-policy policy gradients, which highlights some of these issues and considers a different behavior distribution in the objective (somewhat similar to what is done here, though less general since it is restricted to the distribution under a fixed behavior policy). This connection should be discussed, and the issue of potential divergence more clearly highlighted. The authors do acknowledge this in the paper, referring to the work on biased actor-critic, but it is somewhat buried. This paper would be improved with a much more explicit discussion of this, and of if or how the authors think these results might extend to function approximation. As mentioned, despite some of these issues, the insights provided, particularly about convergence rates and relationships to the state weightings, are useful.
val
[ "EkUeRiT-NDM", "4LHjnEf1xU", "D1z7giaFAcS", "KVjmaGjxI_", "Qf0Uw4a-89S", "mivovYp8rN", "ocu16popqz", "9-7jq6IzIuL", "txcLzDRnOTY", "kejlnICeHtT", "FOdegDjb7fL", "mLU1yeP5ITb" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " ***Trying certain experiments with Jekyll for evaluation only could be an interesting addition. This would match the evaluation procedure in many deep RL papers with policy gradient methods and possibly simplifiy the tuning of epsilon (no decaying).***\n\nWe reproduced the constant exploration hyperparameter sear...
[ -1, 7, -1, 7, -1, 6, -1, -1, -1, -1, -1, 9 ]
[ -1, 4, -1, 3, -1, 3, -1, -1, -1, -1, -1, 3 ]
[ "D1z7giaFAcS", "nips_2021_xVLzpMOexqo", "kejlnICeHtT", "nips_2021_xVLzpMOexqo", "ocu16popqz", "nips_2021_xVLzpMOexqo", "FOdegDjb7fL", "mLU1yeP5ITb", "KVjmaGjxI_", "4LHjnEf1xU", "mivovYp8rN", "nips_2021_xVLzpMOexqo" ]
nips_2021_h4es0CIohF
Understanding Adaptive, Multiscale Temporal Integration In Deep Speech Recognition Systems
Natural signals such as speech are hierarchically structured across many different timescales, spanning tens (e.g., phonemes) to hundreds (e.g., words) of milliseconds, each of which is highly variable and context-dependent. While deep neural networks (DNNs) excel at recognizing complex patterns from natural signals, relatively little is known about how DNNs flexibly integrate across multiple timescales. Here, we show how a recently developed method for studying temporal integration in biological neural systems – the temporal context invariance (TCI) paradigm – can be used to understand temporal integration in DNNs. The method is simple: we measure responses to a large number of stimulus segments presented in two different contexts and estimate the smallest segment duration needed to achieve a context invariant response. We applied our method to understand how the popular DeepSpeech2 model learns to integrate across time in speech. We find that nearly all of the model units, even in recurrent layers, have a compact integration window within which stimuli substantially alter the response and outside of which stimuli have little effect. We show that training causes these integration windows to shrink at early layers and expand at higher layers, creating a hierarchy of integration windows across the network. Moreover, by measuring integration windows for time-stretched/compressed speech, we reveal a transition point, midway through the trained network, where integration windows become yoked to the duration of stimulus structures (e.g., phonemes or words) rather than absolute time. Similar phenomena were observed in a purely recurrent and purely convolutional network although structure-yoked integration was more prominent in the recurrent network. 
These findings suggest that deep speech recognition systems use a common motif to encode the hierarchical structure of speech: integrating across short, time-yoked windows at early layers and long, structure-yoked windows at later layers. Our method provides a straightforward and general-purpose toolkit for understanding temporal integration in black-box machine learning models.
accept
In this paper the authors use temporal context invariance (TCI) to understand temporal integration in black-box deep neural networks. The study is carried out using the DeepSpeech 2 model. The authors find that integration windows are specialized to different timescales in different layers: the higher the layer, the more sensitive it is to longer timescales. In addition, lower layers are more sensitive to static (absolute-time) windows, while higher layers adapt to the duration of stimulus structures. All reviewers find the work interesting and methodologically novel. The hierarchical structure and pattern of response revealed in the experiments, despite being in line with some previous observations/speculations, are still quite interesting and intriguing. The method investigated in the paper can also potentially provide a new tool for the community to gain insights into black-box machine learning models. I would recommend accept. The authors should address the questions raised by the reviewers in the revision. If possible, please also include new experimental results on other model architectures to make the paper stronger.
test
[ "37sEZyOvZp_", "IcphMlRMfZd", "_V1aNReYiEe", "IHiIFVbj5e1", "ISs5D24x0MS", "SRP6UuRyQZc", "Fjxj0Vzw3s", "NM8kYSRmU2x", "bWN-A16iMoh", "MAaPJeITAkO" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors investigate the DeepSpeech2 speech recognition model using temporal context invariance (TCI), which was recently introduced and used to study temporal integration in biological systems. The analysis shows that the \"integration windows\" for which the intermediate outputs of DeepSpeech2 are approximate...
[ 6, -1, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "nips_2021_h4es0CIohF", "NM8kYSRmU2x", "NM8kYSRmU2x", "bWN-A16iMoh", "bWN-A16iMoh", "37sEZyOvZp_", "MAaPJeITAkO", "nips_2021_h4es0CIohF", "nips_2021_h4es0CIohF", "nips_2021_h4es0CIohF" ]
nips_2021_nqSLT0WcZq
VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer
Since visual perception can give rich information beyond text descriptions for world understanding, there has been increasing interest in leveraging visual grounding for language learning. Recently, vokenization (Tan and Bansal, 2020) has attracted attention by using the predictions of a text-to-image retrieval model as labels for language model supervision. Despite its success, the method suffers from approximation error of using finite image labels and the lack of vocabulary diversity of a small image-text dataset. To overcome these limitations, we present VidLanKD, a video-language knowledge distillation method for improving language understanding. We train a multi-modal teacher model on a video-text dataset, and then transfer its knowledge to a student language model with a text dataset. To avoid approximation error, we propose to use different knowledge distillation objectives. In addition, the use of a large-scale video-text dataset helps learn diverse and richer vocabularies. In our experiments, VidLanKD achieves consistent improvements over text-only language models and vokenization models, on several downstream language understanding tasks including GLUE, SQuAD, and SWAG. We also demonstrate the improved world knowledge, physical reasoning, and temporal reasoning capabilities of our model by evaluating on the GLUE-diagnostics, PIQA, and TRACIE datasets. Lastly, we present comprehensive ablation studies as well as visualizations of the learned text-to-video grounding results of our teacher and student language models.
accept
All but one reviewer recommend the acceptance of this paper. The reviewer not recommending acceptance has remaining concerns about positioning this work with respect to Vokenization, and the AC encourages the authors to address this. Please also be careful to explain in this submission the additional details requested by this reviewer. However, the other reviewers were quite positive about this work, and one reviewer noted that they felt this paper makes a more convincing case for the general topic of creating visually-grounded NLP models than Vokenization does. The AC recommends acceptance.
train
[ "xJWg1wcIlSK", "vM8F1jzRtMP", "60Uy0Qd3TXl", "Kante-aznLS", "oWp7GN4KYsL", "7vOEXqK--l2", "knNF1R9w276", "UGGsNefsWnW", "a_hkk-hGAaU", "6U2DbKNn-eu", "4C1qUCqmtdM", "4H2HelNLUA" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Thank you for giving suggestions. We used triplet loss for fair comparison with Vokenization paper [1]. We conducted an ablation experiment using triplet vs. N-pair loss for teacher model pretraining. For N-pair loss [2], concretely, for every batch with N image-text pairs, we create N positive pairs and use the...
[ -1, -1, 7, -1, 6, -1, 5, -1, -1, -1, -1, 7 ]
[ -1, -1, 3, -1, 3, -1, 3, -1, -1, -1, -1, 4 ]
[ "7vOEXqK--l2", "Kante-aznLS", "nips_2021_nqSLT0WcZq", "6U2DbKNn-eu", "nips_2021_nqSLT0WcZq", "UGGsNefsWnW", "nips_2021_nqSLT0WcZq", "knNF1R9w276", "4H2HelNLUA", "60Uy0Qd3TXl", "oWp7GN4KYsL", "nips_2021_nqSLT0WcZq" ]
nips_2021_9RFFgpQAOzk
Detecting Individual Decision-Making Style: Exploring Behavioral Stylometry in Chess
The advent of machine learning models that surpass human decision-making ability in complex domains has initiated a movement towards building AI systems that interact with humans. Many building blocks are essential for this activity, with a central one being the algorithmic characterization of human behavior. While much of the existing work focuses on aggregate human behavior, an important long-range goal is to develop behavioral models that specialize to individual people and can differentiate among them. To formalize this process, we study the problem of behavioral stylometry, in which the task is to identify a decision-maker from their decisions alone. We present a transformer-based approach to behavioral stylometry in the context of chess, where one attempts to identify the player who played a set of games. Our method operates in a few-shot classification framework, and can correctly identify a player from among thousands of candidate players with 98% accuracy given only 100 labeled games. Even when trained on amateur play, our method generalises to out-of-distribution samples of Grandmaster players, despite the dramatic differences between amateur and world-class players. Finally, we consider more broadly what our resulting embeddings reveal about human style in chess, as well as the potential ethical implications of powerful methods for identifying individuals from behavioral data.
accept
UPDATE: The revision has been reviewed and this paper has been accepted. We recommend that the authors integrate the final sentence of the introduction into the end of Paragraph 2, where they are talking about applications of their ideas. Some applications raise ethical considerations, and it's important to admit that alongside the other applications rather than have it appear as an afterthought at the end. ---- This paper was a very difficult decision. The reviewers all agree the paper is excellent technically (even the reviewer with the low score is not basing that score on the technical contribution). However, there is also general agreement that the paper does not sufficiently address what amount to serious ethical considerations of this work, specifically the opportunity for it or its derivatives to be abused. The authors acknowledge that a more substantial discussion is needed in the paper, so I trust they understand the significance of this concern. However, I (and other reviewers) also feel the authors' response lacks a recognition of appropriate mitigations of possible abuse. They primarily appeal to the public nature of the chess data as a consideration in why this line of research is ethically appropriate. This is not sufficient. Abuses, for example where a bad actor uses even public data to identify/track an otherwise anonymous person in order to discriminate or restrict rights, are not limited to private data. In fact, it seems to me public-data abuses are more likely, or more damaging, than private-data abuses (as the latter limit the abuse to owners of private data). I do believe advancing stylometry for identifying chess players is a fairly benign application; however, there is little discussion about how general this work might be in its ability to do stylometry in other domains. 
The authors mention an additional social benefit of understanding what identifiability exists in our datasets, but this argument needs more attention and peer review than can be completed through the author response/discussion. As a consequence this paper is being **conditionally accepted**. The final determination will only be made upon review of the revision. The authors should take advantage of the extra page to address concerns as reflected in their own response(s), although without the emphasis that use of public data lessens the ethical concerns. We hope that the authors' added discussion could be used to send a strong signal commensurate with the gravity of the situation, so that appropriate community norms can be developed.
train
[ "phICI8_nFB", "hk4CHL2hxIj", "WbJVuO-2aE4", "FoJauR1fLdF", "cOqLFzlDGCB", "N_kC1SDdrTT", "NfS2Z02HGif", "C0rD0VqW_B8", "NaB7iAHzq6", "Pxyo6V1Ps0E", "Ec9YkkyuNk", "fICKDibVg80", "SHyDINhn7P0", "WSOoetZw0YK", "8aUiSl6lkJY", "UCIyqLnHCT_", "KYPi4IYkw1b", "I_X82eB9pvI", "pv__8wyl3yf"...
[ "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_re...
[ " The review discussion period is coming to an end. Are there any follow-up questions or points of clarification we can address before our ability to comment is turned off?", " Thank you for your time in reading our response and continuing the discussion. Your suggestion to include an analysis/discussion of what ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, 7, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "pv__8wyl3yf", "FoJauR1fLdF", "NfS2Z02HGif", "WSOoetZw0YK", "Pxyo6V1Ps0E", "UCIyqLnHCT_", "SHyDINhn7P0", "Ec9YkkyuNk", "nips_2021_9RFFgpQAOzk", "nips_2021_9RFFgpQAOzk", "NaB7iAHzq6", "nips_2021_9RFFgpQAOzk", "pv__8wyl3yf", "I_X82eB9pvI", "KYPi4IYkw1b", "nips_2021_9RFFgpQAOzk", "nips_...
nips_2021_byizK1OI4xA
Coupled Gradient Estimators for Discrete Latent Variables
Training models with discrete latent variables is challenging due to the high variance of unbiased gradient estimators. While low-variance reparameterization gradients of a continuous relaxation can provide an effective solution, a continuous relaxation is not always available or tractable. Dong et al. (2020) and Yin et al. (2020) introduced a performant estimator that does not rely on continuous relaxations; however, it is limited to binary random variables. We introduce a novel derivation of their estimator based on importance sampling and statistical couplings, which we extend to the categorical setting. Motivated by the construction of a stick-breaking coupling, we introduce gradient estimators based on reparameterizing categorical variables as sequences of binary variables and Rao-Blackwellization. In systematic experiments, we show that our proposed categorical gradient estimators provide state-of-the-art performance, whereas even with additional Rao-Blackwellization previous estimators (Yin et al., 2019) underperform a simpler REINFORCE with a leave-one-out-baseline estimator (Kool et al., 2019).
accept
Four knowledgeable reviewers recommend accepting this submission. I agree. This submission makes a valuable contribution by proposing new inference techniques for models with discrete latent variables.
train
[ "9U7Nlu6sFfB", "ymJ3jPbpGW", "RmxYyHsIqGe", "63cfVRRstDz", "2zL3TNXPbs", "AB1u8DsAou3", "k6C1iK0ZxSz", "RIdnRbSV9ue", "pM8knIkRxWF", "3PMlrn7bl7", "j2UxRpL_81", "DaR1_AYZXSx", "1xrPX7_RyuS", "fegOLICP4Y3", "V-hipgoR9gz", "JHUG0344lXh", "E1JY93HRtvH", "9inmty4s0Jr", "32vuQgDBP1S",...
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_revi...
[ " Thank you! Yes, we intend to substantially revise the text for clarity and reorganize how the related works section is done.\n\nYes, you are correct. Thank you for catching that typo in the equation; we can confirm it is implemented properly in the code.\n", "The paper concerns the estimation of gradients of ex...
[ -1, 7, -1, -1, -1, -1, 7, -1, -1, 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 3, -1, -1, -1, -1, 3, -1, -1, 4, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "RmxYyHsIqGe", "nips_2021_byizK1OI4xA", "63cfVRRstDz", "2zL3TNXPbs", "AB1u8DsAou3", "RIdnRbSV9ue", "nips_2021_byizK1OI4xA", "pM8knIkRxWF", "s-hd8UZE0Zj", "nips_2021_byizK1OI4xA", "DaR1_AYZXSx", "V-hipgoR9gz", "32vuQgDBP1S", "nips_2021_byizK1OI4xA", "JHUG0344lXh", "E1JY93HRtvH", "6pUQ...
nips_2021_PftCCiHVQP
AutoGEL: An Automated Graph Neural Network with Explicit Link Information
Recently, Graph Neural Networks (GNNs) have gained popularity in a variety of real-world scenarios. Despite the great success, the architecture design of GNNs heavily relies on manual labor. Thus, automated graph neural network (AutoGNN) has attracted interest and attention from the research community, which makes significant performance improvements in recent years. However, existing AutoGNN works mainly adopt an implicit way to model and leverage the link information in the graphs, which is not well regularized to the link prediction task on graphs, and limits the performance of AutoGNN for other graph tasks. In this paper, we present a novel AutoGNN work that explicitly models the link information, abbreviated to AutoGEL. In such a way, AutoGEL can handle the link prediction task and improve the performance of AutoGNNs on the node classification and graph classification task. Moreover, AutoGEL proposes a novel search space containing various design dimensions at both intra-layer and inter-layer designs and adopts a more robust differentiable search algorithm to further improve efficiency and effectiveness. Experimental results on benchmark data sets demonstrate the superiority of AutoGEL on several tasks.
accept
This paper proposes a new AutoGNN model that takes link information into consideration. The model has shown better performance compared to baselines. The reviewers had concerns about novelty and baselines, and suggested new experiments. The authors have done a good job in addressing reviewers’ concerns, which is reflected in the increase of some ratings. The concerns regarding the KG link prediction tasks have been well addressed in the rebuttal, although the reviewer didn’t raise their rating. Overall, the paper extends an existing algorithm by considering the link prediction task and modeling edge information. Although the idea is incremental, the execution of this paper is solid. I thus recommend acceptance.
train
[ "wP8iMDOZCZl", "yh7eyWg-yNc", "CZp7lBbY7J", "DfDA_BtE1Jc", "UfLGl8NqdoX", "CnDoW6cg20k", "eg-G_m5Nrd", "WbZMacwsB92", "tGu0Di8Slbb", "p2iqNYnr8Q", "QAjDYYPjLgs", "SGCTg2YKD0L" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank your efforts in reviewing our submission :)", "The authors propose an AutoGNN model that introduces more design dimensions in both inter- and intro-layer. It is a decent attempt for authors to take into consideration the edge embedding in MPNN and adopt a stochastic differentiable search algo...
[ -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, 3, -1, -1, 4, -1, -1, -1, -1, -1, 3, 5 ]
[ "CZp7lBbY7J", "nips_2021_PftCCiHVQP", "p2iqNYnr8Q", "CnDoW6cg20k", "nips_2021_PftCCiHVQP", "WbZMacwsB92", "SGCTg2YKD0L", "UfLGl8NqdoX", "QAjDYYPjLgs", "yh7eyWg-yNc", "nips_2021_PftCCiHVQP", "nips_2021_PftCCiHVQP" ]
nips_2021_CLCVcl1rSPP
RL for Latent MDPs: Regret Guarantees and a Lower Bound
Jeongyeol Kwon, Yonathan Efroni, Constantine Caramanis, Shie Mannor
accept
In this paper, the authors study efficient algorithms and hardness results for Latent MDPs. In a latent MDP, in each iteration, an MDP is randomly drawn from a set of M possible MDPs, and the agent interacts with that MDP without knowing its identity. The goal is to find a policy so that the expected cumulative reward is maximized, where the expectation is taken over the choice of the MDP and its stochasticity. The authors first show that without further assumptions, LMDPs are hard to learn. The authors then give a set of sufficient conditions so that LMDPs are learnable with polynomial sample complexity. One possible condition that makes LMDPs learnable with a polynomial number of samples is the case when the identity of the randomly chosen MDP is revealed to the agent at the end of each trajectory. The authors further show that such identity can be learned if a certain separatedness condition holds. Preliminary experimental results are attached. There is a clear consensus amongst reviewers that this is a strong accept.
train
[ "I-st9iID6ys", "H5a_w2SnGCh", "cVknplybrnN", "PDNqRbcCunn", "w2CI4Ubs8-8", "zoduPNmSdm", "sTBWifHdEP", "-_suYUfrP-", "6ZC1IbELEj0", "uRBJgQTBdf" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate your positive feedback and increasing the score!", "This paper studies episodic RL in the regret minimization setup in Latent MDPs (LMDPs), which belong to the family of POMDPs. The paper presents a sample complexity lower bound for general LMDPs (i.e., in the absence of any restrictive assumption...
[ -1, 8, -1, -1, -1, -1, -1, 8, 6, 7 ]
[ -1, 4, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "cVknplybrnN", "nips_2021_CLCVcl1rSPP", "w2CI4Ubs8-8", "-_suYUfrP-", "H5a_w2SnGCh", "uRBJgQTBdf", "6ZC1IbELEj0", "nips_2021_CLCVcl1rSPP", "nips_2021_CLCVcl1rSPP", "nips_2021_CLCVcl1rSPP" ]
nips_2021_ZDMqRGSksHs
Adaptive Sampling for Minimax Fair Classification
Machine learning models trained on uncurated datasets can often end up adversely affecting inputs belonging to underrepresented groups. To address this issue, we consider the problem of adaptively constructing training sets which allow us to learn classifiers that are fair in a {\em minimax} sense. We first propose an adaptive sampling algorithm based on the principle of \emph{optimism}, and derive theoretical bounds on its performance. We also propose heuristic extensions of this algorithm suitable for application to large scale, practical problems. Next, by deriving algorithm independent lower-bounds for a specific class of problems, we show that the performance achieved by our adaptive scheme cannot be improved in general. We then validate the benefits of adaptively constructing training sets via experiments on synthetic tasks with logistic regression classifiers, as well as on several real-world tasks using convolutional neural networks (CNNs).
accept
While the review scores are a bit divergent (one reviewer scored 4 while others gave greater than or equal to 6), the reviews are overall positive, and some of the major concerns regarding the organization of the theoretical guarantees and the advantage of ${\cal A}_{opt}$ were properly addressed during the discussion period. Re. the access to the oracle: I believe this assumption is not very strong in light of practical scenarios, although it would definitely be better if the algorithm did not depend on it. Also, the authors explained in the rebuttal how to react to such challenging scenarios properly. Overall I believe this paper is worth being published, provided that the theoretical result is further elaborated in the main text and the advantage of ${\cal A}_{opt}$ is highlighted with sufficient experimental support, as the authors promised.
val
[ "9QyvaB_5Ata", "c4IzsWI7rxs", "_KtyZpl6nmz", "vGviaU_Rh6S", "YKpXwyTBLXs", "f5dJ3TMN-A0", "iyBqUVGbjq", "mH8Hvds_Eed", "Hv1_uk-mqOf", "XN85Y-AanyI", "WR28aC5OY83" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewers,\n\nWe believe that we properly and factually addressed all major points raised in your reviews. Please let us know if you have any other concerns. We would be happy to discuss them.\n\n\nSincerely,\nThe authors\n", " We appreciate the reviewer reading our response and are glad we have been able ...
[ -1, -1, -1, -1, -1, -1, -1, 4, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 2, 3, 3 ]
[ "nips_2021_ZDMqRGSksHs", "_KtyZpl6nmz", "f5dJ3TMN-A0", "WR28aC5OY83", "Hv1_uk-mqOf", "XN85Y-AanyI", "mH8Hvds_Eed", "nips_2021_ZDMqRGSksHs", "nips_2021_ZDMqRGSksHs", "nips_2021_ZDMqRGSksHs", "nips_2021_ZDMqRGSksHs" ]
nips_2021_m8KpGet0Etq
Structured in Space, Randomized in Time: Leveraging Dropout in RNNs for Efficient Training
Anup Sarma, Sonali Singh, Huaipan Jiang, Rui Zhang, Mahmut Kandemir, Chita Das
accept
The paper presents results on structured dropout in recurrent models that can be used to speed up computation without affecting results. Reviewers found sufficient depth of exploration in the results, and the results looked decent. Reviewers wanted to hear about the applicability to transformers, and the authors addressed that in the rebuttal.
train
[ "6MOa3BPG4c", "yadrEkEEPMk", "caAmLHn_wv", "Q7chiwlq9cU", "MKMwJAXfscN", "_1bQpxYWAOo", "NG4htjVWsTY", "XghsVdQcd8B", "urxl1GukRwS", "mf461tdALpC" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a way to use dropout induced sparsity for LSTMs to reduce run-time in general purpose SIMD hardware and systolic arrays, by structuring dropout patterns. Experiments on LM (PTB dataset), MT (IWSLT De-En and En-Vi), and NER (CoNLL-2003) shows that the proposed approach can induce 1.23x to 1.64x ...
[ 6, -1, 6, -1, -1, -1, -1, -1, 7, 6 ]
[ 3, -1, 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_m8KpGet0Etq", "XghsVdQcd8B", "nips_2021_m8KpGet0Etq", "caAmLHn_wv", "nips_2021_m8KpGet0Etq", "mf461tdALpC", "urxl1GukRwS", "6MOa3BPG4c", "nips_2021_m8KpGet0Etq", "nips_2021_m8KpGet0Etq" ]
nips_2021_VH2og5jlrzm
Variational Continual Bayesian Meta-Learning
Conventional meta-learning considers a set of tasks from a stationary distribution. In contrast, this paper focuses on a more complex online setting, where tasks arrive sequentially and follow a non-stationary distribution. Accordingly, we propose a Variational Continual Bayesian Meta-Learning (VC-BML) algorithm. VC-BML maintains a Dynamic Gaussian Mixture Model for meta-parameters, with the number of component distributions determined by a Chinese Restaurant Process. Dynamic mixtures at the meta-parameter level increase the capability to adapt to diverse tasks due to a larger parameter space, alleviating the negative knowledge transfer problem. To infer posteriors of model parameters, compared to the previously used point estimation method, we develop a more robust posterior approximation method -- structured variational inference for the sake of avoiding forgetting knowledge. Experiments on tasks from non-stationary distributions show that VC-BML is superior in transferring knowledge among diverse tasks and alleviating catastrophic forgetting in an online setting.
accept
The paper proposes a new, fully Bayesian method for online meta-learning: VC-BML. A mixture model over the meta-parameters is updated dynamically. This is used as an informative prior for the upcoming tasks. The Reviewers highlighted the novelty of the method with respect to previous works. The paper is theoretically sound, and VC-BML improves over known meta-learning algorithms in practice. Therefore, I recommend accepting the paper. Please take into account all the comments of the Reviewers for the camera-ready version of the paper. In particular, Reviewer h2Cd pointed out a few problems in the writing of the paper and in the comparison with Jerfel et al. 2019. The Reviewer updated his/her evaluation of the paper following your detailed reply and your promise to revise the paper accordingly.
test
[ "FfIGXMQf_b_", "EgPycxgp9Jf", "9JxvVZGGTPz", "9LnyRIz9GK", "n9xPbveuSJ", "6QgHkB3xBW", "QCae81OgmQi", "lnLW1tEqwNX", "2II2KqK6cTI", "C80v8t5q6aX", "xwbT8YId_h_" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This article presents a fully variational method to address the continual learning problem: training a neural network sequentially on different tasks, without forgetting preceding tasks.\n\nThe theoretical framework, namely variational inference with hierarchical random variables, is classical, which makes the gen...
[ 7, 6, -1, 6, -1, -1, -1, -1, -1, 7, 7 ]
[ 4, 3, -1, 3, -1, -1, -1, -1, -1, 3, 3 ]
[ "nips_2021_VH2og5jlrzm", "nips_2021_VH2og5jlrzm", "6QgHkB3xBW", "nips_2021_VH2og5jlrzm", "FfIGXMQf_b_", "EgPycxgp9Jf", "xwbT8YId_h_", "9LnyRIz9GK", "C80v8t5q6aX", "nips_2021_VH2og5jlrzm", "nips_2021_VH2og5jlrzm" ]
nips_2021__ZXlOpdufFJ
Recognizing Vector Graphics without Rasterization
In this paper, we consider a different data format for images: vector graphics. In contrast to raster graphics which are widely used in image recognition, vector graphics can be scaled up or down into any resolution without aliasing or information loss, due to the analytic representation of the primitives in the document. Furthermore, vector graphics are able to give extra structural information on how low-level elements group together to form high level shapes or structures. These merits of vector graphics have not been fully leveraged in existing methods. To explore this data format, we target the fundamental recognition tasks: object localization and classification. We propose an efficient CNN-free pipeline that does not render the graphic into pixels (i.e. rasterization), and takes the textual document of the vector graphics as input, called YOLaT (You Only Look at Text). YOLaT builds multi-graphs to model the structural and spatial information in vector graphics, and a dual-stream graph neural network is proposed to detect objects from the graph. Our experiments show that by directly operating on vector graphics, YOLaT outperforms raster-graphic based object detection baselines in terms of both average precision and efficiency.
accept
The reviewers have extensively discussed the paper with the authors and came to the conclusion that the paper studies an interesting and important problem. The reviewers have agreed to support the acceptance of the paper if some of the changes are incorporated in the final version. Specifically, the title should be changed to be more precise and less misleading, and there should be more discussion on the feasibility of potential baselines (including the ones coming from handwriting recognition). Overall, I agree with the reviewers and recommend acceptance of the paper but I would like to encourage the authors to carefully incorporate all the raised concerns and suggestions to improve the paper.
train
[ "CvhZj0RoG0w", "MxoubFzl4sm", "DtXS2fcQj2", "rBRl3_cFI6", "kpT8r2kwIJH", "Igq-beN1FrN", "FflUH1XDfs", "xL6O8wJshQS", "xA26jR9vUEr", "RnDsrk-frxs", "T_zJBbyv0G", "pPDLbeunrb3", "tZ4N_lqbnPu", "gyfpqwx4je", "rw8nioLM9q1" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Update at the end of the discussion session: \nDiscussion among reviewers and with the authors made me increase my score by a total of 2 points. \n\nThe paper presents a graph-neural network based approach to detecting objects in vector graphics. The paper works directly on the vector graphics representation by fi...
[ 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "nips_2021__ZXlOpdufFJ", "DtXS2fcQj2", "kpT8r2kwIJH", "FflUH1XDfs", "xL6O8wJshQS", "nips_2021__ZXlOpdufFJ", "T_zJBbyv0G", "xA26jR9vUEr", "CvhZj0RoG0w", "rw8nioLM9q1", "Igq-beN1FrN", "gyfpqwx4je", "nips_2021__ZXlOpdufFJ", "nips_2021__ZXlOpdufFJ", "nips_2021__ZXlOpdufFJ" ]
nips_2021_bJaZ8leI0QJ
On Episodes, Prototypical Networks, and Few-Shot Learning
Episodic learning is a popular practice among researchers and practitioners interested in few-shot learning. It consists of organising training in a series of learning problems (or episodes), each divided into a small training and validation subset to mimic the circumstances encountered during evaluation. But is this always necessary? In this paper, we investigate the usefulness of episodic learning in methods which use nonparametric approaches, such as nearest neighbours, at the level of the episode. For these methods, we not only show how the constraints imposed by episodic learning are not necessary, but that they in fact lead to a data-inefficient way of exploiting training batches. We conduct a wide range of ablative experiments with Matching and Prototypical Networks, two of the most popular methods that use nonparametric approaches at the level of the episode. Their "non-episodic" counterparts are considerably simpler, have fewer hyperparameters, and improve their performance in multiple few-shot classification datasets.
accept
The paper is currently very borderline with respect to reviewer opinions. The paper is well-written, with a straightforward premise and thorough experiments. The main outstanding concern is the narrow scoping: PN/MN are no longer considered the state of the art in the field, and the narrow focus of the paper makes it unclear how well the findings will generalize to other forms of episodic learning. On the other hand, these are foundational models and are still used in many contexts. The heart of the debate is whether this will help move the field forward. I think that the NCA baseline is simple and effective enough that the answer leans towards “yes.” In particular, if this was published several years ago, I think it is likely that NCA itself could have become a foundational approach to few-shot learning - especially since it requires less tuning. I think that the relationship between performance, and the number of pairwise distances per batch is compelling. I would like to see the following experiment (which I didn’t find in the current paper): take PN and/or MN and for each episode, randomize the episode several times. That is, take the set of support and query points, compute the loss, shuffle the batch (assigning different query/support points within the batch), and then compute the loss again. Take the loss of the batch to be the average of these, and then take a gradient step. This effectively increases the number of comparisons used in the batch, but in a way that PN/MN can exploit. Based on the hypothesis in the paper, I expect the PN/MN results to improve, but it will be interesting either way.
train
[ "XMRHmas4my5", "zvFAgVth3ab", "PGVaNgpsle7", "jh09IsCxicq", "QinnVh8ntLc", "nhI9yNfgO8C", "8T0FVAjcKJ", "jJOgGSMCjIN", "AeI4nMbCFxm", "u3Q9G4DyKH", "eE1tIxnuqy8", "RsdJOxVIS4e", "6ek9uGnl58V" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper conducts a case study for the non-parametric few-shot classification methods (e.g. Prototypical Networks). \nIt proposes to utilize the classic Neighbourhood Component Analysis (NCA) sampling instead of the original matching or prototypical style episode sampling. \nThe authors conducted ablation experi...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "nips_2021_bJaZ8leI0QJ", "nhI9yNfgO8C", "jJOgGSMCjIN", "QinnVh8ntLc", "8T0FVAjcKJ", "eE1tIxnuqy8", "u3Q9G4DyKH", "6ek9uGnl58V", "XMRHmas4my5", "RsdJOxVIS4e", "nips_2021_bJaZ8leI0QJ", "nips_2021_bJaZ8leI0QJ", "nips_2021_bJaZ8leI0QJ" ]
nips_2021_l41jc6kUfKr
Pointwise Bounds for Distribution Estimation under Communication Constraints
Wei-Ning Chen, Peter Kairouz, Ayfer Ozgur
accept
Based on the reviews and discussion, I am inclined to recommend the acceptance of this paper. The reviewers generally appreciate the importance of the problem, and the contributions made, particularly studying instance-optimality in a communication-constrained setting. However, the reviewers do maintain some concerns, some of which are quoted as follows: - "I still think $n\ge d^3$ is restrictive (also much worse than $d^2$ you mentioned). Even in applications where the number of samples is larger than the domain size, $d^3$ easily blows up. As other quantitative results are available, it is important to include proper discussions such as why you believe the transition happens at $d^3$." - "I don't agree with "the scaling of our lower can be made arbitrarly close to the upper bound" due to the extra vanishing $c_\delta$. I do appreciate the lower bound $\Vert p\Vert_{1/2} /\log d $ though." - "Perhaps the biggest weakness of the analysis is the large error terms, which require a large number of clients to be effective. Equivalently, the results of the paper do not offer improvements on sample complexity of the tasks for a given bitrate." - "the authors claimed that their min-max Lower Bound is stronger because they maximize over a larger domain, not making intuitive sense. For another example, the authors commented that the $\ell_2$ error is equally important as the $\ell_1$ error for assessing discrete distribution learners. I don't think this is the case in the literature, and the $\ell_2$-type results are often much easier to derive. In particular, the authors also admitted that the $\ell_1$ problem remains "open." " I believe that these are valid concerns, but at the same time, none of them were viewed as strong reasons for rejection, and one reviewer was willing to champion the paper more strongly, hence my recommendation. 
In the final version of the paper, I ask that the authors take the above comments *and* all of the reviews into careful consideration.
train
[ "_5jrbPOLnK9", "xAI_8rBPZZE", "thBVyKIXXw", "w6dcUucKdQL", "HRELMtoC5lJ", "yg22tv1r91", "XbVM9oLpGZr", "cQlX1omq3bh", "v_4E9tWQhC" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper studies distributed estimation of a discrete, $d$-dimensional law $p$ under communication constraints (wherein each client may only send $b$ bits) in a sequential model (wherein clients are ordered, and are aware of messages sent by earlier clients when preparing their message). \n\nThe authors describe ...
[ 8, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "nips_2021_l41jc6kUfKr", "thBVyKIXXw", "v_4E9tWQhC", "cQlX1omq3bh", "XbVM9oLpGZr", "_5jrbPOLnK9", "nips_2021_l41jc6kUfKr", "nips_2021_l41jc6kUfKr", "nips_2021_l41jc6kUfKr" ]
nips_2021_EmeWbcWORRg
CHIP: CHannel Independence-based Pruning for Compact Neural Networks
Yang Sui, Miao Yin, Yi Xie, Huy Phan, Saman Aliari Zonouz, Bo Yuan
accept
This paper proposes a new pruning method for model compression that utilizes channel independence (CI). CI is given by the change in the nuclear norm of the entire set of feature maps before and after removing a channel. The method is compared with other methods extensively, and it is reported that the proposed method shows better performance than other existing ones. This paper is overall well written and easy to follow. The proposed method can be easily implemented. It is nice that such a simple method gives good performance improvement. On the other hand, there are still some concerns as follows: 1. The authors state that the existing methods do not consider inter-channel correlation. However, this claim is too strong and ignores several related works. For example, CCP [R1] and Spectral-Pruning [R2] consider correlation between channels, and other methods implicitly take inter-channel correlation into account. The authors should add discussions about the relation to those existing methods, and tone down the claim. 2. The proposed method utilizes a nuclear-norm-type criterion to measure channel independence instead of directly measuring channel correlations. The current form of the paper does not give sufficient justification for why the proposed one is appropriate. This point should be discussed properly, for example, by showing the strong correlation between channel independence (correlation score) and the nuclear norm in numerical experiments. 3. Some details of the experimental settings, such as the number of preserved filters $k_l$, are missing. This information should be included. I also recommend the authors include an accuracy-pruning-rate trade-off curve for different pruning metrics. [R1] Collaborative Channel Pruning for Deep Networks. ICML 2019. [R2] Spectral Pruning: Compressing Deep Neural Networks via Spectral Analysis and its Generalization Error. IJCAI 2020. The concerns I listed above should be addressed.
On the other hand, the proposed method shows fairly good performance, which is informative for the community. In summary, I recommend this paper for acceptance, while I think the authors should fix the issues listed above.
train
[ "mCofUfCrzbz", "q_Ez7clyeQe", "XPMOWW_P3ij", "yurLg_qrcU1", "H7951ZJiYAC", "VCfJxz08yWl", "IiNjLusSnv", "TvLZ0Dt48sp", "49FNDWzcx8M", "Ai2tyDkVvBX", "xbGwTeofBD4", "jtgfUVzee8L", "LfSCB7C88-d", "V7iHlHfRfg", "T_W_95g10a", "rfEmRnY5pEn", "pzA66XbG2d2" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " [Update on MobileNetV2 results]: Following the reviewer's suggestions, we perform the experiment of pruning MobileNetV2 model on ImageNet dataset via using CHIP method. The results and performance comparison are shown below. Please note that because HRank and SCOP do not prune MobileNetV2 on ImageNet, we compare ...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "H7951ZJiYAC", "VCfJxz08yWl", "V7iHlHfRfg", "nips_2021_EmeWbcWORRg", "TvLZ0Dt48sp", "Ai2tyDkVvBX", "jtgfUVzee8L", "49FNDWzcx8M", "LfSCB7C88-d", "xbGwTeofBD4", "rfEmRnY5pEn", "pzA66XbG2d2", "yurLg_qrcU1", "T_W_95g10a", "nips_2021_EmeWbcWORRg", "nips_2021_EmeWbcWORRg", "nips_2021_EmeWb...
nips_2021_Ggikq6Tdxch
Federated Split Task-Agnostic Vision Transformer for COVID-19 CXR Diagnosis
Federated learning, which shares the weights of the neural network across clients, is gaining attention in the healthcare sector as it enables training on a large corpus of decentralized data while maintaining data privacy. For example, this enables neural network training for COVID-19 diagnosis on chest X-ray (CXR) images without collecting patient CXR data across multiple hospitals. Unfortunately, the exchange of the weights quickly consumes the network bandwidth if a highly expressive network architecture is employed. So-called split learning partially solves this problem by dividing a neural network into a client and a server part, so that the client part of the network takes up less extensive computation resources and bandwidth. However, it is not clear how to find the optimal split without sacrificing the overall network performance. To amalgamate these methods and thereby maximize their distinct strengths, here we show that the Vision Transformer, a recently developed deep learning architecture with a straightforwardly decomposable configuration, is ideally suited to split learning without sacrificing performance. Even under the non-independent and identically distributed data distribution which emulates a real collaboration between hospitals using CXR datasets from multiple sources, the proposed framework was able to attain performance comparable to data-centralized training. In addition, the proposed framework along with heterogeneous multi-task clients also improves individual task performances including the diagnosis of COVID-19, eliminating the need for sharing large weights with innumerable parameters. Our results affirm the suitability of Transformers for collaborative learning in medical imaging and pave the way forward for future real-world implementations.
accept
The main concerns raised by the reviewers were: 1) comparison to multi-task, single-task, task-specific experts and CNN-based multi-task models. 2) more detailed discussions on the privacy issues 3) a discussion/comparison of communication costs. Overall, the reviewers found that the authors' response did a good job in addressing all these concerns. Indeed, the new results/discussion significantly strengthens the paper and should be included in a revised version.
val
[ "TCmpGd3DF3X", "eCBub2BCwM", "ogIwF-qy7uk", "OujMGeR4-kF", "w7-HfJ0s9mT", "mOEgVJj7yba", "Rx-516bHjUd", "p1r8orGDWI", "qZLfcE-3Dlf", "QAGH5246yl", "Of6i0VaFxw5", "lnra7seQmw", "gwrDoSl6P70", "zOxlVoQKFm", "LcZBf_H1l5Z", "O5XKoCpCoW", "9Y8Xz8R5S9s", "EJWV4INiuW5", "vekX0aS9xUD", ...
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_...
[ " Thank you for your time and efforts to review our paper. As you have already mentioned, we have addressed many concerns of reviewers in our responses. Could you please re-evaluate the rating based on our responses?", " Thank you for your response and for increasing your rating.\nWe will ensure that robust and i...
[ -1, -1, -1, 7, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "j0G4wRrTOQR", "ogIwF-qy7uk", "mOEgVJj7yba", "nips_2021_Ggikq6Tdxch", "VpefSvsFkx9", "OujMGeR4-kF", "d_aS6qZZ_G", "QAGH5246yl", "nips_2021_Ggikq6Tdxch", "TP-bpezB878", "nips_2021_Ggikq6Tdxch", "nips_2021_Ggikq6Tdxch", "j0G4wRrTOQR", "d_aS6qZZ_G", "vekX0aS9xUD", "ptXmcGLnfC3", "Q41LuJ...
nips_2021_Zsrn9wXWN0
Active Offline Policy Selection
This paper addresses the problem of policy selection in domains with abundant logged data but a restricted interaction budget. Solving this problem would enable safe evaluation and deployment of offline reinforcement learning policies in industry, robotics, and recommendation domains, among others. Several off-policy evaluation (OPE) techniques have been proposed to assess the value of policies using only logged data. However, there is still a big gap between evaluation by OPE and full online evaluation in the real environment. Yet, large amounts of online interaction are often not possible in practice. To overcome this problem, we introduce active offline policy selection --- a novel sequential decision approach that combines logged data with online interaction to identify the best policy. This approach uses OPE estimates to warm-start the online evaluation. Then, in order to utilize the limited environment interactions wisely, we decide which policy to evaluate next based on a Bayesian optimization method with a kernel function that represents policy similarity. We use multiple benchmarks with a large number of candidate policies to show that the proposed approach improves upon state-of-the-art OPE estimates and pure online policy evaluation.
accept
The reviewers felt this paper would be a nice contribution to the conference. It introduces a novel problem setting of active offline policy selection problem formulation, which relies both on logged data and limited online interactions with the environment. The authors have done a good job replying to all concerns during the discussion. To strengthen the contribution, authors are encouraged to open source components of the approach.
train
[ "_dvxyGaoct9", "8OYSKn_EfzA", "gMBvGAdXL4w", "NcfIyGCTsCj", "W-KG7gLygEb", "VovExomrYoD", "10-zwBfyAZq", "PwGBSnAhNJa", "F4Qgb4gdx2", "2mhqq1zLMci", "fJ68fEpniM3", "u3bi55jzJtC", "lqKVqwEFkQZ", "W_3afJTfvBG" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for addressing my concerns. I have two follow-ups based on the responses. First of all, there is no problem at all for proposing a new setting. The real concern is NOT whether this setting is new but the underlying motivations. While it is true that there are real-world cases (I ...
[ -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "F4Qgb4gdx2", "NcfIyGCTsCj", "nips_2021_Zsrn9wXWN0", "W-KG7gLygEb", "VovExomrYoD", "PwGBSnAhNJa", "nips_2021_Zsrn9wXWN0", "gMBvGAdXL4w", "W_3afJTfvBG", "lqKVqwEFkQZ", "u3bi55jzJtC", "nips_2021_Zsrn9wXWN0", "nips_2021_Zsrn9wXWN0", "nips_2021_Zsrn9wXWN0" ]
nips_2021_BYrJYl1rexa
Unsupervised Representation Transfer for Small Networks: I Believe I Can Distill On-the-Fly
Recent remarkable improvements in unsupervised visual representation learning have been based on heavy networks with large-batch training. While recent methods have greatly reduced the gap between the supervised and unsupervised performance of deep models such as ResNet-50, this development has been relatively limited for small models. In this work, we propose a novel unsupervised learning framework for small networks that combines deep self-supervised representation learning and knowledge distillation within one-phase training. In particular, a teacher model is trained to produce consistent cluster assignments between different views of the same image. Simultaneously, a student model is encouraged to mimic the predictions of the on-the-fly self-supervised teacher. For effective knowledge transfer, we adopt the idea of a domain classifier so that student training is guided by discriminative features invariant to the representational space shift between teacher and student. We also introduce a network-driven multi-view generation paradigm to capture the rich feature information contained in the network itself. Extensive experiments show that our student models surpass state-of-the-art offline distilled networks, even those from stronger self-supervised teachers, as well as top-performing self-supervised models. Notably, our ResNet-18, trained with a ResNet-50 teacher, achieves 68.3% ImageNet Top-1 accuracy on frozen-feature linear evaluation, which is only 1.5% below the supervised baseline.
accept
All reviewers were supportive of the ideas in the paper and recommended acceptance - I agree with them. I also believe that studying small networks is an important area in itself due to the need for lightweight models on "edge devices". The reviewers have provided a number of valuable comments which the authors should integrate into their final version.
train
[ "DWh_HsQtRkD", "WAkpgjtPWh6", "RKpo12fzjvL", "nz5qS2iCrL", "SlDjqCfPHqI", "K0l84taDvwX" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes an unsupervised representation learning method. It follows a teacher-student paradigm. The teacher (ResNet-50) is learned by fitting samples to clusters defined by randomly initialized prototypes and fitting different views of the same sample to the same cluster. The views are created with featu...
[ 6, 7, -1, -1, -1, 6 ]
[ 4, 4, -1, -1, -1, 4 ]
[ "nips_2021_BYrJYl1rexa", "nips_2021_BYrJYl1rexa", "K0l84taDvwX", "DWh_HsQtRkD", "WAkpgjtPWh6", "nips_2021_BYrJYl1rexa" ]
nips_2021_V3aZTKsHykQ
Understanding Bandits with Graph Feedback
Houshuang Chen, Zengfeng Huang, Shuai Li, Chihao Zhang
accept
While the reviewers felt that the results are limited in their relevance and scope (they only apply to weakly observable graphs and the improvements are within log factors), they also thought---and I concur---that the paper is tackling a very basic and challenging problem, and from a mathematical point of view it contains some interesting ideas (the connection to integrality gaps of LPs is particularly compelling). I strongly encourage the authors to take the reviewers’ comments into consideration when preparing the final version and carefully implement their suggestions.
train
[ "w4nxwZGLQc", "tiVlTFjyERT", "LBSFovJ1a6", "sb7pa22IAEd", "k7r2gztLjq1", "TmB8CQcLdxN", "FoAly4zo0n", "i-xQPMf6Fa", "Idjc9jWhFFo", "9DxHFS8JMY" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer, we apologize for skipping over your main concern and we will respond below. \n\n**the contribution is quite limited since it only focuses on logarithmic gains**\n\nAlthough our improvement from $\\delta$ to $\\delta^*$ seems to be incremental and only brings logarithmic gains, we believe that the o...
[ -1, -1, -1, -1, -1, -1, 5, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 2, 3 ]
[ "tiVlTFjyERT", "k7r2gztLjq1", "9DxHFS8JMY", "Idjc9jWhFFo", "i-xQPMf6Fa", "FoAly4zo0n", "nips_2021_V3aZTKsHykQ", "nips_2021_V3aZTKsHykQ", "nips_2021_V3aZTKsHykQ", "nips_2021_V3aZTKsHykQ" ]
nips_2021_L_cN8vD0XdT
Information-theoretic generalization bounds for black-box learning algorithms
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
accept
The paper continues a line of work which develops information-theoretic bounds on the generalization gap (i.e., the difference between empirical and population loss) in supervised learning. Almost all reviewers were in favor of acceptance and found the results innovative. Please address the reviewers' comments in the final version.
train
[ "fKs2x9hz1f", "-j8GGRd5d5Q", "coOv5fEc7NA", "5HW8TkG9QXp", "X5KBk3pHaOh", "-nPPbSFNGJ4", "i6jtByP3O_E", "FL1JwaW5Dzk", "R8yXyL_DpH", "8es9s_OCvkf", "4uCR-0bOmz3", "MWR-1Ww1C-", "JuOaUf7N84Y" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper derives information-theoretic bounds for learning algorithms. The bounds are based on the predictions of the learned model, rather than the weights (output), which makes them tighter by the data-processing inequality and can yield non-vacuous bounds in cases where weight-based mutual information bounds a...
[ 7, 5, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 2 ]
[ "nips_2021_L_cN8vD0XdT", "nips_2021_L_cN8vD0XdT", "5HW8TkG9QXp", "X5KBk3pHaOh", "-j8GGRd5d5Q", "-j8GGRd5d5Q", "fKs2x9hz1f", "JuOaUf7N84Y", "MWR-1Ww1C-", "4uCR-0bOmz3", "nips_2021_L_cN8vD0XdT", "nips_2021_L_cN8vD0XdT", "nips_2021_L_cN8vD0XdT" ]
nips_2021_3Ky3sH5enrc
Trash or Treasure? An Interactive Dual-Stream Strategy for Single Image Reflection Separation
Qiming Hu, Xiaojie Guo
accept
The reviewers in general liked the idea and results of the paper: simple designs of keeping all features in the network by adding dual-ReLU branch; good results on SIRS against SOTA. Many issues were raised, e.g. lack of comparison, formalism and intuition of the technique. The reviewers were satisfied with the rebuttal and bumped up the score. Please incorporate the post-rebuttal comments from the reviewers, e.g. having only results on the task of reflection removal rather than adding another task with a short description (can use the space to make the figures larger), and improving the text and simplifying the figures.
train
[ "BlaHKLtom1q", "9chIhTDhuD", "znrOjt6GCGH", "pz0tp6b2XSv", "Pl7_0-Kesc1", "zoeCvu_XzQm", "3k2UFzbV9o4", "0D5Ryf37Gad" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author" ]
[ "This paper presents a nonlinear module that leverages both positive and negative ReLUs for feature separation. This enables applications such as reflection removal, blind denoising and other layer separation tasks. The paper shows several variants of adding the proposed module into existing CNN frameworks. Results...
[ 7, 6, -1, -1, 6, -1, 6, -1 ]
[ 4, 4, -1, -1, 5, -1, 4, -1 ]
[ "nips_2021_3Ky3sH5enrc", "nips_2021_3Ky3sH5enrc", "9chIhTDhuD", "BlaHKLtom1q", "nips_2021_3Ky3sH5enrc", "Pl7_0-Kesc1", "nips_2021_3Ky3sH5enrc", "3k2UFzbV9o4" ]
nips_2021_P4W74BXoyBy
Rot-Pro: Modeling Transitivity by Projection in Knowledge Graph Embedding
Knowledge graph embedding models learn representations of the entities and relations in a knowledge graph for predicting missing links (relations) between entities. Their effectiveness is deeply affected by their ability to model and infer different relation patterns such as symmetry, asymmetry, inversion, composition and transitivity. Although existing models are already able to model many of these relation patterns, transitivity, a very common relation pattern, has still not been fully supported. In this paper, we first show theoretically that transitive relations can be modeled with projections. We then propose the Rot-Pro model, which combines projection and relational rotation. We prove that Rot-Pro can infer all of the above relation patterns. Experimental results show that the proposed Rot-Pro model effectively learns the transitivity pattern and achieves state-of-the-art results on the link prediction task on datasets containing transitive relations.
accept
The reviewers agreed that the paper makes a solid contribution by proposing a model that can infer transitivity, together with convincing experimental results. Most of the reviewers' concerns were cleared up in the discussion phase. The authors should make sure to incorporate the necessary updates in the camera-ready version.
train
[ "pYk9OVbCKb7", "1NyqmBG18vp", "L9RNzAz6-Hi", "4AdG7Pru1uV", "zUG2oElYe6g", "RMBsQe_RQ8q", "KCs8VLF8sFd", "_nSi3ARtxtv", "8kGl2K8vQUB", "vPtfoiJ1bC5", "Fz2zIA3eWBn" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " The embedding methods considered in this paper, such as TransE and RotatE, treat different relations in the knowledge graph independently. That is, the interactions between relations in these methods have the possibility to be learned from the data, but they are not explicitly modeled (current benchmarks for eval...
[ -1, -1, 6, -1, 7, -1, -1, -1, -1, 6, 7 ]
[ -1, -1, 4, -1, 4, -1, -1, -1, -1, 4, 3 ]
[ "1NyqmBG18vp", "RMBsQe_RQ8q", "nips_2021_P4W74BXoyBy", "KCs8VLF8sFd", "nips_2021_P4W74BXoyBy", "vPtfoiJ1bC5", "L9RNzAz6-Hi", "Fz2zIA3eWBn", "zUG2oElYe6g", "nips_2021_P4W74BXoyBy", "nips_2021_P4W74BXoyBy" ]
nips_2021_XgGUUaKgips
Planning from Pixels in Environments with Combinatorially Hard Search Spaces
The ability to form complex plans based on raw visual input is a litmus test for the current capabilities of artificial intelligence, as it requires a seamless combination of visual processing and abstract algorithmic execution, two traditionally separate areas of computer science. A recent surge of interest in this field has brought advances that yield good performance in tasks ranging from arcade games to continuous control; these methods, however, do not come without significant issues, such as limited generalization capabilities and difficulties when dealing with combinatorially hard planning instances. Our contribution is two-fold: (i) we present a method that learns to represent its environment as a latent graph and leverages state reidentification to reduce the complexity of finding a good policy from exponential to linear; (ii) we introduce a set of lightweight environments with an underlying discrete combinatorial structure in which planning is challenging even for humans. Moreover, we show that our method achieves strong empirical generalization to variations in the environment, even in highly disadvantaged regimes, such as “one-shot” planning, or in an offline RL paradigm which only provides low-quality trajectories.
accept
Learning dynamics models over pixels for model-based RL remains a largely open problem. There is consensus among the reviewers that the empirical results in this paper are impressive, and the approach constitutes a valuable contribution. The reviewers make several suggestions for improvements, e.g., analyzing generalization to goals and clarifying necessary conditions for environments it can perform well on, that can be addressed in a revision.
train
[ "_rtrbVjrsB", "j9cZ7gjgoGB", "7QzutlYgfdZ", "YwYU6miANq8", "MF6s6jSAK-s", "IxhHPSFL0Re", "lMwYgecTdM" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for your thorough response! You have addressed my concerns. Given this additional information, I do think the paper is fit to be presented at the conference and I increase my score to 7. Regarding the pseudonym, yes it is a cool one indeed ;)", "In this work, the authors present a method to perform pl...
[ -1, 7, -1, -1, -1, 6, 6 ]
[ -1, 5, -1, -1, -1, 3, 3 ]
[ "MF6s6jSAK-s", "nips_2021_XgGUUaKgips", "lMwYgecTdM", "IxhHPSFL0Re", "j9cZ7gjgoGB", "nips_2021_XgGUUaKgips", "nips_2021_XgGUUaKgips" ]
nips_2021_rxAS126OC-A
PLUGIn: A simple algorithm for inverting generative models with recovery guarantees
Babhru Joshi, Xiaowei Li, Yaniv Plan, Ozgur Yilmaz
accept
This paper studies the problem of inverting a generative model. More precisely, we are given a ReLU network $G$ and we observe $y = G(x^*) + \epsilon$, and the goal is to recover $x^*$. The paper gives strong provable guarantees under much milder assumptions than in earlier work. Earlier work of Hand and Voroninski gives recovery guarantees when the weight matrices in the network are Gaussian and each layer is expansive. This paper is able to weaken the former condition to expansion on average. In particular, some layers can be contractive, as long as the ratio of their output size to the input size of the network is large. Further, they give a simple iterative algorithm. The intuition behind it is clear; however, the technical analysis is complicated by the fact that the iterates depend on the randomness. All the reviewers agreed that this paper takes a definitive step forward on an important class of problems.
train
[ "HK0LlXxQ7e0", "dAVT24gvKRy", "6T9CIBXCAup", "xN8lNTXDUDr", "BznYwqBwpcO", "ovlxSVfYVem", "fGxy_QYH9X" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for clarifying regarding compressive measurements! I continue to believe that this is a very strong paper and enthusiastically recommend acceptance.", "This paper proposes an algorithm to solve the problem of inverting random generative models from noisy measurements where the weights of the model are dr...
[ -1, 7, -1, -1, -1, 7, 8 ]
[ -1, 4, -1, -1, -1, 2, 3 ]
[ "6T9CIBXCAup", "nips_2021_rxAS126OC-A", "fGxy_QYH9X", "dAVT24gvKRy", "ovlxSVfYVem", "nips_2021_rxAS126OC-A", "nips_2021_rxAS126OC-A" ]
nips_2021_7U7JxTiL8gz
Modular Gaussian Processes for Transfer Learning
We present a framework for transfer learning based on modular variational Gaussian processes (GPs). We develop a module-based method in which, given a dictionary of well-fitted GPs, each characterised by its hyperparameters, pseudo-inputs and corresponding posterior densities, one can build ensemble GP models without revisiting any data. Our method avoids undesired data centralisation, reduces rising computational costs and allows the transfer of learned uncertainty metrics after training. We exploit the augmentation of high-dimensional integral operators based on the Kullback-Leibler divergence between stochastic processes to introduce an efficient lower bound that holds for all sparse variational GPs, even with different complexities and likelihood distributions. The method is also valid for multi-output GPs, learning correlations a posteriori between independent modules. Extensive results illustrate the usability of our framework in large-scale and multi-task experiments, also compared with exact inference methods from the literature.
accept
This paper proposes a framework for reusing previously learned sparse variational GP models without revisiting any data and a further extension to the multi-output context. All four knowledgeable reviewers recommend acceptance and I agree with them. As a clarification to the authors, yes, you are allowed and encouraged to incorporate amendments/additions to address the reviewers feedback in the final version of the paper.
train
[ "uF37GafH9wh", "w9ke10rGNY_", "f2VVxs9ONEJ", "QAPidSxrMv5", "xf-YR-VoY33", "MtQdBvTaix", "afaqjS4aYI4", "2F6kTELY8MR", "lbHBaibV7xF", "hIRnnGRAsBW", "lcZ7fw123dN", "WV1la5pBdVy" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the positive consideration of our response. \n\n> However, doesn't that make the VEM algorithm obsolete?\n\nRight, since the Adam method is valid for the optimisation of $\\psi, Z, \\mu$ and $L$, somehow the VEM is obsolete due to it takes longer time for similar results. \n\n> Which of ...
[ -1, -1, 9, 6, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ -1, -1, 4, 4, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ "w9ke10rGNY_", "afaqjS4aYI4", "nips_2021_7U7JxTiL8gz", "nips_2021_7U7JxTiL8gz", "lbHBaibV7xF", "nips_2021_7U7JxTiL8gz", "WV1la5pBdVy", "f2VVxs9ONEJ", "QAPidSxrMv5", "lcZ7fw123dN", "nips_2021_7U7JxTiL8gz", "nips_2021_7U7JxTiL8gz" ]
nips_2021_YFysbLCFdIe
Neural Human Performer: Learning Generalizable Radiance Fields for Human Performance Rendering
In this paper, we aim at synthesizing a free-viewpoint video of an arbitrary human performance using sparse multi-view cameras. Recently, several works have addressed this problem by learning person-specific neural radiance fields (NeRF) to capture the appearance of a particular human. In parallel, other works have proposed to use pixel-aligned features to generalize radiance fields to arbitrary new scenes and objects. Adapting such generalization approaches to humans, however, is highly challenging due to the heavy occlusions and dynamic articulations of body parts. To tackle this, we propose the Neural Human Performer, a novel approach that learns generalizable neural radiance fields based on a parametric human body model for robust performance capture. Specifically, we first introduce a temporal transformer that aggregates tracked visual features based on the skeletal body motion over time. Moreover, a multi-view transformer is proposed to perform cross-attention between the temporally-fused features and the pixel-aligned features at each time step to integrate observations on the fly from multiple views. Experiments on the ZJU-MoCap and AIST datasets show that our method significantly outperforms recent generalizable NeRF methods on unseen identities and poses.
accept
This submission received 4 positive final ratings: 8, 7, 7, 8. The reviewers appreciated overall novelty of the approach, its generalization properties and strong empirical performance. The remaining concerns were mostly around clarity, justification of individual components and lack of analysis of failure cases. These were mostly addressed in the rebuttal, as acknowledged by the reviewers. The final recommendation is therefore to accept as a spotlight.
train
[ "_Fu399qfWHj", "9IUdXltf8hH", "3vL-2uzg5dH", "3q0AYq_iYhZ", "3auWdd8XXnS", "iXSD26-GQlR", "UCD9DktOetS", "e5OmaCp4djX" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper solves a problem similar to Neural Body (reconstruction / free-view rendering of clothed humans from videos based on SMPL fits) but does not require training a neural network per scene; instead, it generalises from training sequences and, at test time, runs only a feed-forward model conditioned on keyfra...
[ 8, -1, -1, -1, -1, 8, 7, 7 ]
[ 5, -1, -1, -1, -1, 4, 4, 4 ]
[ "nips_2021_YFysbLCFdIe", "_Fu399qfWHj", "e5OmaCp4djX", "UCD9DktOetS", "iXSD26-GQlR", "nips_2021_YFysbLCFdIe", "nips_2021_YFysbLCFdIe", "nips_2021_YFysbLCFdIe" ]
nips_2021_cjnSJIf3c9Y
Locally differentially private estimation of functionals of discrete distributions
Cristina Butucea, Yann ISSARTEL
accept
The paper provides LDP algorithms for estimating functionals of discrete distributions. While the paper addresses a fundamental question, reviewers raise concerns regarding the writing of the paper and the gap between upper and lower bounds. I encourage authors to incorporate reviewer concerns in subsequent versions.
train
[ "7Ea7WqWAr8", "Gd6U0gS5xZH", "PKd6W7vswZs", "4WrI45DZUlZ", "cqE-yxdLPB3", "IttI7R_z9-", "-G1_2JIgtHV", "2GfZ8GfXe2Z", "-tlEFgxS16T", "gWkY-tX8RwD", "XYToyYcFYr" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response. My score on the paper remains unchanged.", " We thank the referee for his/her careful reading of our manuscript and numerous constructive remarks and questions.\n\n$1^{\\circ}$, $2^{\\circ}$ and $4^{\\circ}$. Motivation:\n\nIndeed, our previous manuscript did not expand on the applicati...
[ -1, -1, -1, -1, -1, -1, 6, 7, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 5, 4, 5 ]
[ "PKd6W7vswZs", "XYToyYcFYr", "gWkY-tX8RwD", "-tlEFgxS16T", "2GfZ8GfXe2Z", "-G1_2JIgtHV", "nips_2021_cjnSJIf3c9Y", "nips_2021_cjnSJIf3c9Y", "nips_2021_cjnSJIf3c9Y", "nips_2021_cjnSJIf3c9Y", "nips_2021_cjnSJIf3c9Y" ]
nips_2021_1oRFmD0Fl-5
Asymptotics of representation learning in finite Bayesian neural networks
Recent works have suggested that finite Bayesian neural networks may sometimes outperform their infinite cousins because finite networks can flexibly adapt their internal representations. However, our theoretical understanding of how the learned hidden layer representations of finite networks differ from the fixed representations of infinite networks remains incomplete. Perturbative finite-width corrections to the network prior and posterior have been studied, but the asymptotics of learned features have not been fully characterized. Here, we argue that the leading finite-width corrections to the average feature kernels for any Bayesian network with linear readout and Gaussian likelihood have a largely universal form. We illustrate this explicitly for three tractable network architectures: deep linear fully-connected and convolutional networks, and networks with a single nonlinear hidden layer. Our results begin to elucidate how task-relevant learning signals shape the hidden layer representations of wide Bayesian neural networks.
accept
This paper analyzes the leading-order corrections to the average feature kernels in Bayesian neural networks, and conjectures a universal form for the result. The analysis and conclusions should be of interest to theorists in the NeurIPS community, and I recommend acceptance.
train
[ "Qrr_jOiurS8", "GQXXjNwpVV4", "cRkja35f0Wd", "amZi38eJy5v", "o15iEX0Ujr", "RBynXrUVAS5", "9otnhD9ppSs", "HhYCZlmd3WP", "1T62jLuuCe-", "xuSfcE3lzyJ", "_G6y00DgBB8", "kCqh02pBL9r", "yiyEwOaQKHc", "xduo4MUXbEn", "QcJoe_pjGSi" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper shows analytically and numerically that the leading finite-width corrections to the average hidden layer kernels of any fully-connected feedforward networks with linear readout and least-squares loss have a largely prescribed form : these assumptions fix the dependency of the correction on the target o...
[ 7, -1, 7, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ 3, -1, 3, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "nips_2021_1oRFmD0Fl-5", "kCqh02pBL9r", "nips_2021_1oRFmD0Fl-5", "yiyEwOaQKHc", "xduo4MUXbEn", "xduo4MUXbEn", "nips_2021_1oRFmD0Fl-5", "1T62jLuuCe-", "xuSfcE3lzyJ", "9otnhD9ppSs", "QcJoe_pjGSi", "Qrr_jOiurS8", "cRkja35f0Wd", "nips_2021_1oRFmD0Fl-5", "nips_2021_1oRFmD0Fl-5" ]
nips_2021_YL6e9oSeInj
Adaptive Ensemble Q-learning: Minimizing Estimation Bias via Error Feedback
The ensemble method is a promising way to mitigate the overestimation issue in Q-learning, where multiple function approximators are used to estimate the action values. It is known that the estimation bias hinges heavily on the ensemble size (i.e., the number of Q-function approximators used in the target), and that determining the 'right' ensemble size is highly nontrivial because of the time-varying nature of the function approximation errors during the learning process. To tackle this challenge, we first derive an upper bound and a lower bound on the estimation bias, based on which the ensemble size is adapted to drive the bias to nearly zero, thereby coping with the impact of the time-varying approximation errors. Motivated by these theoretical findings, we advocate that the ensemble method can be combined with Model Identification Adaptive Control (MIAC) for effective ensemble size adaptation. Specifically, we devise Adaptive Ensemble Q-learning (AdaEQ), a generalized ensemble method with two key steps: (a) approximation error characterization, which serves as the feedback for flexibly controlling the ensemble size, and (b) ensemble size adaptation tailored towards minimizing the estimation bias. Extensive experiments show that AdaEQ improves learning performance over existing methods on the MuJoCo benchmark.
accept
The paper presents a novel method to tackle overestimation bias in Q-learning. The majority of reviewers agreed that the paper is a good contribution to the NeurIPS community, albeit empirical evidence could be expanded further to consider additional ablation studies and environments.
train
[ "IHT-j6wpzQ_", "cYrTm-L0VWP", "m4RR9IfGZL4", "A5Hh71CBe9f", "9sYT-mqAaNS", "nqdKFoQ_J4N", "sbZNm1UIRyo", "5tCkpzGVTq", "CLRjL4jxMw", "E0-JYpry4s", "eYCiNuRcjgp", "Ox_cSHvsuG7", "nnkpGEsHZ3a", "iwFge_dVESf", "_VJssSkud9", "3MBZKO7X3Fs", "m2hNv-nKrv", "Pr0RpvcG2I", "zcqULPmGwEQ" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "...
[ " Thank you for your meticulous reading and useful feedback, which will help us to improve the presentation.\n\n(1) As stated in our earlier response, AdaEQ shows great practical advantages compared with AVG, TD3 and SAC. Firstly, it considers the time-varying characteristics of the approximation error due to the i...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 4, 5 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "cYrTm-L0VWP", "9sYT-mqAaNS", "nips_2021_YL6e9oSeInj", "iwFge_dVESf", "nqdKFoQ_J4N", "nnkpGEsHZ3a", "E0-JYpry4s", "eYCiNuRcjgp", "zcqULPmGwEQ", "m2hNv-nKrv", "3MBZKO7X3Fs", "Pr0RpvcG2I", "zcqULPmGwEQ", "m4RR9IfGZL4", "Pr0RpvcG2I", "nips_2021_YL6e9oSeInj", "nips_2021_YL6e9oSeInj", "...
nips_2021_zdmF437BCB
Domain Adaptation with Invariant Representation Learning: What Transformations to Learn?
Petar Stojanov, Zijian Li, Mingming Gong, Ruichu Cai, Jaime Carbonell, Kun Zhang
accept
The authors study unsupervised domain adaptation. They point out that if the data supports overlap, the encoder might need domain-specific information to recover an invariant representation which elicits a high-accuracy classifier. They propose a method which models this domain-specific information as a latent variable while encouraging it to have minimal influence on a decoder. Reviewers agreed that the motivating analysis (necessity of domain-dependent encoders) was informative. Most reviewers (DkiW, 8KhW, 219a) thought the experiments were promising (reviewer d96Z remarked that OfficeHome results were missing, but the authors address this in the rebuttal). Reviewers also tended to agree that the method itself was overcomplicated and this might hurt its usefulness in practice (319a, d96Z), but I feel this shouldn't by itself prevent acceptance.
val
[ "QA2jYjXcEav", "qqT0DJ-adVn", "pxXVPzRr-Gc", "Zn4spmcb_e4", "zneRShUmUtO", "dHW09LxpeJ9", "Cv3BgLBlui2", "VL3Hx4zu9XG", "MSXr3yZsh5w", "ZZKwTeRRqhU", "TfbOQlzOj-o", "WQKLdpG_-6b", "5Rv_0h45rR7", "PSpRW-oGXt4", "AzXYfzz4ii0", "oQ_SGkIkxn" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. I will keep my score.", " We are grateful for your consideration of our response and the update of your recommendation!", " Dear Reviewer d96Z, \n\nWe are writing to provide results for the RSDA-MSTN and RSDA-DANN baselines run on the Office-Home dataset, when using pretrained fea...
[ -1, -1, -1, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, -1, -1, -1, 4, -1, 2, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "5Rv_0h45rR7", "dHW09LxpeJ9", "Zn4spmcb_e4", "MSXr3yZsh5w", "nips_2021_zdmF437BCB", "PSpRW-oGXt4", "nips_2021_zdmF437BCB", "WQKLdpG_-6b", "ZZKwTeRRqhU", "TfbOQlzOj-o", "oQ_SGkIkxn", "Cv3BgLBlui2", "AzXYfzz4ii0", "zneRShUmUtO", "nips_2021_zdmF437BCB", "nips_2021_zdmF437BCB" ]
nips_2021_VzuIzbRDrum
CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation
The imputation of missing values in time series has many applications in healthcare and finance. While autoregressive models are natural candidates for time series imputation, score-based diffusion models have recently outperformed existing counterparts including autoregressive models in many tasks such as image generation and audio synthesis, and would be promising for time series imputation. In this paper, we propose Conditional Score-based Diffusion model (CSDI), a novel time series imputation method that utilizes score-based diffusion models conditioned on observed data. Unlike existing score-based approaches, the conditional diffusion model is explicitly trained for imputation and can exploit correlations between observed values. On healthcare and environmental data, CSDI improves by 40-65% over existing probabilistic imputation methods on popular performance metrics. In addition, deterministic imputation by CSDI reduces the error by 5-20% compared to the state-of-the-art deterministic imputation methods. Furthermore, CSDI can also be applied to time series interpolation and probabilistic forecasting, and is competitive with existing baselines. The code is available at https://github.com/ermongroup/CSDI.
accept
This paper uses conditional score-based diffusion models for probabilistic time series imputation. The authors use an attention mechanism to capture the temporal and feature dependencies. A self-supervised training method is developed. The paper is overall well-written and the method appears to outperform alternative imputation techniques on some metrics. The proposed method is novel but was also perceived as a rather straightforward combination of existing techniques (conditional score ideas have already been used in different domains and the self-supervised technique has been applied in related areas). That said, it is by no means trivial to put this combination into practice and the empirics are compelling. After the authors' rebuttal, some of the reviewers increased their scores. With scores of 6, 6, 6, there remains a lack of enthusiasm for the paper. However, I recommend acceptance of the paper as it is a useful contribution to probabilistic time series imputation.
train
[ "K14EEFQ5cF9", "rwAcF8fRPfj", "F7wJUmnZCuQ", "5oneG_rUpkf", "FzyhU4TZma", "JH_UlfrzcVY", "OAZ-zyVdKNT", "mj0ONFmS6xu", "muyXKT1rV-L" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for addressing my comments, make sure to include them in the paper. Although there are some novelty issues raised by other reviewers, I am inclined towards accepting this paper as it addresses an important problem and improves over the previous state-of-the-art methods. ", "The authors use diffusion p...
[ -1, 6, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, 3, -1, -1, -1, -1, -1, 4, 4 ]
[ "FzyhU4TZma", "nips_2021_VzuIzbRDrum", "OAZ-zyVdKNT", "mj0ONFmS6xu", "muyXKT1rV-L", "nips_2021_VzuIzbRDrum", "rwAcF8fRPfj", "nips_2021_VzuIzbRDrum", "nips_2021_VzuIzbRDrum" ]
nips_2021_9-XhLobA4z
Causal Bandits with Unknown Graph Structure
In causal bandit problems the action set consists of interventions on variables of a causal graph. Several researchers have recently studied such bandit problems and pointed out their practical applications. However, all existing works rely on a restrictive and impractical assumption that the learner is given full knowledge of the causal graph structure upfront. In this paper, we develop novel causal bandit algorithms without knowing the causal graph. Our algorithms work well for causal trees, causal forests and a general class of causal graphs. The regret guarantees of our algorithms greatly improve upon those of standard multi-armed bandit (MAB) algorithms under mild conditions. Lastly, we prove our mild conditions are necessary: without them one cannot do better than standard MAB algorithms.
accept
All reviews are extremely favorable except one dissenting review. This paper produced a very long discussion and engagement between the dissenting reviewer, the other reviewers, and the authors. I would like to thank all reviewers (especially the dissenting reviewer) for participating in the discussion. I will state the main points of the dissenting review; I feel that the discussion and the authors' final responses do address most of the serious concerns raised. Therefore, I am recommending acceptance. In particular, this is the first paper (that I am aware of) that solves the regret minimization problem for causal bandits when only the equivalence class is known. The authors leverage existing structure learning results to obtain an algorithm that ensures non-trivial regret (the non-trivial regret guarantee holds under some technical conditions). Main points from the dissenting review - followed by paraphrased answers from either the authors or other reviewers. a) The first step requires having the full equivalence class graph available A) Many other works in the active structure learning literature make these assumptions. This paper considers the harder problem of regret minimization without full structure knowledge, so this would not be a major penalizing factor. Besides, interventionally it costs nothing to assume one has enough observational samples. Therefore, it is a reasonable assumption. b) "unknown graph then needs to satisfy some extremely restrictive properties" A) The authors assume the intersection-incomparable property on the clique tree decomposition of the observational Markov equivalence class. The property assumed can be tested with only the equivalence class, which is an input to the bandit algorithm. Nevertheless, this is a popular assumption - both in structure learning and in some prior literature. Another reviewer even hinted that in a random graph this may be satisfied with high probability (or at least that this depends on the random graph model assumptions). 
This is satisfied by popular chordal graph sub-families. The other aspect is that the bandit algorithm would run irrespective of the assumption: it would halt at a specific line, and then the bandit part of the algorithm would commence with a potentially large number of arms. *Note to authors*: I would urge the authors to note this point explicitly in the camera ready and to provide a complete algorithm with the halting conditions etc. Notwithstanding the above, it is an assumption made to avoid a 'pathology' (in my opinion) in the clique tree. c) The claimed regret bounds in Thm 2+3 are misleading in that an order-N dependence is hidden in a constant that actually scales with N A) All regret bounds in MAB problems scale with problem parameters - the ambient dimension in linear bandits, the number of arms in multi-armed bandits, some graph parameter when the DAG is known in causal bandits. Here, this is the number of chain components in the chordal graph, which is a parameter that depends on the EC. If all nodes were isolated this would reduce to a standard MAB, and therefore such a regret scaling is unavoidable.
train
[ "DmZ3NrsjLdz", "NeymrdUF_H7", "qwZr9jPfyki", "Zm2lODmCm_c", "NaydMcHCIB_", "JbB722ca2zE", "BScakJr0hPG", "eIwy4YyVqYu", "KgVDJt8Wa3d", "TeGAkallM5F", "VBvne9yv1Cr", "tO1RXk5vEH", "2kVeOR9daCX", "1GRhQSS_gcs", "WgTaDfOcbds", "tNZQOJUp9c", "OoMcAkRt02t", "_C9PlQcGnii", "LbHTXGKyPe2...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", ...
[ " Thank you very much for your suggestion! We are happy to change our notations.", " Thank you everyone for the extended discussion. I think this was useful to further clarify some points and the pointers to the bandit literature by the authors for similar bounds are definitely useful. \n\nA minor suggestion I ha...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 3, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 5, 4 ]
[ "NeymrdUF_H7", "qwZr9jPfyki", "Zm2lODmCm_c", "JbB722ca2zE", "eIwy4YyVqYu", "BScakJr0hPG", "eIwy4YyVqYu", "nips_2021_9-XhLobA4z", "TeGAkallM5F", "VBvne9yv1Cr", "tO1RXk5vEH", "tNZQOJUp9c", "1GRhQSS_gcs", "guzg_vfue0", "mL7bGwTKUO0", "OoMcAkRt02t", "_C9PlQcGnii", "3UAkQe2tYHe", "Wx9...
nips_2021_-U9I0f2S7W
Piper: Multidimensional Planner for DNN Parallelization
The rapid increase in sizes of state-of-the-art DNN models, and consequently the increase in the compute and memory requirements of model training, has led to the development of many execution schemes such as data parallelism, pipeline model parallelism, tensor (intra-layer) model parallelism, and various memory-saving optimizations. However, no prior work has tackled the highly complex problem of optimally partitioning the DNN computation graph across many accelerators while combining all these parallelism modes and optimizations.In this work, we introduce Piper, an efficient optimization algorithm for this problem that is based on a two-level dynamic programming approach. Our two-level approach is driven by the insight that being given tensor-parallelization techniques for individual layers (e.g., Megatron-LM's splits for transformer layers) significantly reduces the search space and makes the global problem tractable, compared to considering tensor-parallel configurations for the entire DNN operator graph.
accept
After some discussion, there's general consensus toward accepting the work. As Reviewer cpuf notes: "It is the first work that uses dynamic programming to search in a non-trivial search space that includes data parallelism, tensor model parallelism, pipeline model parallelism, and activation rematerialization." There are a number of challenges in finding parallelizable configurations, and while the paper's specific ideas may overlap with prior works or may be argued to be incremental, it makes an important delta in this significant problem area. Flaw-wise, there are several experimental limitations that would be worth resolving, as Reviewers BM6L and cpuf note. I highly recommend the authors address the rebuttal concerns as they further polish the paper.
test
[ "eEEu2UgfbKY", "1hZUk7n1--7", "fBrTogcIeI3", "tUf5mQUwssF", "CAOOZiNMne", "kookSFJc_7J", "7IQmJr6PCnh", "D3EchZJnbLH", "5pFfkOS2r2K", "U0nU6o8IBOT" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response! The response addresses my concerns and it would be great to include these into the final version.", " Thanks for updating the experiments and baselines. it would be great to incorporate these notes and relations in the final submission.", " Thank you for your very detailed and insight...
[ -1, -1, -1, -1, -1, -1, 6, 5, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 5, 4 ]
[ "kookSFJc_7J", "tUf5mQUwssF", "U0nU6o8IBOT", "5pFfkOS2r2K", "D3EchZJnbLH", "7IQmJr6PCnh", "nips_2021_-U9I0f2S7W", "nips_2021_-U9I0f2S7W", "nips_2021_-U9I0f2S7W", "nips_2021_-U9I0f2S7W" ]
nips_2021_0v9EPJGc10
Causal Effect Inference for Structured Treatments
We address the estimation of conditional average treatment effects (CATEs) for structured treatments (e.g., graphs, images, texts). Given a weak condition on the effect, we propose the generalized Robinson decomposition, which (i) isolates the causal estimand (reducing regularization bias), (ii) allows one to plug in arbitrary models for learning, and (iii) possesses a quasi-oracle convergence guarantee under mild assumptions. In experiments with small-world and molecular graphs we demonstrate that our approach outperforms prior work in CATE estimation.
accept
The proposed method extends the R-learner (Nie and Wager, 2020) to deal with multiple treatments that are graph-structured. Dealing with non-Euclidean treatments is a very important problem that is understudied in the literature. The proposed two-stage algorithm is an extension of the Robinson decomposition that several reviewers found promising (kq9J, rvHB). An important stepping stone of the approach is to generalize propensity scores to propensity features. Reviewers rvHB and 1AJd criticized the overlap assumption, which is very strong if the treatment is graph-structured. In the discussion, it was clarified that the overlap assumption is only required for the feature maps of X and T. In addition, the authors provided a sketch for a quasi-oracle regret bound. Reviewers kq9J, rvHB and fdib agree that the paper is well-written. Reviewer fdib pointed to some issues in the presentation. Overall, the authors have addressed the main issues in their responses and the theoretical results have been strengthened during the review process.
train
[ "SaUjrkEEWXL", "OFvk1KruYxN", "fx7oc0NASE", "k1Fi7yC80b", "5YYCZZvoO5i", "TjvTin0IToT", "yEJlcxckUO", "7BomHeYV91", "tG434IAWknz", "ZUCbebJpM1r", "q1gecJRGH98", "5ylhgnUxnlj", "imEL0MoamJR", "aGxHDBq2bH-", "Eesu9_rHdVd", "yyRprYHOYrX" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposed a CATE estimation approach when the treatment variable $T$ is a graph. Under a separable assumption (A3) between confounders $X$ and the treatment $T$ in the outcome structural equation ($Y$), Robinson's decomposition of CATE for binary $T$ can be applied. The performance of the authors' propos...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 4 ]
[ "nips_2021_0v9EPJGc10", "fx7oc0NASE", "q1gecJRGH98", "7BomHeYV91", "TjvTin0IToT", "tG434IAWknz", "imEL0MoamJR", "nips_2021_0v9EPJGc10", "SaUjrkEEWXL", "Eesu9_rHdVd", "yyRprYHOYrX", "aGxHDBq2bH-", "nips_2021_0v9EPJGc10", "nips_2021_0v9EPJGc10", "nips_2021_0v9EPJGc10", "nips_2021_0v9EPJG...
nips_2021_kO3l8oz8EVP
Efficient hierarchical Bayesian inference for spatio-temporal regression models in neuroimaging
Several problems in neuroimaging and beyond require inference on the parameters of multi-task sparse hierarchical regression models. Examples include M/EEG inverse problems, neural encoding models for task-based fMRI analyses, and climate science. In these domains, both the model parameters to be inferred and the measurement noise may exhibit a complex spatio-temporal structure. Existing work either neglects the temporal structure or leads to computationally demanding inference schemes. Overcoming these limitations, we devise a novel flexible hierarchical Bayesian framework within which the spatio-temporal dynamics of model parameters and noise are modeled to have Kronecker product covariance structure. Inference in our framework is based on majorization-minimization optimization and has guaranteed convergence properties. Our highly efficient algorithms exploit the intrinsic Riemannian geometry of temporal autocovariance matrices. For stationary dynamics described by Toeplitz matrices, the theory of circulant embeddings is employed. We prove convex bounding properties and derive update rules of the resulting algorithms. On both synthetic and real neural data from M/EEG, we demonstrate that our methods lead to improved performance.
accept
The authors present an efficient method to infer models with both spatial and temporal correlations by leveraging Kronecker, sparse, and Toeplitz structures. The reviewers found the method interesting and highly practical as a way to bring additional structure into the regression framework without undue computational burden. While the reviewers did raise a few minor points related to the restrictive model assumptions and the potential for additional comparisons, the authors have provided thorough responses that the reviewers have found adequate. Thus I am happy to recommend this work for acceptance to NeurIPS.
train
[ "mwoOj9IPZcN", "OoSvpyX2SD1", "1TNMi0QNMIl", "5S2Ffu2MyYE", "nLtfbFmQzt", "RQNdjZB0DkD", "z799wk0sCT5", "1tKUS8cTRI", "8uWPiuIOrjN", "KlKKTXQ4p6X" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your detailed response. I appreciate the willingness to include a comparison to other methods that account for temporal correlation structure, and I definitely think this would strengthen the empirical results. However, the current set of experiments seems sufficient to me for showcasing the capabil...
[ -1, -1, -1, -1, -1, -1, 7, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "nLtfbFmQzt", "1TNMi0QNMIl", "KlKKTXQ4p6X", "8uWPiuIOrjN", "1tKUS8cTRI", "z799wk0sCT5", "nips_2021_kO3l8oz8EVP", "nips_2021_kO3l8oz8EVP", "nips_2021_kO3l8oz8EVP", "nips_2021_kO3l8oz8EVP" ]
nips_2021_Xl1Z1L9DBIJ
Topological Attention for Time Series Forecasting
The problem of (point) forecasting univariate time series is considered. Most approaches, ranging from traditional statistical methods to recent learning-based techniques with neural networks, directly operate on raw time series observations. As an extension, we study whether local topological properties, as captured via persistent homology, can serve as a reliable signal that provides complementary information for learning to forecast. To this end, we propose topological attention, which allows attending to local topological features within a time horizon of historical data. Our approach easily integrates into existing end-to-end trainable forecasting models, such as N-BEATS, and, in combination with the latter exhibits state-of-the-art performance on the large-scale M4 benchmark dataset of 100,000 diverse time series from different domains. Ablation experiments, as well as a comparison to recent techniques in a setting where only a single time series is available for training, corroborate the beneficial nature of including local topological information through an attention mechanism.
accept
The paper proposes a form of local topological attention to improve the performance of a time series forecasting model. The reviewers agree that the proposed method is interesting and relevant, and the writing is clear. On the other hand, all reviewers caution that the performance improvements demonstrated empirically are small (~ 1% on M4), and two reviewers see the novelty as somewhat limited. However, the authors were able to alleviate further concerns regarding the empirical evaluation during the discussion period, leading all reviewers to vote for (weak) acceptance. The reviewers believe that the merits outweigh the flaws of the paper, and I concur with their assessment, recommending acceptance.
train
[ "CeLtvY__VoC", "xbron7j4TPp", "SUwJ2mCZwf1", "nSS9Ov0e50G", "vwRaLLeyb7f", "ZCY4NC2xPM5", "QXV_pLTnyd", "T_wDuvepvd", "ZZUg7Hfx-J", "63RZ1F3TW96" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " For completeness, and as mentioned/promised in our original response, we report the updated `electricity` result of Table 1 below. *Similar to `car-parts`, the ranking remains stable*.\n\n| `electricity` | Avg. Rank | % Diff. |\n|---|---|---|\n| Lin.+TopAttn | 1.50 → 1.60 | 10.59 → 5.43 |\n| Deep...
[ -1, -1, -1, -1, 6, -1, -1, -1, 6, 6 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, 3, 4 ]
[ "xbron7j4TPp", "SUwJ2mCZwf1", "nSS9Ov0e50G", "QXV_pLTnyd", "nips_2021_Xl1Z1L9DBIJ", "ZZUg7Hfx-J", "vwRaLLeyb7f", "63RZ1F3TW96", "nips_2021_Xl1Z1L9DBIJ", "nips_2021_Xl1Z1L9DBIJ" ]
nips_2021_oAjn5-AgSd
Local Signal Adaptivity: Provable Feature Learning in Neural Networks Beyond Kernels
Neural networks have been shown to outperform kernel methods in practice (including neural tangent kernels). Most theoretical explanations of this performance gap focus on learning a complex hypothesis class; in some cases, it is unclear whether this hypothesis class captures realistic data. In this work, we propose a related, but alternative, explanation for this performance gap in the image classification setting, based on finding a sparse signal in the presence of noise. Specifically, we prove that, for a simple data distribution with sparse signal amidst high-variance noise, a simple convolutional neural network trained using stochastic gradient descent learns to threshold out the noise and find the signal. On the other hand, the corresponding neural tangent kernel, with a fixed set of predetermined features, is unable to adapt to the signal in this manner. We supplement our theoretical results by demonstrating this phenomenon empirically: in CIFAR-10 and MNIST images with various backgrounds, as the background noise increases in intensity, a CNN's performance stays relatively robust, whereas its corresponding neural tangent kernel sees a notable drop in performance. We therefore propose the "local signal adaptivity" (LSA) phenomenon as one explanation for the superiority of neural networks over kernel methods.
accept
The paper is motivated by the need for mathematical understanding of mechanisms of feature learning in neural networks, and for understanding the limitations of kernel models (i.e., the neural tangent kernel, NTK) for learning in randomly initialized networks. The paper studies a very simple model problem, in which the goal is to determine whether a given input contains a noisy copy of a certain motif, or simply noise. In positive samples, the location of the motif is chosen uniformly at random. This simple model problem provides a separation between kernel methods and methods which learn features: the paper considers single-layer nonlinear networks, and shows that (i) this problem can be solved by a network with only two neurons [which essentially implements a soft thresholding denoiser], which can be learned by randomly initialized stochastic gradient descent, and (ii) the corresponding neural tangent kernel is only capable of achieving nontrivial performance when the number of neurons exceeds the length, d, of the motif. The paper complements these observations with experiments on both Gaussian backgrounds and “visual clutter”, involving more complex architectures, and showing a performance gap between the trained network and the corresponding kernel model when the background is prominent. Initial reviews appreciated the paper’s clarity, and its contribution to the discussion on gaps between kernels and feature learning. Reviewers also noted several limitations — the “handcrafted” nature of the neural network and the simplicity of this model problem — and raised questions about comparisons to finite- vs infinite-width neural tangent kernels. After discussion of the above points, the reviewers converged to a decision to accept the paper. The AC concurs with this evaluation. There are some obvious limitations to the analysis. 
At the same time, the strength of the paper is that it provides rigorous results on feature learning and separations vis-à-vis kernel methods in the practically relevant setting of detecting a motif (feature) in background clutter. The experiments suggest that these phenomena also obtain for more realistic networks and backgrounds. The paper is clearly written, and seems likely to stimulate future work.
train
[ "t6W4YE3qDoL", "HwpbFSOGcll", "ZSoHQQqVzpU", "oWYb9iStI8u", "X5MC1HWw4oo", "EpWjbMcjEkr", "n9y0Tyunjrf", "0r73OpihH9n", "dwhetXQv4Gk", "fl-8xRKmKO", "oCMdd-rKIc3", "3XCAXAGttDL", "KFI7HcVCqp", "Xqjty5X_78y", "uADgQpdEn-4" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear authors,\n\nThank you for the response to my review. I take your points regarding the task, but also tend to maintain my original rating. At the same time, I agree that your contribution here is quite interesting, and should spur interesting future deep learning theory research.", " Thank you! We will be s...
[ -1, -1, -1, 6, -1, -1, 7, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ -1, -1, -1, 4, -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "KFI7HcVCqp", "EpWjbMcjEkr", "X5MC1HWw4oo", "nips_2021_oAjn5-AgSd", "3XCAXAGttDL", "0r73OpihH9n", "nips_2021_oAjn5-AgSd", "dwhetXQv4Gk", "oCMdd-rKIc3", "uADgQpdEn-4", "n9y0Tyunjrf", "oWYb9iStI8u", "Xqjty5X_78y", "nips_2021_oAjn5-AgSd", "nips_2021_oAjn5-AgSd" ]
nips_2021_7X_sBjIwtm9
IA-RED$^2$: Interpretability-Aware Redundancy Reduction for Vision Transformers
Bowen Pan, Rameswar Panda, Yifan Jiang, Zhangyang Wang, Rogerio Feris, Aude Oliva
accept
This paper proposes a framework for redundancy reduction in vision transformers. The reduction is obtained in a learnable manner, using REINFORCE. The results are encouraging, yielding an improvement over recent vision transformer architectures and providing an interpretable result. The tackled topic is timely and the execution of this submission is good. Therefore, I recommend the acceptance of this paper as a poster.
train
[ "p7x9cuMgWjK", "rSiA5LS6PjX", "bpa66MabEG", "NHvCfSzVR5y", "rpC8Qzoa4hs", "5V7p_-R7D7K", "umukkVRTaQ", "YhQ-FeYjxvf", "b8jofUVdbm", "u3bKyDVUm86", "wKHz5CWCAX9", "jpNtLVdFsHB", "YAKIyzTDGPb", "t0BWlPVnsqe" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer yHQu,\n\nThanks again for your constructive comments and suggestions. As the discussion phase is nearing its end, we wondered if you might still have any concerns that we could address. We hope our new results and comparisons with other sparse transformers and interpretability methods addressed all ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "5V7p_-R7D7K", "wKHz5CWCAX9", "NHvCfSzVR5y", "u3bKyDVUm86", "nips_2021_7X_sBjIwtm9", "umukkVRTaQ", "b8jofUVdbm", "nips_2021_7X_sBjIwtm9", "t0BWlPVnsqe", "YAKIyzTDGPb", "jpNtLVdFsHB", "nips_2021_7X_sBjIwtm9", "nips_2021_7X_sBjIwtm9", "nips_2021_7X_sBjIwtm9" ]
nips_2021_tjwQaOI9tdy
Symbolic Regression via Deep Reinforcement Learning Enhanced Genetic Programming Seeding
Symbolic regression is the process of identifying mathematical expressions that fit observed output from a black-box process. It is a discrete optimization problem generally believed to be NP-hard. Prior approaches to solving the problem include neural-guided search (e.g. using reinforcement learning) and genetic programming. In this work, we introduce a hybrid neural-guided/genetic programming approach to symbolic regression and other combinatorial optimization problems. We propose a neural-guided component used to seed the starting population of a random restart genetic programming component, gradually learning better starting populations. On a number of common benchmark tasks to recover underlying expressions from a dataset, our method recovers 65% more expressions than a recently published top-performing model using the same experimental setup. We demonstrate that running many genetic programming generations without interdependence on the neural-guided component performs better for symbolic regression than alternative formulations where the two are more strongly coupled. Finally, we introduce a new set of 22 symbolic regression benchmark problems with increased difficulty over existing benchmarks. Source code is provided at www.github.com/brendenpetersen/deep-symbolic-optimization.
accept
This paper received review scores with high variance: 5, 6, 6, 9. The paper was actively discussed, both with the authors and in private. Reviewer Ug8S (score: 9) championed the paper for its simplicity and strong results. Yet, the review text and the arguments given (e.g., "I see no real reason for this work for not being accepted.") do not sound like a score of 9 at NeurIPS, but more like a 6 or 7. Negative points in the reviews and the discussion were the lack of originality (the paper heuristically combines existing methods: RL in order to sample the initial population of evolutionary methods, to solve symbolic regression problems). A brief note about the NeurIPS checklist: the authors' responses to the questions are very casual (e.g. a simple "Yes" would be preferable over "It would be a bad idea not to"), as if these points are obviously satisfied; however, the paper did *not* attach the code, and the code repo merely gave "www.anonymous.submission". That does *not* count as the code being available; it should be available for reviewers to check in an anonymized repo or attached as supplementary material. Overall, in view of the strong results and the simplicity of the method, I am leaning towards acceptance as a poster.
val
[ "bRLJxvejZ-0", "g_no1kzdN0o", "CapWhEB4c2u", "gZZkYj_CByW", "_gkHDPteKss", "fsdBexObMVi", "yQzRTkV2mbH", "sPbjXk1D28e", "rGw-0JeTjKv", "Na0j8QrkF6s", "nH68YMPzsHJ", "zkfyPQnftnp" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper describes a method to enhance the performance of Genetic Programming (GP) for symbolic regression by iteratively enhancing initial populations for 'random restart' GP implementations. The method works through a generative recurrent network (RNN) and an Reinforcement Learning (RL) approach, in which the R...
[ 9, -1, -1, 6, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ 5, -1, -1, 2, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "nips_2021_tjwQaOI9tdy", "rGw-0JeTjKv", "yQzRTkV2mbH", "nips_2021_tjwQaOI9tdy", "gZZkYj_CByW", "zkfyPQnftnp", "gZZkYj_CByW", "nH68YMPzsHJ", "bRLJxvejZ-0", "nips_2021_tjwQaOI9tdy", "nips_2021_tjwQaOI9tdy", "nips_2021_tjwQaOI9tdy" ]
nips_2021_ssohLcmn4-r
Choose a Transformer: Fourier or Galerkin
In this paper, we apply the self-attention from the state-of-the-art Transformer in Attention Is All You Need for the first time to a data-driven operator learning problem related to partial differential equations. An effort is put together to explain the heuristics of, and to improve the efficacy of the attention mechanism. By employing the operator approximation theory in Hilbert spaces, it is demonstrated for the first time that the softmax normalization in the scaled dot-product attention is sufficient but not necessary. Without softmax, the approximation capacity of a linearized Transformer variant can be proved to be comparable to a Petrov-Galerkin projection layer-wise, and the estimate is independent with respect to the sequence length. A new layer normalization scheme mimicking the Petrov-Galerkin projection is proposed to allow a scaling to propagate through attention layers, which helps the model achieve remarkable accuracy in operator learning tasks with unnormalized data. Finally, we present three operator learning experiments, including the viscid Burgers' equation, an interface Darcy flow, and an inverse interface coefficient identification problem. The newly proposed simple attention-based operator learner, Galerkin Transformer, shows significant improvements in both training cost and evaluation accuracy over its softmax-normalized counterparts.
accept
This paper studies Transformer variants for data-driven operator learning problems of parametric partial differential equations: the Fourier-type and the Galerkin-type. The paper includes both theoretical and empirical evidence showing that these two Transformer variants are accurate and efficient for solving various parametric PDEs. The authors had productive discussions with all the reviewers during the rebuttal period and put great effort into improving the quality of the paper. All the reviewers agree that the mapping between Transformers and PDE solvers is interesting and important. Therefore, I recommend acceptance. I hope the authors will revise the paper accordingly in the camera-ready version.
train
[ "DZ91sAKElKl", "dtDX01SApAN", "sXse3RwOicW", "_5wNpWDYxIO", "phsaqTELtqX", "PBT_wSADFTj", "xU7pn2tZih5", "QLdXoxLW3fo", "q6AY8jH7sp9", "Pz9iykP9Aqc", "-MA_q6M1SZ", "wjG1TbJ2zpV", "LXE_I-wH32N", "_b4_Paadadn", "-y2P0ILyWWK", "6psTFHAMAti", "vAOtKnXs3hLQ", "wFPd2dTHUAf", "8WjhaiiZX...
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We greatly appreciate the reviewers for the effort spent on reviewing our work. Learning from the insightful and valuable suggestions and criticisms, we lay out the following roadmap of some revisions to the paper, to better present this new method and a set of new tools to the community.\n\n----\n\n### Changes done ...
[ -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, 3, -1, -1, -1, -1, -1, -1, 3, 2 ]
[ "nips_2021_ssohLcmn4-r", "PBT_wSADFTj", "phsaqTELtqX", "xU7pn2tZih5", "wjG1TbJ2zpV", "vAOtKnXs3hLQ", "6psTFHAMAti", "nips_2021_ssohLcmn4-r", "Pz9iykP9Aqc", "LXE_I-wH32N", "nips_2021_ssohLcmn4-r", "8WjhaiiZXv5", "-MA_q6M1SZ", "-MA_q6M1SZ", "-MA_q6M1SZ", "QLdXoxLW3fo", "wFPd2dTHUAf", ...
nips_2021_kAm9By0R5ME
A Causal Lens for Controllable Text Generation
Controllable text generation concerns two fundamental tasks of wide applications, namely generating text of given attributes (i.e., attribute-conditional generation), and minimally editing existing text to possess desired attributes (i.e., text attribute transfer). Extensive prior work has largely studied the two problems separately, and developed different conditional models which, however, are prone to producing biased text (e.g., various gender stereotypes). This paper proposes to formulate controllable text generation from a principled causal perspective which models the two tasks with a unified framework. A direct advantage of the causal formulation is the use of rich causality tools to mitigate generation biases and improve control. We treat the two tasks as interventional and counterfactual causal inference based on a structural causal model, respectively. We then apply the framework to the challenging practical setting where confounding factors (that induce spurious correlations) are observable only on a small fraction of data. Experiments show significant superiority of the causal approach over previous conditional models for improved control accuracy and reduced bias.
accept
The submission proposes a method for debiasing controllable text generation, allowing text to be generated that satisfies one desired attribute, without also introducing confounding attributes. The reviewers agree that this is an important problem, and that a causal perspective is valuable here. The major concerns raised by the reviewers focused on the evaluation, however the authors have provided additional experimental results in their response that significantly strengthen the conclusions. Therefore, I recommend acceptance.
train
[ "tdu65J6J5n-", "eSH1NSHq3vc", "7esVJsi7OH", "KkurgeB_73i", "VSHSfVnIYdB", "0Tol3gmb5A", "tLiT08eF4qi", "BEsc_mxnKb", "cPo69oLUxT", "VXfpJBlmSgd", "xntli5kNwUW", "51XTFghkXpC", "HXFImLNDc5Z", "edOvjAk8M48", "Nji1daMJitK", "OjKw5FsiwTK" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " The additional regularizations in training and inference (for debiasing the generation) tend to lower the diversity to some extent. This seems to be a general observation -- e.g., the diversity of the baselines *Fair-reg* (40.2) and *GeDi* (41.7) is slightly lower than that of the *conditional LM* (41.9, see Tabl...
[ -1, -1, -1, -1, -1, 6, -1, -1, 7, -1, -1, -1, -1, -1, 6, 7 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, 3, -1, -1, -1, -1, -1, 4, 2 ]
[ "eSH1NSHq3vc", "HXFImLNDc5Z", "51XTFghkXpC", "VSHSfVnIYdB", "BEsc_mxnKb", "nips_2021_kAm9By0R5ME", "xntli5kNwUW", "VXfpJBlmSgd", "nips_2021_kAm9By0R5ME", "cPo69oLUxT", "0Tol3gmb5A", "OjKw5FsiwTK", "Nji1daMJitK", "nips_2021_kAm9By0R5ME", "nips_2021_kAm9By0R5ME", "nips_2021_kAm9By0R5ME" ...
nips_2021_P0AeY-efPEx
Differentially Private Multi-Armed Bandits in the Shuffle Model
Jay Tenenbaum, Haim Kaplan, Yishay Mansour, Uri Stemmer
accept
This paper proposes new algorithms for differentially private multi-armed bandits in the shuffle model. Their algorithms nearly match the results in the central model. The paper is well-written and the reviewers were positive. I would suggest the authors incorporate the feedback from the reviewers in the final version. In particular, I found the discussion of the naive attempt to convert the algorithm of Tossou and Dimitrakakis from the centralized model to the shuffle model to be useful, and adding some of it to the paper would make it better. I am happy to recommend acceptance for this paper.
train
[ "PyltIO7VhA0", "TzTzPjHwBb", "M06FqkyblL", "nYfbBQ0TNaw", "bOAbDdNT10T", "4gTtikf4VIj", "523wVJkZffO", "gznD-i4XT-0", "fnNS3CKIgv", "BegENCuAqh4", "ChpLiFm2Vwx" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " After reading the response and other reviews my review remains the same.", " After going through other reviews and authors' responses to those, my assessment of the paper remains unchanged.", "This paper studies the multi-armed bandit problem under the constraint of approximate differential privacy. Previousl...
[ -1, -1, 7, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "bOAbDdNT10T", "523wVJkZffO", "nips_2021_P0AeY-efPEx", "4gTtikf4VIj", "ChpLiFm2Vwx", "M06FqkyblL", "BegENCuAqh4", "fnNS3CKIgv", "nips_2021_P0AeY-efPEx", "nips_2021_P0AeY-efPEx", "nips_2021_P0AeY-efPEx" ]
nips_2021_nnQpieSBwJ
Dual Adaptivity: A Universal Algorithm for Minimizing the Adaptive Regret of Convex Functions
To deal with changing environments, a new performance measure, adaptive regret, defined as the maximum static regret over any interval, was proposed in online learning. Under the setting of online convex optimization, several algorithms have been successfully developed to minimize the adaptive regret. However, existing algorithms lack universality in the sense that they can only handle one type of convex functions and need a priori knowledge of parameters. By contrast, there exist universal algorithms, such as MetaGrad, that attain optimal static regret for multiple types of convex functions simultaneously. Along this line of research, this paper presents the first universal algorithm for minimizing the adaptive regret of convex functions. Specifically, we borrow the idea of maintaining multiple learning rates in MetaGrad to handle the uncertainty of functions, and utilize the technique of sleeping experts to capture changing environments. In this way, our algorithm automatically adapts to the property of functions (convex, exponentially concave, or strongly convex), as well as the nature of environments (stationary or changing). As a by-product, it also allows the type of functions to switch between rounds.
accept
In agreement with the reviewers, I am happy to recommend acceptance of the paper. During the review process the authors promised several improvements to the paper (such as discussing and experimenting with the suggested simple black-box algorithm or clarifying that their references to sleeping experts are meant only in a restricted manner), which they must include in the final version of the paper.
train
[ "n1B4II0cQ0", "jsuyVq4aeGc", "-rk3Xrd846v", "EtxpIInfSzc", "H-zcudZ5iWA", "h0jS3VXQF9N", "nvCqv7UZet", "RGU8cydNm51", "z52G_hGZijt", "MSwJUMWROsD", "DlAj13S1lHt", "O-lRM9oX4g" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies online convex optimization in the strongly-adaptive regret setting. Essentially, this paper generalizes MetaGrad, an OCO algorithm which was able to adapt to the type of convexity of the losses, to the strongly adaptive regret setting.\n\nThis algorithm is built using the metagrad meta-regret st...
[ 7, -1, -1, -1, 6, -1, -1, -1, -1, -1, 7, 7 ]
[ 4, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "nips_2021_nnQpieSBwJ", "-rk3Xrd846v", "nvCqv7UZet", "h0jS3VXQF9N", "nips_2021_nnQpieSBwJ", "z52G_hGZijt", "O-lRM9oX4g", "DlAj13S1lHt", "H-zcudZ5iWA", "n1B4II0cQ0", "nips_2021_nnQpieSBwJ", "nips_2021_nnQpieSBwJ" ]
nips_2021_2zO2lb7ykMD
Learning Hard Optimization Problems: A Data Generation Perspective
Optimization problems are ubiquitous in our societies and are present in almost every segment of the economy. Most of these optimization problems are NP-hard and computationally demanding, often requiring approximate solutions for large-scale instances. Machine learning frameworks that learn to approximate solutions to such hard optimization problems are a potentially promising avenue to address these difficulties, particularly when many closely related problem instances must be solved repeatedly. Supervised learning frameworks can train a model using the outputs of pre-solved instances. However, when the outputs are themselves approximations, when the optimization problem has symmetric solutions, and/or when the solver uses randomization, solutions to closely related instances may exhibit large differences and the learning task can become inherently more difficult. This paper demonstrates this critical challenge, connects the volatility of the training data to the ability of a model to approximate it, and proposes a method for producing (exact or approximate) solutions to optimization problems that are more amenable to supervised learning tasks. The effectiveness of the method is tested on hard non-linear nonconvex and discrete combinatorial problems.
accept
This paper provides a methodology for generating datasets for supervised learning for combinatorial optimization problems. This is an interesting perspective as it focuses on the importance of generating the right datasets for the supervised learning problem. The authors highlight several challenges that characterize combinatorial optimization problems and provide some theoretical insights for the proposed methodology. In particular, the authors propose an approach that formulates the problem of optimal dataset design as a bilevel optimization problem and introduce an efficient algorithm for dataset generation. They show the effectiveness of the approach for job shop scheduling and optimal power flow problems. The reviewers were unanimous in recommending acceptance.
train
[ "VRYKtyVM8_", "QuTGscBsgo5", "FshEHyDoQRJ", "LReiwmTmAgD", "KmNVE3pXt1c", "1vaDzwGlEcF", "2O9zEd9yZWz", "w8e2aTW-LzL", "rIEvawFQUFz", "7RVNWfhGbMF", "vnIWEPhF5Yy", "Y6cMYcuZiIx", "6Zyph1FEOLT", "yT_D_dxFJJu", "SogoysOhR2s", "Pb73DIGp2L", "CKeuZ7P4XgF", "ZFHeKf3V3DK" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_rev...
[ "This paper proposes a method to optimize training data generation for supervised learning approaches to constrained optimization. First the authors show that the existence of various optimal solutions to the problem can make the learning more challenging. They then proceed with some theoretical insights ...
[ 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, 7, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "nips_2021_2zO2lb7ykMD", "FshEHyDoQRJ", "LReiwmTmAgD", "KmNVE3pXt1c", "1vaDzwGlEcF", "SogoysOhR2s", "w8e2aTW-LzL", "yT_D_dxFJJu", "7RVNWfhGbMF", "Pb73DIGp2L", "nips_2021_2zO2lb7ykMD", "6Zyph1FEOLT", "ZFHeKf3V3DK", "CKeuZ7P4XgF", "VRYKtyVM8_", "vnIWEPhF5Yy", "nips_2021_2zO2lb7ykMD", ...
nips_2021_jnkE5c5f9m
Canonical Capsules: Self-Supervised Capsules in Canonical Pose
We propose an unsupervised capsule architecture for 3D point clouds. We compute capsule decompositions of objects through permutation-equivariant attention, and self-supervise the process by training with pairs of randomly rotated objects. Our key idea is to aggregate the attention masks into semantic keypoints, and use these to supervise a decomposition that satisfies the capsule invariance/equivariance properties. This not only enables the training of a semantically consistent decomposition, but also allows us to learn a canonicalization operation that enables object-centric reasoning. To train our neural network we require neither classification labels nor manually-aligned training datasets. Yet, by learning an object-centric representation in a self-supervised manner, our method outperforms the state-of-the-art on 3D point cloud reconstruction, canonicalization, and unsupervised classification.
accept
The authors have successfully addressed the raised issues. All reviewers agree that this paper makes a good contribution to the NeurIPS community. The area chairs agree and recommend this paper be accepted.
train
[ "ifoJN9w0HHX", "YsfQ4aBZqh6", "vsALkAfQ6m", "HjV5t9ebCPs", "KiwoLqCt0xZ", "cYiX2rVfX8", "ivH8DybO1TP", "h7aw92D3ZzT", "M8OkQMk7IG", "btflviWySTN", "QUTwvEBd1C", "H8JQENdxxej", "eYaf3YE9v8t" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We base our unsupervised training on an auto-encoding reconstruction task, for which the community has typically relied on the 13-class ShapeNet Core dataset since AtlasNet (2018+). While we would have loved to be able to launch our experiments on collections with more classes, this is what is currently publicly availabl...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 8, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4, 3, 4 ]
[ "eYaf3YE9v8t", "eYaf3YE9v8t", "ivH8DybO1TP", "eYaf3YE9v8t", "H8JQENdxxej", "QUTwvEBd1C", "btflviWySTN", "M8OkQMk7IG", "nips_2021_jnkE5c5f9m", "nips_2021_jnkE5c5f9m", "nips_2021_jnkE5c5f9m", "nips_2021_jnkE5c5f9m", "nips_2021_jnkE5c5f9m" ]
nips_2021__KqWSCu566
Characterizing Generalization under Out-Of-Distribution Shifts in Deep Metric Learning
Deep Metric Learning (DML) aims to find representations suitable for zero-shot transfer to a priori unknown test distributions. However, common evaluation protocols only test a single, fixed data split in which train and test classes are assigned randomly. More realistic evaluations should consider a broad spectrum of distribution shifts with potentially varying degree and difficulty. In this work, we systematically construct train-test splits of increasing difficulty and present the ooDML benchmark to characterize generalization under out-of-distribution shifts in DML. ooDML is designed to probe the generalization performance on much more challenging, diverse train-to-test distribution shifts. Based on our new benchmark, we conduct a thorough empirical analysis of state-of-the-art DML methods. We find that while generalization tends to consistently degrade with difficulty, some methods are better at retaining performance as the distribution shift increases. Finally, we propose few-shot DML as an efficient way to consistently improve generalization in response to unknown test shifts presented in ooDML.
accept
This paper proposes a new benchmark for deep metric learning under varying degrees of distribution shift. Although the paper does not take a step toward providing a solution, we are hopeful that this work inspires follow-up work on OOD generalization. I request the authors to incorporate the reviews and discussions in the camera-ready version.
train
[ "QegE__4tqy", "Jry9G9t9hE", "KW1PcpnoeP8", "R2YPJqAiW4H", "pggh5_rO0dl", "he7dHRHLVox", "-iAk9AlPsLH", "6X95lvTIjvo", "6h75OU1ZSi3", "YfxFOsNbvVa" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response! You helped clear up some confusion that I had. I will maintain my original score as it is a good paper.", " I acknowledge the authors' response and maintain my decision to accept.", " We thank the reviewers for their thorough reviews and helpful comments, as well as for appreciatin...
[ -1, -1, -1, -1, -1, -1, 7, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "pggh5_rO0dl", "KW1PcpnoeP8", "YfxFOsNbvVa", "6h75OU1ZSi3", "-iAk9AlPsLH", "6X95lvTIjvo", "nips_2021__KqWSCu566", "nips_2021__KqWSCu566", "nips_2021__KqWSCu566", "nips_2021__KqWSCu566" ]
nips_2021_I1GHll1Z7E
Dynamics-regulated kinematic policy for egocentric pose estimation
We propose a method for object-aware 3D egocentric pose estimation that tightly integrates kinematics modeling, dynamics modeling, and scene object information. Unlike prior kinematics or dynamics-based approaches where the two components are used disjointly, we synergize the two approaches via dynamics-regulated training. At each timestep, a kinematic model is used to provide a target pose using video evidence and simulation state. Then, a prelearned dynamics model attempts to mimic the kinematic pose in a physics simulator. By comparing the pose instructed by the kinematic model against the pose generated by the dynamics model, we can use their misalignment to further improve the kinematic model. By factoring in the 6DoF pose of objects (e.g., chairs, boxes) in the scene, we demonstrate for the first time, the ability to estimate physically-plausible 3D human-object interactions using a single wearable camera. We evaluate our egocentric pose estimation method in both controlled laboratory settings and real-world scenarios.
accept
Originally there was a slight disagreement among the reviewers, with this paper receiving 3 positive ratings and 1 negative one. The reviewers acknowledge the interest of the studied scenario and that the empirical results are convincing. They nonetheless proposed several ways to improve the paper: clarifying several aspects of the method and of the results, as well as discussing some additional works. The authors addressed these points in their feedback and managed to convince the most negative reviewer to raise their score. We therefore believe that this paper can be accepted to NeurIPS, but strongly encourage the authors to incorporate the reviewers' feedback into the final version of the paper.
test
[ "Je4YDc6kqsN", "r2sdo2j-EOq", "aDTNp6mluFk", "2CzNcKrOe8S", "VFAE83U7GCV", "8BIw6aSGT5i", "bV8_4YEP8Os", "-ZQJeOAPBvJ", "wEWFALeBto4", "090nbb7pVnt", "iQvITTITQ-", "NhJP0Pdd8fR", "CeHXgp55ImC" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you so much for reading our response and updating the review. We are glad and grateful that our response has addressed some of the concerns. We will continue to update our manuscript to add and clarify the important points raised during discussion (limitations, evaluation on AMASS dataset and additional dat...
[ -1, 6, -1, 7, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ -1, 5, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "aDTNp6mluFk", "nips_2021_I1GHll1Z7E", "-ZQJeOAPBvJ", "nips_2021_I1GHll1Z7E", "bV8_4YEP8Os", "2CzNcKrOe8S", "8BIw6aSGT5i", "090nbb7pVnt", "CeHXgp55ImC", "r2sdo2j-EOq", "NhJP0Pdd8fR", "nips_2021_I1GHll1Z7E", "nips_2021_I1GHll1Z7E" ]
nips_2021_4VAp_PL9yKs
Never Go Full Batch (in Stochastic Convex Optimization)
Idan Amir, Yair Carmon, Tomer Koren, Roi Livni
accept
During the discussion phase, all reviewers and the AC found the results interesting and novel enough for presentation at NeurIPS. The paper presents solid results and brings new insights into the negative results for full-batch methods.
train
[ "YcgHZqlC7-3", "NBtJS_KnQF1", "uRDP4YjfTJU", "FnWIGirOU1_", "S6zCeaM6X5P", "EYPr2l0DCu_", "ae3sC3Zkeel", "Tva61pe3iZv", "4WjizIKpJkr", "YhHB7KGYoSQ", "06BO2whB2NN" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper considers a problem where one is trying to find an input that minimizes the expected value of a function that one has sample values of. Its goal is to show that algorithms that only have the ability to query the gradient of the average of the samples at specified points may need more queries than a func...
[ 6, 7, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 3, 2, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_4VAp_PL9yKs", "nips_2021_4VAp_PL9yKs", "FnWIGirOU1_", "S6zCeaM6X5P", "EYPr2l0DCu_", "NBtJS_KnQF1", "YcgHZqlC7-3", "06BO2whB2NN", "YhHB7KGYoSQ", "nips_2021_4VAp_PL9yKs", "nips_2021_4VAp_PL9yKs" ]
nips_2021_O8wI1avs4WF
Collaborative Learning in the Jungle (Decentralized, Byzantine, Heterogeneous, Asynchronous and Nonconvex Learning)
El Mahdi El-Mhamdi, Sadegh Farhadkhani, Rachid Guerraoui, Arsany Guirguis, Lê-Nguyên Hoang, Sébastien Rouault
accept
This paper proposes a Byzantine-robust fully decentralized learning approach under data heterogeneity. The reviews were initially quite mixed. After reading the author response, several reviewers updated their scores. Some concerns remain about the role/relevance of using large batch sizes, as well as the lack of insight into what robustness can be achieved in the heterogeneous setting. However, the reviewers appreciated the response and recognize the relevance of the problem, approach, and evaluation. Therefore, the paper is accepted. The authors are strongly encouraged to incorporate suggestions and additional details from the discussion, in particular to improve the clarity of the presentation and give more intuition about the approach.
train
[ "qNIV_eEo0_P", "0el9RbzIjWb", "wmUJT0T5FjX", "fwNdtm_1dB", "MkP2AZRak3y", "Hg_r_Q82Xt7", "zQAuamGpxHO", "I_eedciEl14", "lyYQw4k7MUa", "tVHcXCMxoKT" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " I acknowledge that I have read the rebuttal. ", " I acknowledge that I have read the rebuttal. ", "The paper considers collaborative learning in an (asynchronous) decentralized setup in the presence of Byzantine nodes. The authors define two problems: Byzantine-robust collaborative learning and Byzantine-robu...
[ -1, -1, 7, 6, 7, -1, -1, -1, -1, 5 ]
[ -1, -1, 3, 3, 3, -1, -1, -1, -1, 4 ]
[ "zQAuamGpxHO", "lyYQw4k7MUa", "nips_2021_O8wI1avs4WF", "nips_2021_O8wI1avs4WF", "nips_2021_O8wI1avs4WF", "fwNdtm_1dB", "tVHcXCMxoKT", "wmUJT0T5FjX", "MkP2AZRak3y", "nips_2021_O8wI1avs4WF" ]
nips_2021_bDdfxLQITtu
Not All Low-Pass Filters are Robust in Graph Convolutional Networks
Graph Convolutional Networks (GCNs) are promising deep learning approaches in learning representations for graph-structured data. Despite the proliferation of such methods, it is well known that they are vulnerable to carefully crafted adversarial attacks on the graph structure. In this paper, we first conduct an adversarial vulnerability analysis based on matrix perturbation theory. We prove that the low-frequency components of the symmetric normalized Laplacian, which is usually used as the convolutional filter in GCNs, could be more robust against structural perturbations when their eigenvalues fall into a certain robust interval. Our results indicate that not all low-frequency components are robust to adversarial attacks and provide a deeper understanding of the relationship between graph spectrum and robustness of GCNs. Motivated by the theory, we present GCN-LFR, a general robust co-training paradigm for GCN-based models, that encourages transferring the robustness of low-frequency components with an auxiliary neural network. To this end, GCN-LFR could enhance the robustness of various kinds of GCN-based models against poisoning structural attacks in a plug-and-play manner. Extensive experiments across five benchmark datasets and five GCN-based models also confirm that GCN-LFR is resistant to the adversarial attacks without compromising on performance in the benign situation.
accept
All reviewers feel the paper is borderline but lean towards acceptance. The meta-reviewer read the paper and finds the insight obtained from it interesting. Therefore, the meta-reviewer decides to concur with the other reviewers and suggests acceptance of this submission.
test
[ "_SIUz1se3s", "rwA7FJGQyZF", "O6VuNKJW2cG", "xqaK5b-laSg", "FxPj76GuDEB", "n5K-gn0qdUx", "hQLYmr_65Jh", "Ncbw0nQjLwm", "D4WsvyR-8A-", "CktfFOlM2kK", "C2rTyHBlny7", "FJjHyAAOOxP", "MDzAvwXafsA", "4ZopCFdyn7f", "AB2PxJpPSdk", "tGi684Vv_UR", "bERz-JTCwp5", "ldMprGKnV1I", "4RBJsAYWix...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", ...
[ " We really appreciate your time and insightful comments, and we are more than happy to find that your concerns have been addressed.\n\nSure, we will update the revised version accordingly as promised.\n\nIf you have further concerns that you think could be clarified, we will be very glad to discuss.\n", " Ma...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "xqaK5b-laSg", "FxPj76GuDEB", "nips_2021_bDdfxLQITtu", "C2rTyHBlny7", "K0t8FSSfUx", "tGi684Vv_UR", "dK3Eb1VGRWl", "aXSdsKb9kYp", "O0RaO02vjux", "O0RaO02vjux", "O6VuNKJW2cG", "4ZopCFdyn7f", "nips_2021_bDdfxLQITtu", "QCeZGBaCAh0", "MDzAvwXafsA", "O6VuNKJW2cG", "MDzAvwXafsA", "MDzAvwX...
nips_2021_o6s1b_-nDOE
Counterfactual Maximum Likelihood Estimation for Training Deep Networks
Although deep learning models have driven state-of-the-art performance on a wide array of tasks, they are prone to spurious correlations that should not be learned as predictive clues. To mitigate this problem, we propose a causality-based training framework to reduce the spurious correlations caused by observed confounders. We give theoretical analysis on the underlying general Structural Causal Model (SCM) and propose to perform Maximum Likelihood Estimation (MLE) on the interventional distribution instead of the observational distribution, namely Counterfactual Maximum Likelihood Estimation (CMLE). As the interventional distribution, in general, is hidden from the observational data, we then derive two different upper bounds of the expected negative log-likelihood and propose two general algorithms, Implicit CMLE and Explicit CMLE, for causal predictions of deep learning models using observational data. We conduct experiments on both simulated data and two real-world tasks: Natural Language Inference (NLI) and Image Captioning. The results show that CMLE methods outperform the regular MLE method in terms of out-of-domain generalization performance and reducing spurious correlations, while maintaining comparable performance on the regular evaluations.
accept
We thank the authors for their engaging submission on an important topic of research. A major shared concern of the reviewers is the lack of baseline methods in the empirical study. While the related work section describes the state of existing invariant learning algorithms, the authors make no direct comparisons in the experiments. And while there may be no directly comparable algorithm, the wealth of causal representation learning algorithms enumerated by reviewer S8oA could still be used to form predictions that highlight the key differences of the CMLE framework. Still, this work introduces and empirically investigates a new objective for learning invariant representations. With the additional simulation study and reviewer responses, this work was viewed favorably by most reviewers. We encourage the authors to incorporate insights from the simulation study into the manuscript.
train
[ "YDtRaZvdPPe", "aA9xYlglQcY", "prKj-JOLXiv", "jGnhor0p_At", "LWtIyQehf5k", "CjrChJP6xBS", "nVbvMuQqtRj", "MBAb86Gc6RJ", "TiEafFROrC7", "2_RESnCvWA5", "IaED1YXJq-", "pkFJJXgZ2JM", "3USzFVfOHqa" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper tackles the problem of spurious correlations caused by observed confounders and offers two potential solutions: implicit and explicit counterfactual maximum likelihood estimation. Specifically the paper deals with the setup of predicting the outcome Y of some action T on X and considers X to potentially ...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 7, 7 ]
[ 2, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 2 ]
[ "nips_2021_o6s1b_-nDOE", "MBAb86Gc6RJ", "jGnhor0p_At", "pkFJJXgZ2JM", "3USzFVfOHqa", "2_RESnCvWA5", "IaED1YXJq-", "YDtRaZvdPPe", "nips_2021_o6s1b_-nDOE", "nips_2021_o6s1b_-nDOE", "nips_2021_o6s1b_-nDOE", "nips_2021_o6s1b_-nDOE", "nips_2021_o6s1b_-nDOE" ]
nips_2021_Rv3vp-JDUSJ
Robust Optimization for Multilingual Translation with Imbalanced Data
Xian Li, Hongyu Gong
accept
This paper studies the problem of training multilingual machine translation models in the presence of imbalanced data, which is common in multilingual parallel corpora. The method is based on the empirical observation that in multilingual MT training, "sharpness" of local curvature in the loss landscape causes interference among languages and instability. The paper then proposes CATS, a curvature-based, online, per-language learning rate adjustment to improve the performance of low-resource languages in multilingual machine translation (MT) models. Experiments in three scenarios varying in multilinguality and data size (WMT-10, TED, OPUS-100) show that the technique works consistently, outperforming a widely-adopted temperature-based sampling baseline. Through a number of analysis and ablation experiments, they are able to provide insight into how and why the optimization technique works. The experiments could be improved with more analysis (e.g., upsampling T). The paper also lacks comparison with some recent strong baselines for multilingual machine translation. The authors may consider moving some results and analysis from the appendix to the main content. The overall writing, including figures, tables, and captions, could be improved as suggested by all reviewers.
train
[ "P4xAmYGWBp_", "NUX05sHagIR", "4NYcgMhQds0", "aBV6EOwPG_R", "LynxljgiJog", "JFpFXyQrWQB", "wPNCc0Y3HvZ", "cE1dccCG9r" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies the problem of robustness optimization in multilingual machine translation models. In response to this problem, the author empirically found that in the loss optimization of multilingual machine translation models under unbalanced data scenarios, \"sharpness\" of local curvature in the loss land...
[ 6, -1, -1, -1, -1, 6, 8, 7 ]
[ 4, -1, -1, -1, -1, 5, 5, 3 ]
[ "nips_2021_Rv3vp-JDUSJ", "P4xAmYGWBp_", "cE1dccCG9r", "wPNCc0Y3HvZ", "JFpFXyQrWQB", "nips_2021_Rv3vp-JDUSJ", "nips_2021_Rv3vp-JDUSJ", "nips_2021_Rv3vp-JDUSJ" ]
nips_2021_DDoDN0BLLhb
A/B/n Testing with Control in the Presence of Subpopulations
Yoan Russac, Christina Katsimerou, Dennis Bohle, Olivier Cappé, Aurélien Garivier, Wouter M. Koolen
accept
Dear authors, there was some disagreement between the reviewers. Given this, and taking into account all comments as well as the background of the reviewers, I decided to take a special look at the article. After reading the article, I believe this setting is well-motivated and interesting. The lower bounds do not seem very hard to derive, but I appreciate the effort to make them explicit on an illustrative example. The proof of Theorem 2 relies heavily on previous work, working by reduction. Clarity should indeed be improved regarding the presentation of the algorithm and the notation. [Small point: I suggest you add $\beta$ in the notation for $S(\mu)$ or $Alt(\mu)$; otherwise it is not clear where the dependency on $\beta$ is in Theorem 1.] Regarding the algorithm, I do not encourage using a different $\beta(t,\delta)$ for the analysis to be true and a "stylized" one for the experiments, as this indicates a mismatch between theory and experiments. Now, the algorithm does achieve optimality in the considered setting and is supported by illustrative numerical results. The discussion of the complexity of the algorithm and the handling of the $2^K$ sets to consider should be improved. Proposition 3 brings an interesting point, but I feel this part could be extended (for instance, what happens for other distributions? Is there any hope of having some reasonable complexity?). The numerical experiments should also be clarified, and seem to be restricted to small values of $K$: I encourage considering examples with much larger $K$ as well, and perhaps highlighting the (low?) computational complexity of the algorithm. All in all, I believe this is a simple adaptation of an existing work to a relevant problem. The paper has interesting bits, but each of them seems done somewhat shallowly. This makes the paper unfortunately borderline. I recommend acceptance, conditional on the authors providing a more convincing discussion and experiments regarding the numerical complexity, but only if there is enough space for that. Otherwise I would be happy to see an updated version of this article taking into account all feedback.
train
[ "hOAxKEdPnfh", "g3zV5HywpPm", "fIkiblrCGsa", "L_bEHMFdeLN", "tRCKuxF-Mxp", "prde484f2cj", "JBAIULKxzmp", "lzE3yDSM1kR", "cuMvDfVaNSH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Re: Q3 - you don't have to include confidence intervals, if you remove the 'slight edge' claim. My only point was that the claim isn't backed up by the data currently in the table, and I would need some idea of the confidence intervals if I were to believe it.\n\nAlso, please see my updated official review for th...
[ -1, 7, -1, -1, -1, -1, 5, 5, 5 ]
[ -1, 3, -1, -1, -1, -1, 2, 3, 4 ]
[ "L_bEHMFdeLN", "nips_2021_DDoDN0BLLhb", "cuMvDfVaNSH", "g3zV5HywpPm", "lzE3yDSM1kR", "JBAIULKxzmp", "nips_2021_DDoDN0BLLhb", "nips_2021_DDoDN0BLLhb", "nips_2021_DDoDN0BLLhb" ]
nips_2021_8hLuUnBwfM2
Using Random Effects to Account for High-Cardinality Categorical Features and Repeated Measures in Deep Neural Networks
High-cardinality categorical features are a major challenge for machine learning methods in general and for deep learning in particular. Existing solutions such as one-hot encoding and entity embeddings can be hard to scale when the cardinality is very high, require much space, are hard to interpret or may overfit the data. A special scenario of interest is that of repeated measures, where the categorical feature is the identity of the individual or object, and each object is measured several times, possibly under different conditions (values of the other features). We propose accounting for high-cardinality categorical features as random effects variables in a regression setting, and consequently adopt the corresponding negative log likelihood loss from the linear mixed models (LMM) statistical literature and integrate it in a deep learning framework. We test our model which we call LMMNN on simulated as well as real datasets with a single categorical feature with high cardinality, using various baseline neural networks architectures such as convolutional networks and LSTM, and various applications in e-commerce, healthcare and computer vision. Our results show that treating high-cardinality categorical features as random effects leads to a significant improvement in prediction performance compared to state of the art alternatives. Potential extensions such as accounting for multiple categorical features and classification settings are discussed. Our code and simulations are available at https://github.com/gsimchoni/lmmnn.
accept
As the area chair, I recommend that this paper be accepted and published. The main reason is that the paper is genuinely useful to practitioners, because it addresses an issue that arises in many applications and the solution is clear and simple. A paper like this one should not be rejected because the authors have failed to do some work that reviewers would like to see. That work should be in future papers. The authors are clearly knowledgeable and competent. The technical work is of good quality and correct. Practitioners and future researchers can use and build on this. In the final version, hopefully the authors will address the last part of the following comment from an expert reviewer: "High-cardinality categorical variables are pervasive in practice. An effective, generic, clearly expressed solution to coping with such variables would certainly be a welcome contribution to the literature. This is true even if the solution has modest technical novelty; indeed, it would be a virtue if one could arrive at a simple solution based on existing ideas. One could make a case that the authors' solution is effective. However, I am less sure about generic and clearly expressed..."
train
[ "ZSoZqQdyTTc", "V5YKjaX0rtO", "_uhwaPkzF45", "9BtJwjePUig", "nzjE5XjFjxA", "_JtX8PfzPXs", "WmUerxTUzOL", "bQGKchIKNq", "Ga-4XnOLpLd", "H1I_5zo3EH2", "91FUelUFsLH", "fm8wc81syIt", "1ITXYaTEiGU" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the detailed reply.\n\nI wanted to clarify the concern regarding \"in a somewhat restricted setting\". This was not referring to the decomposition of y as f(X) + g(Z) being somehow limited in capacity. The comment instead was referring to the restriction to regression settings with a single categorical...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "V5YKjaX0rtO", "_uhwaPkzF45", "Ga-4XnOLpLd", "nzjE5XjFjxA", "Ga-4XnOLpLd", "1ITXYaTEiGU", "fm8wc81syIt", "H1I_5zo3EH2", "91FUelUFsLH", "nips_2021_8hLuUnBwfM2", "nips_2021_8hLuUnBwfM2", "nips_2021_8hLuUnBwfM2", "nips_2021_8hLuUnBwfM2" ]
nips_2021_-oUhJJILWHb
Learning Debiased Representation via Disentangled Feature Augmentation
Image classification models tend to make decisions based on peripheral attributes of data items that have strong correlation with a target variable (i.e., dataset bias). These biased models suffer from the poor generalization capability when evaluated on unbiased datasets. Existing approaches for debiasing often identify and emphasize those samples with no such correlation (i.e., bias-conflicting) without defining the bias type in advance. However, such bias-conflicting samples are significantly scarce in biased datasets, limiting the debiasing capability of these approaches. This paper first presents an empirical analysis revealing that training with "diverse" bias-conflicting samples beyond a given training set is crucial for debiasing as well as the generalization capability. Based on this observation, we propose a novel feature-level data augmentation technique in order to synthesize diverse bias-conflicting samples. To this end, our method learns the disentangled representation of (1) the intrinsic attributes (i.e., those inherently defining a certain class) and (2) bias attributes (i.e., peripheral attributes causing the bias), from a large number of bias-aligned samples, the bias attributes of which have strong correlation with the target variable. Using the disentangled representation, we synthesize bias-conflicting samples that contain the diverse intrinsic attributes of bias-aligned samples by swapping their latent features. By utilizing these diversified bias-conflicting features during the training, our approach achieves superior classification accuracy and debiasing results against the existing baselines on both synthetic and real-world datasets.
accept
All reviewers are in agreement that the paper tackles an important problem in a simple yet effective manner, and enthusiastically recommend acceptance. The authors provided extensive experiments and explanations in their rebuttal, which should be incorporated into the final version.
train
[ "JgdWd4364kl", "4hDJsazkzJ", "4s2XjLiOuRu", "s8R9s8AaVeR", "BqGCLLHtYJF", "rmZ8PEQtmr", "shsFF37HC6E", "ZULzEui9JkM", "my9djhuCwKG", "KYM8cYUI4OU", "4LkftcbcvRI" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes a novel feature-level data augmentation technique to train image classifiers in a debiased manner. The approach consists of disentangling “intrinsic” and “bias” attributes from training set images and then synthesizing new “data points” by combining intrinsic and bias features from different ima...
[ 7, 7, -1, 7, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, 5, -1, 5, -1, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_-oUhJJILWHb", "nips_2021_-oUhJJILWHb", "rmZ8PEQtmr", "nips_2021_-oUhJJILWHb", "ZULzEui9JkM", "4hDJsazkzJ", "4LkftcbcvRI", "s8R9s8AaVeR", "JgdWd4364kl", "nips_2021_-oUhJJILWHb", "nips_2021_-oUhJJILWHb" ]
nips_2021_ngdcA1tlDvj
Scallop: From Probabilistic Deductive Databases to Scalable Differentiable Reasoning
Deep learning and symbolic reasoning are complementary techniques for an intelligent system. However, principled combinations of these techniques have limited scalability, rendering them ill-suited for real-world applications. We propose Scallop, a system that builds upon probabilistic deductive databases, to bridge this gap. The key insight underlying Scallop is a provenance framework that introduces a tunable parameter to specify the level of reasoning granularity. Scallop thereby i) generalizes exact probabilistic reasoning, ii) asymptotically reduces computational cost, and iii) provides relative accuracy guarantees. On a suite of tasks that involve mathematical and logical reasoning, Scallop scales significantly better without sacrificing accuracy compared to DeepProbLog, a principled neural logic programming approach. We also create and evaluate on a real-world Visual Question Answering (VQA) benchmark that requires multi-hop reasoning. Scallop outperforms two VQA-tailored models, a Neural Module Networks based and a transformer based model, by 12.42% and 21.66% respectively.
accept
Four knowledgeable reviewers praised the idea of combining forward chaining with differentiable and probabilistic programming to tackle tasks such as VQA, where the combination of symbolic and sub-symbolic systems clearly pays off. During the rebuttal, the authors provided enough context and details to better situate the proposed Scallop in the literature and to better evaluate the experiments. The paper is accepted, as it can spawn further discussion in the neuro-symbolic community about such an important research direction. However, acceptance is conditional on the authors including all the promised (and discussed) details in the camera-ready. One additional point to clarify in the camera-ready is how Scallop differs from modern implementations of (Deep) ProbLog where essentially forward reasoning is used.
val
[ "gRDcbG97KWk", "rrIJnbdE818", "h8y7cTjCtcA", "gsf4buAowas", "6eSQHtqwEOJ", "UgAtoEBTw4", "THf9pU065j3", "ehXWgR6SuhX", "fFOFGe1adHp", "fz9Q3GatpVs", "MOzuFZ4aUBK", "dRx2urui2Mk", "_aQ9lkpqq1L", "ZQwnAupHUX" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for your comment. It was not our intention to oversell our contributions, and this exchange is helpful to us for improving the presentation, especially in terms of clarifying the strengths and limitations of our approach in the introuction, which we will do in the revised version.", " Hi,\n\nI have re...
[ -1, -1, -1, 6, -1, -1, -1, -1, 7, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, 1, -1, -1, -1, -1, 2, -1, -1, -1, 3, 4 ]
[ "rrIJnbdE818", "THf9pU065j3", "6eSQHtqwEOJ", "nips_2021_ngdcA1tlDvj", "MOzuFZ4aUBK", "nips_2021_ngdcA1tlDvj", "ZQwnAupHUX", "dRx2urui2Mk", "nips_2021_ngdcA1tlDvj", "_aQ9lkpqq1L", "gsf4buAowas", "fFOFGe1adHp", "nips_2021_ngdcA1tlDvj", "nips_2021_ngdcA1tlDvj" ]
nips_2021_wP9twkexC3V
Learning to Synthesize Programs as Interpretable and Generalizable Policies
Recently, deep reinforcement learning (DRL) methods have achieved impressive performance on tasks in a variety of domains. However, neural network policies produced with DRL methods are not human-interpretable and often have difficulty generalizing to novel scenarios. To address these issues, prior works explore learning programmatic policies that are more interpretable and structured for generalization. Yet, these works either employ limited policy representations (e.g. decision trees, state machines, or predefined program templates) or require stronger supervision (e.g. input/output state pairs or expert demonstrations). We present a framework that instead learns to synthesize a program, which details the procedure to solve a task in a flexible and expressive manner, solely from reward signals. To alleviate the difficulty of learning to compose programs to induce the desired agent behavior from scratch, we propose to first learn a program embedding space that continuously parameterizes diverse behaviors in an unsupervised manner and then search over the learned program embedding space to yield a program that maximizes the return for a given task. Experimental results demonstrate that the proposed framework not only learns to reliably synthesize task-solving programs but also outperforms DRL and program synthesis baselines while producing interpretable and more generalizable policies. We also justify the necessity of the proposed two-stage learning scheme as well as analyze various methods for learning the program embedding. Website at https://clvrai.com/leaps.
accept
This paper presents an interesting idea of learning policies for solving RL problems through program synthesis. The key is to learn a good program embedding space through three different losses that take care of three different aspects of a program representation. The reviewers all appreciate the clarity of the paper and liked the idea. The extensive ablations and supplementary material are also great for supporting the proposed approach. In the future, showing that this approach works on other domains beyond Karel (e.g., Minecraft and CARLA, as suggested by the authors in the discussion) would be great.
train
[ "Ghcv36fnZZV", "JV6Gnt2_h3O", "MEyXubgo3g", "TpZzcSCnVdr", "sQhjVkaEI9A", "Lgp7tfYsnnh", "aOmBPgXfdp", "GrDDwoKMHhn", "wnWJSkxIR4", "yYwipJtgWjB", "4rM4j4zxwYk", "ccPIoOUNj0g", "KvkEbpNFPt7", "_C19sV5Qfxf", "q2SH58oONUX", "NeTLmY8v8Pi", "P3T_CAC-mqw" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the thorough and encouraging review that points out the unclear parts of our submission. Also, we deeply appreciate the reviewer for not only spotting the errors in our DSL but also clearly stating how they should be fixed.", " Many thanks for the clarifications. I'm glad the review wa...
[ -1, -1, -1, -1, 7, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7 ]
[ -1, -1, -1, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "JV6Gnt2_h3O", "KvkEbpNFPt7", "TpZzcSCnVdr", "_C19sV5Qfxf", "nips_2021_wP9twkexC3V", "GrDDwoKMHhn", "nips_2021_wP9twkexC3V", "ccPIoOUNj0g", "P3T_CAC-mqw", "aOmBPgXfdp", "sQhjVkaEI9A", "aOmBPgXfdp", "NeTLmY8v8Pi", "sQhjVkaEI9A", "nips_2021_wP9twkexC3V", "nips_2021_wP9twkexC3V", "nips_...
nips_2021_t1czgrQOrwW
The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning
The visual system of mammals is comprised of parallel, hierarchical specialized pathways. Different pathways are specialized in so far as they use representations that are more suitable for supporting specific downstream behaviours. In particular, the clearest example is the specialization of the ventral ("what") and dorsal ("where") pathways of the visual cortex. These two pathways support behaviours related to visual recognition and movement, respectively. To-date, deep neural networks have mostly been used as models of the ventral, recognition pathway. However, it is unknown whether both pathways can be modelled with a single deep ANN. Here, we ask whether a single model with a single loss function can capture the properties of both the ventral and the dorsal pathways. We explore this question using data from mice, who like other mammals, have specialized pathways that appear to support recognition and movement behaviours. We show that when we train a deep neural network architecture with two parallel pathways using a self-supervised predictive loss function, we can outperform other models in fitting mouse visual cortex. Moreover, we can model both the dorsal and ventral pathways. These results demonstrate that a self-supervised predictive learning approach applied to parallel pathway architectures can account for some of the functional specialization seen in mammalian visual systems.
accept
This is a clear accept. All the reviewers were clearly positive, and I largely agree with their judgements. I think the reviews speak for themselves, and I don't have much to add here.
train
[ "dPcNMlQ6ezI", "OFwNPL9d1y", "OB5vMY3Gf_F", "Hwa8MMI-QmX", "bLIqyfNdRDH", "01q-Lr1A1mU", "wO2al5U01Z", "ngv0fI_KUfq", "XHLSL2e22Cg", "27C5beLzbXP", "ypVPPBNJObx", "2oPefGb3Hl-", "HZyO6JBynKu", "_NnUkvPB42w", "220EAwPwGQ1", "Ym92uqJ9MFn", "5cNzh-tCLU", "T05Oqcl5IlR" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Motivated by the literature showing that the primate visual system has two pathways (\"ventral\" and \"dorsal\") with distinct functional specializations, the authors try to build a model of a visual system that also has representationally and functionally distinct regions. They turn to calcium imaging data from m...
[ 7, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8 ]
[ 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "nips_2021_t1czgrQOrwW", "OB5vMY3Gf_F", "bLIqyfNdRDH", "nips_2021_t1czgrQOrwW", "01q-Lr1A1mU", "ngv0fI_KUfq", "_NnUkvPB42w", "27C5beLzbXP", "dPcNMlQ6ezI", "HZyO6JBynKu", "dPcNMlQ6ezI", "dPcNMlQ6ezI", "Hwa8MMI-QmX", "5cNzh-tCLU", "T05Oqcl5IlR", "nips_2021_t1czgrQOrwW", "nips_2021_t1cz...
nips_2021_f8Dqhg0w-7i
Adversarial Training Helps Transfer Learning via Better Representations
Transfer learning aims to leverage models pre-trained on source data to efficiently adapt to target setting, where only limited data are available for model fine-tuning. Recent works empirically demonstrate that adversarial training in the source data can improve the ability of models to transfer to new domains. However, why this happens is not known. In this paper, we provide a theoretical model to rigorously analyze how adversarial training helps transfer learning. We show that adversarial training in the source data generates provably better representations, so fine-tuning on top of this representation leads to a more accurate predictor of the target data. We further demonstrate both theoretically and empirically that semi-supervised learning in the source data can also improve transfer learning by similarly improving the representation. Moreover, performing adversarial training on top of semi-supervised learning can further improve transferability, suggesting that the two approaches have complementary benefits on representations. We support our theories with experiments on popular data sets and deep learning architectures.
accept
This is a solid paper that studies an important phenomenon that is not well understood: why does adversarial training on the source data improve the ability of models to transfer to new domains or targets? This is somewhat surprising, as it might be easier to think of adversarial training as a task-specific modification that can reduce performance on the source data, and the connection to transfer learning is not very intuitive at first sight. The paper replicates an adversarially trained ($\ell_2$ robustness) ImageNet model providing better accuracy on CIFAR10. Starting from fixed-feature transfer learning (Salman et al.), an adversarial training step is plugged into the fixed-feature learning step. The authors also show that pseudo-labeling can lead to better representations. The authors analyse a hierarchical linear model for multitask learning and provide Alg-1 as the linear representation learning algorithm. This setup gives an upper bound on the excess risk that can be decomposed into two terms: a representation error (related to the sum of angles between the estimated and true factor matrices) and a task-dependent term. The representation error can be further bounded using a Davis-Kahan-style perturbation bound (Wedin's theorem). The strategy of the paper is to extend Alg-1 to the adversarial training setting (Alg-2) and finally also to include pseudo-labeling (Alg-3). The structure and the arguments of the paper are easy to follow. During the rebuttal, several concerns were raised; I summarize the main ones succinctly and less nuanced below: 1. The validity of the assumptions: Assumption 2 is very strong, and there is no verification of the assumptions. (This assumption is related to the magnitude of the task-specific weights $\alpha_k$, where a large signal-to-noise ratio (SNR) corresponds to a large $||\alpha||$, a proxy of task uncertainty.) 2. The pseudo-label scenario is not much related. 3. The observed robustness could be explained by different resolutions on the source and target datasets, and the analysis does not explain (or has a big gap to) the experiments, as it requires too many tasks. In my opinion, the authors very clearly addressed the concerns raised by the reviewers. In particular, the connection of variation in the SNRs to explain why adversarial training might help seems to come out naturally from the analysis and seems to be quite interesting. In particular, I feel the paper makes some relevant contributions. It carefully builds and contrasts the various algorithms and provides a compelling explanation for the observed phenomenon: adversarial training will bias the model to focus on learning the representations with large signal-to-noise ratios, which enhances transferability. In my opinion, one possible useful addition to address point 3 above would have been to verify the theory with a set of synthetic simulations. This would be fairly straightforward starting from the model and could potentially illustrate the assumptions much more clearly by explicitly controlling the task distribution. Especially, I can envision an experiment that directly controls the task diversity and SNRs to empirically show the regime where robust training is helping. Many related concerns (regarding non-asymptotic validity) could have been partially addressed through such a synthetic experiment. Overall, I am persuaded that the work would be a solid contribution and, in the lack of strong counterarguments from the reviewers, I am using my own judgement and independent evaluation to suggest acceptance.
train
[ "Z0mpVYBPZel", "v7a_0V9wxXP", "-9W2nYLTCCp", "TSd-mckds23", "tXGuvjpekA6", "Vf08RM8FmSk", "FNCTlC-bgKN", "iWwGA5YCKG", "6CEXB0g5Ssv", "Dc6k0uQyixd", "akLfw5uBSf", "-zjk6udUHki", "SQQeXJRwpV", "7uL9g9hmaQJ" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer, please feel free to tell us if you have any further concerns. Given that the discussion phase will end soon, we sincerely hope you could reevaluate our paper and raise our score if your concerns are addressed.\n", " Dear reviewer, please feel free to tell us if you have any further concerns. Give...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 2 ]
[ "Vf08RM8FmSk", "FNCTlC-bgKN", "TSd-mckds23", "tXGuvjpekA6", "iWwGA5YCKG", "Dc6k0uQyixd", "akLfw5uBSf", "6CEXB0g5Ssv", "-zjk6udUHki", "SQQeXJRwpV", "7uL9g9hmaQJ", "nips_2021_f8Dqhg0w-7i", "nips_2021_f8Dqhg0w-7i", "nips_2021_f8Dqhg0w-7i" ]
nips_2021_P7GUAXxS3ym
Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning
Human reasoning can be understood as an interplay between two systems: the intuitive and associative ("System 1") and the deliberative and logical ("System 2"). Neural sequence models---which have been increasingly successful at performing complex, structured tasks---exhibit the advantages and failure modes of System 1: they are fast and learn patterns from data, but are often inconsistent and incoherent. In this work, we seek a lightweight, training-free means of improving existing System 1-like sequence models by adding System 2-inspired logical reasoning. We explore several variations on this theme in which candidate generations from a neural sequence model are examined for logical consistency by a symbolic reasoning module, which can either accept or reject the generations. Our approach uses neural inference to mediate between the neural System 1 and the logical System 2. Results in robust story generation and grounded instruction-following show that this approach can increase the coherence and accuracy of neurally-based generations.
accept
This paper borrows insights from neuroscience and psychology, exploring how symbolic approaches can be combined with large-scale neural language models to improve responses, because systems like GPT3 make mistakes that humans would not. This work can be viewed in several different lights: evidence that the system1/system2 insight is useful for ML systems and should be explored more; further evidence that hybrid ML and symbolic approaches are promising; a way to improve (increasing coherence and accuracy) black-box ML systems for deployment. Indeed the paper speaks to many of these views, but the reviewers felt that the paper does none of them convincingly. The reviewers flagged several issues including: the speed of the inference step, that this is not really a compelling instantiation & evaluation of the sys1/sys2 concept, and the consistency of the parsing process. One reviewer was concerned that the work builds on and uses a closed-source entry (GPT3) for both practical and ethical reasons. Several reviewers flagged useful related works whose discussion will strengthen the work. In the end, all four reviewers---all working in related areas---agreed on rejection. The main concern was the scalability and generality of the proposed approach due to the need to hand-design world models for each domain. The reviewers found the lack of general principles & advice offered for this critical step unconvincing. The experiments presented did not do enough to convince on this critical question. Further, the discussion revealed the reviewers were not satisfied with this as a first step---that too much was being left to future work---and that non-trivial revisions are needed. Besides these concerns, the work is polished and the author response comprehensive. On balance, this work was close but just below the bar.
test
[ "PnhsYJNJaie", "5QlOsX64dH", "RdAGRaVuiY8", "aAKTyLwFmg", "jF2LJDSfjiZ", "9grr7p6tu0T", "zm2H5VJgYJ", "0xzuiAthID", "D67dcWRlXP", "P-tl6lJp_hW", "ArJ-WIPQXEW" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the reviewers took the time to reply to all my comments. I agree the improved performance is not a guarantee, but still it does not convince me enough to change my rating.", "The paper proposes a neuro-symbolic model to improve coherence and consistency in neural language generation models. In part...
[ -1, 5, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "0xzuiAthID", "nips_2021_P7GUAXxS3ym", "9grr7p6tu0T", "nips_2021_P7GUAXxS3ym", "5QlOsX64dH", "ArJ-WIPQXEW", "P-tl6lJp_hW", "D67dcWRlXP", "nips_2021_P7GUAXxS3ym", "nips_2021_P7GUAXxS3ym", "nips_2021_P7GUAXxS3ym" ]
nips_2021_zMZPDwm3H3
Learning the optimal Tikhonov regularizer for inverse problems
Giovanni S. Alberti, Ernesto De Vito, Matti Lassas, Luca Ratti, Matteo Santacesaria
accept
This paper studies a basic problem in optimal estimation, whose answer is well known in the finite-dimensional case, and extends the characterization to infinite-dimensional settings. In particular, suppose we are given an observation of the form $A x + \epsilon$ and our goal is to estimate $x$ while achieving minimum mean-squared error. This paper studies the problem of characterizing the optimal regularizer when constrained to a Tikhonov regularization scheme. They show that the optimal regularizer is independent of $A$ (this depends crucially on the assumption that $x$ and $\epsilon$ are independent), and moreover they give finite-sample guarantees for being able to approximate it. While the results are not surprising, there are many subtleties that just don't happen in the finite-dimensional case (e.g. $A$ will not in general have a bounded inverse). Overall the paper studies an important problem, makes a solid contribution, and is well written.
train
[ "kvrHEJzOXFu", "SIc8QIgUU_y", "yKdzgo6jMI9", "dbzUUiuIU5t", "zOpl8hel4u1", "giIxZ4S_t8J", "tFx14aT8Fjq", "9DcbJU2_Lj", "qdibMrKDta7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper studies the learning of the optimal generalized Tikhonov regularizer for linear inverse problems. Optimality is investigated with regards to the mean-squared error.\n\nThe authors show that the optimal parameters are independent of the forward operator and additive noise. Next they look into how the opt...
[ 6, 6, 6, -1, -1, -1, -1, -1, 7 ]
[ 2, 3, 3, -1, -1, -1, -1, -1, 3 ]
[ "nips_2021_zMZPDwm3H3", "nips_2021_zMZPDwm3H3", "nips_2021_zMZPDwm3H3", "tFx14aT8Fjq", "SIc8QIgUU_y", "kvrHEJzOXFu", "qdibMrKDta7", "yKdzgo6jMI9", "nips_2021_zMZPDwm3H3" ]
nips_2021_CYUzpnOkFJp
NovelD: A Simple yet Effective Exploration Criterion
Efficient exploration under sparse rewards remains a key challenge in deep reinforcement learning. Previous exploration methods (e.g., RND) have achieved strong results in multiple hard tasks. However, if there are multiple novel areas to explore, these methods often focus quickly on one without sufficiently trying others (in a depth-first search manner). In some scenarios (e.g., the four-corridor environment in Sec 4.2), we observe that they explore one corridor for a long time and fail to cover all the states. On the other hand, in theoretical RL, with optimistic initialization and the inverse square root of the visitation count as a bonus, exploration does not suffer from this problem and covers different novel regions alternately (in a breadth-first search manner). Inspired by this, we propose a simple but effective criterion called NovelD that weights every novel area approximately equally. Our algorithm is very simple yet shows performance comparable to, or even better than, multiple SOTA exploration methods on many hard exploration tasks. Specifically, NovelD solves all the static procedurally-generated tasks in MiniGrid with just 120M environment steps, without any curriculum learning. In comparison, the previous SOTA only solves 50% of them. NovelD also achieves SOTA on multiple tasks in NetHack, a rogue-like game that contains more challenging procedurally-generated environments. In multiple Atari games (e.g., Montezuma's Revenge, Venture, Gravitar), NovelD outperforms RND. We analyze NovelD thoroughly in MiniGrid and find that it empirically helps the agent explore the environment more uniformly, with a focus on exploring beyond the boundary.
accept
After reading the reviews, the authors' response and the discussions, I suggest to accept the paper. All reviewers agree that the paper presents an original idea in a clear way. Minor questions and concerns have been answered during the rebuttal by the authors. Overall the idea is simple yet effective on a spectrum of different tasks. In addition, the method is compared with state-of-the-art baselines. To improve the paper, experiments on more challenging environments such as 3D visual games could be considered.
train
[ "GvjSZjTD_8v", "w5pNRPPGZo", "1lE6oAqlnkS", "7vMZyHz0crE", "TgxwKH8prh", "zWXxORMPqVZ", "Ns46c8l6fRB", "Np1_Ixjxijs", "mrckcWC_nQi" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewers for the time spent reviewing our work, their thorough review, and their interest in our work. We'll answer some common questions here. \n\n**1. Comparison of NovelD with AGAC and RAPID on MiniGrid.**\n\nWe summarize the results of the comparison here between AGAC, RAPID, and NovelD below an...
[ -1, -1, -1, -1, -1, 6, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "nips_2021_CYUzpnOkFJp", "zWXxORMPqVZ", "mrckcWC_nQi", "Ns46c8l6fRB", "Np1_Ixjxijs", "nips_2021_CYUzpnOkFJp", "nips_2021_CYUzpnOkFJp", "nips_2021_CYUzpnOkFJp", "nips_2021_CYUzpnOkFJp" ]
nips_2021_yLyXqdsYho
On Margin-Based Cluster Recovery with Oracle Queries
Marco Bressan, Nicolò Cesa-Bianchi, Silvio Lattanzi, Andrea Paudice
accept
The paper discusses a clustering setting in which clusters are recovered using queries of the form “are two points x, x’ in the same cluster”, in a general notion of margin which generalizes previously studied cases. The paper is theoretical, with no experimental part. Reviews are high quality, with consistent pro-accept bottom line. Reviewers raised some concerns related to novelty of some notions defined in the paper, and also related to lack of experiments, but these do not seem to considerably impact the final opinion.
train
[ "J1-nQJv67p9", "KGAtIR_Pfp0", "IatEXIPn73F", "lFc3xkg02P", "nBCqACHKvf", "ol2GcynncOS", "J6fTcFApEr", "tG4Uoe0TjH", "AlOoiMAhLq4", "yjCh5d1bvEf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ " I am satisfied with the authors' rebuttal.", "This work examines a cluster recovery problem where the algorithm has access to an oracle that can return whether two points lie in the same cluster. It builds on a line of work [Ashtiani et al. ‘16] [Bressan et al. ‘20] that does so with only O(log n) queries but u...
[ -1, 6, 7, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, 3, 3, -1, -1, -1, -1, -1, -1, 4 ]
[ "AlOoiMAhLq4", "nips_2021_yLyXqdsYho", "nips_2021_yLyXqdsYho", "nBCqACHKvf", "ol2GcynncOS", "J6fTcFApEr", "IatEXIPn73F", "KGAtIR_Pfp0", "yjCh5d1bvEf", "nips_2021_yLyXqdsYho" ]
nips_2021_-xEk43f_EO6
Multi-Scale Representation Learning on Proteins
Proteins are fundamental biological entities mediating key roles in cellular function and disease. This paper introduces a multi-scale graph construction of a protein –HoloProt– connecting surface to structure and sequence. The surface captures coarser details of the protein, while the sequence as the primary component and the structure –comprising secondary and tertiary components– capture finer details. Our graph encoder then learns a multi-scale representation by allowing each level to integrate the encoding from the level(s) below with the graph at that level. We test the learned representation on different tasks: (i.) ligand binding affinity (regression), and (ii.) protein function prediction (classification). On the regression task, contrary to previous methods, our model performs consistently and reliably across different dataset splits, outperforming all baselines on most splits. On the classification task, it achieves performance close to the top-performing model while using 10x fewer parameters. To improve the memory efficiency of our construction, we segment the multiplex protein surface manifold into molecular superpixels and substitute the surface with these superpixels at little to no performance loss.
accept
The paper proposes a very interesting approach for protein representation learning. The AC and reviewers greatly appreciated the author feedback and we urge the authors to incorporate their points into the manuscript. In particular, it would be important to include the comparison with ProtBERT-BFD and the extended analysis results. If the authors cannot include results for a larger number of parameters as promised, it is critical that they include comments on limited resources, where the bottlenecks are, and a path to resolving them.
train
[ "QI6ZgPjVvqg", "kCPkgoUZCYj", "NcrbioAA8s", "S2Cw0zp-pwM", "rPm6fwP-yE", "EI0ACHK3-gB", "fnoffQZykd6", "h-sP8doJK1h", "6ddYYad2xo", "Ij5iKaf0EFD", "qXha_TXEutE" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The authors introduce a representation learning approach to protein structures which encompasses primary sequence, secondary structure, inter-residue distances, and surface features captured by triangulations. These different scales are encoded using two linked graph representations, and message passing networks a...
[ 7, -1, 7, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_-xEk43f_EO6", "EI0ACHK3-gB", "nips_2021_-xEk43f_EO6", "6ddYYad2xo", "Ij5iKaf0EFD", "fnoffQZykd6", "h-sP8doJK1h", "NcrbioAA8s", "QI6ZgPjVvqg", "qXha_TXEutE", "nips_2021_-xEk43f_EO6" ]
nips_2021_sl_0rQmHxQk
Sparse Quadratic Optimisation over the Stiefel Manifold with Application to Permutation Synchronisation
We address the non-convex optimisation problem of finding a sparse matrix on the Stiefel manifold (matrices with mutually orthogonal columns of unit length) that maximises (or minimises) a quadratic objective function. Optimisation problems on the Stiefel manifold occur for example in spectral relaxations of various combinatorial problems, such as graph matching, clustering, or permutation synchronisation. Although sparsity is a desirable property in such settings, it is mostly neglected in spectral formulations since existing solvers, e.g. based on eigenvalue decomposition, are unable to account for sparsity while at the same time maintaining global optimality guarantees. We fill this gap and propose a simple yet effective sparsity-promoting modification of the Orthogonal Iteration algorithm for finding the dominant eigenspace of a matrix. By doing so, we can guarantee that our method finds a Stiefel matrix that is globally optimal with respect to the quadratic objective function, while in addition being sparse. As a motivating application we consider the task of permutation synchronisation, which can be understood as a constrained clustering problem that has particular relevance for matching multiple images or 3D shapes in computer vision, computer graphics, and beyond. We demonstrate that the proposed approach outperforms previous methods in this domain.
accept
The proposed method for sparse quadratic optimization over the Stiefel manifold and its analysis are interesting. However, the technical presentation and theoretical results have to be strengthened.
train
[ "1u2Ghbv_qYF", "PAqqFMUXp5-", "NM3LpGptADM", "m_qUa-cvtjL", "NatWZZU-5Z", "gN3HB-pZKe3", "vndsjTTyut", "TUOncEBifgA" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "In this paper the authors propose a modification of a well-known algorithm, in order to maximize a quadratic function over the Stiefel manifold, promoting sparsity at the same time. The base method is not a gradient-retraction over a manifold algorithm (although some tools are used in the middle), but a QR type of...
[ 6, 4, 6, -1, -1, -1, -1, 7 ]
[ 3, 5, 5, -1, -1, -1, -1, 5 ]
[ "nips_2021_sl_0rQmHxQk", "nips_2021_sl_0rQmHxQk", "nips_2021_sl_0rQmHxQk", "PAqqFMUXp5-", "TUOncEBifgA", "1u2Ghbv_qYF", "NM3LpGptADM", "nips_2021_sl_0rQmHxQk" ]
nips_2021_XwetFe0U63c
Second-Order Neural ODE Optimizer
We propose a novel second-order optimization framework for training the emerging deep continuous-time models, specifically the Neural Ordinary Differential Equations (Neural ODEs). Since their training already involves expensive gradient computation by solving a backward ODE, deriving efficient second-order methods becomes highly nontrivial. Nevertheless, inspired by the recent Optimal Control (OC) interpretation of training deep networks, we show that a specific continuous-time OC methodology, called Differential Programming, can be adopted to derive backward ODEs for higher-order derivatives at the same O(1) memory cost. We further explore a low-rank representation of the second-order derivatives and show that it leads to efficient preconditioned updates with the aid of Kronecker-based factorization. The resulting method – named SNOpt – converges much faster than first-order baselines in wall-clock time, and the improvement remains consistent across various applications, e.g. image classification, generative flow, and time-series prediction. Our framework also enables direct architecture optimization, such as the integration time of Neural ODEs, with second-order feedback policies, strengthening the OC perspective as a principled tool of analyzing optimization in deep learning. Our code is available at https://github.com/ghliu/snopt.
accept
This paper proposes a second-order optimization algorithm for neural ODEs. Inspired by optimal control theory, the paper derives backward ODEs for higher-order derivatives at constant memory cost. The authors show that their method achieves faster convergence than first-order methods across three different tasks. The contribution is novel and relevant to NeurIPS, and is appreciated by the reviewers. The main criticism that surfaced during review is the absence of certain baselines, raised by reviewer ikHp: after back-and-forth discussion between the authors and the reviewer, this concern was addressed sufficiently for the reviewer to raise their review score. The authors also replied extensively to comments by the other reviewers, and no further serious concerns were identified. The reviewers formed a consensus recommendation to accept the paper.
train
[ "vgOe7t-JX1z", "6V3b3hnWSRn", "v33Kb5ITML6", "dblP4oL7EB", "4Tb-jTay-g6", "eWmUCqky92y", "-dWDnoxkbt", "8GYRBR9dHGO", "SQ1FtJ0s1bX", "mzrutTNkKA-", "cN1RxrhpARk", "WKItDlSu-P", "7WP_7Aifo-j", "0hvwpKkoT5a", "dpviFdIhOTy" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents a novel second-order optimization technique for neural ordinary differential equations (NODEs), derived from the optimal control perspective of NODEs. In a nutshell, the authors consider the second-order Taylor expansion of the loss term, leading to matrix-valued ODEs that give second-order der...
[ 8, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 5, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "nips_2021_XwetFe0U63c", "v33Kb5ITML6", "dblP4oL7EB", "WKItDlSu-P", "nips_2021_XwetFe0U63c", "-dWDnoxkbt", "8GYRBR9dHGO", "mzrutTNkKA-", "nips_2021_XwetFe0U63c", "4Tb-jTay-g6", "vgOe7t-JX1z", "dpviFdIhOTy", "0hvwpKkoT5a", "nips_2021_XwetFe0U63c", "nips_2021_XwetFe0U63c" ]
nips_2021_yGKklt8wyV
Graph Neural Networks with Local Graph Parameters
Various recent proposals increase the distinguishing power of Graph Neural Networks (GNNs) by propagating features between k-tuples of vertices. The distinguishing power of these “higher-order” GNNs is known to be bounded by the k-dimensional Weisfeiler-Leman (WL) test, yet their O(n^k) memory requirements limit their applicability. Other proposals infuse GNNs with local higher-order graph structural information from the start, thereby inheriting the desirable O(n) memory requirement from GNNs at the cost of a one-time, possibly non-linear, preprocessing step. We propose local graph parameter enabled GNNs as a framework for studying the latter kind of approaches and precisely characterize their distinguishing power, in terms of a variant of the WL test, and in terms of the graph structural properties that they can take into account. Local graph parameters can be added to any GNN architecture, and are cheap to compute. In terms of expressive power, our proposal lies in the middle of GNNs and their higher-order counterparts. Further, we propose several techniques to aid in choosing the right local graph parameters. Our results connect GNNs with deep results in finite model theory and finite variable logics. Our experimental evaluation shows that adding local graph parameters often has a positive effect for a variety of GNNs, datasets and graph learning tasks.
accept
This paper unifies several classes of recently proposed graph neural network architectures via the notion of graph homomorphisms. The reviewers agree that presented theoretical framework is novel. Even more importantly, it can be successfully applied to propose new GNN architectures. The paper is well written and provides rigorous mathematical analysis of several GNN methods that were previously analyzed only empirically.
train
[ "UBusiRwcA-X", "Bq49a6zwY1", "iRc5fEkKCyB", "qSX4jGaJXy_", "y0dqkK11ozi", "QG1SManlj3x", "yhbHk89eaW2", "9jAl-2jNBbn", "XLBzfwAFcM5", "O7kk84cNpCi", "K2fsBXjbyxr" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the final suggestions. Just to clarify a bit more on how a precise characterization of GSNs can be obtained:\nThe idea is to define a special kind of homomorphism $h:T^r\\to G^v$ from (rooted) pattern trees $T^r$ to (rooted) graphs $G^v$ such that: \n* (i) $h$ is a standard homomorphism...
[ -1, 7, -1, 5, -1, -1, -1, -1, -1, 6, 7 ]
[ -1, 4, -1, 4, -1, -1, -1, -1, -1, 3, 2 ]
[ "iRc5fEkKCyB", "nips_2021_yGKklt8wyV", "yhbHk89eaW2", "nips_2021_yGKklt8wyV", "nips_2021_yGKklt8wyV", "qSX4jGaJXy_", "Bq49a6zwY1", "O7kk84cNpCi", "K2fsBXjbyxr", "nips_2021_yGKklt8wyV", "nips_2021_yGKklt8wyV" ]
nips_2021_OItvP2-i9j
Closing the Gap: Tighter Analysis of Alternating Stochastic Gradient Methods for Bilevel Problems
Tianyi Chen, Yuejiao Sun, Wotao Yin
accept
It is the consensus of the reviewers that this paper makes a worthwhile contribution to stochastic optimization. The analysis of alternating gradient descent is tight. The implications of the analysis on stochastic compositional, min-max, and bilevel optimizations are interesting and useful.
train
[ "p4Nxp6d9dew", "gJiokPc3Jac", "2OkEnPSnm0", "B6sXXophyon", "xOnhR4Oz4O", "obrMbtp6eqc", "GOPR-O52PAs", "g801np4l5Wa", "GQb4uAgxs-" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies the problem of solving a stochastic bilevel optimization problem, where the upper level optimization problem depends on the solution of the lower level optimization problem. The authors show that this stochastic bilevel optimization problem covers three different classes of problems (stochastic ...
[ 8, -1, -1, -1, -1, -1, 7, 8, 8 ]
[ 3, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "nips_2021_OItvP2-i9j", "B6sXXophyon", "GQb4uAgxs-", "p4Nxp6d9dew", "g801np4l5Wa", "GOPR-O52PAs", "nips_2021_OItvP2-i9j", "nips_2021_OItvP2-i9j", "nips_2021_OItvP2-i9j" ]
nips_2021_i8kfkuiCJCI
Dense Unsupervised Learning for Video Segmentation
We present a novel approach to unsupervised learning for video object segmentation (VOS). Unlike previous work, our formulation allows learning dense feature representations directly in a fully convolutional regime. We rely on uniform grid sampling to extract a set of anchors and train our model to disambiguate between them on both inter- and intra-video levels. However, a naive scheme to train such a model results in a degenerate solution. We propose to prevent this with a simple regularisation scheme, accommodating the equivariance property of the segmentation task to similarity transformations. Our training objective admits efficient implementation and exhibits fast training convergence. On established VOS benchmarks, our approach exceeds the segmentation accuracy of previous work despite using significantly less training data and compute power.
accept
The paper discusses a method for unsupervised learning of correspondences across video frames. Different from prior work like CRW [13], the proposed method offers additional flexibility (e.g., an anchor can be matched to various points). To establish the use of this additional flexibility, the reviewers asked for additional baselines, which the authors provided in the rebuttal. In a discussion, one reviewer remained concerned about the novelty and its use (the authors admit that the improvement compared to [13] is "slight"). Moreover, the authors also state that "testing on Kinetics or TrackingNet would be useful. In particular, we recognise that training on Kinetics-400 would enable a direct comparison to CRW [13]." Up until the writing of this meta-review, the authors had not provided this result. AC thinks a careful comparison to very related work is desirable and shouldn't be omitted. Further, as suggested by the reviewer who remained more concerned, AC concurs that the writing and organization of this paper should be improved significantly, and AC can understand why the reviewer remained concerned. AC strongly encourages the authors to improve the paper by adding additional experimental evidence and by reassessing its organization.
val
[ "RMni2J8N2bT", "8QelolCf-Pg", "cfq5lGO4ckZ", "ClqD4wLb9s", "PJZVz4SBdEL", "UIFdGqtvuSA", "tLLWjX4BzSE", "kh09arpU1yF", "SwseCRn173K", "Ji8CHTCkvAC" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a method for self-supervised pretraining on video:\n\nGiven a batch of video clips, dense per-frame features are extracted from a backbone architecture like ResNet. The objective is to apply a standard contrastive penalty where a softmax loss picks out a positive pair of feature embeddings amid...
[ 6, -1, -1, -1, -1, -1, -1, 7, 7, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "nips_2021_i8kfkuiCJCI", "ClqD4wLb9s", "RMni2J8N2bT", "SwseCRn173K", "Ji8CHTCkvAC", "kh09arpU1yF", "nips_2021_i8kfkuiCJCI", "nips_2021_i8kfkuiCJCI", "nips_2021_i8kfkuiCJCI", "nips_2021_i8kfkuiCJCI" ]
nips_2021_SQm_poGrlj
Charting and Navigating the Space of Solutions for Recurrent Neural Networks
In recent years, Recurrent Neural Networks (RNNs) were successfully used to model the way neural activity drives task-related behavior in animals, operating under the implicit assumption that the obtained solutions are universal. Observations in both neuroscience and machine learning challenge this assumption. Animals can approach a given task with a variety of strategies, and training machine learning algorithms introduces the phenomenon of underspecification. These observations imply that every task is associated with a space of solutions. To date, the structure of this space is not understood, limiting the approach of comparing RNNs with neural data. Here, we characterize the space of solutions associated with various tasks. We first study a simple two-neuron network on a task that leads to multiple solutions. We trace the nature of the final solution back to the network’s initial connectivity and identify discrete dynamical regimes that underlie this diversity. We then examine three neuroscience-inspired tasks: Delayed discrimination, Interval discrimination, and Time reproduction. For each task, we find a rich set of solutions. One layer of variability can be found directly in the neural activity of the networks. An additional layer is uncovered by testing the trained networks' ability to extrapolate, as a perturbation to a system often reveals hidden structure. Furthermore, we relate extrapolation patterns to specific dynamical objects and effective algorithms found by the networks. We introduce a tool to derive the reduced dynamics of networks by generating a compact directed graph describing the essence of the dynamics with regard to behavioral inputs and outputs. Using this representation, we can partition the solutions to each task into a handful of types and show that neural features can partially predict them. Taken together, our results shed light on the concept of the space of solutions and its uses both in machine learning and in neuroscience.
accept
The reviewers appreciated the ideas behind this work but agreed that it needs another conference cycle's worth of work before it is ready for publication. Though the authors have some promising new results, the changes are too substantial to be vetted for this year's conference.
train
[ "xd8fnamEOzC", "3xhQl99ism3", "aZ9LscY9ebN", "Yef5byAiGxo", "foA8le0Hl4", "k0AuENrf9Y5", "y9D-w5EqWWq", "JQxWltM0YG", "4w-rJ2djcH-", "l0zSp9JapjR" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the detailed response, and I think you've done a pretty good job of considering the feedback. I appreciate your attention to my comments on related work and clarity. \n\nI appreciate the motivation for the 2-by-2 case, but my main concern was with originality, not quality. Like the other reviewers have...
[ -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "foA8le0Hl4", "k0AuENrf9Y5", "nips_2021_SQm_poGrlj", "y9D-w5EqWWq", "l0zSp9JapjR", "4w-rJ2djcH-", "JQxWltM0YG", "nips_2021_SQm_poGrlj", "nips_2021_SQm_poGrlj", "nips_2021_SQm_poGrlj" ]
nips_2021_Mobm1AGs64v
Fast Training Method for Stochastic Compositional Optimization Problems
The stochastic compositional optimization problem covers a wide range of machine learning models, such as sparse additive models and model-agnostic meta-learning. Thus, it is necessary to develop efficient methods for its optimization. Existing methods for the stochastic compositional optimization problem only focus on the single-machine scenario, which is far from satisfactory when data are distributed on different devices. To address this problem, we propose novel decentralized stochastic compositional gradient descent methods to efficiently train the large-scale stochastic compositional optimization problem. To the best of our knowledge, our work is the first one facilitating decentralized training for this kind of problem. Furthermore, we provide the convergence analysis for our methods, which shows that the convergence rate of our methods can achieve linear speedup with respect to the number of devices. Finally, we apply our decentralized training methods to the model-agnostic meta-learning problem, and the experimental results confirm the superior performance of our methods.
accept
This paper considers stochastic compositional optimization in the decentralized training setting, which is an interesting and novel problem, as recognized by the reviewers. A decentralized stochastic compositional gradient method is proposed, and its convergence rate is established as well. Most of the reviewers' comments have been addressed. Although we decide to accept this paper, the authors should realize that this rate is not tight compared to the special case (a single machine), which should be explicitly pointed out in the revision.
train
[ "0bB8kD8HgQ", "WGKegqjKi2O", "_w-URuR0SBw", "RrrPaprIXpd", "zaJ_WTempeV", "3etElp_eqgT", "XSNANbqzauH", "mfzLkdynuHX", "Z6k6LlLq_eC", "qNk6-sX-lRJ", "PBZOVfmPUkU", "O3WHOOMMIp" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper presents two algorithms for distributed optimization of stochastic composite objectives where one is interested in optimizing one outer objective that is computed on points that depend on another function (called the inner objective in the paper).\nThe authors adapt two strategies for distributed optimiz...
[ 7, 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, 6 ]
[ 2, 3, -1, -1, -1, -1, 5, -1, -1, -1, -1, 4 ]
[ "nips_2021_Mobm1AGs64v", "nips_2021_Mobm1AGs64v", "RrrPaprIXpd", "zaJ_WTempeV", "3etElp_eqgT", "Z6k6LlLq_eC", "nips_2021_Mobm1AGs64v", "WGKegqjKi2O", "O3WHOOMMIp", "0bB8kD8HgQ", "XSNANbqzauH", "nips_2021_Mobm1AGs64v" ]
nips_2021_AjfD1JjeVKN
Dual-stream Network for Visual Recognition
Transformers with remarkable global representation capacities achieve competitive results for visual tasks, but fail to consider high-level local pattern information in input images. In this paper, we present a generic Dual-stream Network (DS-Net) to fully explore the representation capacity of local and global pattern features for image classification. Our DS-Net can simultaneously calculate fine-grained and integrated features and efficiently fuse them. Specifically, we propose an Intra-scale Propagation module to process two different resolutions in each block and an Inter-Scale Alignment module to perform information interaction across features at dual scales. Besides, we also design a Dual-stream FPN (DS-FPN) to further enhance contextual information for downstream dense predictions. Without bells and whistles, the proposed DS-Net outperforms DeiT-Small by 2.4\% in terms of top-1 accuracy on ImageNet-1k and achieves state-of-the-art performance over other Vision Transformers and ResNets. For object detection and instance segmentation, DS-Net-Small respectively outperforms ResNet-50 by 6.4\% and 5.5\% in terms of mAP on MSCOCO 2017, and surpasses the previous state-of-the-art scheme, which significantly demonstrates its potential to be a general backbone in vision tasks. The code will be released soon.
accept
The reviews are split between accept and borderline reject recommendations. Two reviewers with positive ratings (7 and 8) appreciated that the hybrid CNN+Transformer approach is simple and intuitive. They also liked that the paper reports good empirical performance. The other two reviewers with negative ratings (5 and 5) raised concerns about limited novelty and insufficient/unconvincing empirical results. They criticized that the novelty isn't particularly strong because prior works have already looked into the dual-stream approach combining different architectures as well as multi-resolution processing. They also shared a common concern that there is insufficient empirical evidence showing that the proposed approach is truly beneficial compared to simpler alternatives, especially at a large-scale regime, such as Swin Transformer (Swin-S) that achieves similar performance at a comparable parameter size (Swin-S: 83% w/ 50M params vs. this work: 83.1% w/ 46M params). Separate from novelty/experiment concerns, all four reviewers raised several writing issues. This meta-reviewer carefully read the reviews, rebuttal, post-rebuttal discussion, and the paper in detail to fully understand the concerns raised by the reviewers. I agree with reviewers cxxV and LMod that the proposed idea of combining convolution and self-attention operations is interesting and novel -- it is a simple and effective, yet non-trivial combination that goes beyond simply stacking CNN and Transformer side-by-side. Plus, as reviewer cxxV pointed out, the paper reports impressive transfer results on ImageNet and COCO, the latter of which is particularly interesting because it is far from being solved (compared to ImageNet). I do, however, share the concerns of tvGe and ybP5, and including additional results as requested by the two reviewers will make the paper much stronger. The authors are urged to incorporate all the feedback provided by the reviews in their final version.
train
[ "bnRVSK6kyE3", "SuaEKqI-2I2", "4XJaD4CIWJ", "yEz6BK6zlT5", "dp_HVLBETU", "mCJZxnumBwT", "Rb4f_ocCLl2", "eEPB1D74CgU" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presented a Dual-stream Network for image classification. It combines the representation of local and global pattern features by using self-attention and convolution together. It also proposed an Intrascale Propagation module to process two different resolutions in each block. In addition, it also intr...
[ 5, 5, -1, -1, -1, -1, 7, 8 ]
[ 5, 4, -1, -1, -1, -1, 5, 4 ]
[ "nips_2021_AjfD1JjeVKN", "nips_2021_AjfD1JjeVKN", "bnRVSK6kyE3", "Rb4f_ocCLl2", "SuaEKqI-2I2", "eEPB1D74CgU", "nips_2021_AjfD1JjeVKN", "nips_2021_AjfD1JjeVKN" ]
nips_2021_YTkQQrqSyE1
Estimating High Order Gradients of the Data Distribution by Denoising
The first order derivative of a data density can be estimated efficiently by denoising score matching, and has become an important component in many applications, such as image generation and audio synthesis. Higher order derivatives provide additional local information about the data distribution and enable new applications. Although they can be estimated via automatic differentiation of a learned density model, this can amplify estimation errors and is expensive in high dimensional settings. To overcome these limitations, we propose a method to directly estimate high order derivatives (scores) of a data density from samples. We first show that denoising score matching can be interpreted as a particular case of Tweedie’s formula. By leveraging Tweedie’s formula on higher order moments, we generalize denoising score matching to estimate higher order derivatives. We demonstrate empirically that models trained with the proposed method can approximate second order derivatives more efficiently and accurately than via automatic differentiation. We show that our models can be used to quantify uncertainty in denoising and to improve the mixing speed of Langevin dynamics via Ozaki discretization for sampling synthetic data and natural images.
accept
All reviewers are in unanimous agreement for acceptance. Please incorporate the reviewers' feedback for the camera-ready version. Congratulations on nice work!
val
[ "17oMJjRlzv1", "uWUHVR7p3XS", "s2uRBNnzI5", "wJ8L4Tqp9Lo", "pgVOnoIAWzV", "sl1Xvg-uiv", "nKWZZzC0yvG", "ofwDYr-SJO", "gs4PZsxCSRB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "- The author show that denoising score matching with Gaussian noise can be derived from Tweedie's formula through the lens of least square regressions.\n- This provides a new interpretation of the first score (gradient of log density).\n- They use the generalization of Tweedie's formula to higher orders to derive ...
[ 7, -1, -1, -1, -1, -1, 7, 8, 7 ]
[ 3, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "nips_2021_YTkQQrqSyE1", "pgVOnoIAWzV", "ofwDYr-SJO", "gs4PZsxCSRB", "17oMJjRlzv1", "nKWZZzC0yvG", "nips_2021_YTkQQrqSyE1", "nips_2021_YTkQQrqSyE1", "nips_2021_YTkQQrqSyE1" ]
nips_2021_8fztRILSxL
Machine versus Human Attention in Deep Reinforcement Learning Tasks
Deep reinforcement learning (RL) algorithms are powerful tools for solving visuomotor decision tasks. However, the trained models are often difficult to interpret, because they are represented as end-to-end deep neural networks. In this paper, we shed light on the inner workings of such trained models by analyzing the pixels that they attend to during task execution, and comparing them with the pixels attended to by humans executing the same tasks. To this end, we investigate the following two questions that, to the best of our knowledge, have not been previously studied. 1) How similar are the visual representations learned by RL agents and humans when performing the same task? and, 2) How do similarities and differences in these learned representations explain RL agents' performance on these tasks? Specifically, we compare the saliency maps of RL agents against visual attention models of human experts when learning to play Atari games. Further, we analyze how hyperparameters of the deep RL algorithm affect the learned representations and saliency maps of the trained agents. The insights provided have the potential to inform novel algorithms for closing the performance gap between human experts and RL agents.
accept
UPDATE: The revision has been reviewed and this paper has now been accepted. ---- After extensive discussions among SACs, ACs, and the program chairs, we have decided to conditionally accept this paper. The crux of the issue is that the paper makes unsubstantiated causal claims about perception based on exploratory analyses (most notably in the final paragraphs of 4.3 and 4.4). There was some debate around whether this constitutes a "fatal flaw," but it is certainly an issue that needs to be addressed. The experiments and analyses should be reframed as exploratory first steps. We hope that the authors will take into account feedback from the reviewers. Specifically, in order to be accepted, the following conditions must be met: - The experiments and analyses presented in the paper should be reframed as exploratory rather than causal or confirmatory. - All unsubstantiated claims and interpretations of results (including those made in the final paragraphs of 4.3 and 4.4, but elsewhere in the paper too) should be removed. ---- The original meta-review for this copy of the paper follows: Given that three reviewers have voted to reject and the fourth reviewer has (privately) expressed some concerns about the paper in the reviewer discussion, I will be recommending reject. Central to the issue with this paper is reviewer YjzP's comment: "The authors suggest that perception is a core component of performance, which is perfectly reasonable, but it is not clear that scoring perception by human-correlated saliency is meaningful or useful". the author's response that "(1) if the agents have less similar attention to humans when they fail, they are more likely to have bad perception and thus the task is more perceptually difficult" seems to be a correlative vs. causal claim that remains to be shown. 
The paper does not address limitations of saliency maps, or control for those limitations as null hypotheses in its experiments - the paper assumes that the saliency model for human gaze and the saliency model for the RL agent do explain agent behavior. The authors claim that they use a more explainable saliency method (perturbation-based), but the issue goes beyond saliency vs. perturbation. The paper "Axiomatic Attribution for Deep Neural Networks" (https://arxiv.org/abs/1703.01365) points some of these issues out. The paper is also missing an important citation to a recently published paper, "Exploratory not Explanatory: Counterfactual Analysis of Saliency Maps for Deep RL" (ICLR 2020, https://arxiv.org/pdf/1912.05743.pdf), with a more rigorous analysis on the pitfalls of saliency (Section 3.2). Along these lines, all four reviewers, myself included, would like to see more "counterfactual evaluation of claims", and "falsifiable hypotheses from saliency maps" for the claims made in the paper. An example of a "falsifiable hypothesis": Although CC/KL is highly correlated with game score (>0.8), this does not imply that the agent is attending in the same way as the human. A test example of how this might occur: suppose the RL agent learns to attend to 20 different things, 1 of which is what the human attends to. At the start of training, the RL agent does not attend to the 1 thing, and performance is low. After training, the agent attends to the 1 thing (inducing a mildly weak correlation with human attention), and scores are high. The correlation is still weak, but is slightly higher. Then you measure a high correlation between the CC/KL metric and performance, which has essentially magnified the effect of the weak correlation into a strong one on performance. This is one null hypothesis for the results that the authors did not rule out.
train
[ "fuuUfAFNUye", "ZG_ygEfF__O", "a_sxWVvZKFZ", "PJacKj6HVNx", "JxWIX6Q9D4", "R5vK-taFbM", "OtmIQU-i8T2", "9PL53iDJtP3", "MS3FtlJ3hpc", "UO3stfkMbR", "FbU50tGIqP8", "aaOLEmhLiL" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We kindly point the reviewer to Sections 4 and 5 where we made the conclusions and stated the contributions of this work. We concluded that there is a significant correlation between the agents’ performance and their attention similarity with humans, both across games and across architectures. The tasks we select...
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, 7, 3, 4 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "PJacKj6HVNx", "JxWIX6Q9D4", "nips_2021_8fztRILSxL", "9PL53iDJtP3", "OtmIQU-i8T2", "FbU50tGIqP8", "aaOLEmhLiL", "a_sxWVvZKFZ", "UO3stfkMbR", "nips_2021_8fztRILSxL", "nips_2021_8fztRILSxL", "nips_2021_8fztRILSxL" ]
nips_2021_OWwm6hzMDsU
Reusing Combinatorial Structure: Faster Iterative Projections over Submodular Base Polytopes
Jai Moondra, Hassan Mortagy, Swati Gupta
accept
This paper presents new techniques to speed up projections to submodular base polytopes. The authors gave theoretical results and some experimental evidence regarding the speed-up. I highly appreciated the rebuttal and the discussion that followed for this paper. As the reviewers indicated, there is a mismatch between the theory and practice. The experiments are quite small and miss important baselines. So, it is not clear whether the proposed method is really useful. Unless the authors address this issue, the paper may not be accepted in the current format.
train
[ "GvVR01IcnCn", "2LWnWOdr0lt", "Vsi5oTWlSu", "l0dJdyg8-Iz", "gqdY8nyqrep", "PTxC2VfigA", "Hu7pBO2G8SH", "yqcUsiF_Y8I" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper considers the problem of Bregman projections on Submodular Base polytopes. The authors present several tools that could speed up computation when this projection is computed for several nearby points, as is the case for example in iterative algorithms such as mirror descent. In particular, the authors f...
[ 6, -1, -1, -1, -1, 6, 5, 4 ]
[ 5, -1, -1, -1, -1, 4, 4, 4 ]
[ "nips_2021_OWwm6hzMDsU", "Hu7pBO2G8SH", "yqcUsiF_Y8I", "PTxC2VfigA", "GvVR01IcnCn", "nips_2021_OWwm6hzMDsU", "nips_2021_OWwm6hzMDsU", "nips_2021_OWwm6hzMDsU" ]
nips_2021_vERYhbX_6Y
Constrained Optimization to Train Neural Networks on Critical and Under-Represented Classes
Deep neural networks (DNNs) are notorious for making more mistakes for the classes that have substantially fewer samples than the others during training. Such class imbalance is ubiquitous in clinical applications and very crucial to handle because the classes with fewer samples most often correspond to critical cases (e.g., cancer) where misclassifications can have severe consequences. Not to miss such cases, binary classifiers need to be operated at high True Positive Rates (TPRs) by setting a higher threshold, but this comes at the cost of very high False Positive Rates (FPRs) for problems with class imbalance. Existing methods for learning under class imbalance most often do not take this into account. We argue that prediction accuracy should be improved by emphasizing the reduction of FPRs at high TPRs for problems where misclassification of the positive, i.e. critical, class samples is associated with higher cost. To this end, we pose the training of a DNN for binary classification as a constrained optimization problem and introduce a novel constraint that can be used with existing loss functions to enforce maximal area under the ROC curve (AUC) through prioritizing FPR reduction at high TPR. We solve the resulting constrained optimization problem using an Augmented Lagrangian method (ALM). Going beyond binary, we also propose two possible extensions of the proposed constraint for multi-class classification problems. We present experimental results for image-based binary and multi-class classification applications using an in-house medical imaging dataset, CIFAR10, and CIFAR100. Our results demonstrate that the proposed method improves the baselines in the majority of cases by attaining higher accuracy on critical classes while reducing the misclassification rate for the non-critical class samples.
accept
This paper proposes a novel approach to train deep neural networks for classification tasks that involve high class imbalance, which is ubiquitous in medical problems. The novelty of the proposed method is very high, and related works are presented comprehensively. The paper is overall very well written and structured. The potential impact of the proposed algorithm is high given that class imbalance is a common problem in many real-world applications. Reviewers raised major concerns regarding various aspects of the experiments, which were successfully addressed by the authors. Overall, this paper constitutes an important contribution to the field and passes the bar for acceptance to NeurIPS.
test
[ "VCRTds7_WB-", "59aMZdwBjbW", "aDlMRISpoHi", "rR6b0Nv5FuM", "uMk4m5D949", "SCEmjVq3c0E", "aJySl_HWP2M", "6HY7U1o5KLi", "zftWnMCo6sr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes a novel method to perform classification with high imbalanced classes. The aim of the method is to minimise the FPR at high TPR -- this is suitable for many safety-critical applications where a misclassification can have dire consequences. In order to solve the problem the authors formulate the ...
[ 7, 6, -1, -1, -1, -1, -1, -1, 7 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_vERYhbX_6Y", "nips_2021_vERYhbX_6Y", "VCRTds7_WB-", "VCRTds7_WB-", "59aMZdwBjbW", "59aMZdwBjbW", "zftWnMCo6sr", "59aMZdwBjbW", "nips_2021_vERYhbX_6Y" ]
nips_2021_ykN3tbJ0qmX
Collapsed Variational Bounds for Bayesian Neural Networks
Recent interest in learning large variational Bayesian Neural Networks (BNNs) has been partly hampered by poor predictive performance caused by underfitting, and their performance is known to be very sensitive to the prior over weights. Current practice often fixes the prior parameters to standard values or tunes them using heuristics or cross-validation. In this paper, we treat prior parameters in a distributional way by extending the model and collapsing the variational bound with respect to their posteriors. This leads to novel and tighter Evidence Lower Bounds (ELBOs) for performing variational inference (VI) in BNNs. Our experiments show that the new bounds significantly improve the performance of Gaussian mean-field VI applied to BNNs on a variety of data sets, demonstrating that mean-field VI works well even in deep models. We also find that the tighter ELBOs can be good optimization targets for learning the hyperparameters of hierarchical priors.
accept
This paper tightens the standard variational bound optimized in Bayesian Deep Learning by drawing on collapsed variational inference. The authors consider the prior parameters as latent variables and derive a hierarchical variational inference procedure in which the top-level latent variables are marginalized out. The authors showed strong empirical performance of their method compared to a variety of baselines. The paper provides a non-trivial methodological contribution to the Bayesian deep learning community. Its mathematical derivations are not simple but generally well-explained (the authors sometimes don’t clearly distinguish between latent variables and variational parameters). The main point of criticism was suboptimal structuring and writing. I strongly encourage the authors to make their paper more accessible by following the detailed advice that the reviewers provided. Overall, this is a very good paper.
val
[ "bCxJEHc8KIE", "TMNX0x3m_9Z", "rDB448UR5Ho", "saOnYXJ1nu0", "47c_5bVYRXf", "mpMFr3zVPEB", "2ikG1YD_l3", "yqYOLrH6jFI", "6Y0z0RwELV", "rU4XbHld4no", "sr-HRdpKVi" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " I feel like you have a very good plan for how to improve the paper. Unfortunately, because you aren't able to upload a revised version at this stage, I don't feel I can increase my score because it would involve significant revisions.\n\nHowever, I also think that the paper is sufficiently interesting that it may...
[ -1, 6, 5, -1, 8, -1, -1, -1, -1, -1, 5 ]
[ -1, 2, 4, -1, 3, -1, -1, -1, -1, -1, 3 ]
[ "mpMFr3zVPEB", "nips_2021_ykN3tbJ0qmX", "nips_2021_ykN3tbJ0qmX", "2ikG1YD_l3", "nips_2021_ykN3tbJ0qmX", "sr-HRdpKVi", "rDB448UR5Ho", "TMNX0x3m_9Z", "nips_2021_ykN3tbJ0qmX", "47c_5bVYRXf", "nips_2021_ykN3tbJ0qmX" ]
nips_2021_BaHth99Sp45
Consistent Estimation for PCA and Sparse Regression with Oblivious Outliers
Tommaso d'Orsi, Chih-Hung Liu, Rajai Nasser, Gleb Novikov, David Steurer, Stefan Tiegel
accept
This paper studies the fundamental problems of Principal Component Analysis (PCA) and Sparse Linear Regression (SLR) in the presence of oblivious outliers. In the oblivious noise model, a fraction of the labels can be adversarially corrupted and the corruptions are assumed to be independent of the covariates. This corruption model is significantly weaker than other models studied in the recent literature. In particular, recovery is information-theoretically possible even when the fraction of inliers is close to zero. Prior work had developed efficient algorithms for linear regression in the oblivious noise model. The current work extends these previous ideas to PCA and SLR, obtaining efficient algorithms with near-optimal statistical guarantees under a range of distributional assumptions. The reviewers agreed that the contributions are significant and the paper should appear in NeurIPS.
train
[ "ej7pVfTgqD", "dCqbhhf8i8H", "hGs6Dos17cJ", "cUiFC6H2fr", "LMYZFz-0Vd6", "e8DfPTeIn53", "Rz6nXfYXUEg", "R-nwD5Wm0-a", "Wsup8Tz9smt", "NxLdpcHAJO" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies sparse regression and principal component analysis under \noblivious perturbations. \n\nMain contributions:\n1. The authors propose estimators that achieve the optimal error guarantees for PCA and sparse regression under oblivious perturbations separately, by minimizing the Huber loss function w...
[ 7, 7, -1, -1, -1, -1, -1, -1, 8, 7 ]
[ 2, 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_BaHth99Sp45", "nips_2021_BaHth99Sp45", "cUiFC6H2fr", "e8DfPTeIn53", "NxLdpcHAJO", "dCqbhhf8i8H", "Wsup8Tz9smt", "ej7pVfTgqD", "nips_2021_BaHth99Sp45", "nips_2021_BaHth99Sp45" ]
nips_2021_9-sCrvMbL9
Offline Constrained Multi-Objective Reinforcement Learning via Pessimistic Dual Value Iteration
In constrained multi-objective RL, the goal is to learn a policy that achieves the best performance specified by a multi-objective preference function under a constraint. We focus on the offline setting where the RL agent aims to learn the optimal policy from a given dataset. This scenario is common in real-world applications where interactions with the environment are expensive and the constraint violation is dangerous. For such a setting, we transform the original constrained problem into a primal-dual formulation, which is solved via dual gradient ascent. Moreover, we propose to combine such an approach with pessimism to overcome the uncertainty in offline data, which leads to our Pessimistic Dual Iteration (PEDI). We establish upper bounds on both the suboptimality and constraint violation for the policy learned by PEDI based on an arbitrary dataset, which proves that PEDI is provably sample efficient. We also specialize PEDI to the setting with linear function approximation. To the best of our knowledge, we propose the first provably efficient constrained multi-objective RL algorithm with offline data without any assumption on the coverage of the dataset.
accept
In this paper, the authors consider offline learning for constrained multi-objective MDPs (CMOMDPs). Specifically, the authors proposed an algorithm which exploits the primal-dual structure with pessimistic planning. The algorithm is extended to linear kernel CMOMDPs. Moreover, the authors also provided rigorous suboptimality and constraint violation guarantees. Although, as most of the reviewers pointed out, the original version of the paper lacked some discussion of related work on offline RL and constrained RL, as well as empirical justification of the algorithm, the authors provided these parts during the rebuttal period, making the paper more convincing. I recommend acceptance for this paper. Please take the reviewers' other suggestions, especially adding the intuition/sketch of the proof to the main text, to improve the paper.
test
[ "hpucoY3coJO", "QKpinlKmfja", "96Cs-mjzoOw", "fEDWQ15dbyY", "2kG6O_cTP4", "CNh73OzISzX", "uPLzIlC8Y2b", "DuJpQLB8wZ9", "aYlMfA0S5ld", "9Kr1vrTi8MO", "bg0odLFvJG", "4asgY_-HcJ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank for your detailed reply. These additions will surely benefit the paper.\n", "This work considers constrained multi-objective Markov decision processes (CMOMDPs). The authors propose an algorithm to obtain the optimal policy in the offline setting. Optimality and constraint violation bounds are provided. ...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, 3, 2, 3 ]
[ "2kG6O_cTP4", "nips_2021_9-sCrvMbL9", "fEDWQ15dbyY", "4asgY_-HcJ", "QKpinlKmfja", "bg0odLFvJG", "9Kr1vrTi8MO", "nips_2021_9-sCrvMbL9", "nips_2021_9-sCrvMbL9", "nips_2021_9-sCrvMbL9", "nips_2021_9-sCrvMbL9", "nips_2021_9-sCrvMbL9" ]
nips_2021_BpFtRaPGRTZ
Absolute Neighbour Difference based Correlation Test for Detecting Heteroscedastic Relationships
It is a challenge to detect complicated data relationships thoroughly. Here, we propose a new statistical measure, named the absolute neighbour difference based neighbour correlation coefficient, to detect the associations between variables through examining the heteroscedasticity of the unpredictable variation of dependent variables. Different from previous studies, the new method concentrates on measuring nonfunctional relationships rather than functional or mixed associations. Either used alone or in combination with other measures, it enables not only a convenient test of heteroscedasticity, but also measuring functional and nonfunctional relationships separately, which leads to a deeper insight into the data associations. The method is concise and easy to implement; it does not rely on explicitly estimating the regression residuals or the dependencies between variables, so it is not restricted to any kind of model assumption. The mechanisms of the correlation test are proved in theory and demonstrated with numerical analyses.
accept
This paper proposes a measure called absolute neighbour difference-based neighbour correlation coefficient, to detect the "heteroscedasticity" in the nonfunctional relationship between variables. All reviewers agree that the detection of "heteroscedastic relations" is an important issue in learning and that the proposed method, although simple and intuitive, is novel and has practical implications.
train
[ "I0VnXN_wI8", "EVw4BnJhc0D", "1179-p8kfGr", "1RPdmOCURgZ", "6FJ0MNGxVmj", "YqgXxN6yOhr", "XYACGgivS9F", "X41uLdV4mU6", "SQkZ-hIUZu", "QTjjemhjRar", "V_qHeZ9vqvO", "bww6NKiLHOy", "-N0LD2nwOTE" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer v9y1 \n\nFirst of all. we’d like to thank you for your careful reading and valuable comments. Then we address major concerns below.\n\nQuestion: One key point is that all data points have to be close to each other, otherwise the assumption does not hold. On one hand, it is quite demanding to compute...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 7, 6, 7 ]
[ -1, -1, 2, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "V_qHeZ9vqvO", "X41uLdV4mU6", "nips_2021_BpFtRaPGRTZ", "XYACGgivS9F", "SQkZ-hIUZu", "QTjjemhjRar", "1179-p8kfGr", "bww6NKiLHOy", "-N0LD2nwOTE", "nips_2021_BpFtRaPGRTZ", "nips_2021_BpFtRaPGRTZ", "nips_2021_BpFtRaPGRTZ", "nips_2021_BpFtRaPGRTZ" ]
nips_2021_wF-llA3k32
Batch Multi-Fidelity Bayesian Optimization with Deep Auto-Regressive Networks
Bayesian optimization (BO) is a powerful approach for optimizing black-box, expensive-to-evaluate functions. To enable a flexible trade-off between the cost and accuracy, many applications allow the function to be evaluated at different fidelities. In order to reduce the optimization cost while maximizing the benefit-cost ratio, in this paper we propose Batch Multi-fidelity Bayesian Optimization with Deep Auto-Regressive Networks (BMBO-DARN). We use a set of Bayesian neural networks to construct a fully auto-regressive model, which is expressive enough to capture strong yet complex relationships across all the fidelities, so as to improve the surrogate learning and optimization performance. Furthermore, to enhance the quality and diversity of queries, we develop a simple yet efficient batch querying method, without any combinatorial search over the fidelities. We propose a batch acquisition function based on Max-value Entropy Search (MES) principle, which penalizes highly correlated queries and encourages diversity. We use posterior samples and moment matching to fulfill efficient computation of the acquisition function, and conduct alternating optimization over every fidelity-input pair, which guarantees an improvement at each step. We demonstrate the advantage of our approach on four real-world hyperparameter optimization applications.
accept
Ultimately, I am recommending this paper for acceptance because I believe the approach to multi-fidelity optimization proposed by the authors is sufficiently novel and the experimental evaluation is sufficiently strong, and I am thus in agreement with Reviewer gu5R and Reviewer Hcby that, after substantial incorporation of feedback from the reviewers, the paper is solid. Furthermore, I believe that you do indeed already include several of the ablation studies asked for (e.g., BMBO-DARN-1 vs MF-MES is equivalent to an ablation of how much the DARN matters). While I believe the paper has flaws (see below), I believe most of these flaws are in argumentation rather than in execution, and would not require changes significant enough to warrant rejection. With that said, I want to be clear that several of the criticisms raised by Reviewer zV6Z and Reviewer JVN7 seem valid to me, and for at least a few of them, I found the author response to be unconvincing. First, the claims made on lines 143-149 and repeated in the author feedback that it is expected for batch acquisition to be more query efficient than sequential acquisition seem specious to me, and I would remove or rephrase them. If I am selecting a set of points x_1,...,x_k, clearly there is nothing inherent about knowing the labels y_1,...,y_{k-1} that forces a worse choice of x_{k}. Rather, certain sequential acquisition functions like EI are known for being myopic. Having access to more information should always enable more informed decisions than having access to less information. Saying that sequential decision making can "underrate the correlations between consecutive queries" does not seem rigorous enough to explain this claim. Rather, batch acquisition is typically more efficient than sequential optimization in terms of total wall-clock time, and indeed the results presented in Figure 1 are given in terms of time. 
While the advantages of your approach in terms of time elapsed are clear, I would **strongly recommend** the authors additionally use the supplementary materials to report results in terms of query efficiency to remove this confusion. Second, your claim that "our topic/work is multi-fidelity batch BO and is irrelevant to early-stopping" seems overly dismissive of Reviewer zV6Z's concern. Every experiment in Figure 4 falls precisely into the setting Reviewer zV6Z is describing, where training can be stopped at any time (e.g., early stopping) rather than pre-determining a set of training lengths to use as epochs, as for example Freeze-thaw BO (Swersky et al., 2014) does, which you yourself cite. While of course in the general setting, fidelities can be arbitrary discrete measures of precision that do not lend themselves to early stopping, the experiments you choose to run in your paper do indeed fit the setting considered by Swersky et al. 2014 and others. I would strongly encourage the authors to reconsider which criticisms may be valid and which may be dismissed when preparing the camera-ready version, as many of the questions raised here are likely to be shared by readers of the paper at large.
train
[ "Mshr2-eRqG5", "AhNsNUmHwE", "01rjdfhrWLV", "Wwv0i3Gv4Rb", "Mj6a4yBgYzG", "xf6JC1ofqWf", "MWLQfYQoP6I", "YwSjAqXW6e", "aFuInLqBeEx", "yu0JbEUDdGe", "qUMdO1-llDJ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response - my score for the paper remains the same. I have also read through all other reviews/rebuttals for this submission and am participating in the discussion on the paper's outcome.", " Thanks for your detailed response and for clarifying most of questions. I still have three things bas...
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "xf6JC1ofqWf", "01rjdfhrWLV", "YwSjAqXW6e", "MWLQfYQoP6I", "yu0JbEUDdGe", "qUMdO1-llDJ", "aFuInLqBeEx", "nips_2021_wF-llA3k32", "nips_2021_wF-llA3k32", "nips_2021_wF-llA3k32", "nips_2021_wF-llA3k32" ]
nips_2021_OKrNPg3xR3T
Mastering Atari Games with Limited Data
Reinforcement learning has achieved great success in many applications. However, sample efficiency remains a key challenge, with prominent methods requiring millions (or even billions) of environment steps to train. Recently, there has been significant progress in sample efficient image-based RL algorithms; however, consistent human-level performance on the Atari game benchmark remains an elusive goal. We propose a sample efficient model-based visual RL algorithm built on MuZero, which we name EfficientZero. Our method achieves 190.4% mean human performance and 116.0% median performance on the Atari 100k benchmark with only two hours of real-time game experience and outperforms the state-of-the-art SAC in some tasks on the DMControl 100k benchmark. This is the first time an algorithm achieves super-human performance on Atari games with such little data. EfficientZero's performance is also close to DQN's performance at 200 million frames while we consume 500 times less data. EfficientZero's low sample complexity and high performance can bring RL closer to real-world applicability. We implement our algorithm in an easy-to-understand manner and it is available at https://github.com/YeWR/EfficientZero. We hope it will accelerate the research of MCTS-based RL algorithms in the wider community.
accept
The reviewers were in universal agreement that the paper proposed a novel, well-justified method with good empirical results. Please include results on MuJoCo; it would be particularly interesting to include the ablation where MCTS is only used to generate learning targets, but not to act, in order to improve response time (you suggest this ablation in the rebuttal, and it is also investigated in [1]). Also, if possible, please discuss or include some experiments in the 'data-rich' regime (high number of training frames). A method that strongly improves performance in the low-data regime, but catastrophically degrades performance as the amount of experience scales up, is of more limited interest than one which is data efficient while retaining good performance in the data-rich regime. [1] On the role of planning in model-based deep RL, Hamrick et al.
train
[ "nC24UK9v3r", "yf8-6fMD5hP", "M28fnicX-DC", "k0p_1SuGY_B", "W3BUEpC2Dmh", "oeo1-ta0Ov", "zQ7tonpazxc", "eku7HLW8kf_", "MUtLuRIVTx7", "R6syLRJ6d3" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response! It answers my questions well. I will stick to my initial evaluation and vote for acceptance.", " Thank you, I am satisfied with the response and will maintain my score as-is. \n\nIf accepted and there is sufficient room (or in the appendix), I would like to see the computational cost...
[ -1, -1, -1, -1, -1, -1, 5, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "k0p_1SuGY_B", "W3BUEpC2Dmh", "R6syLRJ6d3", "MUtLuRIVTx7", "eku7HLW8kf_", "zQ7tonpazxc", "nips_2021_OKrNPg3xR3T", "nips_2021_OKrNPg3xR3T", "nips_2021_OKrNPg3xR3T", "nips_2021_OKrNPg3xR3T" ]
nips_2021_1fnkvjzVdu9
Dealing With Misspecification In Fixed-Confidence Linear Top-m Identification
Clémence Réda, Andrea Tirinzoni, Rémy Degenne
accept
The review team seems aligned that the paper meets the NeurIPS threshold, with one borderline vote. I wanted to thank the reviewers and authors for the detailed back and forth during the discussion period. That process has also informed me about how the authors view their contributions and the limitations of the model and results. While several points remain hanging and there is some disagreement over the numerical results, I feel the paper has a good story to tell, and while the issue of misspecification is not new in the MAB literature, the pure exploration problem does raise its own challenges and hence this is a worthwhile paper that will have an audience. I have to agree with reviewer 2 (vF9d) about the bound on t_0. The relation to Kaufmann (2016) in terms of lower bounds should also be better discussed, and the paper should more broadly and clearly lay out what additional complexity in the analysis stems from misspecification relative to earlier work that studies the fixed-confidence setting.
train
[ "zSk4-bkd9n", "BhG6So7HvHq", "70-RWKKwSDk", "uYzyj4Jkizi", "Tp7pRMezhMV", "09scXLRTxsc", "jt4d7vkZi6w", "_J4R_dY9k2N", "19Yjgur-ge6", "a27ANJw6kj" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the follow-up. We would like to provide some additional arguments on why we believe our experimental protocol to be fair.\n\nIn the fixed-confidence setting, an algorithm is described by two main components, a stopping rule and a sampling rule. Both play a fundamental role in characteriz...
[ -1, -1, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 1, 3, 3 ]
[ "BhG6So7HvHq", "09scXLRTxsc", "uYzyj4Jkizi", "Tp7pRMezhMV", "19Yjgur-ge6", "a27ANJw6kj", "_J4R_dY9k2N", "nips_2021_1fnkvjzVdu9", "nips_2021_1fnkvjzVdu9", "nips_2021_1fnkvjzVdu9" ]
nips_2021_QWIvzSQaX5
Why Generalization in RL is Difficult: Epistemic POMDPs and Implicit Partial Observability
Generalization is a central challenge for the deployment of reinforcement learning (RL) systems in the real world. In this paper, we show that the sequential structure of the RL problem necessitates new approaches to generalization beyond the well-studied techniques used in supervised learning. While supervised learning methods can generalize effectively without explicitly accounting for epistemic uncertainty, we describe why appropriate uncertainty handling can actually be essential in RL. We show that generalization to unseen test conditions from a limited number of training conditions induces a kind of implicit partial observability, effectively turning even fully-observed MDPs into POMDPs. Informed by this observation, we recast the problem of generalization in RL as solving the induced partially observed Markov decision process, which we call the epistemic POMDP. We demonstrate the failure modes of algorithms that do not appropriately handle this partial observability, and suggest a simple ensemble-based technique for approximately solving the partially observed problem. Empirically, we demonstrate that our simple algorithm derived from the epistemic POMDP achieves significant gains in generalization over current methods on the Procgen benchmark suite.
accept
The paper provides a conceptually interesting perspective on dynamics generalization in RL by drawing a connection to a special subtype of the Bayesian RL formalism. The work's experiments were substantially strengthened as a result of the discussion with the reviewers, and now the paper is strong from both the conceptual and empirical standpoints. It is important that the authors pay special attention to incorporating the clarifications they gave in response to reviewers' questions into the final paper version. The paper's impact will heavily depend on its clarity and its success in ensuring that readers will see the distinction of the paper's model from general Bayesian RL, something that initially confused several reviewers.
train
[ "3M2iNSzdpRA", "dTzC7CNM3mR", "4UwE-xvPi2z", "QPeeSz1GSYw", "WlaeBeZ6Q97", "BMiIMLUwZi", "vslYIk8_J-S", "XGsY5iDd4K8", "73Ve2RDMaM", "Fe8_pEIe9lB", "iXXGI92zg5", "VXvpgxwqwm2" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "* This paper uses the insights from Bayesian RL to solve the problem of generalization in RL. In particular, the authors formulated the generalization problem as an epistemic POMDP and proposed an ensemble-based algorithm, called LEEP, to approximately solve the problem.\n\n* The authors empirically demonstrated t...
[ 7, -1, 7, -1, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "nips_2021_QWIvzSQaX5", "QPeeSz1GSYw", "nips_2021_QWIvzSQaX5", "WlaeBeZ6Q97", "4UwE-xvPi2z", "4UwE-xvPi2z", "VXvpgxwqwm2", "iXXGI92zg5", "nips_2021_QWIvzSQaX5", "3M2iNSzdpRA", "nips_2021_QWIvzSQaX5", "nips_2021_QWIvzSQaX5" ]
nips_2021_1ANcwXQuijU
Set Prediction in the Latent Space
Set prediction tasks require the matching between predicted set and ground truth set in order to propagate the gradient signal. Recent works have performed this matching in the original feature space thus requiring predefined distance functions. We propose a method for learning the distance function by performing the matching in the latent space learned from encoding networks. This method enables the use of teacher forcing which was not possible previously since matching in the feature space must be computed after the entire output sequence is generated. Nonetheless, a naive implementation of latent set prediction might not converge due to permutation instability. To address this problem, we provide sufficient conditions for permutation stability which begets an algorithm to improve the overall model convergence. Experiments on several set prediction tasks, including image captioning and object detection, demonstrate the effectiveness of our method.
accept
This paper proposes a novel method for set prediction tasks, whose goal is to predict multiple elements without consideration of their orderings. A limitation of the existing set prediction methods is that they need to solve for the matching problem between the predicted and the ground-truth set under a certain distance metric, but the choice of distance metric is critical for the convergence property of the matching algorithm. To deal with this issue, the authors propose to learn a latent space and perform set prediction tasks in this space, which they refer to as Latent Set Prediction (LSP). LSP is beneficial over existing methods as it allows us to use simple Euclidean distance, which eliminates the need of selecting a specific hand-crafted distance metric, and enables efficient matching with teacher-forcing. However, a naive LSP may not converge well due to the instability of the matchings across the set elements, and the authors propose techniques to allow stable pairing of the elements across two sets, further showing its convergence guarantee. The authors validate LSP on semantic scene description, multi-modal report generation, and the object detection task, and the results show the effectiveness of the proposed LSP over relevant baselines. The paper received split initial reviews with three leaning negative and two positive. However, despite the negative scores, most reviewers found the paper well-written, the tackled problem important, and the discussion on the limitations of the existing set prediction techniques insightful. Also, they considered the proposed set matching technique to be sound and nontrivial, and the provided theoretical convergence guarantee to be valuable. However, the reviewers had the following common concerns: There exists an obvious gap between theory and practice, which renders the convergence property of the proposed method unclear in real-world scenarios, but there is no empirical analysis of convergence. 
The baselines used for the experiments are weak, and the proposed method is not validated against some highly relevant baselines for set prediction (e.g. TSPN and DSPN). There is no ablation study of the proposed losses and techniques, which are essential in verifying their effectiveness. The experimental validation part of the paper is not well organized, and comes with different baselines and evaluation protocols. During the discussion period, the authors addressed most of the concerns by providing experimental results with relevant baselines, providing the results of the ablation studies, and by providing the learning curves. The reviewers revised their reviews and increased their scores as they found the new results and discussions satisfactory, and reached a consensus to accept the paper. I also agree with the reviewers that the paper is proposing a highly original idea and sound methods to solve a well-motivated problem, and that the theoretical analysis is nice. The only weak part was experimental validation, but this has been satisfactorily addressed in the author responses, and I believe that the paper will be in a very good shape after incorporating all the discussions and results into the revised version of the paper.
test
[ "4XLTRawXXqC", "_ptYUCMGxs2", "JhFimYVjspw", "ljJzgf6BvAd", "W7FHYf04PX", "fjINd-7Wyj1", "6tMj9EflSWo", "7F2VDtkOBMq", "VQrawE2z7S", "HysoIqd6u8t", "Ldd1H3vpHa9", "iGXOEhIdy4", "dEY9RoPRBOW", "Wv5T94dtzJ1", "pa_MWrRyFi", "h93G8vjJkCQ", "OERlQgm5BUf", "UXcQ3n6z8B5", "UBbK0CrUGdJ",...
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_revie...
[ "This paper proposes LSP, an approach to set-to-set prediction that doesn’t require handcrafting a distance metric. This distance metric is often needed to perform bijective matching (e.g., the Hungarian algorithm) between the predicted set and the ground-truth set. The key idea is to perform the matching in the la...
[ 6, -1, -1, 6, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ 3, -1, -1, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_1ANcwXQuijU", "fjINd-7Wyj1", "7F2VDtkOBMq", "nips_2021_1ANcwXQuijU", "nips_2021_1ANcwXQuijU", "ODcrbnN1pj2", "W7FHYf04PX", "pa_MWrRyFi", "Ldd1H3vpHa9", "dEY9RoPRBOW", "iGXOEhIdy4", "xhVvDTOnF4M", "UXcQ3n6z8B5", "W7FHYf04PX", "ljJzgf6BvAd", "k0nZKc_8xOa", "4XLTRawXXqC", "...
nips_2021_Ri2G086_3v
Best of Both Worlds: Practical and Theoretically Optimal Submodular Maximization in Parallel
Yixin Chen, Tonmoy Dey, Alan Kuhnle
accept
The reviewers liked both the theoretical guarantees and the empirical performance achieved by the algorithm proposed in this paper, which improves over state-of-the-art algorithms for submodular maximization. For the final version, please include the additional comparison to [FMZ19] that was discussed in the response.
val
[ "T4rICHLLD5B", "xvCIQQwCMN2", "KTnkEP0yNK", "jjQVC2qpebu", "w3zKg4lq8T", "LFLgBu5zXMH", "KE7BK019Kvl", "2T28cdccNYo" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes several new algorithms for maximizing submodular functions which when combined achieve state of the art combination of approximation ratio, adaptive complexity, and query complexity. Empirically, the combined LS+PGB algorithm is also equal to or better than the recent FAST algorithm.\n This pa...
[ 8, 6, -1, -1, -1, -1, 7, 6 ]
[ 4, 5, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_Ri2G086_3v", "nips_2021_Ri2G086_3v", "xvCIQQwCMN2", "2T28cdccNYo", "T4rICHLLD5B", "KE7BK019Kvl", "nips_2021_Ri2G086_3v", "nips_2021_Ri2G086_3v" ]
nips_2021_lDzLzhUIwBq
Fine-grained Generalization Analysis of Inductive Matrix Completion
Antoine Ledent, Rodrigo Alves, Yunwen Lei, Marius Kloft
accept
This paper studies inductive matrix completion with side information. They provide distribution-free bounds that improve upon previous work. New guarantees are also obtained for weighted nuclear norm minimization under arbitrary sampling. The reviewers agree that this paper makes solid contributions and involves interesting techniques. On the other hand, the reviewers also find that the authors should better explain the Lipschitz/boundedness assumptions as well as the novelty of the results for the weighted trace norm, and that the clarity of the presentation can be improved.
train
[ "j0I9H7zNd8R", "Hif7fzEdXrK", "xxqXDXsXon", "As1y4zlOdbo", "zqi65vLhK3q", "86W_sfz0AuD", "p9Ubd0ELEih", "A7lrr5LSpwx", "GV3j9sr2W7y", "vAUbB1kQR_Q", "Uicz8zAS9FE", "u1LIBSQ02Mq", "IFUe1vDZ1u4", "DiT_8hsvseu", "FSoXu6yoDKI" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your further comment.\n \n**Summary**: Truncating the entries results in a manageable Rademacher complexity. If one assumes that both the ground truth and the noise are bounded by a constant $C$, one can use this to prove that the following strategy yields a solution with similar guarantees to the ...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 3, 2, 1 ]
[ "xxqXDXsXon", "Uicz8zAS9FE", "As1y4zlOdbo", "zqi65vLhK3q", "GV3j9sr2W7y", "nips_2021_lDzLzhUIwBq", "vAUbB1kQR_Q", "nips_2021_lDzLzhUIwBq", "IFUe1vDZ1u4", "86W_sfz0AuD", "FSoXu6yoDKI", "DiT_8hsvseu", "nips_2021_lDzLzhUIwBq", "nips_2021_lDzLzhUIwBq", "nips_2021_lDzLzhUIwBq" ]
nips_2021_5JvnsAdf6Vz
Learning Frequency Domain Approximation for Binary Neural Networks
Binary neural networks (BNNs) represent the original full-precision weights and activations as 1-bit values using the sign function. Since the gradient of the conventional sign function is almost zero everywhere and thus cannot be used for back-propagation, several attempts have been made to alleviate the optimization difficulty by using approximate gradients. However, those approximations corrupt the main direction of the factual gradient. To this end, we propose to estimate the gradient of the sign function in the Fourier frequency domain using a combination of sine functions for training BNNs, namely frequency domain approximation (FDA). The proposed approach does not affect the low-frequency information of the original sign function, which occupies most of the overall energy, and high-frequency coefficients are ignored to avoid huge computational overhead. In addition, we embed a noise adaptation module into the training phase to compensate for the approximation error. The experiments on several benchmark datasets and neural architectures illustrate that the binary network learned using our method achieves state-of-the-art accuracy. Code will be available at https://gitee.com/mindspore/models/tree/master/research/cv/FDA-BNN.
accept
The paper worked on binarized neural networks and proposed to find better surrogates of the sign function in the frequency domain instead of the spatial domain. The idea is motivated, novel, and simple --- I am quite surprised such a simple idea can work fairly well, as a simple idea that works is the greatest to me. The motivation and the key to its success is that approximating the sign function in the frequency domain would drop some high-frequency terms, which cannot affect the quality of gradients much since the gradient directions are mainly determined by the low-frequency terms. The proposed noise adaptation module puts some additional technical difficulty/quality into the paper and is what I am mainly interested in (I have gone through the paper by myself and asked some questions in a separate post). Although there were some concerns in the beginning, the authors have done a particularly good job in their rebuttal. In the end, all the reviewers hold very positive opinions and vote for acceptance. I think the idea (i.e., frequency domain approximation plus noise adaptation layer) can also be used in other areas besides BNN, since non-smooth functions are common and straight-through estimator is popular in deep learning or the entire machine learning. I am sure that this paper will be impactful and therefore I recommend it as an oral presentation.
train
[ "boD4v8ehnYa", "EkbzGK-7vB", "tCWNpm7nslU", "Qp7GxsLqlDz", "4xS5PTlkg_c", "lio5ZItoLqv4", "Tj5maBuNh9oh", "V4lnzhag1-p-", "x96iOsUqpC" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This is a novel gradient estimation method for binary networks based on the Fourier series estimation of sign function. By doing this, the energy in low-frequency domain is preserved, and that of high frequency is estimated by a noise adaptation module. Basically, the idea of using Fourier series to estimate sign...
[ 8, 8, 8, 7, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "nips_2021_5JvnsAdf6Vz", "nips_2021_5JvnsAdf6Vz", "nips_2021_5JvnsAdf6Vz", "nips_2021_5JvnsAdf6Vz", "Qp7GxsLqlDz", "boD4v8ehnYa", "tCWNpm7nslU", "Qp7GxsLqlDz", "EkbzGK-7vB" ]
nips_2021_mHHU6KWQ1ci
Reformulating Zero-shot Action Recognition for Multi-label Actions
The goal of zero-shot action recognition (ZSAR) is to classify action classes which were not previously seen during training. Traditionally, this is achieved by training a network to map, or regress, visual inputs to a semantic space where a nearest neighbor classifier is used to select the closest target class. We argue that this approach is sub-optimal due to the use of nearest neighbor on static semantic space and is ineffective when faced with multi-label videos - where two semantically distinct co-occurring action categories cannot be predicted with high confidence. To overcome these limitations, we propose a ZSAR framework which does not rely on nearest neighbor classification, but rather consists of a pairwise scoring function. Given a video and a set of action classes, our method predicts a set of confidence scores for each class independently. This allows for the prediction of several semantically distinct classes within one video input. Our evaluations show that our method not only achieves strong performance on three single-label action classification datasets (UCF-101, HMDB, and RareAct), but also outperforms previous ZSAR approaches on a challenging multi-label dataset (AVA) and a real-world surprise activity detection dataset (MEVA).
accept
There was active engagement between the authors and reviewers following the rebuttal. It is important that the authors update the paper with the new results requested by the reviewers, as well as update the related work section with (i) missing citations for video ZSL, and (ii) related work on image ZSL, which should also be discussed. One other issue that could be more thoroughly discussed is the significance of the type of interaction, where multiplicative gives a substantial performance improvement over the other choices.
train
[ "P9qZ_3lEg6f", "ichM668Hhyl", "cSGZixOWxP8", "r_5SF3Fagvg", "Ko9Xl7PlFJj", "mtfAT0KwxHr", "1J3ah_T9Q8M", "VE0MN7COBB4", "e53CHNniuL", "TxTAUtRcT9D", "GbT2LVLXo0", "DQm0bCuoeJt", "7cvhwN3vw3R" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This work investigates zero-shot action recognition from a multi-label perspective. Rather than a nearest neighbour action selection, this work opts for a binary cross-entropy optimization. Experiments on common datasets as well as AVA and MEVA show the effectiveness of the approach. Strengths:\n\nIn zero-shot ac...
[ 5, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 5, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "nips_2021_mHHU6KWQ1ci", "r_5SF3Fagvg", "nips_2021_mHHU6KWQ1ci", "mtfAT0KwxHr", "1J3ah_T9Q8M", "VE0MN7COBB4", "TxTAUtRcT9D", "P9qZ_3lEg6f", "cSGZixOWxP8", "7cvhwN3vw3R", "DQm0bCuoeJt", "nips_2021_mHHU6KWQ1ci", "nips_2021_mHHU6KWQ1ci" ]
nips_2021_4wVlNqBJXg
Optimal Best-Arm Identification Methods for Tail-Risk Measures
Shubhada Agrawal, Wouter M. Koolen, Sandeep Juneja
accept
This paper studies the multi-armed bandit problem with the goal of identifying the arm with the smallest CVaR/VaR/conic combination of CVaR and the mean. It is a significant question with applications in various domains. This paper proposes an algorithm that projects the empirical distribution to the considered family of distributions and considers an exploratory transform that guides the arm sampling distribution. It shows that the algorithm asymptotically achieves the optimal sample complexity. I agree with the reviewers that this paper has made interesting technical contributions and I am happy to recommend acceptance.
train
[ "9NQourvfrJ", "OuTcmdXEpDI", "StQP6zThUJH", "xMC347E01aI", "Z4W6fO8nX0X", "FvzYzKyxOPn", "Kipk5MtHl-i", "v4gzVbbPUPY", "tVB2u8sMRDD", "fnAuU5qq-tn", "a6QQlSAU_ay", "91eACLQ6eV", "saOzEWMVCc", "v6tqkymSM2K" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the clarifications.\nI would recommend that the authors include a short remark with the particular example in point 3 above differentiating the mean and CVaR objectives. It would be helpful for a reader to understand qualitatively when these objectives differ to contrast the claim made for the case ...
[ -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 3, 3, 2, 3 ]
[ "tVB2u8sMRDD", "v4gzVbbPUPY", "fnAuU5qq-tn", "Kipk5MtHl-i", "nips_2021_4wVlNqBJXg", "Z4W6fO8nX0X", "a6QQlSAU_ay", "saOzEWMVCc", "91eACLQ6eV", "v6tqkymSM2K", "nips_2021_4wVlNqBJXg", "nips_2021_4wVlNqBJXg", "nips_2021_4wVlNqBJXg", "nips_2021_4wVlNqBJXg" ]
nips_2021_9Qu0U9Fj7IP
SyMetric: Measuring the Quality of Learnt Hamiltonian Dynamics Inferred from Vision
A recently proposed class of models attempts to learn latent dynamics from high-dimensional observations, like images, using priors informed by Hamiltonian mechanics. While these models have important potential applications in areas like robotics or autonomous driving, there is currently no good way to evaluate their performance: existing methods primarily rely on image reconstruction quality, which does not always reflect the quality of the learnt latent dynamics. In this work, we empirically highlight the problems with the existing measures and develop a set of new measures, including a binary indicator of whether the underlying Hamiltonian dynamics have been faithfully captured, which we call Symplecticity Metric or SyMetric. Our measures take advantage of the known properties of Hamiltonian dynamics and are more discriminative of the model's ability to capture the underlying dynamics than reconstruction error. Using SyMetric, we identify a set of architectural choices that significantly improve the performance of a previously proposed model for inferring latent dynamics from pixels, the Hamiltonian Generative Network (HGN). Unlike the original HGN, the new SyMetric is able to discover an interpretable phase space with physically meaningful latents on some datasets. Furthermore, it is stable for significantly longer rollouts on a diverse range of 13 datasets, producing rollouts of essentially infinite length both forward and backwards in time with no degradation in quality on a subset of the datasets.
accept
This paper has three reviewers, and the initial reviews were two borderline rejects and one borderline accept. After a very long and detailed rebuttal and discussion process between authors and reviewers, two reviewers raised their scores. In general, this paper offers a theoretically grounded, working solution to a valid and important problem in the field. The drawbacks are that some of the results are not fully convincing and include partially negative results that are very hard or impossible to support empirically. In addition, the approach needs to learn a mapping from the learnt state space to the ground truth; the quality of this mapping is itself difficult to assess and, if it is "off", it can invalidate the proposed measures. Thus, the meta-reviewer tends toward borderline acceptance.
train
[ "LXMc2msREmp", "tZUn9BQGMgv", "HAfIwbQTyet", "24Pw1ji1cHe", "iaysWJ5YGO", "6gu7GgOIyT6", "7EyLYbIsmtH", "bBRFqiJC00Q", "6Ic2qjJ6lx6", "zvZIEbnnUs2", "hQHhwf4vyU", "sE0an8vetUO" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "This paper mainly focuses on the application of Hamiltonian Dynamic in computer vision tasks. The authors propose a new metric to evaluate how the learned latent space coincide with the ground truth. Based on the novel metric, they further improve the previous HGN with multiple modifications in different aspects, ...
[ 6, -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1 ]
[ "nips_2021_9Qu0U9Fj7IP", "24Pw1ji1cHe", "nips_2021_9Qu0U9Fj7IP", "6Ic2qjJ6lx6", "hQHhwf4vyU", "nips_2021_9Qu0U9Fj7IP", "6gu7GgOIyT6", "nips_2021_9Qu0U9Fj7IP", "HAfIwbQTyet", "LXMc2msREmp", "sE0an8vetUO", "7EyLYbIsmtH" ]
nips_2021_RX6PrcpXP-
Learning with Holographic Reduced Representations
Ashwinkumar Ganesan, Hang Gao, Sunil Gandhi, Edward Raff, Tim Oates, James Holt, Mark McLean
accept
This paper revisits the use of Holographic Reduced Representation (HRR) as a way to combine symbolic reasoning with a neural system. It proposes a new method to efficiently train HRRs. The authors demonstrate that replacing a traditional output layer of a network with a semantically meaningful HRR layer is very effective, in terms of drastically reducing the output layer size and also reducing the overall network size. The results suggest that this is a promising approach that will likely spark future work. The paper led to substantial discussion among the reviewers and authors, and the authors' informative clarifications helped make all reviewers much more confident in the accuracy and value of the work. With the authors incorporating their insightful replies into the final version of the paper, this will make a solid NeurIPS paper with fairly broad interest.
train
[ "hQh2-JLpquD", "Jy0cM46Lk47", "iWY2cmkt7a", "XGet0z31ohB", "mbXQWXBK-v-", "-S2aF2XeJew", "DYRnPEk7FX9", "NP8sAQNavak", "HKuIXwpyRt1", "ACiGdZ0ee7i", "f23kTBc2kmK", "Txy9YFO5lYj", "ZIQYoZUK1_M", "0gBRM3_1UWM", "HOFULUeAiNL", "FhZ88ttIOai", "MnTqXcvmZdn" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Thank you for raising your score, we are glad we were able to satisfy your concerns in discussion and the extra content of our appendix. We appreciate the time spent in reading our paper and the review discussions. ", " I appreciate the authors' competent reply and the enthusiasm of the other reviewers about th...
[ -1, -1, 6, 7, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, -1, 4, 3, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "Jy0cM46Lk47", "mbXQWXBK-v-", "nips_2021_RX6PrcpXP-", "nips_2021_RX6PrcpXP-", "-S2aF2XeJew", "FhZ88ttIOai", "HKuIXwpyRt1", "nips_2021_RX6PrcpXP-", "f23kTBc2kmK", "MnTqXcvmZdn", "Txy9YFO5lYj", "0gBRM3_1UWM", "MnTqXcvmZdn", "NP8sAQNavak", "XGet0z31ohB", "iWY2cmkt7a", "nips_2021_RX6Prcp...
nips_2021_K4Su8BIivap
Learning Barrier Certificates: Towards Safe Reinforcement Learning with Zero Training-time Violations
Training-time safety violations have been a major concern when we deploy reinforcement learning algorithms in the real world. This paper explores the possibility of safe RL algorithms with zero training-time safety violations in the challenging setting where we are only given a safe but trivial-reward initial policy without any prior knowledge of the dynamics and additional offline data. We propose an algorithm, Co-trained Barrier Certificate for Safe RL (CRABS), which iteratively learns barrier certificates, dynamics models, and policies. The barrier certificates are learned via adversarial training and ensure the policy's safety assuming calibrated learned dynamics. We also add a regularization term to encourage larger certified regions to enable better exploration. Empirical simulations show that zero safety violations are already challenging for a suite of simple environments with only 2-4 dimensional state space, especially if high-reward policies have to visit regions near the safety boundary. Prior methods require hundreds of violations to achieve decent rewards on these tasks, whereas our proposed algorithms incur zero violations.
accept
I agree with the reviewers that this paper deserves to be accepted; it has a novel foundational contribution and is well written. The paper presents a new approach that iteratively generates a barrier certificate, an environment model and a policy, in a way that ensures that there are no safety violations even during training. The algorithm requires an initial safe (but possibly suboptimal) policy, and optimizes it while ensuring that there will not be any safety violations during training. As pointed out by one of the reviewers, the main downside of the paper is that the experiments are not very ambitious; the approach is evaluated on only a few problems with a very low-dimensional state space. However, the task of training with no safety violations is already challenging even for these relatively simple environments. There are also a few places where the explanations could be a little clearer, for example, the way algorithm 2 is presented, it is not entirely clear where the initial barrier certificate comes from. But overall, this is a very strong paper with a solid contribution to the state of the art in safe reinforcement learning.
val
[ "4mgR9qd819f", "7YyHCbM6GmN", "hGBQ8vMP7bN", "yvv4PW4cUtH", "TU0RpOfIQgi", "kdQrzFGVYC", "HZx7KXZm7kl", "pDmkc-3V09_", "9ACVLiTDJ-", "XakN8QT3x83" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " The response clarifies most of my questions. Thank you. I still keep the score unchanged due to limited evaluations using simple environments.", " We thank the reviewer for the consideration of our response and the suggestions to include the comparisons. We will clarify the assumptions of our methods and prior...
[ -1, -1, 7, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "pDmkc-3V09_", "yvv4PW4cUtH", "nips_2021_K4Su8BIivap", "HZx7KXZm7kl", "nips_2021_K4Su8BIivap", "XakN8QT3x83", "hGBQ8vMP7bN", "9ACVLiTDJ-", "nips_2021_K4Su8BIivap", "nips_2021_K4Su8BIivap" ]
nips_2021_1oR_gQGp3Rm
On the Second-order Convergence Properties of Random Search Methods
We study the theoretical convergence properties of random-search methods when optimizing non-convex objective functions without having access to derivatives. We prove that standard random-search methods that do not rely on second-order information converge to a second-order stationary point. However, they suffer from an exponential complexity in terms of the input dimension of the problem. In order to address this issue, we propose a novel variant of random search that exploits negative curvature by only relying on function evaluations. We prove that this approach converges to a second-order stationary point at a much faster rate than vanilla methods: namely, the complexity in terms of the number of function evaluations is only linear in the problem dimension. We test our algorithm empirically and find good agreement with our theoretical results.
accept
The paper considers the problem of finding second-order stationary points via only function evaluations. The paper considers specifically random-search-based methods and demonstrates a random search method that establishes convergence to SOSPs within O(d/eps^2) function evaluations. The analysis and results are solid and the algorithmic contribution is strong. The main criticism towards the paper is the novelty of the results compared to existing results on zeroth-order optimization convergence to SOSPs, which have been shown to achieve the same rate. The relative novelty here is that the paper focuses specifically on random-search methods as opposed to the gradient-approximation type of methods that exist in the literature. There are benefits of random-search methods, as elucidated by the authors. Overall, the paper is right on the borderline. I am recommending accept based on the reviewers' unanimous agreement with its contributions. I strongly suggest the authors do a very clear comparison with Flokas et al., stating their result and comparing it with this paper's result.
val
[ "_7CoNTMZH83", "PXKiY7CWHd7", "ABADhcvQ_6E", "GQk8t3kcrz6", "sYHOvWr01DQ", "m2OoNRmrR3", "m1hbzdibsG3", "RXZfpGV6FD", "KWBnw_mW7Qk", "WRbiIUG24S6", "AtKkI1vYOwG", "GL7I4_nLN53", "jqQfgYLh4aE", "7J1jtBNbVzU" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " Dear AC,\n\nThank you for the opportunity to clarify this point.\n\nAs we briefly discussed in the paper, the work of Floukas et al. belongs to the category of approaches that approximate the gradient using finite difference, see equation (1) in the arxiv version. In contrast, our work analyzes the convergence pr...
[ -1, -1, -1, 6, -1, 6, 6, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, -1, -1, 3, -1, 3, 2, -1, -1, -1, -1, -1, -1, 3 ]
[ "PXKiY7CWHd7", "nips_2021_1oR_gQGp3Rm", "KWBnw_mW7Qk", "nips_2021_1oR_gQGp3Rm", "RXZfpGV6FD", "nips_2021_1oR_gQGp3Rm", "nips_2021_1oR_gQGp3Rm", "GL7I4_nLN53", "jqQfgYLh4aE", "7J1jtBNbVzU", "m2OoNRmrR3", "m1hbzdibsG3", "GQk8t3kcrz6", "nips_2021_1oR_gQGp3Rm" ]
nips_2021_fiPtD7iXuhn
Noether’s Learning Dynamics: Role of Symmetry Breaking in Neural Networks
In nature, symmetry governs regularities, while symmetry breaking brings texture. In artificial neural networks, symmetry has been a central design principle to efficiently capture regularities in the world, but the role of symmetry breaking is not well understood. Here, we develop a theoretical framework to study the "geometry of learning dynamics" in neural networks, and reveal a key mechanism of explicit symmetry breaking behind the efficiency and stability of modern neural networks. To build this understanding, we model the discrete learning dynamics of gradient descent using a continuous-time Lagrangian formulation, in which the learning rule corresponds to the kinetic energy and the loss function corresponds to the potential energy. Then, we identify "kinetic symmetry breaking" (KSB), the condition when the kinetic energy explicitly breaks the symmetry of the potential function. We generalize Noether’s theorem known in physics to take into account KSB and derive the resulting motion of the Noether charge: "Noether's Learning Dynamics" (NLD). Finally, we apply NLD to neural networks with normalization layers and reveal how KSB introduces a mechanism of implicit adaptive optimization, establishing an analogy between learning dynamics induced by normalization layers and RMSProp. Overall, through the lens of Lagrangian mechanics, we have established a theoretical foundation to discover geometric design principles for the learning dynamics of neural networks.
accept
Following in the tradition of physics, the paper considers the implications of differentiable symmetries exhibited by a given learning task for the dynamics of the associated learning process. Although similar ideas have previously appeared in the literature ([14], in particular), an explicit study of the suggested Lagrangian framework is conceptually new. The reviewers found this framework interesting and insightful, and appreciated the unique perspective it provides on the connection between batch normalization and the popular optimization method RMSProp.
train
[ "Bxis9an8rvw", "AdfVQapo8P", "7on9pXqGMJx", "6-ngIBwxYr", "WNLaVlPUpVv", "vTzF4pnqDx", "PTT4VKXteO", "QW6pXf00_kv", "xidvNctg9Ie", "3XpiY3x5jua", "eqgyu-ab3cL" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer L9t2,\n\nThank you for carefully understanding, thoroughly summarizing, and positively evaluating our work as: “results are novel and interesting”, “clearly written and cites relevant related works”, “The hypotheses are adequately tested.\", and \"The work reads complete and insightful.”\n\nWe thank...
[ -1, 8, -1, -1, -1, -1, -1, 7, -1, 8, 7 ]
[ -1, 2, -1, -1, -1, -1, -1, 4, -1, 3, 3 ]
[ "3XpiY3x5jua", "nips_2021_fiPtD7iXuhn", "6-ngIBwxYr", "AdfVQapo8P", "eqgyu-ab3cL", "nips_2021_fiPtD7iXuhn", "QW6pXf00_kv", "nips_2021_fiPtD7iXuhn", "PTT4VKXteO", "nips_2021_fiPtD7iXuhn", "nips_2021_fiPtD7iXuhn" ]
nips_2021_qeaT2O5fNKC
A Theory of the Distortion-Perception Tradeoff in Wasserstein Space
The lower the distortion of an estimator, the more the distribution of its outputs generally deviates from the distribution of the signals it attempts to estimate. This phenomenon, known as the perception-distortion tradeoff, has captured significant attention in image restoration, where it implies that fidelity to ground truth images comes at the expense of perceptual quality (deviation from statistics of natural images). However, despite the increasing popularity of performing comparisons on the perception-distortion plane, there remains an important open question: what is the minimal distortion that can be achieved under a given perception constraint? In this paper, we derive a closed form expression for this distortion-perception (DP) function for the mean squared-error (MSE) distortion and Wasserstein-2 perception index. We prove that the DP function is always quadratic, regardless of the underlying distribution. This stems from the fact that estimators on the DP curve form a geodesic in Wasserstein space. In the Gaussian setting, we further provide a closed form expression for such estimators. For general distributions, we show how these estimators can be constructed from the estimators at the two extremes of the tradeoff: the global MSE minimizer, and a minimizer of the MSE under a perfect perceptual quality constraint. The latter can be obtained as a stochastic transformation of the former.
accept
Reviewers found this paper to provide an interesting and welcome application for optimal transport, even if the theoretical contribution is limited from a technical perspective. After some deliberation, we converged on accepting this paper. In the revision, please provide the following: * More experimental evidence corroborating that a convex combination of two estimators yields visually pleasing results. * A better introduction to the perception-distortion tradeoff for non-experts. * Optionally: a detailed discussion (and additional theoretical results if possible) for distortions/divergences beyond the MSE/Wasserstein-2 case.
train
[ "gpehNbEumDr", "PFmARxnWV-", "1AVky_S_YT_", "PgpFIftsgxS", "3VvCTi2CVY0", "Upj4QH_ZYqv", "XBGkK57KlKG", "vyTNRAVaYVK", "-up8FcR2kU7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper characterizes the perception-distortion in the context of (image) enhancement tradeoff, complementing prior results. For example, in image super resolution, this tradeoff describes the relationship between the point-wise reconstruction quality (distortion) and how well the distribution of the reconstruct...
[ 7, -1, 5, -1, -1, -1, -1, 8, 6 ]
[ 3, -1, 2, -1, -1, -1, -1, 4, 2 ]
[ "nips_2021_qeaT2O5fNKC", "3VvCTi2CVY0", "nips_2021_qeaT2O5fNKC", "-up8FcR2kU7", "gpehNbEumDr", "1AVky_S_YT_", "vyTNRAVaYVK", "nips_2021_qeaT2O5fNKC", "nips_2021_qeaT2O5fNKC" ]